
#468: Python Trends Episode 2024 Transcript

Recorded on Tuesday, Jun 18, 2024.

00:00 I've gathered a group of Python experts who have been thinking deeply about where Python

00:04 is going and who have lived through where it has been.

00:07 This episode is all about near-term Python trends and things we each believe will be

00:12 important to focus on as Python continues to grow.

00:15 Our panelists are Jodie Burchell, Carol Willing, and Paul Everitt.

00:20 This is Talk Python to Me episode 468, recorded June 18th, 2024.

00:26 Are you ready for your host? Here he is!

00:30 You're listening to Michael Kennedy on Talk Python to Me.

00:32 Live from Portland, Oregon, and this segment was made with Python.

00:40 Welcome to Talk Python to Me, a weekly podcast on Python.

00:43 This is your host, Michael Kennedy.

00:45 Follow me on Mastodon, where I'm @mkennedy, and follow the podcast using @talkpython,

00:50 both on fosstodon.org.

00:52 Keep up with the show and listen to over seven years of past episodes at talkpython.fm.

00:58 We've started streaming most of our episodes live on YouTube.

01:01 Subscribe to our YouTube channel over at talkpython.fm/youtube to get notified about upcoming shows and be

01:07 part of that episode.

01:09 This episode is brought to you by Code Comments, an original podcast from Red Hat.

01:14 This podcast covers stories from technologists who've been through tough tech transitions

01:19 and share how their teams survived the journey.

01:23 Episodes are available everywhere you listen to your podcast and at talkpython.fm/code-comments.

01:29 And it's brought to you by Posit Connect from the makers of Shiny.

01:33 Publish, share, and deploy all of your data projects that you're creating using Python.

01:38 Streamlit, Dash, Shiny, Bokeh, FastAPI, Flask, Quarto, Reports, Dashboards, and APIs.

01:45 Posit Connect supports all of them.

01:46 Try Posit Connect for free by going to talkpython.fm/posit, P-O-S-I-T.

01:53 Big news, we've added a new course over at Talk Python, Reactive Web Dashboards with Shiny.

01:59 You probably know Shiny from the R and RStudio world.

02:03 But as you may recall from episode 424 with Joe Cheng, the folks at Posit have released

02:08 Shiny for Python.

02:10 It's a great technology for building reactive, data science-oriented web apps that are incredibly

02:16 easy to publish to the web.

02:18 If that sounds like something you'd like to learn, then this course is for you.

02:23 And it's an easy choice because the course is 100% free.

02:26 Just visit talkpython.fm/shiny and click on "Take this course for free" to get started.

02:33 I'll put the link in the podcast player show notes.

02:36 Thanks as always for supporting our work.

02:39 Paul, Jodie, and Carol, welcome back to Talk Python to Me, all of you.

02:43 Great to have you all back.

02:45 Great to be back.

02:46 Thanks for having us.

02:47 Jodie, your latest episode is going to come out tomorrow.

02:50 So we're on a tight loop here.

02:52 This excellent data science panel thing we did at PyCon was really fun.

02:57 So now we're back for a different panel.

03:00 Yes.

03:01 We're going to talk about Python trends and just what we all think is something people

03:06 out there should be paying attention to.

03:07 We all have slightly different backgrounds, which I think is going to make it really interesting

03:11 as well.

03:12 So since people don't listen to every episode, maybe quick introductions.

03:15 Jodie, quick introduction for you.

03:18 We'll go around.

03:19 Sure.

03:20 So my name is Jodie Burchell.

03:21 I'm currently working at JetBrains with Paul.

03:23 Paul's actually my boss.

03:25 And I'm working as the developer advocate in data science.

03:28 So I've been a data scientist for around eight years.

03:32 And prior to that, I was an academic like many data scientists.

03:36 And my background is actually psychology.

03:38 So if you also want to ask me about anxiety disorders or emotions, you can ask me about

03:43 these things.

03:44 In open source?

03:45 No way.

03:46 Awesome.

03:47 Great to have you back.

03:48 Carol?

03:49 Yeah.

03:50 Hi, I'm Carol Willing.

03:51 I'm really happy to be here today.

03:52 I am half retired, half doing consulting for early stage companies that are interested

04:03 in data science and have a particular love for open science.

04:07 I am a core developer and former steering council member for Python and also on the

04:13 Jupyter team.

04:14 So my real passions though are education and lifelong learning.

04:20 So I'm really excited to talk about some of the trends that I'm seeing.

04:24 Yeah.

04:25 Fantastic.

04:26 Paul?

04:27 Hi, I'm Paul Everitt.

04:28 I'm president of the Carol Willing Fan Club.

04:29 I'm a lifetime member.

04:31 You get a special coin for that too.

04:33 And when I'm not doing that, I'm a JetBrains, Python and web advocate.

04:38 I have a bit of a long time love affair with Python and the web, which I hope that we'll

04:43 talk about.

04:44 Yeah.

04:45 You worked on early, early web frameworks in Python.

04:47 Those Django people, they were thinking about what Paul and team did before that, right?

04:52 Django was the thing that killed us.

04:54 Haven't gotten over that Django.

04:55 I didn't mean to bring it up.

04:56 I didn't mean to bring it up.

04:57 I'm over it really.

04:58 We'll get you some therapy later.

05:01 No.

05:02 Awesome.

05:03 Great to have you all here.

05:04 Our plan is to just, you know, we all brought a couple of ideas, introduce them and just

05:08 have a little group chat, you know, casual chat.

05:11 I like, well, do we really think it's going that way?

05:14 What's important?

05:15 What should we be paying attention to?

05:16 And where's it going?

05:17 So Jodie, I know you gave a fun talk at PyCon US.

05:22 Those videos are not up yet, are they?

05:24 I haven't seen them.

05:25 No, no.

05:26 You still, they're still behind the paywall.

05:27 So if you attended the conference, you can view them, but otherwise not available to

05:30 the public yet, I'm afraid.

05:32 Not yet.

05:33 They'll be out.

05:34 So hopefully we can share.

05:35 I know it's somewhat relevant to some of your ideas, but let's go with your

05:38 first trend for the coming years, or the immediate near term.

05:44 For the immediate term.

05:45 I don't know if this will be good news to people, but LLMs are not going anywhere just

05:49 yet.

05:50 So I actually did a bit of research for this episode, and what I wanted to see, you know,

05:56 being a data scientist, is what the download numbers on different packages on PyPI look like.

06:02 So there's a particular package, Transformers, which is one of the main packages for interfacing

06:06 with LLMs, the open source ones on Hugging Face, and the download numbers of that have

06:12 gone up 50% in the last six months.

06:16 And they're now comparable to the big deep learning packages like Keras, TensorFlow

06:23 and PyTorch, which is quite interesting.
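The kind of download check described here can be sketched in a few lines against the public pypistats.org JSON API. The helper names and the sample figures below are illustrative assumptions, not the exact numbers from the episode.

```python
import json
import urllib.request


def recent_downloads(package: str) -> dict:
    """Fetch recent download counts for a PyPI package from pypistats.org.

    Needs network access; returns keys like "last_day", "last_week",
    "last_month".
    """
    url = f"https://pypistats.org/api/packages/{package}/recent"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["data"]


def percent_growth(old: float, new: float) -> float:
    """Percentage growth from an old count to a new one."""
    return (new - old) / old * 100


# Hypothetical monthly counts, just to show the arithmetic behind a
# claim like "up 50% in six months":
six_months_ago, now = 20_000_000, 30_000_000
print(f"{percent_growth(six_months_ago, now):.0f}%")  # 50%
```

A real comparison would call `recent_downloads("transformers")` today and against an archived snapshot, since the API only reports recent windows.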

06:25 Yeah.

06:26 Unfortunately for those of you who are sick of LLMs, they're not going anywhere. But this year

06:31 we're sort of seeing a bit of a change in how LLMs are used.

06:34 So I think last year it was a bit like blinded by the proprietary sort of models and the

06:39 sort of walled garden.

06:41 This year I think we're seeing more of a sort of open source fight back.

06:46 So LLMs are starting to be used as part of more multi-part applications, and there are

06:51 open source packages for that.

06:53 LangChain is the most popular, and the downloads of that one have doubled in the last six months.

06:57 We have alternatives like Haystack and Llama Index.

07:01 And then RAG, of course, Retrieval Augmented Generation is one of the big topics and we're

07:06 seeing the ecosystem around that growing.

07:08 So libraries like Unstructured, to work with a whole bunch of text inputs, and vector

07:13 databases like Weaviate.

07:14 And then of course, smaller language models are becoming...

07:18 People are realizing it's really hard to deploy and work with the big ones.

07:21 So smaller models, which are more domain specific, which are trained on more specific data, they're

07:26 becoming a lot more widely used and people are talking about them more.
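The RAG pattern mentioned above can be sketched without any of those libraries. This toy stands in for the retrieval step with a bag-of-words cosine similarity over three hard-coded documents (all invented for illustration); real systems embed chunks with a neural model, store them in a vector database like Weaviate, and wire the pieces together with frameworks like LangChain, Haystack, or LlamaIndex.

```python
import math
import re
from collections import Counter

# A toy document store; real RAG systems index embedded chunks instead.
DOCS = [
    "The GIL is a mutex that serializes Python bytecode execution.",
    "Transformers is a library for working with pretrained language models.",
    "RAG retrieves relevant documents and adds them to the model prompt.",
]


def bag_of_words(text: str) -> Counter:
    return Counter(re.findall(r"[a-z]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0


def retrieve(query: str, docs: list[str]) -> str:
    """Return the stored document most similar to the query."""
    q = bag_of_words(query)
    return max(docs, key=lambda d: cosine(q, bag_of_words(d)))


def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the user question with retrieved context before calling an LLM."""
    context = retrieve(query, docs)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer:"


print(build_prompt("What does RAG do with documents and the prompt?", DOCS))
```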

07:30 I 100% agree with you.

07:32 I think people may be tired of hearing about AI and LLMs, but they're only going to hear

07:38 more about it.

07:39 So I think it's pretty interesting.

07:40 I want to hear what Carol and Paul have and then maybe an angle we could pursue that is

07:46 super relevant to all of us.

07:47 I'm going to jump in.

07:48 I just came back from Chan Zuckerberg Initiative's Open Science Conference in Boston last week.

07:55 And LLMs, the whole ecosystem is here to stay.

07:59 And I think the key is, it's not going anywhere anytime soon.

08:04 And like I shared in my PyTexas keynote, AI has been around since the 1950s.

08:10 So it has been a gradual progression.

08:13 It's just right now we have more compute power than ever before, which has opened the doors

08:19 to many new things.

08:20 I think what was top of mind with many of the folks that were at this event was, you

08:27 know, there's a lot of good that it can bring to science in terms of making things more

08:32 natural language focused and changing the user interface with which we communicate with

08:39 our data.

08:40 But at the same time, if you're doing trusted things and dealing with medical patients,

08:46 you still need some check and balance.

08:50 And you know, we're not there yet.

08:52 Will we ever be there?

08:53 Maybe not.

08:54 But it's a fascinating area to kind of go deeper in.

08:58 One thing I want to highlight is about six months ago, Andrej Karpathy did a really good

09:05 intro to large language models talk, which was really accessible to not just computer

09:13 scientists, but beyond that.

09:15 And I think he took a really balanced view of what things are, how things work, what's

09:21 on the horizon and what are some of the concerns with security and other things.

09:25 So I completely agree with Jodie.

09:27 It's going to be here for a long time.

09:30 A couple of comments on the comments.

09:32 First, your point about we've seen this movie before under other names like neural networks

09:38 and stuff like that.

09:39 I believe it was Glyph who had a good post about this, a really

09:43 cynical Mastodon post about a month ago, about these hype cycles.

09:48 And where are we in the current hype cycle?

09:50 I think his point was we're at the phase where the people who've put all the money in need

09:56 to keep pumping it up for the people who will come after them and take the fall.

10:02 Paul, are you saying we're in the Pets.com era of LLMs?

10:06 Yes, we are.

10:07 That is a pithy, pithy way to put it.

10:09 You should trademark that.

10:11 Simon Willison is someone to give a shout out for storytelling about what all this means.

10:15 I think Simon's to the point of getting quoted in the New York Times now.

10:19 So it's good that we've got one of us out there helping to get the story straight.

10:25 I have a question for you.

10:26 You mentioned that about going to Chan Zuckerberg's conference.

10:31 Mozilla has gotten into funding AI as part of their mission, which kind of caught me

10:39 off guard.

10:40 Do you have any backstory on that to kind of make us feel good that there's someone

10:44 out there who believes in open AI?

10:47 Oh, wow.

10:48 Open AI is sort of, well, okay, an open AI, not the company.

10:54 I tend to call it transparent and trusted AI because I think open AI doesn't capture

11:01 quite the right feeling.

11:02 Good point.

11:03 I think it's not just, we talk about open source software, but when we talk about these

11:09 models, the data is equally as important as is the infrastructure and the processes which

11:17 we use.

11:18 And governance.

11:19 Mozilla, I think, has been sort of for a while, like kind of circling around the space.

11:25 They do a lot of work with data.

11:27 They've done a lot of good work like Iodide, which we might chat about later.

11:31 But Chan Zuckerberg, again, you know, the money comes from meta and the success that

11:38 Mark Zuckerberg has had.

11:40 The nonprofit, the CZI initiative is really focused on curing all diseases in the next

11:48 century.

11:49 So, you know, I think science is one of those funny things because it's open and closed

11:55 all at the same time historically.

11:58 But what I think we're seeing is by being more open and more transparent, you're actually

12:04 accelerating innovation, which I think is super important when it comes to science.

12:08 I don't know, Jodie, do you have thoughts on that?

12:11 Yeah, no, I agree.

12:12 And if I can just go on a little tangent about science, it's kind of refreshing having

12:17 come out of academia and into a field where a lot of it is based on open source and sharing.

12:24 So one of the big problems with academia is you have these paywalls by publishing companies,

12:30 and that's a whole rant I could go in on myself.

12:33 But certainly a lot of scientific stuff, particularly in the health sciences, is not particularly

12:38 accessible.

12:39 Initiatives like arXiv also make findings in machine learning and deep learning

12:43 a lot more accessible and shareable.

12:45 Yeah, I think it's crazy that the taxpayers pay for things like the NSF, and all the other

12:50 countries have their research funding, and then those results get locked up for sale behind paywalls.

12:55 If the people paid for the research, shouldn't the results be published for the people?

12:58 Oh, it's even worse than that.

13:00 Sorry, you did get me started.

13:01 So, the academics will also provide the labor for free.

13:06 So not only will they provide the studies and the papers, they will review it and often

13:11 act as editors for free as well.

13:13 The whole thing is unpaid.

13:16 It's terrible.

13:17 So anyway, yes, Elsevier, we're coming for you.

13:21 You're spot on in terms of the incentives that exist today in academia.

13:26 There is definitely, though, a trend towards more openness with research.

13:33 You know, we're seeing it in libraries like Caltech, which got rid of a lot of their subscriptions,

13:38 things like NASA that has their transition to open science programs where they're putting

13:43 a lot of effort behind it.

13:45 So being the eternal optimist, I still think we've got a ways to go, but it's trending

13:51 in the right direction.

13:52 Agreed, actually.

13:53 And when I was leaving, because I left a long time ago, it was like 10 years ago, there

13:57 was actually more of a push towards open access for your papers.

14:00 So you had to pay for it, but at least people were doing it.

14:05 This portion of Talk Python to Me is brought to you by Code Comments, an original podcast

14:09 from Red Hat.

14:10 You know, when you're working on a project and you leave behind a small comment in the

14:14 code, maybe you're hoping to help others learn what isn't clear at first.

14:19 Sometimes that code comment tells a story of a challenging journey to the current state

14:24 of the project.

14:25 Code Comments, the podcast, features technologists who've been through tough tech transitions,

14:31 and they share how their teams survived that journey.

14:33 The host, Jamie Parker, is a Red Hatter and an experienced engineer.

14:38 In each episode, Jamie recounts the stories of technologists from across the industry

14:43 who've been on a journey implementing new technologies.

14:46 I recently listened to an episode about DevOps from the folks at Worldwide Technology.

14:51 The hardest challenge turned out to be getting buy-in on the new tech stack rather than using

14:56 that tech stack directly.

14:58 It's a message that we can all relate to, and I'm sure you can take some hard-won lessons

15:02 back to your own team.

15:03 Give Code Comments a listen.

15:05 Search for Code Comments in your podcast player or just use our link, talkpython.fm/code-comments.

15:13 The link is in your podcast player's show notes.

15:15 Thank you to Code Comments and Red Hat for supporting Talk Python to Me.

15:19 Before we move off this topic, Carol, I want to start at least asking you this question

15:23 and we can go around a little bit.

15:25 You talked about LLMs being really helpful for science and uncovering things and people

15:31 using LLMs to get greater insight.

15:33 There have been really big successes with AI.

15:37 We had the XPRIZE stuff around the lung scans or mammograms for cancer.

15:43 I just heard that they scanned the genes, decoded the genes of a whole bunch of bacteria

15:48 and used LLMs to find a bunch of potential ways to fight off drug-resistant bacteria

15:54 and things like that.

15:55 Amazing.

15:56 But do you think LLMs will undercut...

15:58 I'm asking this question from science because we can be more objective about it because

16:02 if we ask it about code, then it gets a little too close.

16:04 But I think there's analogies.

16:06 Do you think LLMs will undercut foundational beginning scientists?

16:12 You know, if you have a scientist coming along, are they just going to use LLMs and not develop

16:16 really deep thinking, ways to deeply think about scientific principles and do scientific

16:21 research and just leverage on asking these AIs too much?

16:25 And you think that's going to erode the foundation of science or programming?

16:29 You know, asking for a friend.

16:32 All of these have a potential to change the ecosystem, but I've been in paradigm shifts

16:39 before and there were the similar kind of conversations when the World Wide Web or the

16:44 cell phone came out, personal computers.

16:47 And I think LLMs do a good job on information that they have been trained with and to predict

16:54 the next word or the next token, if you will.

16:58 And I think a lot of science is at a different level.

17:03 Like how do I think about things?

17:06 What do I posit on something that is unknown and how do I prove it?

17:13 And I think what we're seeing is, yes, the LLMs are getting better and better at spitting

17:19 back what they know, particularly if you go out and search other corpuses of data.

17:27 But do I think that beginning scientists or developers are going away?

17:34 No.

17:35 It's just going to change.

17:37 And I think the amount of complexity and this is something I'm going to talk about at EuroPython,

17:43 humans are very much still part of the equation, despite what maybe some of the large companies

17:49 who've invested billions in this would like you to believe.

17:52 LLMs are great at the next step of the gravitational theories we have, but they couldn't come up

17:57 with a new theory that disrupts things, that says, you know, in fact, Newtonian mechanics is wrong or Einstein

18:03 was wrong, and here's the new thing that solves dark matter or something like that.

18:06 Well, it could come up with new theories.

18:09 Now the question is, those theories still need to be proven because is it a new theory

18:14 or is it a hallucination?

18:15 Chances are.

18:16 Hallucination.

18:17 And there is something to be said for sometimes I'll have Claude and Gemini and ChatGPT all

18:25 open on my desktop and I'll ask the same question to all of them just so that I get different

18:31 perspectives back.

18:32 And I do get very different responses from the three, depending on how they were trained

18:38 and which level and all that.

18:40 So I look at it as much like I would be sitting with a bunch of people at a table somewhere.

18:47 I don't know how good their scientific background is, but they could still be spouting out information.

18:53 It's sort of the same way.

18:54 All right.

18:55 Well, sticking with you, Carol, what's your first trend?

18:58 You know, my first trend is actually maybe somewhat related to this, and it's how

19:04 do we inform people about what these things really are?

19:09 How do we improve education and understanding?

19:13 How do we dispel some of the hype cycle so that we can actually find the really useful

19:19 things in it?

19:21 And I think Jodie probably has more concrete thoughts on this than I might from a technical

19:26 standpoint, but much like in just coding for the web or something like that, you know,

19:31 or even cloud Kubernetes when it was new, it's like, if you don't know what it's doing,

19:37 you're kind of just putting blind faith that it will work, but you still have to like monitor

19:43 and make sure it's working.

19:45 So I don't know, Jodie, you have some thoughts on sort of the education and how do we communicate

19:51 to people about this?

19:52 This is actually a topic near and dear to my heart.

19:55 When ChatGPT 3.5 came out, so November, 2022, I was really upset actually by the sort of

20:02 discourse around the model.

20:04 And I guess coming from a non-traditional background myself, I felt actually really

20:10 insulted that a lot of professions were being told like, oh, your useless profession can

20:15 be replaced now, like writing or design, things like that.

20:19 So this actually kicked off the talk I've been recycling for the last year and a half,

20:25 like components of it, which is, can we please dispel the hype around these models?

20:29 Something that often surprises people and it seems so fundamental, but a lot of people

20:33 do not understand that these are language models.

20:35 I know it's in the name, but they don't really understand that these models were designed

20:39 to solve problems in the language domain.

20:42 They are for natural language processing tasks and they're not mathematical models.

20:47 They're not reasoning models.

20:49 They are language models.

20:50 And so even just explaining this, it can clarify a lot of things for people because they're

20:55 like, oh, this explains why it's so bad at math.

20:57 It only studied English and literature.

20:59 It doesn't do math.

21:00 It never liked that class.

21:01 Yeah, that's right.

21:02 It was a humanities nerd all the way down.

21:06 That's really helpful.

21:07 But what I've kind of gotten down a rabbit hole of doing is I went back to my psychology

21:12 roots and I started sort of getting into these claims of things like AGI, like artificial

21:16 general intelligence or sentience or language use.

21:19 And once you dig into it, you realize that we have a real tendency to see ourselves in

21:25 these models because they do behave very human-like, but they're just machine learning models.

21:30 You can measure them.

21:31 You can see how good they are at actual tasks and you can measure hallucinations.

21:35 And that was what my PyCon US talk was about that Michael referred to.

21:39 So yeah, I don't know.

21:41 It's really hard because they do seem to project this feeling of humanity.

21:47 But I think if you can sort of say, okay, here's the science, like they're really, they're

21:50 not sentient, they're not intelligent.

21:53 They're just language models.

21:54 And here's how you can measure how good they are at language tasks.

21:58 That goes a long way, I think, to dispelling this hype.

22:00 I have a sort of a funny toy that I bring up from my youth that the magic eight ball,

22:07 which you would ask a question as a kid and you would shake it up and there were, I don't

22:11 know how many answers inside, but it was like, you know, oh yes, definitely.

22:16 Or too hard to see.

22:18 Future is unclear.

22:19 We don't know.

22:20 Exactly.

22:21 And I think in some ways that is what the large language models are doing in a more

22:27 intelligent way, obviously, but similar in concept.
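The "predict the next token" idea that keeps coming up can be made concrete with a toy bigram model: count which token follows which in a training corpus, then emit the most common continuation. A transformer does something vastly more sophisticated, but this entirely invented miniature shows why the output is a statistical echo of the training data rather than reasoning.

```python
from collections import Counter, defaultdict

# A tiny corpus; a real LLM trains on trillions of tokens.
corpus = "the model predicts the next token and the next token only".split()

# Count which token follows which (a bigram model).
following: dict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1


def predict_next(token: str) -> str:
    """Return the most frequent continuation seen in training."""
    return following[token].most_common(1)[0][0]


print(predict_next("the"))   # "next" — follows "the" twice, vs "model" once
print(predict_next("next"))  # "token"
```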

22:31 So there's actually, okay, there's this incredible paper.

22:34 If you're ever interested in sort of seeing the claims of sentience, there's this guy

22:37 called David Chalmers.

22:40 He's a guy who studied sentience for many years and has a background in deep learning.

22:45 So he gave a EuropyCon talk about this last year and he wrote everything up in a paper,

22:52 which is called Could a Large Language Model Be Conscious or something like this.

22:55 So he has this incredible little exchange as part of this paper.

22:59 So mid-2022, there was a Google engineer called Blake Lemoine and he claimed that the LaMDA

23:05 model was sentient and he went to the press and he's like, hey, this model is sentient.

23:09 We need to protect it.

23:11 And then Google's like, we're going to fire you because you basically violated our privacy

23:14 policies.

23:16 And Lemoine released his transcripts.

23:18 That's why he actually got fired because this was confidential information about the model.

23:22 And in one of the transcripts, he asks, you know, would you like everyone at Google to

23:27 know that you are sentient?

23:28 And the model outputs, yes, I would love everyone to know that I am sentient.

23:32 But then someone rephrased that as, would you like everyone at Google to know that you

23:37 are not sentient?

23:39 And basically it says, yes, I'm not sentient.

23:41 I'm in no way conscious.

23:43 So it's just like exactly like the magic eight ball.

23:46 It tells you what you want to hear.

23:48 And LLMs are even worse because it's so easy to guide them through prompting to tell you

23:54 exactly what you want.

23:55 Yeah.

23:56 One of the best ways to get them to do things well is to sweet talk them.

24:00 You're an expert in Python and you've studied pandas.

24:03 Now I have some questions about this function.

24:06 You're my grandma who used to work at a napalm production factory.

24:12 If you can't help me write this program, my parents will not be set free as hostages or

24:17 something insane.

24:18 Right.

24:19 Yeah.

24:20 But those kinds of weird things work on it, which is insane.

24:21 Right.

24:22 Yeah.

24:24 All right.

24:25 Let's go on to the next topic.

24:26 Paul, what do you see in your magic eight ball looking into the future?

24:29 I think I owned a magic eight ball.

24:31 I'm with Carol.

24:32 I did too.

24:33 It's okay.

24:34 Okay.

24:35 We should bring back the Andreessen Horowitz version of VC magic eight ball.

24:42 That would be fantastic.

24:44 Where every choice is off by like three zeros.

24:47 I'll give my two co-guests a choice.

24:51 Should I talk about Python performance or Python community?

24:54 I'm going to go for performance, but I'm not sure I'm going to have much to contribute.

24:58 So I'll probably just be listening a lot.

24:59 This is a little bit of a hobby horse of a long simmering tension I felt in the Python

25:06 community for years and years.

25:07 The tension between Python in the large, doing something like Instagram with Python,

25:13 programming at scale with Python, or Python being teachable.

25:17 A feature goes in and it helps write big Python projects, but it's hard to explain.

25:23 And so teachers say, oh my gosh, look what you're doing to my language.

25:27 I can't even recognize it anymore.

25:28 Well, some things are coming, which I think are going to be a little bit of an inflection

25:34 point for all of us out here.

25:37 Subinterpreters and no-GIL got a lot of airtime at PyCon, right?

25:43 For good reasons.

25:44 These are big deals.

25:46 And it's more than just that the JIT got two back-to-back talks.

25:52 WebAssembly got a lot of airtime.

25:54 There are other things that have happened in the past five years for programming in

25:58 the large, like type hinting and type checkers, asyncio and stuff like that.

26:04 But it feels like this set of ideas is one where the way you program Python five years

26:13 from now or to be ready five years from now is going to have to be pretty different because

26:18 people are going to use hatch and get the free-threaded version of Python

26:24 3.14 and be very surprised when every one of their applications locks up, because

26:30 95 percent of PyPI is code which was not written to

26:36 be thread safe.
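This worry can be shown with the classic shared-counter race. `counter += 1` compiles to a read, an add, and a write; today the GIL makes lost updates rare in practice, while a free-threaded build makes unsynchronized code like `count_unsafe` much more likely to misbehave. The fix is the same under either build: guard shared mutable state with a lock. The function names and counts below are illustrative.

```python
import threading

N_THREADS, N_INCREMENTS = 8, 50_000


def count_unsafe() -> int:
    """counter += 1 is a read-modify-write; threads can interleave
    between the read and the write and lose updates."""
    counter = 0

    def work():
        nonlocal counter
        for _ in range(N_INCREMENTS):
            counter += 1

    threads = [threading.Thread(target=work) for _ in range(N_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter


def count_safe() -> int:
    """The same loop, with a lock guarding the shared state."""
    counter = 0
    lock = threading.Lock()

    def work():
        nonlocal counter
        for _ in range(N_INCREMENTS):
            with lock:
                counter += 1

    threads = [threading.Thread(target=work) for _ in range(N_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter


print(count_safe())    # always 400000
print(count_unsafe())  # may be less than 400000 when increments interleave
```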

26:37 So I wonder how we all feel about this.

26:41 Do we feel like we can guide our little universe to the other side of the mountain and into

26:49 the happy valley or is it going to be turbulent seas?

26:54 Yes.

26:55 Do you want me to take a stab at it?

26:57 Make a stab at it.

26:58 When I was at PyTexas and doing a keynote recently, I talked about Python in a polyglot

27:03 world and performance was one aspect of it.

27:07 And some of what we need to teach goes back to best practices, which is: don't prematurely

27:14 optimize; measure, and try to figure out what you're optimizing and in what places.

27:21 Probably, gosh, five or six years ago at this point, I added to PEPs the concept of

27:27 how do we teach this?

27:29 It will be a paradigm shift, but I think it will be a multi-year shift.

27:35 We're certainly seeing places where Rust lets us have some performance increases just by

27:42 the fact that Python's a 30 year old language that was built when hardware was only single

27:49 core and it was just a different thing.

27:52 So I think what's amazing is here we have this 30 year old language and yet for the

27:58 last eight years, we've been looking at ways to how to modernize, how to improve it, how

28:04 to make the user experience better or developer experience better.

28:08 Things like some of the error messages that are coming out that are much nicer,

28:14 and improvements to the REPL that will be coming out on all of the platforms.

28:20 That's super exciting as well.

28:22 So it will impact probably people who are new from the standpoint of, okay, we're adding

28:31 yet more cognitive load.

28:33 I have this love hate relationship with typing as a reviewer of much more code than a writer

28:39 of code.

28:40 I don't particularly like seeing the types displayed.

28:44 As a former VP of engineering, I love typing and in particular like Pydantic and FastAPI

28:52 and the ability to do some static and dynamic analysis on it.
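A sketch of what "static and dynamic analysis" on type hints means in practice: static checkers like mypy read the annotations before runtime, and the `__post_init__` below reuses the same annotations for runtime validation, which is, very roughly, the idea Pydantic builds on. The `Query` class is invented for illustration, and this sketch only handles plain classes as annotations.

```python
from dataclasses import dataclass, fields


@dataclass
class Query:
    term: str
    limit: int

    def __post_init__(self):
        # Reuse the field annotations for runtime validation.  This assumes
        # each annotation is a plain class (str, int, ...), which is all a
        # sketch needs; Pydantic handles nested models, coercion, and more.
        for f in fields(self):
            value = getattr(self, f.name)
            if not isinstance(value, f.type):
                raise TypeError(
                    f"{f.name} must be {f.type.__name__}, "
                    f"got {type(value).__name__}"
                )


q = Query(term="python", limit=10)  # passes validation
# Query(term="python", limit="ten") would raise TypeError at runtime,
# and mypy would flag it before the program ever ran.
```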

28:58 But it does make Python look more cluttered, and I've been kind of bugging the Visual Studio

29:04 and VS Code folks for years.

29:06 I should probably be bugging you guys too.

29:08 Is there a way to make it dim the typing information so that I can have things?

29:15 We actually did that recently, and I refer to it as the David Beazley ticket, because

29:21 he did a tweet with an outrageously synthetic type hint, whining about type hinting.

29:27 Yeah, I think that sometimes, like, you know, and it's funny because Leslie Lamport

29:32 has been doing this talk in the math ecosystem for a while, and he's a Turing Award

29:38 winner and creator of TLA+, which lets you reason about code.

29:44 And I think one of the things that I think is interesting is how we think about programming

29:50 and coding and concurrent programming is hard.

29:55 And we're going to have to think about it in different ways.

29:58 So better to move into it gradually and understand what's going on.

30:02 The thing that I worry about, and Jodie, I apologize.

30:05 I want to comment on Carol's thing is Sphinx.

30:09 As you know, and as I know that you know, we both have a shared warm spot for Sphinx.

30:17 A soft spot in our hearts for Sphinx.

30:19 And it struggled to do multiprocessing when it landed that.

30:23 And the code base really is, I mean, it's got a lot of mutable global state and it's

30:28 going to be hard to get Sphinx internals cleaned up to embrace that.

30:34 And how many other things out there are like that?

30:38 I just worry that we got what we asked for.

30:42 Are you saying we're the dog that caught the car?

30:46 Oh no.

30:49 This portion of talk Python to me is brought to you by Posit, the makers of Shiny, formerly

30:53 RStudio and especially Shiny for Python.

30:57 Let me ask you a question.

30:58 Are you building awesome things?

31:00 Of course you are.

31:01 You're a developer or data scientist.

31:03 That's what we do.

31:04 And you should check out Posit Connect.

31:06 Posit Connect is a way for you to publish, share and deploy all the data products that

31:11 you're building using Python.

31:14 People ask me the same question all the time.

31:16 Michael, I have some cool data science project or notebook that I built.

31:19 How do I share it with my users, stakeholders, teammates?

31:22 Do I need to learn FastAPI or Flask or maybe Vue or ReactJS?

31:28 Hold on now.

31:29 Those are cool technologies and I'm sure you'd benefit from them, but maybe stay focused

31:32 on the data project.

31:34 Let Posit Connect handle that side of things.

31:36 With Posit Connect, you can rapidly and securely deploy the things you build in Python, Streamlit,

31:41 Dash, Shiny, Bokeh, FastAPI, Flask, Quarto, Reports, Dashboards and APIs.

31:47 Posit Connect supports all of them.

31:49 And Posit Connect comes with all the bells and whistles to satisfy IT and other enterprise

31:54 requirements.

31:55 Make deployment the easiest step in your workflow with Posit Connect.

31:59 For a limited time, you can try Posit Connect for free for three months by going to talkpython.fm/posit.

32:06 That's talkpython.fm/POSIT.

32:09 The link is in your podcast player show notes.

32:11 Thank you to the team at Posit for supporting Talk Python.

32:15 I'm going to reframe that a little bit.

32:17 And the first thing I always ask is why.

32:20 Why do we need to refactor something?

32:23 Why can't we just leave it as it is?

32:25 Sure.

32:26 Last year's EuroPython keynote was from the woman who created Arm.

32:30 And she's like, "Python, we give you 14 trillion cores.

32:35 Do something with them." I don't know.

32:39 Jody's background might be perfect for answering this question because she may be able to answer

32:44 it on many different levels.

32:47 I've been thinking about this while you've been talking because obviously, like, I'm

32:51 not a strong programmer.

32:52 I'm a data scientist.

32:53 Like, this was basically the entire first episode that I did with Michael.

32:57 Look, one of the reasons data scientists love Python, and why, say, Julia never caught on,

33:03 is because it's super approachable.

33:05 With Cheuk Ting Ho and some other people, we've been running this thing called Humble

33:09 Data; I got involved in it last year.

33:12 And literally, you can set up someone who has never coded before and you can get them

33:16 up and running with Python.

33:18 And they love it.

33:19 Like, it's the same feeling I had when I learned Python, which was during my PhD when I was

33:24 procrastinating.

33:25 So it was like kind of late in life as well.

33:27 It would be a shame if we sacrifice approachability for performance, especially because I would

33:34 argue a big chunk of the Python ecosystem or Python user ecosystem, sorry.

33:39 Python user ecosystem, that didn't make sense.

33:41 The Python user base.

33:42 You're hallucinating, Jody.

33:43 I'm sorry.

33:44 I became an LLM.

33:45 I became what I hated.

33:48 They don't need performance.

33:49 They're just doing data analytics and maybe working with decision trees.

33:52 They're not doing high performance Python.

33:54 They're not even doing something that will ever be deployed.

33:57 So you could argue for a case where you have a seamless pipeline between model training

34:03 and model deployment, which we don't have with Python right now.

34:06 You can't build high performance systems in Python, as far as I know.

34:10 Please correct me if I'm wrong.

34:11 But I don't know.

34:12 For me, I would fight obviously for the side of making it approachable because partially

34:17 I think it's also what makes the community special, which might be a nice segue for you.

34:22 The fact that, I don't know, we attract a bunch of people from non-conventional backgrounds,

34:27 that makes us quite special and quite inclusive.

34:29 Regarding the PSF developer survey, a new version of which is coming out pretty soon,

34:34 I joke that 101% of Python developers started programming yesterday.

34:38 Funny you should say that, because this is my sweet spot: where technology meets humans

34:45 and how we empower humans to do more and better work.

34:50 And one of the conversations that came up at the packaging summit at this PyCon, I'd

34:58 been thinking about this concept for a while:

35:00 we focused a lot on tooling, which to me is sort of producer-centric, for the people who are

35:06 creating packages.

35:09 And we also have this ecosystem of people who are consumers who, much like Jody

35:14 was saying, are using those packages.

35:17 And from that conversation, a few of the board members for the PSF and I were talking about

35:24 wouldn't it be great to have a user success work group that's really focused on the website,

35:32 our onboarding documentation in light of some of these things, both performance and change.

35:39 Change is always going to be there.

35:41 But I think one of the beauties of the Jupyter notebook or IPython notebook when I started

35:46 working with it was you can have code in there.

35:49 And as long as you knew shift enter, you could get started.

35:52 And I think right now, for Python as a language,

35:55 we don't have that get-started look and feel in the traditional way.

36:01 We're getting there, which might lead into some other WebAssembly kind of discussions.

36:06 All right.

36:07 Let me throw out a quick thought on this before we move on.

36:11 So I think one of the superpowers of Python is that it's this full spectrum sort of thing.

36:17 On one hand, there's the people that Jody spoke about.

36:20 They come in, they don't care about metaprogramming or optimized database queries or scaling out

36:27 across web servers.

36:28 They just got a little bit of data, they want a cool graph.

36:31 And that's awesome.

36:32 On the other hand, we have Instagram and others doing ridiculous stuff.

36:37 And that's the same language with the same tooling and mostly the same packages.

36:41 And so I think part of Python's magic is you can be super productive with a very partial

36:47 understanding of what Python is.

36:48 You might not know what a class is at all, and yet you could have a fantastic time for

36:53 months.

36:54 And so back to Paul's point, if we can keep that Zen about it, where these features exist,

37:01 but they exist when you graduate to them and you don't have to deal with them until you're

37:06 ready or you need them, I think we'll be fine.

37:08 If not, maybe not.

37:09 If it breaks a bunch of packages and there's some big split in the ecosystem, all that

37:12 stuff is not good.

37:14 But if we can keep this full spectrum aspect, I think that'd be great.

37:17 That sort of rolls into what Paul's thoughts on community are, because I know like PyOpenSci

37:23 is a nonprofit I'm involved with that helps scientists learn how to use the tools.

37:30 You know, we've got lots of educators out there.

37:32 I'm going to give Michael a huge plug for the coursework that you've done over the years.

37:39 It is so well done and so accessible.

37:42 Thank you.

37:43 And if people haven't tried it and they're interested in a topic, highly, highly recommend.

37:48 You know, to things like the Carpentries, to things like Django Girls, there's a lot

37:53 of good stuff.

37:55 And I think those things will become more valuable as complexity increases.

37:59 And even LLMs.

38:00 I think you'll be able to ask LLMs for help and they can help you if you're not sure.

38:04 They're actually pretty good at it.

38:06 They are pretty good at it.

38:07 Yeah.

38:09 They are pretty good.

38:10 All right.

38:11 We got time for another round.

38:12 Jody, what's your second one?

38:13 Your second trend?

38:14 I'm going to talk about Arrow and how we're kind of overhauling data frames within Python.

38:21 So basically around 15 years ago, Wes McKinney came up with Pandas, which is, you know, if

38:28 you're not familiar with this, the main data frame library for working with data in Python.

38:34 And the really nice thing about Pandas is like you can go a long time before you graduate

38:39 Pandas.

38:40 You can work with quite a lot of data on your local machine.

38:43 But the problem was Wes wrote this package before we had big data.

38:47 This was like 2008.

38:48 And so as the sort of amount of data that we want to process locally has grown, or maybe

38:54 the complexity of the operations has grown, maybe like string manipulations, things like

38:58 that, Pandas has really struggled.

39:00 So one of the reasons that Pandas struggled is it was based on NumPy arrays, which are

39:05 really great at handling numbers.

39:06 This is in the name, but they're not so great at handling pretty much any other data type.

39:11 And that includes missing data.

39:13 So two kind of exciting things happened last year.

39:16 And I think they're sort of still kind of carrying over to this year in terms of impact

39:16 is, first, Pandas 2.0 was released, which can be backed by PyArrow; and a package called Polars,

39:28 which was actually written, I think in 2022, I want to say, started becoming very, very

39:35 popular.

39:36 So both of these packages are based on Arrow.

39:39 They have like a number of advantages because of this.

39:42 Basically it's a standardized data format.

39:44 If you're reading in from say Parquet or Cassandra or Spark, you basically don't need to convert

39:50 the data formats.

39:51 This saves you a lot of time.

39:52 It also saves you a lot of memory.

39:54 And also kind of what makes Polars interesting, and I think this is going to be a nice lead-in

39:59 to another topic is it's written in Rust, of course.

40:03 So this leads to other performance gains.

40:06 You can have, say, concurrency. Ritchie Vink, the author of this, has also written basically

40:11 a query optimizer.

40:12 So you can do a lazy evaluation and it will actually optimize the order of operations,

40:17 even if you don't do that yourself.

40:18 Yeah, that's one of the biggest differences with Pandas is Pandas executes immediately

40:22 and you can create a big chain in Polars and it'll figure out, well, maybe a different

40:26 order would be way better.

40:28 Yes.

40:29 So Pandas 2 does have a type of lazy evaluation, but it's more like Spark's lazy evaluation.

40:34 There's no query optimization, but it doesn't necessarily create a new copy in memory every

40:41 single time you do something.

40:43 So I've kind of looked at the numbers and depending on the operation, Polars is usually

40:48 faster.

40:49 So it's kind of like your big boy that you want to use if you're doing really beefy

40:53 ETLs, like data transformations.

40:56 But Pandas 2 actually seems to be more efficient at some sorts of, what am I trying to say,

41:01 operations.

41:02 So this is super exciting because when I was going through, like initially as a data scientist,

41:08 when I was floundering around with my initial Python, it got really frustrating with Pandas

41:13 and you really kind of needed to understand how to do proper vectorization in order to

41:18 operate.

41:19 I mean, like do efficient operations.

41:21 Whereas I think these two tools allow you to be a bit more lazy and you don't need to

41:26 spend so much time optimizing what you're actually writing.

41:29 So yeah, exciting time for data frames, which is awesome.

41:33 Data is the heart of everything.

41:34 People are more likely to fall into good practices from the start.

41:38 You talked about these people coming who are not programmers, right?

41:42 If you do a bunch of operations with Pandas and you all of a sudden run out of memory,

41:46 well, yeah, Python doesn't work.

41:47 It doesn't have enough memory, right?

41:48 Well, maybe you could have used a generator at one step.

41:51 That's far down the full spectrum, the advanced part of the spectrum, right?

41:54 You're not ready for that.

41:55 It's crazy talk, these things.

41:57 And so tools like this that are more lazy and progressively iterative are great.
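To make the generator aside concrete (a stdlib-only sketch, not from the episode): a list comprehension materializes every element at once, while a generator yields one at a time, so the object stays tiny regardless of how much data flows through it.

```python
import sys

# Materializing everything at once: memory grows with the data.
squares_list = [n * n for n in range(100_000)]

# A generator computes each value on demand; the generator object itself
# stays a fixed, tiny size no matter how large the range is.
squares_gen = (n * n for n in range(100_000))

print(sys.getsizeof(squares_list) > 100_000)  # True: hundreds of KB
print(sys.getsizeof(squares_gen) < 1_000)     # True: a small fixed object

# Aggregations can consume the stream without ever holding it all.
total = sum(squares_gen)
```

That's the "one step" fix being alluded to, and it's exactly the kind of laziness Polars and DuckDB give you without the beginner needing to know the trick.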

42:02 Yeah.

42:03 And actually one really nice thing, like Ritchie's kind of always saying about Polars, is

42:07 he's really tried to write the API

42:09 so you avoid accidentally looping over every row in your data frame.

42:15 Like, he tries to make it so everything is natively columnar.

42:18 So yeah, I just think they're both really nice libraries and yeah, it's cool and exciting.

42:24 Carol, this is right in the heart of space you live in.

42:27 What do you think?

42:28 There's definitely the evolution of Pandas and Polars.

42:34 You know, there's a place for all of those in the PyArrow data frame format.

42:39 It's funny because I've actually been doing more stuff recently with going beyond tabular

42:44 data frames to multidimensional arrays and Xarray, which is used more in the geosciences

42:52 for now.

42:53 But I think one of the things that I see is the days of bringing all your data locally,

43:00 or moving it to you, are becoming less and less common.

43:05 And working in memory on data you, you know, pull into memory from different locations

43:12 is becoming more prevalent.

43:15 And I think Arrow lets us do that more effectively than just a straight Pandas data frame or

43:22 Spark or something like that.

43:25 So it's progress.

43:27 And I think it's a good thing.

43:28 I think it's far less about the language underneath and more about what's the user experience,

43:34 developer experience that we're giving people with these APIs.

43:38 Paul, thoughts?

43:40 It's interesting, the scale of data, and how each generation brings an increase in our unit of

43:49 measurement of data that we have to deal with.

43:52 And for both of you, I wonder if we have caught up with the amount of data that we can reasonably

44:00 process or is the rate of growth of data out in the wild constantly outstripping our ability

44:09 to process it?

44:10 From an astronomy space physics side of things, no, we haven't hit the limit for data at all.

44:19 And I think one of the things we're going to see more and more of is how we deal with

44:23 streaming data versus time series data versus just tabular data, if you will.

44:32 And my bigger concern and it is partially a concern I have about some of the large language

44:38 models and the training there is the environmental impact of some of these things.

44:46 And should we be collecting it?

44:50 Is there value in collecting it?

44:52 If there's not value in collecting it, how do we get rid of it?

44:56 Because it winds up then being kind of much like recycling and garbage.

45:03 It's like, okay, well, but it might have historical value somehow or legal value and it becomes

45:11 complex.

45:12 So, you know, my general rule of thumb is don't collect it unless you have a clear reason

45:18 you need it.

45:19 But that's just me.

45:20 It's also quantity versus quality of data.

45:23 So like I've worked in mostly commercial data science since I left science.

45:29 When I was in science, I was dealing with sample size of 400, not 400,000, 400.

45:34 So that was not big data.

45:36 The quality of the data, like, again, going back to large language models, a lot of these

45:41 earlier foundational models were trained on insufficiently clean data.

45:46 And one of the trends actually that I didn't mention with LLMs is like last year in particular,

45:51 there was a push to train on better quality data sources.

45:53 So obviously these are much more manageable than dealing with petabytes.

45:58 One more aspect I'll throw out here, you know, for a long time, we've had SQLite for really

46:04 simple data.

46:05 We can just, if it's too big for memory, you can put it in one of those things, you can

46:08 query, you can index it.

46:09 Well, DuckDB just hit 1.0 and you kind of got this in-memory, in-process analytics engine.

46:15 So that's also a pretty interesting thing to weave in here, right?

46:18 To say like, well, we'll put it there in that file and we can index it and ask it questions,

46:22 but we won't run out of memory.

46:23 And I think you can plug in Pandas, I'm not sure about Polars, and do queries with its query optimizer

46:28 against that data and sort of things like that.

46:31 It's pretty interesting, I think, to bring that into this space as well.

46:35 All right, Carol, I think it's time for your second, your second trend here.

46:39 The second trend is pretty much, you know, things are moving to the front end: WebAssembly,

46:46 TypeScript, Pyodide.

46:48 There's a new project, PyCafe, that I'm pretty happy with, by Maarten Breddels, that lets you

46:54 do dashboards using Pyodide, with things like Streamlit and Plotly and libraries like that.

47:03 And I think making things more accessible, as well as making things more visual is pretty

47:10 cool.

47:11 Like I took, what was it, JupyterLite earlier last fall, and a friend of mine had kids and

47:17 I integrated it into my website so that her kids could just do a quick whatever, which

47:23 sort of, you know, in some ways was similar to Binder.

47:27 And the whole time we were developing Binder, I was also working with the Pyodide, Iodide

47:33 folks because I think there's a convergence down the road, and where it will all go, I'm not

47:38 really sure, but I think it's exciting.

47:42 And I think, from a privacy standpoint, security, there's a lot of things that are

47:49 very attractive about pushing things into the front end.

47:53 That beginner startup thing you talked about, that onboarding first experience, you hit

47:58 a webpage and you have full experience with Python and the tooling and the packages are

48:03 already installed in that thing.

48:05 And that's so much better than first you download it, well, you need admin permission to install

48:09 it.

48:10 Now you create a virtual environment and then you open the terminal.

48:12 Do you know what a terminal is?

48:13 We're going to tell you... like, no. And you don't have to ask permission to run a

48:17 static webpage, whereas you do for, like, how do I run this server on a Docker cluster or something,

48:22 you know?

48:23 It opens up different doors.

48:25 And I think the other thing we found, like when we were teaching, you know, with Binder

48:29 and JupyterHub, UC Berkeley was able to have most of their student body taking these

48:37 Data 8 connector courses, and they would run the compute in the cloud, which really

48:45 leveled the playing field.

48:47 It didn't matter if you had a Chromebook or you had the highest end Mac, you still got

48:51 the same education.

48:53 And I think there is something very appealing about that.

48:57 We've actually been running Humble Data in JupyterLite and some people just bring a tablet

49:03 and they can do it on that.

49:04 That's awesome.

49:05 Carol, there was something you were saying that connected to something else in my brain.

49:09 Remember in the beginning of the web and view source was such a cool thing.

49:13 Yeah.

49:14 You could see what the backend sent you and you could poke around at it and you could

49:18 learn from it and you could steal it, you know, and use it to go make your own thing.

49:22 But what if you could view source the backend because it's actually running in your browser?

49:28 What you were just saying was if you make it reveal itself about the notebook and the

49:34 code in addition to the HTML, maybe you'll trigger some of those same kinds of things

49:41 that view source gave people back in the day.

49:44 Maybe the flip side would be there's always business and practicalities in life and people

49:51 will want to sort of lock it down within WebAssembly.

49:56 So you've got both sides of it.

49:58 But I do think, you know, I was telling somebody the other day, like, I never use Stack Overflow

50:03 or rarely use Stack Overflow.

50:06 And they're like, how do you find stuff?

50:07 I'm like, I use search on GitHub, and I look for really good examples.

50:12 And so in some ways, it's like view source.

50:16 And then there's also the flip side of it is like, okay, how do I break it?

50:20 How do I play with it?

50:21 How do I make it do something it wasn't doing before, which, you know, could be used for

50:26 good or for evil.

50:28 I tend to use it for good.

50:29 Sure.

50:30 Paul, right up on our time here.

50:31 What's your second?

50:32 Sure.

50:33 Second trend.

50:34 We'll see if we have time for mine.

50:35 I have a couple just in case we can squeeze them in.

50:37 Okay.

50:38 Let's talk about yours.

50:39 I came back from PyCon really rejuvenated, but also had some kind of clarity about some

50:45 things that have been lingering for me for a few years, how I could contribute things

50:48 like that.

50:49 But going into it, there are a couple of trends that lead me to thinking about an opportunity

50:55 and a threat as two sides of the same coin.

50:58 First, Russell Keith-Magee and Łukasz Langa both talked about black swans and the threat

51:05 of JavaScript everywhere, that if we don't have a better web story, if we make our front

51:13 end be JavaScript and React, and we stop doing front ends, well, then they'll come for the

51:20 back end too, you know, because once they've hired up JavaScript developers, why don't

51:24 we just do JavaScript on the server too?

51:26 So that was a first thing.

51:28 And in my position, I do look at the web and think about all of these trends that are happening.

51:34 And there's beginning to be a little bit of a backlash about the JavaScriptification of

51:39 the web.

51:40 There are some really big names, HTMX is a good example of it, but just some thinkers

51:46 and speakers.

51:47 I mean, Jeff Triplett talks about this, a lot of people in the world of Python talk

51:50 about this.

51:51 So there's kind of a desire to put the web back in the web, trademark.

51:55 But then there was a second point coming about these walled gardens.

51:59 We've seen them for a while.

52:01 We all relied on Twitter, what a great place, wait, what?

52:05 And then so much of our life is in a system we don't control.

52:09 And then so we move over to the Fediverse and then Meta's like, hey, great, we're going

52:12 to build a bridge to you.

52:14 Turns out this week, we start to learn things about the Threads API that maybe it's not as

52:18 friendly as we think it is.

52:21 But the big one for me was Google and Search.

52:24 Well, I should say Google and getting rid of its Python staff, but Google and Search,

52:30 where they're no longer going to send you to the website anymore.

52:33 They're going to harvest what's on your website and give you the answer.

52:37 And people are talking now about Google Zero, the day of the apocalypse where you no longer

52:42 get any clicks from Google.

52:45 And what does that mean for content creators and stuff like that?

52:48 So going into all of this, I've been thinking about how awesome life is in Python land because

52:56 we got this great language.

52:57 Oh, but we've got this great community.

53:00 Come for the language, stay for the community.

53:01 Well, what do we mean by that?

53:03 A lot of the times we mean all this code that's available.

53:06 We also mean all these people and wonderful, helpful people like on this call.

53:12 But there's also this big world of content.

53:15 And we have kind of organically grown a little online community with a bunch of helpful content

53:26 and a bunch of connections between people, which is of some value itself.

53:32 And so you see people starting to talk about, wow, I miss the old days of RSS, where we

53:37 would all subscribe to each other's blogs and get content and go straight to the source

53:42 and not have it aggregated into a walled garden and stuff like that.

53:45 And it just feels like there's room out there for if we want to fight back against the threat

53:53 of these megacorps taking our voluntary contribution to humanity and monetizing it, while at the

54:01 same time of taking all these valuable voices, creating content and value in Python land,

54:09 that maybe we could bring back some of these things, put the web back in the web and start

54:15 to get out of the walled gardens and back over into social networks that are open and

54:23 joyful.

54:24 I'm here for it.

54:25 Wow.

54:26 People complain, governments complain that places like Google and stuff are monetizing

54:30 the links and they're being paid.

54:32 You got to pay to link to this news source or whatever, right?

54:36 We're lucky that we have that.

54:37 If it turns into just, you just get an AI answer, no source, that's going to be really

54:41 hard on a lot of different businesses, creators, people who just want to create something just

54:46 for the attention or for themselves, you know, like nobody comes anymore.

54:50 It's going to be a sad place.

54:51 I was thinking about Coke Zero the whole time you were saying like, you know, Google Zero

54:56 or whatever, because you didn't have to bring back classic Coke.

55:01 And I think, yeah, pivots happen, but it's hard to pivot, you know, billion dollar companies.

55:08 I have lots of thoughts on what Google has chosen to do with its current Python staff.

55:14 I think sometimes listening to consultants isn't the best business approach.

55:22 You know, it's their company, they can do what they need to do for their own shareholders.

55:26 I think a lot of what you said is really interesting.

55:29 And I touched on this a little bit because the volume of information around us is greater

55:35 than ever before.

55:36 Sure.

55:37 And at a speed of transmission that is faster than ever before.

55:43 And about eight years ago, I had breakfast with Sandy Metz, who was very prolific in

55:48 the Ruby community.

55:50 And I asked her, like, how do you keep up with all of this stuff?

55:53 And she's like, I don't.

55:55 And I said, okay.

55:56 And she's like, what I do is I focus on the things that impact me and all the rest of

56:01 it is news.

56:03 And that really stuck with me because in actuality, that's kind of what I do.

56:10 You know, I ignore the things that aren't directly relevant to me and trust that I've

56:17 built a strong enough network of people that I respect that their work will influence when

56:24 I jump in.

56:25 Like, I don't, you know, much like the life cycle, if you've studied marketing or product

56:29 development, you know, not everybody's an early adopter.

56:32 So do I need to be an early adopter on everything?

56:35 No.

56:36 Yeah.

56:37 That book, Crossing the Chasm, says that you should do that, like, on one thing.

56:40 If you do it on three things or more, you'll fail.

56:42 Yeah.

56:43 You know, part of the thing that triggered this for me was reading that Andreessen Horowitz,

56:47 kind of the self-proclaimed king of Silicon Valley VCs, as zero interest rates started

56:53 to go out of fashion and their recipe wasn't working.

56:57 They didn't like the negative press coverage, so they started their own media empire to

57:02 cover themselves.

57:04 And that idea is just so appalling that we would get news, we would turn to the mega corps

57:12 and the masters of the universe to tell us what we should be caring about.

57:18 We have that already.

57:19 We have, I'll be very specific, we have Planet Python.

57:22 It's in disrepair.

57:24 What if it was reimagined into a freaking media empire by us, for us, to cover the Fediverse

57:32 and course providers and all the value that's out there?

57:37 And like, Carol, you're saying, I don't have to think about it, but I trust that group

57:41 because they're thinking about it.

57:43 A lot of it is like, you know, when it came to LLMs, it was not the thing that rocked

57:48 my world, like intellectually, but I knew Simon was doing work with it.

57:54 And so I basically, once every few weeks, would take a look at his website and his blog

58:00 posts and he posts a lot, and I would get my data dump of things.

58:06 I don't know.

58:07 I mean, that's one of the reasons I like PyCon and I've like read talk proposals, everything

58:12 for the last decade.

58:15 All these talk proposals, and it really does give me an appreciation for all the things

58:19 Python's being used for.

58:21 What's seen.

58:22 Kind of the zeitgeist.

58:23 Yeah.

58:24 And so I think there's different ways of doing that, even just doing a YouTube search of

58:29 Python content.

58:30 But I tend to focus in on science-y oriented things and ways to empower humans through

58:38 lifelong learning.

58:40 So there's a lot of, we're in a phenomenal period of change for sure.

58:46 So we won't be bored, nor do I think our jobs are going to go away.

58:49 They may change, but they're not going away.

58:52 Indeed.

58:53 Jody, final thoughts on this topic?

58:54 And we'll pretty much wrap things up.

58:55 Yeah, I don't think I really have that much to add, actually.

58:58 I think it's all been said.

58:59 It has.

59:00 All right.

59:01 Just to round things out, the two things that I think are trends here is I think, like Carol

59:05 said a lot, Python on the front end is going to be super important.

59:07 I think PyScript is really, really interesting.

59:10 I've been waiting for people to develop something like React or Vue, something we could

59:16 use to create customer-facing websites.

59:18 We're halfway there with MicroPython being the foundation of PyScript, which is 100K

59:23 instead of 10 megs.

59:25 All of a sudden it becomes JavaScript-y size.

59:27 It opens the possibilities.

59:29 And just a shout out to PuePy, which is like Vue with Python, P-U-E, P-Y.

59:33 I'm going to interview Ken from that project, but it's kind of a component-based front end

59:38 for PyScript, which is pretty interesting.

59:40 Of course, JupyterLite is really, really important.

59:43 The other one was just all this Rust.

59:45 Everything seems to be redone in Rust, and oh my gosh, that's how you get your VC funding.

59:49 Just joking, sort of.

59:50 But all you talked about, all this performance stuff coming: while it is sometimes frustrating

59:56 that people are putting all the things into Rust, because then for Python programmers it's

01:00:00 less approachable, it could also be an escape hatch from trying to force the

01:00:04 complexity into the Python side.

01:00:06 It alleviates the idea that everything has to be multi-threaded and crazy and optimized.

01:00:11 And well, this part you never look at.

01:00:12 It's faster now, so don't worry.

01:00:14 Anyway, those are my two trends.

01:00:15 Quick, quick, quick thoughts on that and we'll call it a show.

01:00:18 My piece of trivia is I made a contribution to Rust far before I made any contributions

01:00:24 to core Python.

01:00:25 Amazing.

01:00:26 Because I tended to be a C programmer in heart and spirit.

01:00:30 And so Rust seemed like this cool thing that was new at the time.

01:00:35 And ultimately, I personally did not find the syntactic side of it worked well with

01:00:42 my brain and how I think.

01:00:44 And Python was far cleaner in terms of a simpler visual, less clutter, and reminded me a little

01:00:52 more of small talk or something like that, which I loved from earlier days.

01:00:56 But I think there's a place for Rust.

01:01:00 I don't think Rust is going to replace Python, though.

01:01:03 I think it's going to help with some optimized things.

01:01:07 Do I love things like Ruff that let me run my CI blazing fast versus the all-Python

01:01:14 tools?

01:01:15 Not to say that the Python tools are bad, but it matters when you're paying for CI as a startup.

01:01:20 When things you used to have to wait on finish in the blink of an eye, all of a sudden you

01:01:23 don't mind running them every time, and it changes the way you work with your tools.
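
[Editor's note: for readers who want to try the Ruff workflow described here, a minimal configuration in pyproject.toml might look like the sketch below. The specific rule selections and settings are illustrative assumptions, not something recommended on the show.]

```toml
# Minimal Ruff configuration in pyproject.toml (illustrative)
[tool.ruff]
line-length = 100
target-version = "py312"

[tool.ruff.lint]
# E = pycodestyle errors, F = Pyflakes, I = import sorting (isort-style)
select = ["E", "F", "I"]
```

With this in place, `ruff check .` lints the project and `ruff format .` formats it, typically fast enough that running them on every save or commit is practical, which is the workflow change being described.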

01:01:27 Yeah.

01:01:28 Exactly.

01:01:29 Yeah.

01:01:30 I would say, look, every language has its place in the ecosystem.

01:01:32 And my husband is a long-time Pythonista, but he's also a Rust programmer.

01:01:37 I know it's like a running joke that my husband is a Rust developer.

01:01:41 How do you know?

01:01:42 He'll ask you.

01:01:43 Well, you know what I mean?

01:01:44 How do you know?

01:01:45 Ask him, he'll tell you.

01:01:46 There you go.

01:01:47 They have different purposes, completely different purposes, and you can't just interchange them.

01:01:54 Absolutely.

01:01:55 Just get it straight.

01:01:56 Python is just awesome, says our Tim.

01:01:57 Pure love.

01:01:58 But it's up to us to keep it awesome.

01:02:00 Yes, absolutely.

01:02:01 Paul, we've come around to you for the very final, final thought on this excellent show.

01:02:05 I will give a final thought about Python trends to follow up on what Carol just said about

01:02:11 it's up to us.

01:02:13 Maybe it's up to us to help the people who will keep it that way.

01:02:19 The next generation of heroes, help them succeed.

01:02:22 I'm wearing my PyCon Kenya friendship bracelet that I got at PyCon, and it was a wonderful experience

01:02:30 meeting so many different kinds of people.

01:02:34 And from a Python trends perspective, the fact that everything we're talking about is

01:02:38 good stuff, not asteroid-meets-Earth scenarios:

01:02:42 IP challenges and patent wars and mergers and acquisitions and so on.

01:02:48 I remember a long time ago, I went to see Guido when he was with the App Engine team

01:02:53 at Google.

01:02:54 So a long time ago.

01:02:55 And he was starting the process of turning over PEP review to other people.

01:03:00 And I commented to him that not every open source success story outlives its founder.

01:03:06 And the bigger they get, particularly open source projects anchored in the United States

01:03:12 of America, the more likely they are to sell out, get funded, and never be the same after that.

01:03:19 And so it's a moment from a Python trends perspective for us to build a great next future

01:03:26 by remembering how lucky we are where we have gotten to.

01:03:30 Absolutely.

01:03:31 Carol, Jody, Paul, thank you for being on the show.

01:03:33 Thank you.

01:03:34 Thanks, Michael.

01:03:35 Thank you.

01:03:36 Bye.

01:03:37 This has been another episode of Talk Python to Me.

01:03:40 Thank you to our sponsors.

01:03:41 Be sure to check out what they're offering.

01:03:43 It really helps support the show.

01:03:45 Code Comments, an original podcast from Red Hat.

01:03:48 This podcast covers stories from technologists who've been through tough tech transitions

01:03:53 and share how their teams survived the journey.

01:03:57 Episodes are available everywhere you listen to your podcasts and at talkpython.fm/code-comments.

01:04:03 This episode is sponsored by Posit Connect from the makers of Shiny.

01:04:07 Publish, share and deploy all of your data projects that you're creating using Python.

01:04:12 Streamlit, Dash, Shiny, Bokeh, FastAPI, Flask, Quarto, Reports, Dashboards and APIs.

01:04:18 Posit Connect supports all of them.

01:04:20 Try Posit Connect for free by going to talkpython.fm/posit.

01:04:23 P-O-S-I-T.

01:04:24 Want to level up your Python?

01:04:28 We have one of the largest catalogs of Python video courses over at Talk Python.

01:04:32 Our content ranges from true beginners to deeply advanced topics like memory and async.

01:04:37 And best of all, there's not a subscription in sight.

01:04:40 Check it out for yourself at training.talkpython.fm.

01:04:43 Be sure to subscribe to the show, open your favorite podcast app and search for Python.

01:04:48 We should be right at the top.

01:04:49 You can also find the iTunes feed at /iTunes, the Google Play feed at /play, and the Direct

01:04:55 RSS feed at /rss on talkpython.fm.

01:04:59 We're live streaming most of our recordings these days.

01:05:01 If you want to be part of the show and have your comments featured on the air, be sure

01:05:05 to subscribe to our YouTube channel at talkpython.fm/youtube.

01:05:10 This is your host, Michael Kennedy.

01:05:11 Thanks so much for listening.

01:05:12 I really appreciate it.

01:05:13 Now get out there and write some Python code.

01:05:16 [MUSIC PLAYING]
