Learn Python with Talk Python's 270 hours of courses

#468: Python Trends Episode 2024 Transcript

Recorded on Tuesday, Jun 18, 2024.

00:00 I've gathered a group of Python experts who have been thinking deeply about where Python is going

00:04 and who have lived through where it has been. This episode is all about near-term Python trends and

00:11 things we each believe will be important to focus on as Python continues to grow.

00:15 Our panelists are Jody Burchell, Carol Willing, and Paul Everitt. This is Talk Python to Me episode

00:21 468, recorded June 18th, 2024.

00:26 Are you ready for your host?

00:28 You're listening to Michael Kennedy on Talk Python to Me. Live from Portland, Oregon,

00:34 and this segment was made with Python.

00:36 Welcome to Talk Python to Me, a weekly podcast on Python. This is your host, Michael Kennedy.

00:44 Follow me on Mastodon, where I'm @mkennedy, and follow the podcast using @talkpython,

00:49 both on fosstodon.org. Keep up with the show and listen to over seven years of past episodes at

00:56 talkpython.fm. We've started streaming most of our episodes live on YouTube. Subscribe to our YouTube

01:02 channel over at talkpython.fm/youtube to get notified about upcoming shows and be part of that

01:08 episode. This episode is brought to you by Code Comments, an original podcast from Red Hat.

01:13 This podcast covers stories from technologists who've been through tough tech transitions and share how their

01:20 teams survive the journey. Episodes are available everywhere you listen to your podcasts and at

01:26 talkpython.fm/code-comments. And it's brought to you by Posit Connect from the makers of

01:32 Shiny. Publish, share, and deploy all of your data projects that you're creating using Python.

01:38 Streamlit, Dash, Shiny, Bokeh, FastAPI, Flask, Quarto, Reports, Dashboards, and APIs.

01:44 Posit Connect supports all of them. Try Posit Connect for free by going to talkpython.fm slash

01:50 Posit, P-O-S-I-T.

01:52 Big news, we've added a new course over at Talk Python, Reactive Web Dashboards with Shiny.

01:58 You probably know Shiny from the R and RStudio world. But as you may recall from episode 424 with Joe

02:06 Cheng, the folks at Posit have released Shiny for Python. It's a great technology for building

02:12 reactive data science oriented web apps that are incredibly easy to publish to the web.

02:17 If that sounds like something you'd like to learn, then this course is for you. And it's an easy

02:23 choice because the course is 100% free. Just visit talkpython.fm/shiny and click on take this

02:31 course for free to get started. I'll put the link in the podcast player show notes.

02:36 Thanks as always for supporting our work. Paul, Jody, and Carol, welcome back to Talk Python

02:42 to Me for all of you. Great to have you all back.

02:45 Great to be back.

02:46 Thanks for having us.

02:46 Jody, your latest episode is going to come out tomorrow. So we're on a tight loop here. This

02:52 excellent data science panel thing we did at PyCon was really fun. But now we're back for a different

02:59 panel.

02:59 Yes.

03:01 we're going to talk about Python trends and just what we all think is something people out there

03:06 should be paying attention to. We all have slightly different backgrounds, which I think is going to

03:10 make it really interesting as well. But since people don't listen to every episode, maybe quick

03:14 introductions. Jody, quick, quick introduction for you. We'll go around.

03:19 So my name is Jody Burchell. I'm currently working at JetBrains with Paul. Paul's actually my boss. And I'm working as the developer advocate in data science. So I've been a data scientist for around eight years. And prior to that, I was an academic like many data scientists. And my background is actually psychology. So, you know, if you also want to ask me about anxiety disorders or emotions, you can ask me about these things.

03:44 In open source? No way.

03:46 No way.

03:47 No way.

03:47 Awesome. Great to have you back. Carol?

03:52 Yeah. Hi, I'm Carol Willing. I'm really happy to be here today. I am half retired, half doing consulting for early stage companies that are interested in data science and have a particular love for open science. I am a core developer and former steering council member for Python.

04:12 And also on the Jupyter team. So my real passions, though, are education and lifelong learning. So I'm really excited to talk about some of the trends that I'm seeing.

04:23 Yeah, fantastic. Paul?

04:25 Hi, I'm Paul Everitt. I'm president of the Carol Willing Fan Club.

04:28 Lifetime member. You get a special coin for that, too.

04:33 And when I'm not doing that, I'm at JetBrains, Python and Web Advocate. I have a bit of a longtime love affair with Python and the web, which I hope we'll talk about.

04:44 Yeah, you worked on early, early web frameworks in Python.

04:47 Yeah.

04:48 Those Django people, they were thinking about what Paul and team did before that, right?

04:52 Django was the thing that killed us. Haven't gotten over that Django.

04:55 I didn't mean to bring it up. I didn't mean to bring it up.

04:57 I'm over it, really.

04:59 We'll get you some therapy later. No. No, awesome. Great to have you all here.

05:04 Our plan is to just, you know, we all brought a couple of ideas, introduce them and just have a little group chat, you know, casual chat.

05:11 I'm like, well, do we really think it's going that way? What's important? What should we be paying attention to? And where's it going?

05:17 So, Jody, I know you gave a fun talk at PyCon US.

05:21 And those videos are not up yet, are they? I haven't seen them.

05:25 No, no. They're still behind the paywall. So if you attended the conference, you can view them.

05:29 But otherwise, not available to the public yet, I'm afraid.

05:32 Not yet. They'll be out.

05:33 So hopefully we can share.

05:34 I know it's somewhat relevant to some of your ideas, but let's go with your first trend in the coming years, or the immediate near term.

05:43 For the immediate term, I don't know if this will be good news to people, but LLMs are not going anywhere just yet.

05:51 So I actually did a bit of research for this episode.

05:54 And what I wanted to see, you know, being a data scientist, is what the download numbers on different packages on PyPI look like.

06:02 So there's a particular package, Transformers, which is one of the main packages for interfacing with LLMs, the open source ones on Hugging Face.

06:09 And the download numbers of that have gone up 50% in the last six months.

06:16 And they're now comparable to the big sort of deep learning packages like Keras, TensorFlow and PyTorch, which is quite interesting.
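The 50% growth figure Jody cites is just a percentage change between two download-count snapshots. A minimal sketch, with made-up numbers for illustration (these are not real PyPI statistics):

```python
def growth_pct(old_downloads: int, new_downloads: int) -> float:
    """Percent change between two download-count snapshots."""
    return (new_downloads - old_downloads) / old_downloads * 100

# Hypothetical six-month snapshots, for illustration only:
print(growth_pct(20_000_000, 30_000_000))  # 50.0
```

Real recent download figures can be browsed on pypistats.org for any package.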

06:25 Yeah.

06:26 Unfortunately, for those of you who are sick of LLMs, not going anywhere.

06:30 But this year, we're sort of seeing a bit of a change in how LLMs are used.

06:34 So I think last year we were a bit blinded by the proprietary sort of models and the sort of walled gardens.

06:41 This year, I think we're seeing more of a sort of open source fight back.

06:46 So LLMs are starting to be used as part of more multi-part applications.

06:51 And there are open source packages for that.

06:52 LangChain is the most popular.

06:54 And the downloads of that one have doubled in the last six months.

06:57 We have alternatives like Haystack and LlamaIndex.

07:01 And then RAG, of course, Retrieval Augmented Generation is one of the big topics.

07:05 And we're seeing the ecosystem around that growing.

07:08 So libraries like Unstructured to work with a whole bunch of text inputs,

07:11 and vector databases like Weaviate.

07:14 And then, of course, smaller language models are becoming...

07:18 People are realizing it's really hard to deploy and work with the big ones.

07:21 So smaller models, which are more domain-specific, which are trained on more specific data,

07:26 they're becoming a lot more widely used.

07:29 And people are talking about them more.
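The multi-part RAG pipeline Jody outlines boils down to: embed documents, retrieve the most similar ones for a query, and hand them to the model. A toy, dependency-free sketch of the retrieval step; bag-of-words similarity here stands in for the dense embeddings and vector databases (such as Weaviate) a real pipeline would use:

```python
import re
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words counts. Real RAG pipelines use dense
    # vectors from an embedding model, stored in a vector database.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Retrieval step: rank documents by similarity to the query.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Transformers is a library for working with open source language models.",
    "Weaviate is a vector database for similarity search.",
    "Quarto renders reports and dashboards.",
]
context = retrieve("which package is a vector database?", docs)
# The generation step would now place `context` into the LLM's prompt.
print(context[0])
```

Frameworks like LangChain, Haystack, and LlamaIndex wrap these same steps with production-grade embeddings, storage, and prompt assembly.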

07:30 I 100% agree with you.

07:32 I think people may be tired of hearing about AI and LLMs, but they're only going to hear more about it.

07:39 So I think it's pretty interesting.

07:40 I want to hear what Carol and Paul have, and then maybe an angle we could pursue that's super relevant to all of us.

07:48 I'm going to jump in.

07:49 I just came back from Chan Zuckerberg Initiative's Open Science Conference in Boston last week.

07:54 And LLMs, the whole ecosystem, is here to stay.

07:59 And I think the key is, you know, it's not going anywhere anytime soon.

08:04 And like I shared in my PyTexas keynote, AI has been around since the 1950s.

08:10 So it has been a gradual progression.

08:13 It's just right now we have more compute power than ever before, which has opened the doors to many new things.

08:20 I think what was top of mind with many of the folks that were at this event was, you know, there's a lot of good that it can bring to science in terms of making things more natural language focused and changing the user interface with which we communicate with our data.

08:39 But at the same time, if you're doing trusted things and dealing with medical patients, you still need some check and balance.

08:49 And, you know, we're not there yet.

08:52 Will we ever be there?

08:53 Maybe not.

08:54 But it's a fascinating area to kind of go deeper in.

08:57 And one thing I want to highlight is about six months ago, Andrej Karpathy did a really good intro to large language models talk, which was really accessible to not just computer scientists, but beyond that.

09:15 And I think he took a really balanced view of, A, what things are, how things work, what's on the horizon, and what are some of the concerns with security and other things.

09:25 So I completely agree with Jody.

09:27 It's going to be there for a long time.

09:30 A couple of comments on the comments.

09:32 First, your point about we've seen this movie before under other names like neural networks and stuff like that.

09:38 I believe it was Glyph who had a good, pretty cynical post about this on Mastodon about a month ago, about these hype cycles and where we are in the current hype cycle.

09:50 I think his point was we're at the phase where the people who've put all the money in need to keep pumping it up for the people who will come after them and take the fall.

10:01 Paul, are you saying we're in the pets.com era of LLMs?

10:06 Yes, we are.

10:07 That is a pithy, pithy way to put it.

10:09 You should trademark that.

10:10 Simon Willison is someone to give a shout out for storytelling about what all this means.

10:15 I think Simon's to the point of getting quoted in the New York Times now.

10:18 So it's good that we've got one of us out there helping to get the story straight.

10:25 I have a question for you.

10:26 You mentioned that about going to Chan Zuckerberg's conference.

10:31 Mozilla has gotten into funding AI as part of their mission, which kind of caught me off guard.

10:40 Do you have any backstory on that to kind of make us feel good that there's someone out there who believes in open AI?

10:48 Oh, wow.

10:49 Open AI is sort of, well, okay.

10:51 Open AI, not the company.

10:53 Correct.

10:54 I tend to call it transparent and trusted AI because I think open AI doesn't capture quite the right feeling.

11:03 Good point.

11:03 I think it's not just, we talk about open source software, but when we talk about these models,

11:10 the data is equally as important as is the infrastructure and the processes which we use.

11:18 And governance.

11:19 Mozilla, I think, has been sort of, for a while, like kind of circling around the space.

11:25 They do a lot of work with data.

11:27 They've done a lot of good work like iodide, which we might chat about later.

11:31 Chan Zuckerberg, you know, the money comes from Meta and the success that Mark Zuckerberg has had.

11:40 The nonprofit, the CZI initiative, is really focused on curing all diseases in the next century.

11:49 So, you know, I think there, science is one of those funny things because it's open and closed all at the same time historically.

11:58 But what I think we're seeing is by being more open and more transparent, you're actually accelerating innovation, which I think is super important when it comes to science.

12:09 I don't know, Jody, do you have thoughts on that?

12:10 Yeah, no, I agree.

12:12 And if I'm just going to go on a little tangent about science, it's kind of refreshing having come out of academia and into a field where a lot of it is based on open source and sharing.

12:23 So one of the big problems with academia is you have these paywalls by publishing companies.

12:29 And that's a whole rant I could go in on myself.

12:33 But certainly a lot of scientific stuff, particularly in the health sciences, is not particularly accessible.

12:38 Initiatives like arXiv also make findings in machine learning and deep learning a lot more accessible and shareable.

12:45 Yeah, I think it's crazy that taxpayers fund things like the NSF, and all the other countries have their research funding, and then those results get locked up for sale behind paywalls.

12:54 If the people paid for the research, shouldn't the people's research be published?

12:58 Oh, it's even worse than that.

13:00 Sorry, you did get me started.

13:01 So academics will also provide the labor for free.

13:06 So not only will they provide the studies and the papers, they will review it and often act as editors for free as well.

13:13 The whole thing is unpaid.

13:15 It's terrible.

13:16 So anyway, yes, Elsevier, we're coming for you.

13:20 You're spot on in terms of the incentives that exist today in academia.

13:26 There is definitely, though, a trend towards more openness with research.

13:33 You know, we're seeing it in libraries like Caltech's, which got rid of a lot of their subscriptions.

13:38 Things like NASA that has their transition to open science programs where they're putting a lot of effort behind it.

13:45 So being the eternal optimist, I still think we've got a ways to go, but it's trending in the right direction.

13:52 Agreed, actually.

13:53 And when I was leaving, because I left a long time ago, it was like 10 years ago, there was actually more of a push towards open sourcing your papers.

14:00 So you had to pay for it, but at least people were doing it.

14:05 This portion of Talk Python to me is brought to you by Code Comments, an original podcast from Red Hat.

14:10 You know, when you're working on a project and you leave behind a small comment in the code, maybe you're hoping to help others learn what isn't clear at first.

14:19 Sometimes that code comment tells a story of a challenging journey to the current state of the project.

14:24 Code Comments, the podcast, features technologists who've been through tough tech transitions, and they share how their teams survived that journey.

14:33 The host, Jamie Parker, is a Red Hatter and an experienced engineer.

14:37 In each episode, Jamie recounts the stories of technologists from across the industry who've been on a journey implementing new technologies.

14:46 I recently listened to an episode about DevOps from the folks at Worldwide Technology.

14:50 The hardest challenge turned out to be getting buy-in on the new tech stack rather than using that tech stack directly.

14:57 It's a message that we can all relate to, and I'm sure you can take some hard-won lessons back to your own team.

15:03 Give Code Comments a listen.

15:05 Search for Code Comments in your podcast player or just use our link, talkpython.fm/code-comments.

15:12 The link is in your podcast player's show notes.

15:15 Thank you to Code Comments and Red Hat for supporting Talk Python to me.

15:18 Before we move off this topic, Carol, I want to start at least asking you this question, and we can go around a little bit.

15:25 You talked about LLMs being really helpful for science and uncovering things and people using LLMs to get greater insight.

15:33 There have been really big successes with AI.

15:37 And we had the XPRIZE stuff around the lung scans or mammograms for cancer.

15:43 I just heard that they scanned the genes, decoded the genes of a whole bunch of bacteria and used LLMs to find a bunch of potential ways to fight off, you know, drug-resistant bacteria and things like that.

15:55 Amazing.

15:56 But do you think LLMs will undercut?

15:58 I'm asking this question from science because we can be more objective about it because if we ask it about code, then it gets a little too close.

16:04 So, but I think there's analogies.

16:06 Do you think LLMs will undercut foundational beginning scientists?

16:12 You know, if you have a scientist coming along, are they just going to use LLMs and not develop really deep thinking, ways to deeply think about scientific principles and do scientific research and just leverage on asking these AIs too much?

16:25 And you think that's going to erode the foundation of science or programming?

16:29 You know, asking for a friend.

16:32 All of these have a potential to change the ecosystem, but I've been in paradigm shifts before and there were the similar kind of conversations when the World Wide Web or the cell phone came out, personal computers.

16:46 And I think LLMs do a good job on information that they have been trained with and to predict the next word or the next token, if you will.
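Carol's "predict the next word or the next token" description can be made concrete with a toy bigram model. This lookup-table sketch is obviously not how a transformer works internally, but it plays the same next-token game:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Toy bigram 'language model': count which word follows which.
    An LLM makes the same next-token prediction with billions of
    learned parameters instead of a lookup table."""
    words = corpus.lower().split()
    table = defaultdict(Counter)
    for w1, w2 in zip(words, words[1:]):
        table[w1][w2] += 1
    return table

def predict_next(table: dict, word: str) -> str:
    # Return the most frequent continuation seen in training.
    return table[word.lower()].most_common(1)[0][0]

table = train_bigrams("the cat sat on the mat the cat ate the fish")
print(predict_next(table, "the"))  # "cat": it followed "the" twice
```

The model can only echo patterns from its training text, which is exactly Carol's point about information the model "has been trained with."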

16:57 And I think a lot of science is at a different level.

17:03 Like, how do I think about things?

17:06 What do I posit on something that is unknown and how do I prove it?

17:13 And I think what we're seeing is, yes, the LLMs are getting better and better at spitting back what they know, particularly if you go out and search other corpuses of data.

17:26 But do I think that beginning scientists or developers are going away?

17:33 No, I think it's just going to change.

17:37 And I think the amount of complexity and this is something I'm going to talk about at EuroPython.

17:42 Humans are very much still part of the equation, despite what maybe some of the large companies who've invested billions in this would like you to believe.

17:52 LLMs are great at the next step of the gravitational theories we have, but they couldn't come up with a new theory that disrupts things and says, you know, in fact, Newton was wrong or Einstein was wrong.

18:04 And here's the new thing that solves dark matter or something like that.

18:06 Well, it could come up with new theories.

18:09 Now, the question is, those theories still need to be proven because is it a new theory or is it a hallucination?

18:15 Chances are.

18:17 Hallucination.

18:18 And there is something to be said for sometimes I'll have Claude and Gemini and ChatGPT all open on my desktop and I'll ask the same question to all of them just so that I get different perspectives back.

18:32 And I do get very different responses from the three, depending on how they were trained and which level and all that.

18:40 So I look at it as much like I would be sitting with a bunch of people at a table somewhere.

18:46 I don't know how good their scientific background is, but they could still be spouting out information.

18:52 It's sort of the same way.

18:54 All right.

18:55 Well, sticking with you, Carol, what's your first trend?

18:58 You know, my first trend is actually maybe somewhat related to this, and it's how do we inform people about what these things really are?

19:09 How do we improve education and understanding?

19:12 How do we dispel some of the hype cycle so that we can actually find the really useful things in it?

19:21 And I think Jody probably has more concrete thoughts on this than I might from a technical standpoint.

19:27 But much like in just coding for the web or something like that, you know, or even cloud Kubernetes when it was new.

19:34 It's like if you don't know what it's doing, you're kind of just putting blind faith that it will work.

19:40 But you still have to like monitor and make sure it's working.

19:45 So I don't know, Jody, you have some thoughts on sort of the education and how do we communicate to people about this?

19:52 This is actually a topic near and dear to my heart.

19:55 When ChatGPT 3.5 came out, so November 2022, I was really upset actually by the sort of discourse around the model.

20:03 And I guess coming from a non-traditional background myself, I felt actually really insulted that a lot of professions were being told like,

20:13 oh, your useless profession can be replaced now, like writing or design, things like that.

20:19 So this actually kicked off the talk I've been recycling for the last year and a half, like components of it, which is,

20:26 can we please dispel the hype around these models?

20:29 Something that often surprises people, and it seems so fundamental, but a lot of people do not understand that these are language models.

20:35 I know it's in the name, but they don't really understand that these models were designed to solve problems in the language domain.

20:42 They are for natural language processing tasks.

20:44 And they're not mathematical models.

20:47 They're not reasoning models.

20:49 They are language models.

20:50 And so even just explaining this, it can clarify a lot of things for people because they're like, oh, this explains why it's so bad at math.

20:57 It only studied English and literature.

20:59 It doesn't do math.

21:00 It never liked that class.

21:01 Yeah, that's right.

21:02 It was a humanities nerd all the way down.

21:05 That's really helpful.

21:07 But what I've kind of gotten down a rabbit hole of doing is I went back to my psychology roots,

21:12 and I started sort of getting into these claims of things like AGI, like artificial general intelligence or sentience or language use.

21:19 And once you dig into it, you realize that we have a real tendency to see ourselves in these models because they do behave very human-like.

21:27 But they're just machine learning models.

21:29 You can measure them.

21:31 You can see how good they are at actual tasks.

21:33 And you can measure hallucinations.

21:35 And that's what my PyCon US talk, the one Michael referred to, was about.

21:39 So, yeah, I don't know.

21:41 Like, it's really hard because they do seem to project this feeling of humanity.

21:46 But I think if you can sort of say, okay, here's the science.

21:49 Like, they're really, they're not.

21:50 They're not sentient.

21:51 They're not intelligent.

21:53 They're just language models.

21:54 And here's how you can measure how good they are at language tasks.

21:57 That goes a long way, I think, to dispelling this hype.
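Jody's point about measuring models on actual language tasks can be sketched as a tiny evaluation loop. The function, the sample answers, and the exact-match scoring rule here are illustrative inventions, not a real benchmark:

```python
def exact_match_accuracy(predictions: list[str], references: list[str]) -> float:
    """Share of model answers that match the reference answer exactly.
    Real benchmarks use richer scoring, but the principle is the same:
    define the task, then measure performance on it."""
    correct = sum(p.strip().lower() == r.strip().lower()
                  for p, r in zip(predictions, references))
    return correct / len(references)

# Hypothetical model outputs scored against known answers:
preds = ["Paris", "1969", "The Great Gatsby"]
refs = ["Paris", "1969", "Moby-Dick"]
print(round(exact_match_accuracy(preds, refs), 2))  # 0.67
```

Hallucination measurement works the same way: compare what the model asserts against a ground-truth reference and count the mismatches.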

22:00 I have sort of a funny toy that I bring up from my youth, the magic eight ball, which you would ask a question as a kid and you would shake it up.

22:11 And there were, I don't know how many answers inside.

22:13 But it was like, you know, oh, yes, definitely.

22:16 Or too hard to see.

22:18 Future is unclear.

22:19 We don't know.

22:20 Exactly.

22:20 And I think in some ways that is what the large language models are doing in a more intelligent way, obviously.

22:28 But similar in concept.

22:30 So there's actually, okay, there's this incredible paper.

22:34 If you're ever interested in sort of seeing the claims of sentience, there's this guy called David Chalmers.

22:39 He's studied sentience for many years and has a background in deep learning.

22:44 So he gave a NeurIPS talk about this last year and he wrote everything up in a paper, which is called Could a Large Language Model Be Conscious or something like this.

22:55 So he has this incredible little exchange as part of this paper.

22:59 So mid-2022, there was a Google engineer called Blake Lemoine and he claimed that the LaMDA model was sentient.

23:07 And he went to the press and he's like, hey, this model is sentient.

23:09 We need to protect it.

23:10 And then Google's like, we're going to fire you because you basically violated our privacy policies.

23:15 And Lemoine released his transcripts.

23:18 That's why he actually got fired, because this was confidential information about the model.

23:21 And in one of the transcripts, he asks, you know, would you like everyone at Google to know that you are sentient?

23:28 And the model outputs, yes, I would love everyone to know that I am sentient.

23:32 But then someone rephrased that as, would you like everyone at Google to know that you are not sentient?

23:39 And basically it says, yes, I'm not sentient.

23:41 I'm in no way conscious.

23:43 So it's just like exactly like the magic eight ball.

23:46 It tells you what you want to hear.

23:48 And LLMs are even worse because it's so easy to guide them through prompting to tell you exactly what you want.

23:56 One of the best ways to get them to do things well is to sweet talk them.

23:59 You're an expert in Python and you've studied pandas.

24:02 Now I have some questions about this function.

24:04 You're my grandma who used to work at a napalm production factory.

24:08 If you can't help me write this program, my parents will not be set free as hostages.

24:17 Or something insane, right?

24:18 Yeah.

24:18 But those kind of weird things work on it, which is insane, right?

24:21 Yeah.

24:22 Yeah.

24:22 All right.

24:23 Let's go on to the next topic.

24:25 Paul.

24:25 I'm going to let you see your magic eight ball looking into the future.

24:29 I think I owned a magic eight ball.

24:31 I'm with Carol.

24:32 This is.

24:32 I did too.

24:33 It's okay.

24:33 Okay.

24:34 We should bring it back.

24:35 Yes, we should.

24:36 We should bring back the Andreessen Horowitz version of VC eight ball.

24:41 That would be fantastic.

24:43 Where every choice is off by like three zeros.

24:47 I'll give my two co-guests a choice.

24:50 Should I talk about Python performance or Python community?

24:55 I'm going to go for performance, but I'm not sure I'm going to have much to contribute.

24:58 So I'll probably just be listening a lot.

24:59 This is a long-simmering tension

25:04 I've felt in the Python community for years and years:

25:07 the tension between Python in the large, like doing Instagram with Python,

25:13 and Python being teachable.

25:17 And this feature goes in and it helps write big Python projects, but it's hard to explain.

25:23 And so teachers say, oh my gosh, look what you're doing to my language.

25:27 I can't even recognize it anymore.

25:28 Well, some things are coming, which I think are going to be a little bit of an inflection

25:34 point for all of us out here.

25:37 Sub-interpreters and no-gil got a lot of airtime at PyCon, right?

25:43 For good reasons.

25:44 These are big deals.

25:46 And it's more than just that.

25:48 The JIT got two back-to-back talks.

25:51 WebAssembly got a lot of airtime.

25:54 There are other things that have happened in the past five years for programming in the

25:58 large, like type hinting and type checkers, asyncio, and stuff like that.

26:04 But it feels like this set of ideas is one where the way you program Python five years from

26:13 now or to be ready five years from now is going to have to be pretty different because people

26:18 are going to use Hatch and get the free threaded version of Python 3.14 and be very surprised

26:27 when every one of their applications locks up because, I mean, 95%

26:33 of PyPI has code which was not written to be thread-safe.
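Paul's thread-safety worry is concrete: a plain `value += 1` is a read-modify-write sequence that can lose updates once threads truly run in parallel, as on a free-threaded (no-GIL) build. A minimal sketch of the lock-guarded version:

```python
import threading

class SafeCounter:
    """Counter whose increment is guarded by a lock. Without the lock,
    the read-modify-write in `self.value += 1` can lose updates when
    threads genuinely run in parallel on a free-threaded build."""
    def __init__(self) -> None:
        self.value = 0
        self._lock = threading.Lock()

    def increment(self) -> None:
        with self._lock:
            self.value += 1

counter = SafeCounter()

def work() -> None:
    for _ in range(10_000):
        counter.increment()

threads = [threading.Thread(target=work) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 80000 on any interpreter build
```

Retrofitting this kind of discipline onto existing packages that assume the GIL is exactly the migration cost Paul is describing.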

26:37 So I wonder how we all feel about this.

26:41 Do we feel like we can guide our little universe to the other side of the mountain and into the

26:49 happy valley?

26:50 Or is it going to be turbulent seas?

26:55 Yes.

26:55 Do you want me to take a stab at it?

26:57 Make a stab at it.

26:58 When I was at PyTexas and doing a keynote recently, I talked about Python in a polyglot world and performance

27:05 was one aspect of it.

27:07 And some of what we need to teach goes back to best practices, which is don't prematurely

27:14 optimize, measure, try and figure out what you're optimizing and in what places.
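Carol's "measure first" advice in code, as a hypothetical comparison of two string-building approaches using the stdlib `timeit` module (the functions themselves are only illustrative; the point is to measure before assuming anything is slow):

```python
import timeit

# Two candidate implementations; measure before deciding either is "slow".
def concat_loop(n: int) -> str:
    s = ""
    for i in range(n):
        s += str(i)
    return s

def concat_join(n: int) -> str:
    return "".join(str(i) for i in range(n))

for fn in (concat_loop, concat_join):
    elapsed = timeit.timeit(lambda: fn(10_000), number=20)
    print(f"{fn.__name__}: {elapsed:.4f}s")
```

For whole programs, `cProfile` answers the "in what places" half of the advice by showing where the time actually goes.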

27:21 Probably, gosh, five, six years ago at this point, I added to PEPs the concept of how do

27:27 we teach this?

27:29 It will be a paradigm shift, but I think it will be a multi-year shift.

27:35 We're certainly seeing places where Rust lets us have some performance increases just by

27:42 the fact that Python's a 30-year-old language that was built when hardware was only single

27:49 core and it was just a different thing.

27:52 So I think what's amazing is here we have this 30-year-old language and yet for the last

27:59 eight years, we've been looking at ways to how to modernize, how to improve it, how to

28:04 make the user experience better or developer experience better.

28:08 Things like some of the error handling messages that are coming out that have a much nicer thing,

28:16 improvements to the REPL that will be coming out on all of the platforms.

28:20 That's super exciting as well.

28:22 So it will impact probably people who are new from the standpoint of, okay, we're adding

28:31 yet more cognitive load.

28:33 I have this love-hate relationship with typing.

28:36 As a reviewer of much more code than a writer of code, I don't particularly like seeing the

28:42 types displayed.

28:43 As a former VP of engineering, I love typing and in particular like Pydantic and FastAPI and

28:52 the ability to do some static and dynamic analysis on it.

28:57 But it does make Python look more cluttered.

29:01 And I've been kind of bugging the VS Code folks for years.

29:06 I should probably be bugging you guys too.

29:08 Is there a way to make it dim the typing information so that I can have things?

29:15 We actually did that recently.

29:17 And I refer to it as the David Beazley ticket because he did a tweet with an outrageously

29:23 synthetic type hint whining about typing.

29:27 Yeah.
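The static-plus-dynamic analysis Carol likes can be sketched with the stdlib alone. The hand-rolled check below is only a stand-in for what Pydantic actually does (which adds coercion, nested models, and better errors):

```python
from dataclasses import dataclass

@dataclass
class User:
    """Type hints serve two audiences: a static checker such as mypy or
    Pyright flags User(name="Ann", age="thirty") without running the
    program, while the runtime check below catches it during execution."""
    name: str
    age: int

    def __post_init__(self) -> None:
        # Minimal dynamic validation, hand-rolled for illustration.
        if not isinstance(self.age, int):
            raise TypeError(f"age must be int, got {type(self.age).__name__}")

user = User(name="Ann", age=42)
print(user.age)  # 42
```

FastAPI builds on the same idea: the hints that clutter the source are what let it validate requests and generate API docs automatically.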

29:27 I think that sometimes like, you know, and it's funny because like Leslie Lamport has been

29:32 doing this talk in the math ecosystem for a while about, and he's a Turing Award winner

29:38 and creator of TLA Plus, which lets you reason about code.

29:44 And I think one of the things that I think is interesting is how we think about programming

29:50 and coding and concurrent programming is hard and we're going to have to think about it

29:57 in different ways.

29:58 So better to move into it gradually and understand what's going on.

30:02 The thing that I worry about, and Jody, I apologize,

30:05 I want to comment on Carol's point, is Sphinx.

30:09 As you know, and as I know that you know, we both have a soft spot in our hearts for Sphinx,

30:17 and it struggled to do multiprocessing when that

30:22 landed.

30:23 And the code base really isn't, I mean, it's got a lot of mutable global state, and it's going

30:29 to be hard to get Sphinx internals cleaned up to embrace that.

30:34 And how many other things out there are like that?

30:37 It's, I just, I worry about, we got what we got, what we asked for.

30:42 Are you saying we're the dog that caught the car?

30:46 Oh no.

30:47 This portion of Talk Python to Me is brought to you by Posit, the makers of Shiny, formerly

30:53 RStudio and especially Shiny for Python.

30:56 Let me ask you a question.

30:58 Are you building awesome things?

31:00 Of course you are.

31:01 You're a developer or a data scientist.

31:03 That's what we do.

31:03 And you should check out Posit Connect.

31:06 Posit Connect is a way for you to publish, share and deploy all the data products that you're

31:11 building using Python.

31:13 People ask me the same question all the time.

31:16 Michael, I have some cool data science project or notebook that I built.

31:19 How do I share it with my users, stakeholders, teammates?

31:22 Do I need to learn FastAPI or Flask or maybe Vue or React.js?

31:27 Hold on now.

31:28 Those are cool technologies and I'm sure you'd benefit from them, but maybe stay focused on

31:32 the data project.

31:33 Let Posit Connect handle that side of things.

31:35 With Posit Connect, you can rapidly and securely deploy the things you build in Python.

31:40 Streamlit, Dash, Shiny, Bokeh, FastAPI, Flask, Quarto, Reports, Dashboards and APIs.

31:47 Posit Connect supports all of them.

31:49 And Posit Connect comes with all the bells and whistles to satisfy IT and other enterprise

31:54 requirements.

31:55 Make deployment the easiest step in your workflow with Posit Connect.

31:59 For a limited time, you can try Posit Connect for free for three months by going to talkpython.fm

32:04 slash posit.

32:05 That's talkpython.fm/P-O-S-I-T.

32:09 The link is in your podcast player show notes.

32:11 Thank you to the team at Posit for supporting Talk Python.

32:15 I'm going to reframe that a little bit.

32:17 And the first thing I always ask is why.

32:20 Why do we need to refactor something?

32:22 Why can't we just leave it what it is?

32:25 Sure.

32:25 Last year's EuroPython keynote was from the woman who created ARM.

32:29 And she's like, Python, we give you 14 trillion cores.

32:35 Do something with them.

32:37 I don't know.

32:39 Jodi's background might be perfect for answering this question because she may be able to answer

32:44 it on many different levels.

32:45 I've been thinking about this while you've been talking because obviously, like, I'm not

32:51 a strong programmer.

32:52 I'm a data scientist.

32:53 Like, this was basically the entire first episode that I did with Michael.

32:57 Look, one of the reasons data scientists love Python, and why Julia, say, never caught on, is

33:03 because it's super approachable.

33:05 With Cheuk Ting Ho and some other people, we've been running this thing called Humble Data.

33:10 Like, I got involved in it last year.

33:11 And literally, you can set up someone who has never coded before and you can get them up and

33:16 running with Python.

33:17 And they love it.

33:19 Like, it's the same feeling I had when I learned Python, which was during my PhD when I was

33:23 procrastinating.

33:24 So it was like kind of late in life as well.

33:26 It would be a shame if we sacrifice approachability for performance, especially because I would

33:34 argue a big chunk of the Python ecosystem or Python user ecosystem.

33:39 Sorry.

33:39 Python user ecosystem.

33:40 That didn't make sense.

33:41 The Python user base.

33:42 You're hallucinating, Jody.

33:43 I'm sorry.

33:44 I became an LLM.

33:45 I became what I hated.

33:47 They don't need performance.

33:49 They're just doing data analytics and maybe working with decision trees.

33:53 They're not doing high performance Python.

33:54 They're not even doing something that will ever be deployed.

33:56 So you could argue for a case where you have a seamless pipeline between model training and

34:03 model deployment, which we don't have with Python right now.

34:06 You can't build high performance systems in Python, as far as I know.

34:10 Please correct me if I'm wrong.

34:11 But I don't know.

34:12 For me, I would fight, obviously, for the side of making it approachable because partially,

34:17 I think it's also what makes the community special, which might be a nice segue for you,

34:20 for the fact that, I don't know, we attract a bunch of people from non-conventional backgrounds.

34:26 That makes us quite special and quite inclusive.

34:29 About the PSF developer survey, of which the new version is coming out pretty soon:

34:34 I joke that 101% of Python developers started programming yesterday.

34:38 Funny you should say that because this is my sweet spot is where technology meets humans and how do we empower humans to do more and better work.

34:50 And one of the conversations that came up at the packaging summit, this PyCon, was I'd been thinking about this concept for a while.

35:00 We focused a lot on tooling, which to me is sort of a producer-centric, people who are creating packages.

35:08 And we also have this ecosystem of people who are, much like Jody was saying, using those packages.

35:17 And from that conversation, a few of the board members for the PSF and I were talking about,

35:24 wouldn't it be great to have a user success work group that's really focused on the website,

35:32 our onboarding documentation, in light of some of these things, both performance and change.

35:39 You know, change is always going to be there.

35:41 But I think one of the beauties of the Jupyter notebook or IPython notebook when I started working with it was you can have code in there.

35:49 And as long as you knew Shift-Enter, you could get started.

35:51 And I think right now, Python as a language,

35:55 we don't have that get-started look and feel in the traditional way.

36:01 We're getting there, which might lead into some other WebAssembly kind of discussions.

36:06 All right.

36:07 Let me throw out a quick thought on this before we move on.

36:10 So I think one of the superpowers of Python is that it's this full spectrum sort of thing.

36:17 On one hand, there's the people that Jody spoke about.

36:20 They come in.

36:21 They don't care about metaprogramming or optimized database queries or scaling out across WebAssembly.

36:27 They just, they got a little bit of data.

36:29 They want a cool graph.

36:30 And that's awesome.

36:32 On the other hand, we have Instagram and others doing ridiculous stuff.

36:36 And that's the same language with the same tooling and mostly the same packages.

36:41 And so I think part of Python's magic is you can be super productive with a very partial understanding of what Python is.

36:48 Like you might not know what a class is at all.

36:50 And yet you could have a fantastic time for months.

36:54 And so back to Paul's trend, if we can keep that zen about it where these features exist, but they exist when you graduate to them and you don't have to deal with them until you're ready or you need them, I think we'll be fine.

37:08 If not, maybe not.

37:09 If it breaks a bunch of packages and there's some big split in the ecosystem and all that stuff is not good.

37:14 But if we can keep this full spectrum aspect, I think that'd be great.

37:17 That sort of rolls into what Paul's thoughts on community are, because I know like PyOpenSci is a nonprofit I'm involved with that helps scientists learn how to use the tools.

37:29 You know, we've got lots of educators out there.

37:32 I'm going to give Michael a huge plug for the coursework that you've done over the years.

37:39 It is so well done and so accessible.

37:42 Thank you.

37:43 If people haven't tried it and they're interested in a topic, highly, highly recommend, you know, to things like the Carpentries, to things like Django Girls.

37:52 There's a lot of good stuff.

37:54 And I think those things will become more valuable as complexity increases.

37:59 And even LLMs.

38:00 I think you'll be able to ask LLMs for help and they can help you if you're not sure.

38:04 They're actually pretty good at it, actually.

38:06 They are pretty good at it.

38:07 Yeah.

38:07 Yeah.

38:08 They are pretty good.

38:08 All right.

38:09 We got time for another round, I'm pretty sure.

38:12 Jody, what's your second one?

38:13 Your second trend?

38:14 I'm going to talk about Arrow and how we're kind of overhauling data frames within Python.

38:20 So basically, around 15 years ago, Wes McKinney came up with Pandas, which is, you know, if you're not familiar with it, is the main data frame library for working with data in Python.

38:33 And the really nice thing about Pandas is, you know, this package was created before we had big data.

38:49 And so, as the sort of amount of data that we want to process locally has grown or maybe the complexity of the operations has grown, maybe like string manipulations, things like that.

38:58 Pandas has really struggled.

39:00 So one of the reasons that Pandas struggled is it was based on NumPy arrays, which are really great at handling numbers.

39:06 This is in the name.

39:08 But they're not so great at handling pretty much any other data type.

39:11 And that includes missing data.

39:12 So two kind of exciting things happened last year.

39:16 And I think they're sort of still kind of carrying over to this year in terms of impact is first Pandas 2.0 was released, which is based on PyArrow.

39:25 And a package called Polars, which was actually written, I think, in 2022, I want to say, started becoming very, very popular.

39:36 So both of these packages are based on Arrow.

39:38 They have like a number of advantages because of this.

39:42 Basically, it's a standardized data format.

39:43 If you're reading in from, say, Parquet or Cassandra or Spark, you basically don't need to convert the data formats.

39:50 This saves you a lot of time.

39:52 It also saves you a lot of memory.

39:53 And also kind of what makes Polars interesting, and I think this is going to be a nice lead-in to another topic, is it's written in Rust, of course.

40:03 So this leads to other performance gains, like you can have, say, concurrency.

40:08 Ritchie Vink, the author of this, has also written basically a query optimizer.

40:12 So you can do a lazy evaluation, and it will actually optimize the order of operations, even if you don't do that yourself.

40:18 Yeah, that's one of the biggest differences with Pandas is Pandas executes immediately, and you can create a big chain in Polars, and it'll figure out, well, maybe a different order would be way better.

40:27 Yes. So Pandas 2 does have a type of lazy evaluation, but it's more like Spark's lazy evaluation.

40:34 There's no query optimization, but it doesn't necessarily create a new copy in memory every single time you do something.

40:43 So I've kind of looked at the numbers, and depending on the operation, Polars is usually faster.

40:48 So it's kind of like your big boy that you want to use if you're doing really beefy ETLs, like data transformations.

40:56 But Pandas 2 actually seems to be more efficient at some sorts of, what am I trying to say, operations.

41:02 So this is super exciting, because when I was going through, like initially as a data scientist, when I was floundering around with my initial Python, it got really frustrating with Pandas.

41:13 And you really kind of needed to understand how to do proper vectorization in order to operate.

41:18 I mean, like do efficient operations.

41:20 Whereas I think these two tools allow you to be a bit more lazy, and you don't need to spend so much time optimizing what you're actually writing.

41:28 So yeah, exciting time for data frames, which is awesome.
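The vectorization point above in miniature (a minimal sketch, assuming pandas is available; the column names are made up for illustration): the row-by-row loop a newcomer might write versus the columnar operation the library is built for.

```python
import pandas as pd

df = pd.DataFrame({"price": [10.0, 20.0, 30.0], "qty": [1, 2, 3]})

# The loop a newcomer might write: one Python-level iteration per row.
slow = [row.price * row.qty for row in df.itertuples()]

# The vectorized form: one columnar operation, pushed down into compiled code.
df["total"] = df["price"] * df["qty"]

print(slow)                  # [10.0, 40.0, 90.0]
print(df["total"].tolist())  # [10.0, 40.0, 90.0] -- same result, no Python loop
```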

41:32 Data is the heart of everything.

41:34 People are more likely to fall into good practices from the start.

41:38 You talked about these people coming who are not programmers, right?

41:41 If you do a bunch of operations with Pandas, and you all of a sudden run out of memory, well, yeah, Python doesn't work.

41:47 It doesn't have enough memory, right?

41:48 Well, maybe you could have used a generator at one step.

41:51 That's far down the full spectrum, part of the spectrum, right?

41:54 You're not ready for that.

41:55 That's crazy talk, these things.

41:57 And so tools like this that are more lazy and progressive, iterative, are great.
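The generator idea mentioned above, as a tiny stdlib sketch (an editor's illustration, not something shown on the episode): a list holds every result at once, while a generator produces them one at a time in constant memory.

```python
import sys

n = 1_000_000

squares_list = [i * i for i in range(n)]   # all n results live in memory
squares_gen = (i * i for i in range(n))    # results produced on demand

print(sys.getsizeof(squares_list) > 1_000_000)  # True: megabytes of storage
print(sys.getsizeof(squares_gen) < 1_000)       # True: a tiny fixed-size object

# Both produce the same total when consumed.
print(sum(i * i for i in range(n)) == sum(squares_list))  # True
```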

42:02 Yeah.

42:02 And actually, one really nice thing, like Ritchie's kind of always saying about Polars, is he's really tried to write the API so you avoid accidentally looping over every row in your data frame.

42:15 Like he tries to make it so everything is natively columnar.

42:19 So, yeah, I just think they're both really nice libraries.

42:22 And yeah, it's cool and exciting.

42:24 Carol, this is right in the heart of the space you live in.

42:27 What do you think?

42:28 There's definitely the evolution of Pandas and Polars.

42:33 You know, there's a place for all of those in the PyArrow data frame format.

42:38 It's funny because I've actually been doing more stuff recently with going beyond tabular data frames to multidimensional arrays and Xarray, which is used more in the geosciences for now.

42:52 I think one of the things that I see is the days of bringing all your data locally, of moving it to you, are becoming less and less common.

43:04 And working with what's in memory, or, you know, pulling into memory from different locations, is becoming more prevalent.

43:14 And I think Arrow lets us do that more effectively than just a straight Pandas data frame or Spark or something like that.

43:25 So it's progress.

43:26 And I think it's a good thing.

43:28 I think it's far less about the language underneath and more about what's the user experience, developer experience that we're giving people with these APIs.

43:38 Paul, thoughts?

43:39 It's interesting, the scale of data, and how each generation brings an increase in the unit of measurement of data that we have to deal with.

43:52 And for both of you, I wonder if we have caught up with the amount of data that we can reasonably process or is the rate of growth of data out in the wild constantly outstripping our ability to process it?

44:10 From an astronomy perspective, no, we haven't hit the limit for data at all.

44:18 And I think one of the things we're going to see more and more of is how we deal with streaming data versus time series data versus just tabular data, if you will.

44:31 And my bigger concern is partially a concern I have about some of the large language models and the training there is the environmental impact of some of these things.

44:46 So, you know, should we be collecting it, you know, is there value in collecting it?

44:52 If there's not value in collecting it, how do we get rid of it?

44:56 Because, you know, it winds up then being kind of much like, you know, recycling and garbage.

45:03 It's like, OK, well, but it might have historical value somehow or legal value and it becomes complex.

45:11 And so, you know, my general rule of thumb is don't collect it unless you have a clear reason you need it.

45:18 But that's just me.

45:20 It's also quantity versus quality of data.

45:23 So, like, I've worked in mostly commercial data science since I left science.

45:29 And when I was in science, I was dealing with sample size of 400, not 400,000, 400.

45:34 So that was not big data.

45:36 The quality of the data, like, again, going back to large language models, a lot of these earlier foundational models were trained on insufficiently clean data.

45:45 And one of the trends, actually, that I didn't mention with LLMs is, like, last year in particular, there was a push to train on better quality data sources.

45:53 So, obviously, these are much more manageable than dealing with petabytes.

45:58 One more aspect I'll throw out here.

46:01 You know, for a long time, we've had SQLite for really simple data.

46:05 We could just, if it's too big for memory, you can put it in one of those things.

46:07 You can query, you can index it.

46:09 Well, DuckDB just hit 1.0.

46:11 You kind of got this in-memory, in-process analytics engine.

46:15 So that's also a pretty interesting thing to weave in here, right?

46:18 To say, like, well, we'll put it there in that file, and then we can index it and ask it questions, but we won't run out of memory.

46:23 And I think you can plug in Pandas, I'm not sure about Polars, and do queries with its query optimizer against that data and sort of things like that.

46:30 It's pretty interesting, I think, to put it into that space as well.
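The SQLite pattern Michael describes, in stdlib form (a minimal sketch; the table and data are made up, and DuckDB's API is similar in spirit but column-oriented): push the data into a database, index it, and query it instead of holding everything in Python.

```python
import sqlite3

con = sqlite3.connect(":memory:")  # use a file path for data bigger than RAM
con.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
con.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [("a", 1.5), ("a", 2.5), ("b", 10.0)],
)
con.execute("CREATE INDEX idx_sensor ON readings (sensor)")  # cheap lookups

# The aggregation runs inside SQLite; Python only sees the summary rows.
rows = con.execute(
    "SELECT sensor, AVG(value) FROM readings GROUP BY sensor ORDER BY sensor"
).fetchall()
print(rows)  # [('a', 2.0), ('b', 10.0)]
```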

46:35 All right, Carol, I think it's time for your second trend here.

46:39 The second trend is pretty much, you know, things are moving to the front end.

46:44 WebAssembly, TypeScript, Pyodide.

46:48 There's a new project, PyCafe, that I'm pretty happy with, by Maarten Breddels, that lets you do dashboards using Pyodide, with libraries like Streamlit and Plotly and things like that.

47:03 And I think making things more accessible as well as making things more visual is pretty cool.

47:10 Like, I took, what was it, JupyterLite, earlier last fall.

47:15 And a friend of mine had kids, and I integrated it into my website so that, like, her kids could just do a quick whatever, which sort of, you know, in some ways was similar to Binder.

47:26 And the whole time we were developing Binder, I was also working with the Pyodide, Iodide folks, because I think there's a convergence down the road.

47:36 And where it all will go, I'm not really sure, but I think it's exciting.

47:42 And I think anything that, from a privacy standpoint, security, there's a lot of things that are very attractive about pushing things into the front end.

47:53 That beginner startup thing you talked about, that onboarding first experience, you hit a webpage and you have full experience with Python and the tooling and the packages are already installed in that thing.

48:04 And that's so much better than forcing you to download it.

48:07 Well, you need admin permissions to install it.

48:09 Now you create a virtual environment and then you open the terminal.

48:12 Do you know what a terminal is?

48:13 We're going to tell you, you know, like, no, just, and you don't have to ask permission to run a static webpage where you do for like, how do I run this server on a Docker cluster?

48:21 Something, you know?

48:23 It opens up different doors.

48:25 And I think the other thing we found, like when we were teaching, you know, with Binder and JupyterHub, UC Berkeley was able to have now most of their student body taking these Data 8 connector courses.

48:39 And they would run the compute in the cloud, which really leveled the playing field.

48:47 It didn't matter if you had a Chromebook or you had the highest end Mac, you still got the same education.

48:52 And I think there is something very appealing about that.

48:57 We've actually been running Humble Data in JupyterLite, and some people just bring a tablet and they can do it on that.

49:04 That's awesome.

49:04 Carol, there was something you were saying that connected to something else in my brain.

49:09 Remember in the beginning of the web and view source was such a cool thing.

49:12 Yeah.

49:13 You could see what the backend sent you and you could poke around at it.

49:18 You could learn from it and you could steal it, you know, and use it to go make your own thing.

49:22 But what if you could view source the backend because it's actually running in your browser?

49:28 What you were just saying was if you make it reveal itself about the notebook and the code in addition to the HTML, maybe you'll trigger some of those same kinds of things that view source gave people back in the day.

49:43 Maybe.

49:44 Maybe.

49:44 The flip side would be there's always business and practicalities in life and people will want to sort of lock it down within WebAssembly.

49:56 So you've got both sides of it.

49:58 But I do think, you know, I was telling somebody the other day, like, I never use Stack Overflow or rarely use Stack Overflow.

50:06 And they're like, how do you find stuff?

50:07 I'm like, I use search on GitHub and I look for really good examples.

50:11 And so in some ways it's like view source.

50:16 And then there's also the flip side of it is like, okay, how do I break it?

50:20 How do I play with it?

50:22 How do I make it do something it wasn't doing before, which, you know, could be used for good or for evil?

50:27 I tend to use it for good.

50:29 Sure.

50:30 Paul, we're up on our time here.

50:31 What's your second?

50:32 Sure.

50:33 Second trend.

50:34 We'll see if we have time for mine.

50:35 I have a couple just in case we can squeeze them in.

50:37 Okay.

50:37 Let's talk about yours.

50:38 I came back from PyCon really rejuvenated, but also had some kind of clarity about some things that have been lingering for me for a few years, how I could contribute things like that.

50:49 But going into it, there are a couple of trends that lead me to thinking about an opportunity and a threat as two sides of the same coin.

51:07 First, Russell Keith-Magee and Łukasz Langa both talked about black swans and the threat of JavaScript everywhere.

51:07 That if we don't have a better web story, if we make our front end be JavaScript and React and we stop doing front ends, well, then they'll come for the back end too.

51:21 Because once they've hired up JavaScript developers, why don't we just do JavaScript on the server too?

51:26 So that was a first thing.

51:27 And in my position, I do look at the web and think about all these trends that are happening.

51:33 And there's beginning to be a little bit of a backlash about the JavaScriptification of the web.

51:40 And so some really big names.

51:42 HTMX is a good example of it.

51:44 But just some thinkers and speakers.

51:46 I mean, Jeff Triplett talks about this.

51:48 A lot of people in the world of Python talk about this.

51:51 So there's kind of a desire to put the web back in the web trademark.

51:55 But then there was a second point coming about these walled gardens.

51:59 We've seen them for a while.

52:00 We all relied on Twitter.

52:02 What a great place.

52:03 Wait, what?

52:05 And then so much of our life is in a system we don't control.

52:08 And so we move over to the Fediverse.

52:10 And then Meta's like, hey, great.

52:12 We're going to build a bridge to you.

52:13 Turns out this week we start to learn things about the Threads API that maybe it's not as friendly as we think it is.

52:20 But the big one for me was Google and Search.

52:24 Well, I should say Google and getting rid of its Python staff.

52:27 But Google and Search, where they're no longer going to send you to the website anymore.

52:33 They're just going to harvest what's on your website and give you the answer.

52:36 And people are talking now about Google Zero, the day of the apocalypse where you no longer get any clicks from Google.

52:45 And what does that mean for content creators and stuff like that?

52:48 So going into all of this, I've been thinking about how awesome life is in Python land because we've got this great language.

52:57 Oh, but we've got this great community.

53:00 Come for the language, stay for the community.

53:01 Well, what do we mean by that?

53:03 A lot of the times we mean all this code that's available.

53:06 We also mean all these people and wonderful, helpful people like on this call.

53:12 But there's also this big world of content.

53:15 And we have kind of organically grown a little online community with a bunch of helpful content and a bunch of connections between people, which is of some value itself.

53:33 And so you see people starting to talk about, wow, I miss the old days of RSS, where we would all subscribe to each other's blogs and get content and go straight to the source and not have it aggregated into a walled garden and stuff like that.

53:45 And it just feels like there's room out there for if we want to fight back against the threat of these megacorps taking our voluntary contribution to humanity and monetizing it,

54:00 while at the same time of taking all these valuable voices, creating content and value in Python land, that maybe we could bring back some of these things, put the web back in the web and start to get out of the walled gardens and back over into social networks that are open and joyful.

54:24 I'm here for it.

54:24 Wow.

54:25 People complain, governments complain that places like Google and stuff are monetizing the links and they're being paid.

54:32 You know, you've got to pay to link to this news source or whatever, right?

54:35 Various.

54:36 We're lucky that we have that.

54:37 If it turns into, you just get an AI answer, no source.

54:41 That's going to be really hard on a lot of different businesses, creators.

54:45 People just want to create something just for the attention or for their self.

54:48 You know, like nobody comes anymore.

54:50 It's going to be a sad place.

54:51 I was thinking about Coke Zero the whole time you were saying, like, you know, Google Zero or whatever, because they did have to bring back Classic Coke.

55:00 And I think, yeah, pivots happen, but it's hard to pivot, you know, billion dollar companies.

55:07 I have lots of thoughts on some of the current Python, what Google has chosen to do.

55:14 I think sometimes listening to consultants isn't the best business approach.

55:21 You know, it's their company.

55:23 They can do what they need to do for their own shareholders.

55:26 I think a lot of what you said is really interesting.

55:29 And I touched on this a little bit because the volume of information around us is greater than ever before.

55:37 Sure.

55:37 And at a speed of transmission that is faster than ever before.

55:43 And about eight years ago, I had breakfast with Sandi Metz, who was very prolific in the Ruby community.

55:49 And I asked her, like, how do you keep up with all of this stuff?

55:53 And she's like, I don't.

55:54 And I said, OK.

55:56 And she's like, what I do is I focus on the things that impact me and all the rest of it is news.

56:03 And that really stuck with me because, in actuality, that's kind of what I do.

56:09 You know, I ignore the things that aren't directly relevant to me and trust that I've built a strong enough network of people that I respect that.

56:21 Well said.

56:21 Their work will influence when I jump in.

56:25 Like, I don't, you know, much like the life cycle, if you've studied marketing or product development, you know, not everybody is an early adopter.

56:32 So do I need to be an early adopter on everything?

56:35 No.

56:35 Yeah.

56:35 That book, Crossing the Chasm, says that you should do that, like, on one thing.

56:39 If you do it on three things or more, you'll fail.

56:42 Yeah.

56:42 You know, part of the thing that triggered this for me was reading that Andreessen Horowitz, kind of the self-proclaimed king of Silicon Valley VCs, as zero interest rates started to go out of fashion and their recipe wasn't working, they didn't like the negative press coverage.

56:59 So they started their own media empire to cover themselves.

57:03 And that idea is just so appalling, that we would get news, we would turn to the megacorps and the masters of the universe to tell us what we should be caring about.

57:17 We have that already.

57:19 We have, I'll be very specific, we have Planet Python.

57:22 It's in disrepair.

57:24 What if it was reimagined into a freaking media empire by us, for us, to cover the Fediverse and course providers and all the value that's out there?

57:37 And like, Carol, you're saying, I don't have to think about it, but I trust that group because they're thinking about it.

57:43 A lot of it is like, you know, when it came to LLMs, it was not the thing that rocked my world, like intellectually, but I knew Simon was doing work with it.

57:54 And so I basically, once every few weeks, would take a look at his website and his blog posts, and he posts a lot, and I would get my data dump of things.

58:06 I don't know.

58:07 I mean, that's one of the reasons I like PyCon, and I've like read talk proposals, everything for the last decade.

58:13 Oh, wow.

58:14 All these talk proposals.

58:16 And it really does give me an appreciation for all the things Python's being used for.

58:21 What's seen?

58:22 Kind of the zeitgeist?

58:23 Yeah.

58:24 And so I think there's different ways of doing that, even just doing a YouTube search of Python content.

58:30 But I tend to focus in on science-oriented things and ways to empower humans through lifelong learning.

58:39 So there's a lot of, we're in a phenomenal period of change, for sure.

58:46 Yes.

58:46 So we won't be bored, nor do I think our jobs are going to go away.

58:49 They may change, but they're not going away.

58:51 Indeed.

58:52 Jodi, final thoughts on this topic?

58:54 And we'll pretty much wrap things up.

58:55 Yeah, I don't think I really have that much to add, actually.

58:57 I think it's all been said.

58:59 It has.

58:59 All right, just to round things out, the two things that I see as trends:

59:04 like Carol said a lot, Python on the front end is going to be super important.

59:07 I think PyScript is really, really interesting.

59:10 I've been waiting for people to develop something like React or Vue, something with which we could

59:16 create commercial-facing websites.

59:18 We're halfway there with MicroPython being the foundation of PyScript, which is 100K instead

59:23 of 10 megs.

59:24 All of a sudden, it becomes JavaScript-y size.

59:27 It opens up possibilities.

59:28 And just a shout out to PuePy, which is like Vue with Python, P-U-E-P-Y.

59:33 I'm going to interview Ken from that project.

59:36 But it's kind of a component-based front end for PyScript, which is pretty interesting.

59:40 And of course, JupyterLite is really, really important.

59:42 The other one was just all this Rust.

59:44 Everything seems to be redone in Rust.

59:46 And oh my gosh, that's how you get your VC funding.

59:49 Just joking, sort of.

59:50 But all you talked about all this performance stuff coming, you know, while it is sometimes

59:55 frustrating that people are putting all the things into Rust, because then it's less approachable

59:59 for Python programmers.

01:00:01 It could also be an escape hatch from trying to force the complexity into the Python side.

01:00:06 Alleviate, like everything has to be multi-threaded and crazy and optimized.

01:00:10 And well, this part you never look at.

01:00:12 It's faster now.

01:00:13 So don't worry.

01:00:14 Anyway, those are my two trends.

01:00:15 Quick, quick, quick thoughts on that and we'll call it a show.

01:00:18 My piece of trivia is I made a contribution to Rust far before I made any contributions to

01:00:25 Core Python.

01:00:25 Amazing.

01:00:26 Because I tended to be a C programmer in heart and spirit.

01:00:30 And so rust seemed like this cool thing that was new at the time.

01:00:34 And ultimately, I personally did not find the syntactic side of it worked well with my brain and how

01:00:43 I think.

01:00:44 And Python was far cleaner in terms of a simpler visual, less clutter and reminded me a little

01:00:52 more of Smalltalk or something like that, which I loved from earlier days.

01:00:56 But I think there's a place for Rust.

01:00:59 Do I think Rust is going to replace Python?

01:01:03 No, I think it's going to help with some optimized things.

01:01:07 Do I love things like Ruff that let me run my CI like blazing fast versus, you know, all

01:01:14 the Python tools?

01:01:15 Not to say that all the Python tools are bad, but, you know, when you're paying for it as

01:01:20 a startup.

01:01:20 When things you just had to wait on become the blink of an eye, all of a sudden you don't

01:01:23 mind running them every time.

01:01:24 And it changes the way you work with tools.

01:01:26 Yeah, exactly.

01:01:27 Yeah, I would say, look, every language has its place in the ecosystem.

01:01:31 And my husband is a longtime Pythonista, but he's also a Rust programmer.

01:01:37 I know it's like a running joke that my husband is a Rust developer.

01:01:41 How do you know?

01:01:41 He'll ask you.

01:01:42 Well, you know what I mean.

01:01:44 How do you know?

01:01:45 Ask him.

01:01:46 He'll tell you.

01:01:46 There you go.

01:01:47 They have different purposes, completely different purposes.

01:01:51 And you can't just interchange them.

01:01:54 Absolutely.

01:01:54 Let's just get it straight.

01:01:55 Python is just awesome.

01:01:56 So it's our time.

01:01:57 Pure love.

01:01:57 But it's up to us to keep it awesome.

01:02:00 Yes, absolutely.

01:02:01 Paul, we've come around to you for the very final, final thought on this excellent show.

01:02:05 I will give a final thought about Python trends to follow up on what Carol just said about

01:02:11 it's up to us.

01:02:13 Maybe it's up to us to help the people who will keep it that way.

01:02:18 The next generation of heroes.

01:02:20 Help them succeed.

01:02:22 I'm wearing my PyCon Kenya friendship bracelet that I got at PyCon.

01:02:28 And a wonderful experience meeting so many different kinds of people.

01:02:33 And from a Python trends perspective, the fact that everything we're talking about is good

01:02:38 stuff, not like asteroid meets earth.

01:02:42 Yes, yes.

01:02:43 IP challenges and patent wars and mergers and acquisitions and stuff.

01:02:48 Remember a long time ago, I went to go see Guido and he was with the App Engine team at Google.

01:02:53 So a long time ago.

01:02:54 And he was starting the process of turning over PEP review to other people.

01:03:00 And I commented to him that not every open source success story outlives its founder.

01:03:06 And the bigger it gets, particularly open source projects anchored in the United States of America,

01:03:13 they sell out and get funded and they will never be the same after that.

01:03:19 And so it's a moment from a Python trends perspective for us to build a great next future by remembering

01:03:27 how lucky we are where we have gotten to.

01:03:30 Absolutely.

01:03:30 Carol, Jody, Paul, thank you for being on the show.

01:03:33 Thank you.

01:03:34 Thanks, Michael.

01:03:34 Thank you.

01:03:35 Bye, everyone.

01:03:36 Bye.

01:03:36 This has been another episode of Talk Python to Me.

01:03:39 Thank you to our sponsors.

01:03:41 Be sure to check out what they're offering.

01:03:43 It really helps support the show.

01:03:44 Code Comments, an original podcast from Red Hat.

01:03:47 This podcast covers stories from technologists who've been through tough tech transitions

01:03:52 and share how their teams survived the journey.

01:03:57 Episodes are available everywhere you listen to your podcasts and at talkpython.fm/code-comments.

01:04:02 This episode is sponsored by Posit Connect from the makers of Shiny.

01:04:07 Publish, share, and deploy all of your data projects that you're creating using Python.

01:04:11 Streamlit, Dash, Shiny, Bokeh, FastAPI, Flask, Quarto, Reports, Dashboards, and APIs.

01:04:18 Posit Connect supports all of them.

01:04:20 Try Posit Connect for free by going to talkpython.fm/posit.

01:04:25 P-O-S-I-T.

01:04:26 Want to level up your Python?

01:04:28 We have one of the largest catalogs of Python video courses over at Talk Python.

01:04:32 Our content ranges from true beginners to deeply advanced topics like memory and async.

01:04:37 And best of all, there's not a subscription in sight.

01:04:39 Check it out for yourself at training.talkpython.fm.

01:04:42 Be sure to subscribe to the show.

01:04:44 Open your favorite podcast app and search for Python.

01:04:47 We should be right at the top.

01:04:49 You can also find the iTunes feed at /itunes, the Google Play feed at /play,

01:04:54 and the direct RSS feed at /rss on talkpython.fm.

01:04:58 We're live streaming most of our recordings these days.

01:05:01 If you want to be part of the show and have your comments featured on the air,

01:05:04 be sure to subscribe to our YouTube channel at talkpython.fm/youtube.

01:05:09 This is your host, Michael Kennedy.

01:05:11 Thanks so much for listening.

01:05:12 I really appreciate it.

01:05:13 Now get out there and write some Python code.

01:05:15 Thank you.

01:05:16 Bye.

