New course: Agentic AI for Python Devs

2025 Python Year in Review

Episode #532, published Mon, Dec 29, 2025, recorded Tue, Dec 9, 2025
Python in 2025 is in a delightfully refreshing place: the GIL's days are numbered, packaging is getting sharper tools, and the type checkers are multiplying like gremlins snacking after midnight. On this episode, we have an amazing panel to give us a range of perspectives on what mattered in 2025 in Python. We have Barry Warsaw, Brett Cannon, Gregory Kapfhammer, Jodie Burchell, Reuven Lerner, and Thomas Wouters on to give us their thoughts.

Watch this episode on YouTube
Play on YouTube
Watch the live stream version

Episode Deep Dive

Guests Introduction and Background

This year-in-review episode features an exceptional panel of Python luminaries representing diverse perspectives across education, industry, and core Python development:

Gregory Kapfhammer is an Associate Professor of Computer and Information Science who conducts research in software engineering and software testing. He has built numerous Python tools, with current research focusing on flaky test cases in Python projects. Gregory also hosts the Software Engineering Radio podcast sponsored by the IEEE Computer Society, where he has interviewed many members of the Python community.

Thomas Wouters is a longtime Python core developer who worked at Google for 17 years before joining Meta. He has served multiple terms on the PSF Board and the Python Steering Council. In 2025, Thomas received the PSF Distinguished Service Award recognizing his extensive contributions to Python.

Jodie Burchell is a Data Scientist and Developer Advocate at JetBrains, working on PyCharm. She has been a data scientist for around 10 years after transitioning from a career as a clinical psychologist. Jodie has been working with NLP and LLMs since before they became mainstream, including early Google language models from 2019.

Brett Cannon has been at Microsoft for 10 years and is currently working on AI R&D for Python developers. He also maintains WASI support for Python and does internal consulting. Despite being a core developer for 22 years, Brett jokes that he is the "shortest-running" core developer on this panel. He previously managed the Python experience in VS Code.

Reuven Lerner is a freelance Python and Pandas trainer who celebrated 30 years of freelancing in 2024. He teaches Python and Pandas both at companies and through his online platform, writes newsletters, publishes books, and speaks at conferences.

Barry Warsaw has been a Python core developer since 1994, making him the longest-tenured developer on the panel. He worked with Guido van Rossum at CNRI in Python's early days, handling mailing lists, version control, and websites. Barry created the PEP (Python Enhancement Proposal) process 25 years ago. He is currently at NVIDIA doing Python work split between internal projects and external open-source community contributions, and serves on the Steering Council.


What to Know If You're New to Python

If you are newer to Python and want to get the most out of this episode analysis, here are some foundational concepts that will help:

  • The GIL (Global Interpreter Lock) is a mechanism in CPython that allows only one thread to execute Python bytecode at a time. This episode discusses its upcoming removal, which is a major architectural change.
  • Type hints are optional annotations that specify what types of values variables and functions should use. Tools called "type checkers" analyze these hints to catch bugs before runtime.
  • Virtual environments are isolated Python installations that keep project dependencies separate. Modern tools like uv automate their creation and management.
  • The PSF (Python Software Foundation) is the nonprofit organization that manages Python's development, PyPI, PyCon, and community grants.
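
To make the type-hint bullet concrete, here is a minimal sketch (the function and variable names are invented for this example) of annotations that a checker such as mypy verifies before the code ever runs:

```python
def average(values: list[float]) -> float:
    """Return the arithmetic mean of a non-empty list of numbers."""
    return sum(values) / len(values)

# A type checker flags a call like average("oops") as an error before
# runtime; Python itself does not enforce the annotations when it runs.
mean: float = average([1.0, 2.0, 3.0])
print(mean)  # → 2.0
```

The annotations cost nothing at runtime; they exist so tools can catch mismatches early.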

Key Points and Takeaways

1. Free-Threaded Python: The GIL's Days Are Numbered

The most transformative development in Python for 2025 is the advancement of free-threaded Python, which removes the Global Interpreter Lock (GIL). Thomas Wouters, who championed this effort, confirmed that free threading has moved from experimental to officially supported status in Python 3.14. The performance overhead is remarkably small - basically the same speed on macOS (due to ARM hardware and Clang optimizations) and only a few percent slower on Linux with recent GCC versions.

The main challenge now is community adoption: getting third-party packages to update their extension modules for the new APIs. Barry Warsaw called this "one of the most transformative developments for Python, certainly since Python 3." Early experiments show promising results, with highly parallel workloads seeing 10x or greater speedups. The PyTorch data loader, for example, has seen massive improvements by leveraging multiple threads.
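
As a rough illustration of the kind of workload that benefits, here is a hedged sketch (the prime-counting function is invented for this example): on a free-threaded build such as `python3.14t`, the four tasks below can run on separate cores at once, while a standard GIL build executes their bytecode one thread at a time.

```python
import sys
from concurrent.futures import ThreadPoolExecutor

def count_primes(limit: int) -> int:
    """CPU-bound work: naively count the primes below `limit`."""
    total = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            total += 1
    return total

# Identical code runs on both builds; only the parallelism differs.
with ThreadPoolExecutor(max_workers=4) as pool:
    counts = list(pool.map(count_primes, [10_000] * 4))

# Python 3.13+ can report whether the GIL is active in this process.
gil_enabled = getattr(sys, "_is_gil_enabled", lambda: True)()
print(counts, "GIL enabled:", gil_enabled)
```

The point of free threading is that this exact code, unchanged, scales with cores once the GIL is gone.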

2. AI and LLMs: The Bubble Shows Cracks, But the Tools Remain Valuable

Jodie Burchell provided a grounded perspective on AI in 2025, noting that GPT-5's release in August marked a turning point where mainstream observers began questioning the AI bubble. The "scaling laws" that promised exponential improvements turned out not to be laws at all, and OpenAI's internal struggles to deliver on its AGI promises became public. AI investment now exceeds consumer spending in the US, which Jodie called "a really scary prospect."

Despite the bubble concerns, the panel universally agreed that LLMs are incredibly useful tools that are here to stay. Gregory Kapfhammer shared that automated refactoring, test generation, and code coverage improvement - goals he thought would never be achieved in his lifetime - are now routine with tools like Claude Code. The consensus: even if models never improve beyond current capabilities, they represent a massive leap beyond previous development tools. Barry Warsaw noted he no longer searches for documentation - he just asks the AI to set things up.

3. Python Tooling Revolution: uv and the Rise of "Run" Commands

Brett Cannon highlighted a fundamental shift in how Python developers work: tools like uv, Hatch, and PDM now treat Python itself as an implementation detail. Instead of the traditional workflow of installing Python, creating a virtual environment, and managing dependencies, developers can now simply run a command like uv run and everything is handled automatically.

This represents a "renewed focus on user experience" according to Barry Warsaw. The distance between having an idea and having working code has dramatically narrowed. Combined with PEP 723 (inline script metadata), you can now put uv in the shebang line of a Python script and share it without any setup instructions - the script carries its own dependency information. Gregory Kapfhammer shared that he now starts undergraduate students with uv on the first day of class, eliminating the week-long setup struggles that used to plague even strong students.
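
The shebang trick described above can be sketched as follows (the script body is invented for illustration; with uv installed, running `./script.py` or `uv run script.py` provisions the declared environment automatically):

```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.12"
# dependencies = []   # e.g. ["httpx", "rich"] for a script with deps
# ///

# The comment block above is PEP 723 inline script metadata: the file
# carries its own Python-version and dependency requirements, so you
# can share it with no setup instructions beyond "run it".
import json

print(json.dumps({"hello": "2025"}))
```

Because the metadata lives in comments, the file is still an ordinary Python script to any interpreter that ignores it.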

4. Type Checkers and LSPs: The Year of Rust-Based Tooling

Gregory Kapfhammer declared 2025 "the year of type checking and language server protocols," highlighting the emergence of three new Rust-based type checkers: Pyrefly from Meta, ty from Astral, and Zuban from David Halter (creator of Jedi). These tools have transformed what previously took tens or hundreds of seconds into sub-second operations.

The speed improvements fundamentally change how developers interact with type checking - it becomes a real-time feedback loop rather than a batch process. However, Gregory noted a challenge: different type checkers can disagree on whether code has errors, even on the same codebase. His research team built a tool that automatically generates Python programs causing type checkers to disagree with each other. The Pylance team at Microsoft is working with the Pyrefly team to define a Type Server Protocol (TSP) that would let type checkers feed information to higher-level LSPs.

5. PSF Funding Crisis: A Tragedy of the Commons

Reuven Lerner raised concerns about Python's funding sustainability. The PSF had to pause its grants program and withdrew from an NSF government grant because of attached restrictions on DEI work that conflicted with the PSF's values. Most Python users have no idea the language needs funding - they just know they can download it for free.

Thomas Wouters pointed out that major AI companies, despite relying entirely on PyPI and Python infrastructure, have done "basically no sponsorship of the PSF." The amounts needed are surprisingly small - less than what a medium company spends on expensed meal tips annually, or less than a Silicon Valley developer's salary. All panelists' employers (Microsoft, Meta, NVIDIA, JetBrains) are PSF sponsors, but many companies using Python are not. Reuven noted that NumFOCUS has about twice the PSF's budget, suggesting it is possible to get corporate sponsorship for Python-related projects.

6. Lazy Imports Coming to Python (PEP 810)

Thomas Wouters mentioned lazy imports as his "second favorite child" topic - a PEP that was accepted in 2025 and will significantly improve Python startup time and import performance. This feature defers the actual importing of modules until they are first used, rather than loading everything at startup.

The lazy imports PEP (810) had broad community support despite some very vocal opposition. Pablo Galindo, a Steering Council member, led the effort and bore the brunt of criticism despite the technical merits being clear. This feature will be especially impactful for applications with many imports that may not all be needed for every code path.
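
PEP 810's `lazy import` syntax is not in a released Python yet, but the effect it automates can be sketched manually with importlib (the choice of the `statistics` module here is just for illustration):

```python
import importlib
import sys

def median_of(values):
    """Defer loading `statistics` until the first call that needs it.
    PEP 810 aims to make this deferral automatic, so module-level
    imports would not pay their cost until first use."""
    statistics = importlib.import_module("statistics")
    return statistics.median(values)

# Nothing `statistics`-related is loaded until this call actually runs,
# which is exactly the startup-time win lazy imports are after.
print(median_of([3, 1, 2]))  # → 2
```

For applications with hundreds of imports on rarely taken code paths, deferring this work is where the startup savings come from.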

7. The PEP Process Needs Evolution

Barry Warsaw, who created the PEP process 25 years ago, acknowledged that authoring PEPs has become "incredibly difficult and emotionally draining." The community has grown exponentially, but discussions on discuss.python.org can become toxic. PEP authors regularly receive vitriol in their personal inboxes for proposing technical changes.

The Steering Council itself exists because Guido van Rossum received such abuse over PEP 572 (the walrus operator) that he stepped down as BDFL. Thomas shared that Pablo Galindo received "ridiculous accusations" in his mailbox simply for proposing lazy imports. Core developers sometimes skip the PEP process entirely to avoid this gauntlet, which is also problematic because important changes don't get properly documented or discussed. Barry believes Python needs to rethink how it evolves the language while not losing the voice of users.

8. Concurrency Options Are Expanding (But Abstractions Will Help)

With free threading joining asyncio, multiprocessing, and subinterpreters, Python now has multiple concurrency approaches. Reuven asked how developers should choose between them. Thomas explained that most end users should not need to make these low-level choices - higher-level abstractions should handle it. Brett suggested using concurrent.futures as a unified interface where you start with threads (fastest), fall back to subinterpreters if needed, then to process pools.
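
Brett's suggestion can be sketched with concurrent.futures, where the executor class is the only moving part (the worker function is invented for this example; on Python 3.14+ an InterpreterPoolExecutor can slot into the same spot):

```python
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def square(n: int) -> int:
    """Stand-in for real work; any picklable function fits here."""
    return n * n

def run_all(executor_cls, items):
    """Swap ThreadPoolExecutor for ProcessPoolExecutor (or, on 3.14+,
    InterpreterPoolExecutor) without touching the rest of the code."""
    with executor_cls(max_workers=4) as pool:
        return list(pool.map(square, items))

print(run_all(ThreadPoolExecutor, range(5)))  # → [0, 1, 4, 9, 16]
```

Starting with threads and falling back to subinterpreters or processes then becomes a one-argument change rather than a rewrite.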

Michael Kennedy expressed hope that Python's concurrency could eventually solidify around async/await with decorators indicating the execution mode, hiding the complexity from developers. The key insight is that different problems benefit from different approaches, and libraries like PyTorch are already combining techniques (using threads for data loading within an async framework) to achieve massive speedups.

9. Regional Python Conferences Thrive

Gregory Kapfhammer gave a shout-out to regional Python conferences, specifically highlighting PyOhio as a free conference that anyone can attend with no registration fee, hosted at universities like Ohio State and Cleveland State. These local events provide accessible entry points to the Python community.

Reuven shared a contrasting experience attending a non-Python conference where all speakers were from advertisers and sponsors, and people were not interested in sharing. He called Python conferences "really special" for the combination of career benefits and genuine community connection. The community aspect remains one of Python's greatest strengths.

10. AI as a Productivity Multiplier (Not a Replacement)

A Stanford study (the "10K Developer Study") examining real codebases found nuanced results about AI coding assistants. Yes, there are productivity boosts, but AI produces more code that tends to be worse than human-written code and introduces more bugs. When accounting for refactoring and debugging time, developers are still more productive overall.

AI works better for greenfield projects, smaller codebases, simpler problems, and more popular languages (more training data). Jodie emphasized that like an IDE, AI is a tool - not a replacement. Barry's favorite use is bypassing documentation: instead of searching docs, he asks the AI to set things up. Gregory noted that AI agents work much better when given fast feedback tools like type checkers and linters.

11. Community Sustainability and New Contributors

Thomas emphasized the importance of new people joining Python - as triagers, drive-by commenters, or eventually core developers. Python is what the community makes it, with no company dictating direction. Barry encouraged anyone interested in becoming a core developer to find a mentor, and anyone wanting to be involved with the PSF to reach out.

Thomas pointed out he is a high school dropout who never went to college, while Brett has a PhD - neither credential is required to contribute meaningfully to Python. The call to action: run for the Steering Council (they need more candidates), sponsor the PSF if your company uses Python, and engage with the community in whatever way fits your skills and interests.


Interesting Quotes and Stories

"Python in 2025 is a delightfully refreshing place. The GIL's days are numbered, packaging is getting sharper tools, and the type checkers are multiplying like gremlins snacking after midnight." -- Michael Kennedy

"I can remember thinking to myself how cool would it be if I had a system that could automatically refactor and then add test cases and increase the code coverage and make sure all my checkers and linters pass... I remember thinking five to seven years ago I would never realize that goal in my entire lifetime. And now when I use Anthropic's models through Claude Code, it's incredible how much you can achieve so quickly." -- Gregory Kapfhammer

"I'm glad I'll never have to write bash from scratch ever again." -- Brett Cannon on AI assistance

"I never even search for the docs. I never even try to read the docs anymore. I just say, hey, I need to set up this website, and just tell me what to do or just do it. And it's an immense time saver." -- Barry Warsaw

"The GIL never gave you thread safety. The GIL gave CPython's internals thread safety. It never really affected Python code." -- Thomas Wouters

"Free threading is absolutely the future of Python, and I think it's going to unlock incredible potential and performance." -- Barry Warsaw

"It's ridiculous how small those sponsorships are and yet how grateful we were that they came in because every single one has such a big impact." -- Thomas Wouters on PSF funding

"What the PSF would be happy with is less than a medium company spends on the tips of expensed meals every year." -- Thomas Wouters, quoting Ned Batchelder

"I don't want to call on people in the community to do it because they're probably the same people who are also donating their time for Python. It's just squeezing people who give so much of themselves to this community even more." -- Jodie Burchell on funding burnout

"Python is the community or the community is Python. There's no company that is telling anybody what Python should be. Python is what we make it." -- Barry Warsaw

"I'm a high school dropout and I never went to college or anything. And look at me." -- Thomas Wouters on qualifications needed for Python contribution


Key Definitions and Terms

GIL (Global Interpreter Lock): A mutex in CPython that allows only one thread to execute Python bytecode at a time. Its removal in free-threaded Python enables true parallel execution of Python code across multiple CPU cores.

Free-Threaded Python: A build of Python without the GIL, allowing multiple threads to execute Python bytecode simultaneously. Officially supported in Python 3.14.

PEP (Python Enhancement Proposal): The formal process for proposing major changes to Python. Created by Barry Warsaw 25 years ago, PEPs document design decisions and the rationale behind language features.

LSP (Language Server Protocol): A protocol for communication between code editors and language-specific tools that provide features like autocomplete, go-to-definition, and error checking.

Type Checker: A tool that analyzes Python code with type hints to catch type-related errors before runtime. Examples include mypy, Pyrefly, ty, and Pyright.

Lazy Imports: A feature (PEP 810) where module imports are deferred until the imported names are actually used, improving startup time for applications with many imports.

PyPI (Python Package Index): The official repository for Python packages, run by the PSF and used by millions of developers daily.

Steering Council: A five-person elected body that provides leadership for Python's development, created after Guido van Rossum stepped down as BDFL in 2018.

PEP 723: Inline script metadata that allows Python scripts to declare their dependencies and Python version requirements directly in the file, enabling tools like uv to automatically set up the correct environment.

Subinterpreters: Isolated Python interpreters running in separate threads within the same process, providing parallelism with stronger isolation than threads but less overhead than multiprocessing.


Learning Resources

For those looking to deepen their understanding of topics covered in this episode, here are relevant courses from Talk Python Training:

  • Rock Solid Python with Python Typing: Learn the ins-and-outs of Python typing, explore frameworks using types like Pydantic and FastAPI, and get guidance for using types effectively in your applications.

  • Async Techniques and Examples in Python: Master the entire spectrum of Python's parallel APIs including async/await, asyncio, threads, multiprocessing, and coordination techniques.

  • LLM Building Blocks for Python: Learn to integrate large language models into Python applications with structured outputs, chat workflows, and production-ready async pipelines.

  • Getting Started with pytest: Master pytest's five superpowers - simple test functions, fixtures, parametrization, markers, and plugins.

  • Python for Absolute Beginners: Start from the very beginning with CS 101 concepts before building increasingly complex Python applications.

  • Modern Python Projects: Learn project structure, dependency management, testing, documentation, CI/CD, and deployment for real-world Python applications.


Overall Takeaway

Python in 2025 stands at an inflection point where decades of foundational work are paying off in transformative ways. The removal of the GIL promises to unlock true parallelism, modern tooling like uv is abstracting away complexity that once frustrated beginners, and Rust-based type checkers are making static analysis feel instantaneous. Yet beneath these technical victories lies a community wrestling with sustainability - funding challenges, contributor burnout, and the difficulty of evolving a language used by millions through a 25-year-old proposal process.

What shines through this conversation is that Python's greatest asset has always been its people. From Barry Warsaw, who has been contributing since 1994 and created the very PEP process he now seeks to reform, to Jodie Burchell, a relative newcomer who marveled at how welcoming the community has been, the panel represents a continuum of experience united by shared values. The call to action is clear: sponsor the PSF if your company uses Python, contribute in whatever way fits your skills, attend regional conferences, and remember that Python is not a product controlled by any corporation - it is what we collectively make it. The future is being built right now by people who simply showed up and got involved.

Python Software Foundation (PSF): www.python.org
PEP 810: Explicit lazy imports: peps.python.org
PEP 779: Free-threaded Python is officially supported: peps.python.org
PEP 723: Inline script metadata: peps.python.org
PyCharm: www.jetbrains.com
JetBrains: www.jetbrains.com
Visual Studio Code: code.visualstudio.com
pandas: pandas.pydata.org
PydanticAI: ai.pydantic.dev
OpenAI API docs: platform.openai.com
uv: docs.astral.sh
Hatch: github.com
PDM: pdm-project.org
Poetry: python-poetry.org
Project Jupyter: jupyter.org
JupyterLite: jupyterlite.readthedocs.io
PEP 690: Lazy Imports: peps.python.org
PyTorch: pytorch.org
Python concurrent.futures: docs.python.org
Python Package Index (PyPI): pypi.org
EuroPython: tickets.europython.eu
TensorFlow: www.tensorflow.org
Keras: keras.io
PyCon US: us.pycon.org
NumFOCUS: numfocus.org
Python discussion forum (discuss.python.org): discuss.python.org
Language Server Protocol: microsoft.github.io
mypy: mypy-lang.org
Pyright: github.com
Pylance: marketplace.visualstudio.com
Pyrefly: github.com
ty: github.com
Zuban: docs.zubanls.com
Jedi: jedi.readthedocs.io
GitHub: github.com
PyOhio: www.pyohio.org

Watch this episode on YouTube: youtube.com
Episode #532 deep-dive: talkpython.fm/532
Episode transcripts: talkpython.fm

Theme Song: Developer Rap
🥁 Served in a Flask 🎸: talkpython.fm/flasksong

---== Don't be a stranger ==---
YouTube: youtube.com/@talkpython

Bluesky: @talkpython.fm
Mastodon: @talkpython@fosstodon.org
X.com: @talkpython

Michael on Bluesky: @mkennedy.codes
Michael on Mastodon: @mkennedy@fosstodon.org
Michael on X.com: @mkennedy

Episode Transcript

Collapse transcript

00:00 Python in 2025 is a delightfully refreshing place.

00:04 The GIL's days are numbered, packaging is getting sharper tools,

00:07 and the type checkers are multiplying like gremlins snacking after midnight.

00:11 On this episode, we have an amazing panel to give us a range of perspectives

00:15 on what mattered in 2025 in Python.

00:19 We have Barry Warsaw, Brett Cannon, Gregory Kapfhammer, Jodie Burchell, Reuven Lerner,

00:24 and Thomas Wouters on the show to give us their thoughts.

00:28 This is Talk Python To Me, episode 532, recorded December 9th, 2025.

00:50 Welcome to Talk Python To Me, the number one Python podcast for developers and data scientists.

00:55 This is your host, Michael Kennedy.

00:57 I'm a PSF fellow who's been coding for over 25 years.

01:01 Let's connect on social media.

01:03 You'll find me and Talk Python on Mastodon, BlueSky, and X.

01:06 The social links are all in your show notes.

01:09 You can find over 10 years of past episodes at talkpython.fm.

01:12 And if you want to be part of the show, you can join our recording live streams.

01:16 That's right.

01:17 We live stream the raw uncut version of each episode on YouTube.

01:20 Just visit talkpython.fm/youtube to see the schedule of upcoming events.

01:25 Be sure to subscribe there and press the bell so you'll get notified anytime we're recording.

01:29 Look into the future and see bugs before they make it to production.

01:33 Sentry's SEER AI code review uses historical error and performance information at Sentry

01:39 to find and flag bugs in your PRs before you even start to review them.

01:43 Stop bugs before they enter your code base.

01:46 Get started at talkpython.fm/seer-code-review.

01:50 Hey, before we jump into the interview, I just want to send a little message to all the companies out there with products and services trying to reach developers.

01:59 That is the listeners of this show.

02:01 As we're rolling into 2026, I have a bunch of spots open.

02:05 So please reach out to me if you're looking to sponsor a podcast or just generally sponsor things in the community.

02:12 And you haven't necessarily considered podcasts.

02:14 You really should reach out to me and I'll help you connect with the Talk Python audience.

02:20 Thanks, everyone, for listening all of 2025, and here we go into 2026. Cheers! Hey everyone, it's so

02:27 awesome to be here with you all. Thanks for taking the time out of your day to be part of Talk Python

02:31 for this year in review, this Python year in review. So yeah, let's just jump right into it. Gregory,

02:38 welcome, welcome to the show, welcome back to the show. How you doing? Hi, I'm an associate professor

02:42 of computer and information science, and I do research in software engineering and software

02:46 testing. I've built a bunch of Python tools, and one of the areas we're studying now is flaky test

02:51 cases in Python projects. I'm also really excited about teaching in a wide variety of areas. In fact,

02:57 I use Python for operating systems classes or theory of computation classes. And one of the

03:02 things I'm excited about is being a podcast host. I'm also a host on the Software Engineering Radio

03:08 podcast sponsored by the IEEE Computer Society, and I've had the cool opportunity to interview a

03:14 whole bunch of people in the Python community. So Michael, thanks for welcoming me to the show.

03:18 Yeah, it's awesome to have you back. And we talked about FlakyTest last time. I do have to say

03:23 your AV setup is quite good. I love the new mic and all that. Thomas, welcome. Awesome to have

03:29 you here. Thanks for having me. I'm Thomas Wouters. I'm a longtime Python core developer,

03:34 although not as long as one of the other guests on this podcast. I worked at Google for 17 years.

03:40 for the last year or so I've worked at Meta. In both cases, I work on Python itself within the

03:45 company and just deploying it internally. I've also been a board member of the PSF, although I'm not

03:51 one right now. And I've been a steering council member for five years and currently not because

03:58 the elections are going and I don't know what the result is going to be. But I think there's like

04:02 five in six chance that I'll be on the steering council since we only have six candidates for

04:09 five positions when this episode probably airs. I don't know. That's quite the contribution to the

04:14 whole community. Thank you. I always forget this. I also got the, what is it, the Distinguished

04:19 Service Award from the PSF this year. I should probably mention that. So yes, I have been

04:24 recognized. No need to talk about it further. Wonderful. Wonderful. Jody, welcome back on the

04:29 show. Awesome to catch up with you. Yeah, thanks for having me back. I am a data scientist and

04:34 developer advocate at JetBrains working on PyCharm. And I've been a data scientist for around 10

04:39 years. And prior to that, I was actually a clinical psychologist. So that was my training,

04:45 my PhD, but abandoned academia for greener pastures. Let's put it that way.


04:53 Brett, hello. Good to see you.

04:55 Hello. Yes. Yeah, let's see here. I've been at Microsoft for 10 years. I started doing,

05:00 working on AI R&D for Python developers.

05:03 Also keep WASI running for Python here and do a lot of internal consulting for teams outside.

05:09 I am actually the shortest running core developer on this call, amazingly,

05:14 even though I've been doing it for 22 years.

05:16 I've also only gotten the Frank Willison Award, not the DSA.

05:19 So I feel very under accomplished here as a core developer.

05:22 Yeah, that's me in a nutshell.

05:24 Otherwise, I'm still trying to catch that.

05:26 Most quoted.

05:27 Yeah, most quoted.

05:29 I will say actually at work, it is in my email footer that I'm a famous Python quotationist.

05:33 That was Anthony Shaw's suggestion, by the way.

05:35 That was not mine, but does link to the April Fool's joke from last year.

05:39 And I am still trying to catch Anthony Shaw, I think, on appearances on this podcast.

05:43 Well, plus one.

05:44 Anthony Shaw should be here, honestly.

05:46 I mean, I put it out into Discord.

05:48 He could have been here, but probably at an odd time.

05:51 You used to work on VS Code a bunch on the Python aspect of VS Code.

05:54 You recently changed roles, right?

05:57 Not recently.

05:57 That was, I used to be the dev manager.

05:59 Every seven years, years ago.

06:01 Yeah, September of 2024.

06:03 So it's been over a year.

06:04 But yeah, I used to be the dev manager.

06:04 That counts as recent for me.

06:06 Yes, I used to be the dev manager for the Python experience in VS Code.

06:09 Okay, very cool.

06:10 That's quite a shift.

06:11 Yeah, it went back to being an IC basically.

06:13 You got all, you're good at your TPS reports again now?

06:17 Actually, I just did do my connect, so I kind of did.

06:19 Awesome.

06:20 Reuven, I bet you haven't filed a TPS report in at least a year.

06:23 So yeah, I'm Reuven.

06:24 I'm a freelance Python and Pandas trainer.

06:28 I just celebrated this past week 30 years since going freelance.

06:32 So I guess it's working out okay.

06:35 We'll know at some point if I need to get a real job.

06:38 I teach Python Pandas both at companies and on my online platform.

06:41 I have newsletters.

06:42 I've written books, speaking conferences, and generally try to help people improve their

06:47 Python and Pandas fluency and confidence and have a lot of fun with this community as well

06:51 as with the language.

06:52 Oh, good to have you here.

06:53 Barry, it's great to have a musician on the show.

06:56 Thanks.

06:58 Yeah, I've got my bases over here.

07:00 So, you know, if you need to be serenaded.

07:02 Yeah, like a Zen of Python may break out at any moment.

07:05 You never know when it's going to happen.

07:07 Thanks for having me here.

07:08 Yeah, I've been a core developer for a long time, since 1994.

07:13 And I've been, you know, in the early days, I did tons of stuff for Python.org.

07:20 I worked with Guido at CNRI and we moved everything from the mailing, the Postmaster stuff,

07:27 and the version control systems back in the day, websites, all that kind of stuff.

07:32 I try to not do any of those things anymore.

07:35 There's way more competent people doing that stuff now.

07:39 I have been a release manager.

07:42 I'm currently back on the steering council and running again.

07:47 between Thomas and I, we'll see who makes it to six years, I guess.

07:52 And I'm currently working for NVIDIA, and I do all Python stuff.

07:58 Some half and half, roughly, of internal things and external open source community work,

08:05 both in packaging and in core Python.

08:08 That's, I guess, I think that's about it.

08:10 Yeah, you all are living in exciting tech spaces, that's for sure.

08:14 That's for sure.

08:15 For sure.

08:16 Yeah. Well, great to have you all back on the show. Let's start with our first topic. So the

08:20 idea is we've each picked at least a thing that we think stood out in 2025 in the Python space

08:27 that we can focus on. And let's go with Jody first. I'm excited to hear what you thought was

08:33 one of the bigger things. I'm going to mention AI. Like, wow, what a big surprise. So to kind of

08:40 give context of where I'm coming from, I've been working in NLP for a long time. I like to say I was

08:45 working on LLMs before they were cool. So sort of playing around with the very first releases from

08:50 Google in like 2019, incorporating that into search. So I've been very interested sort of

08:56 seeing the unfolding of the GPT models as they've grown. And let's say slightly disgusted by the

09:03 discourse around the models as they become more mainstream, more sort of the talk about people's

09:09 jobs being replaced, a lot of the hysteria, a lot of the doomsday stuff. So I've been doing talks

09:15 and other content for around two and a half years now, just trying to cut through the hype a bit,

09:19 being like, you know, they're just language models, they're good for language tasks. Let's think about

09:23 realistically what they're about. And what was very interesting for me this year, I've been

09:29 incorrectly predicting the bubble bursting for about two and a half years. So I was quite vindicated

09:34 when in August, GPT-5 came out, and all of a sudden, everyone else started saying,

09:40 maybe this is a bubble.

09:41 Don't you think that was the first big release that was kind of a letdown compared to what the hype was?

09:47 Yeah, and it was really interesting.

09:48 So I found this really nice Atlantic article, and I didn't save it, unfortunately,

09:52 but essentially it told sort of the whole story of what was going on behind the scenes.

09:58 So GPT-4 came out in March of 2023, and that was the model that came out

10:03 with this Microsoft research paper saying, you know, sparks of AGI, artificial general intelligence,

10:08 blah, blah, blah. And from that point, there was really this big expectation sort of fueled by

10:15 OpenAI that GPT-5 was going to be the AGI model. And it turns out what was happening internally

10:22 is these scaling laws that were sort of considered, you know, this exponential growth thing that would

10:27 sort of push the power of these models perhaps towards human-like performance. They weren't

10:33 laws at all. And of course they started failing. So the model that they had originally pitched as

10:38 GPT-5 just didn't live up to performance. They started this post-training stuff where they were

10:43 going more into like specialized reasoning models. And what we have now are good models that are good

10:48 for specific tasks, but I don't know what happened, but eventually they had to put the GPT-5 label on

10:54 something. And yeah, let's say it didn't live up to expectations. So I think the cracks are starting

11:01 to show because the underlying expectation always was this will be improving to the point where

11:08 anything's possible and you can't put a price on that. But it turns out that if maybe there's a

11:14 limit on what's possible, yeah, you can put a price on it. And a lot of the valuations are on the

11:19 first part. Yes. And it's always been a bit interesting to me because I come from a scientific

11:24 background and you need to know how to measure stuff, right? And I'm like, what are you trying

11:28 to achieve? Like Gregory's nodding, like, please jump in. I'm on my monologue, so please don't

11:34 interrupt me. You really need to understand what you're actually trying to get these models to do.

11:38 What is AGI? No one knows this. And what's going to be possible with this? And it's more science

11:46 fiction than fact. So this for me has been the big news this year, and I'm feeling slightly smug,

11:51 I'm going to be honest, even though my predictions were off by about a year and a half.

11:55 Yeah, maybe it's not an exponential curve.

11:56 It's a titration S curve with an asymptote.

11:59 We'll see.

12:00 Yeah, sigmoid.

12:01 Yeah.

12:02 Yeah, yeah, yeah.

12:03 I mean, I think we have to sort of separate the technology from the business.

12:07 And the technology, even if it doesn't get any better, even if we stay with what we have today,

12:13 I still think this is like one of the most amazing technologies I've ever seen.

12:18 It's not a god.

12:19 It's not a panacea.

12:21 But it's like a chainsaw that if you know how to use it, it's really effective.

12:25 but in the hands of amateurs, you can really get hurt. And so, yes, it's great to see this sort of

12:32 thing happening and improving, but who knows where it's going to go. And I'm a little skeptical of

12:36 the AGI thing. What I'm a little more worried about is that these companies seem to have no

12:40 possible way of ever making the money that they're promising to their investors. And I do worry a lot

12:47 that we're sort of like a year 2000 situation where, yeah, the technology is fantastic,

12:53 But the businesses are unsustainable. And out of the ashes of what will happen, we will get some amazing technology and even better than we had before. But there are going to be ashes.

13:03 For me, that also makes me worry. And I don't know if anyone reads Ed Zitron here. He's a

13:09 journalist kind of digging into the state of the AI industry. He does get a bit sort of,

13:15 his reputation is a bit of a crank now. So I think he's leaned into that pretty hard,

13:20 but he does take the time to also pull out numbers and point out things that don't make sense.

13:25 And he was one of the first ones to sound the whistle on this circular funding we've been seeing.

13:30 So the worry, of course, is when a lot of this becomes borrowings from banks and then that starts dragging in funding from everyday people.

13:41 And also the effect that this has had on particularly the U.S. economy, like the stock market.

13:45 I think the investment in AI spending now exceeds consumer spending in the U.S., which is a really scary prospect.

13:54 That is crazy.

13:55 Mm-hmm. But yeah, also as Reuven said, I love LLMs. They are the most powerful tools we've

14:02 ever had for natural language processing. It's phenomenal the problems we can solve with them

14:06 now. I didn't think this sort of stuff would be possible when I started in data science.

14:10 I still think there's a use case for agents, although I do think they've been a bit overstated,

14:16 especially now that I'm building them. Let's say it's not very fun building

14:20 non-deterministic software. It's quite frustrating, actually. But I hope we're going to see improvements

14:25 in the framework, particularly I've heard good things about Pydantic AI. And yeah, hopefully we

14:30 can control the input outputs and make them a bit more strict. This will fix a lot of the problems.

14:36 One thing I do want to put out in this conversation, I think is worth separating. And Reuven,

14:41 you touched on this some. I want to suggest to you, I'll throw this out to you all and see what

14:45 you think. I think it's very possible that this AI bubble crashes the economy and causes bad things

14:51 economically to happen, and a bunch of companies that are basically wrappers over the OpenAI API go away.

14:58 But I don't think things like the agentic coding tools will vanish. They might stop training, they

15:03 might slow their advance, because that's super expensive. But even, as you said, even if we

15:09 just had Claude Sonnet 4 and the world never got something else, it would be so far

15:16 beyond autocomplete and the other stuff that we had before, and Stack Overflow, that I don't

15:20 think it's going to go. The reason I'm throwing this out there is I was talking to somebody and

15:23 they were like, well, I don't think it's worth learning because I think the bubble is going to

15:26 pop. And so I don't want to learn this agent at coding because it won't be around very long.

15:30 What do you all think? It's here to stay. I think it's just, where's the limit? Where does it stop?

15:34 I think that's the big open question for everybody, right? Like pragmatically, it's a tool. It's useful

15:40 in some scenarios and not in others. And you just have to learn how to use the tool appropriately

15:44 for your use case and to get what you need out of it. And sometimes that's not using it because

15:47 it's just going to take up more time than it will to be productive. But other times it's

15:51 fully juices up your productivity and you can get more done. It's give and take. But I don't think

15:56 it's going to go anywhere because as you said, Michael, there's even academics doing research now.

16:00 There's open weight models as well. There's a lot of different ways to run this, whether you're

16:05 at the scale of the frontier models that are doing these huge trainings or you're doing something

16:11 local and more specialized. So I think the general use of AI isn't going anywhere. I think it's just

16:16 the question of how far this current trend can go and where it will, I want to say "stop," but that's

16:22 because that plays into the whole "it's going to completely go away." I don't think it ever

16:25 will. I think it's just going to be: where are we going to start to potentially bump up against

16:29 limits? One thing that I'll say is that many of these systems are almost, to me, like a dream come

16:33 true. Now, admittedly, it's the case that the systems I'm building are maybe only tens of thousands of

16:38 lines or hundreds of thousands of lines, but I can remember thinking to myself, how cool would it be

16:44 if I had a system that could automatically refactor

16:47 and then add test cases and increase the code coverage

16:51 and make sure all my checkers and linters pass and do that automatically and continue the process

16:57 until it achieved its goal.

16:59 And I remember thinking that five to seven years ago,

17:01 I would never realize that goal in my entire lifetime.

17:05 And now when I use like anthropics models through open code or Claude code,

17:10 it's incredible how much you can achieve so quickly,

17:13 even for systems that are of medium to moderate scale.

17:17 So from my vantage point, it is a really exciting tool.

17:20 It's incredibly powerful.

17:21 And what I have found is that the LLMs are much better

17:24 when I teach them how to use tools and the tools that it's using

17:29 are actually really quick, fast ones that can give rapid feedback to the LLM

17:34 and tell it whether it's moving in the right direction or not.

17:36 Yeah, there's an engineering angle to this.

17:39 It's not just Vibe Coding if you take the time to learn it.

17:42 There was actually a very interesting study. I don't think the study itself has been released.

17:48 I haven't found it yet, but I saw a talk on it by some guys at Stanford. So they call it the 10K

17:53 developer study. And basically what they were doing was studying real code bases, including,

17:59 I think 80% of them were actually private code bases and seeing the point where the team started

18:05 adopting AI. And so their findings are really interesting and nuanced. And I think they probably

18:10 intuitively align with what a lot of us have experienced with AI. So basically, yes, there

18:16 are productivity boosts, but it produces a lot of code, but the code tends to be worse than the code

18:21 you would write and also introduces more bugs. So when you account for the time that you spend

18:27 refactoring and debugging, you're still more productive. But then it also depends on the

18:32 type of project, as Gregory was saying. So it's better for greenfield projects, it's better for

18:36 smaller code bases. It's better for simpler problems and it's better for more popular languages because

18:41 obviously there's more training data. And so this was actually, I like this study so much. I'll

18:46 actually share it with you, Michael, if you want to put it in the show notes, but it shows that,

18:50 yeah, the picture is not that simple and all this conflicting information and conflicting experiences

18:54 people were having line up completely with this. So again, like I work at an IDE company, it's tools

19:00 for the job. It's not like your IDE will replace you. AI is not going to replace you. It's just

19:06 going to make you maybe more productive sometimes.

19:08 Yeah.

19:09 Wait, IDE, you work for me.

19:11 Right.

19:12 It's not about you.

19:13 But then I work for the IDE.

19:18 This portion of Talk Python To Me is brought to you by Sentry.

19:22 Let me ask you a question.

19:24 What if you could see into the future?

19:26 We're talking about Sentry, of course.

19:28 So that means seeing potential errors, crashes, and bugs before they happen, before you even

19:33 accept them into your code base.

19:35 That's what Sentry's Seer AI Code Review offers.

19:39 You get error prediction based on real production history.

19:43 Seer AI Code Review flags the most impactful errors your PR is likely to introduce before merge

19:50 using your app's error and performance context, not just generic LLM pattern matching.

19:55 Seer will then jump in on new PRs with feedback and warnings if it finds any potential issues.

20:01 Here's a real example.

20:03 On a new PR related to a search feature in a web app, we see a comment from the Seer by Sentry bot in the PR.

20:11 And it says, potential bug, the process search results function, can enter an infinite recursion when a search query finds no matches.

20:19 As the recursive call lacks a return statement and a proper termination condition.

20:24 And Seer AI Code Review also provides additional details which you can expand for further information on the issue and suggested fixes.

20:32 And bam, just like that, Seer AI Code Review has stopped a bug in its tracks without any

20:37 devs in the loop.

20:38 A nasty infinite recursion bug never made it into production.

20:42 Here's how you set it up.

20:43 You enable the GitHub Sentry integration on your Sentry account, enable Seer AI on your

20:49 Sentry account, and on GitHub, you install the Seer by Sentry app and connect it to your

20:53 repositories that you want it to validate.

20:55 So jump over to Sentry and set up Code Review for yourself.

20:59 Just visit talkpython.fm/seer-code-review.

21:03 The link is in your podcast player show notes and on the episode page.

21:06 Thank you to Sentry for supporting Talk Python and me.

21:10 I mean, the other thing is a lot of people and a lot of the sort of when people talk about AI

21:15 and LLMs and so forth in context of coding, it's the LLM writing code for us.

21:20 And maybe because I'm not doing a lot of serious coding,

21:23 it's more instruction and so forth.

21:25 I use it as, like, a sparring or brainstorming partner. So it does, you know, checking of my newsletters for language and for tech edits, and just sort of exploring ideas.

21:37 And for that, maybe it's because I do everything in the last minute and I don't have other people around or I'm lazy or cheap and don't want to pay them.

21:43 But definitely the quality of my work has improved dramatically.

21:46 The quality of my understanding has improved, even if it never wrote a line of code for me.

21:50 Just getting that feedback on a regular automatic basis is really helpful.

21:54 Yeah, I totally agree with you.

21:55 All right. We don't want to spend too much time on this topic, even though I believe Jodie has put her finger on what might be the biggest tidal wave of 2025.

22:06 But still, quick parting thoughts. Anyone else?

22:08 I'm glad I'll never have to write Bash from scratch ever again.

22:12 Tell me about it.

22:13 Yeah.

22:15 I'll just say from anecdotally, the thing that I love about it is when I need to do something

22:22 and I need to go through docs, online docs for whatever it is, you know, it might be

22:28 GitLab or some library that I want to use or something like that.

22:32 I never even search for the docs.

22:33 I never even try to read the docs anymore.

22:35 I just say, hey, you know, whatever model I need to set up this website.

22:41 And I just, just tell me what to do or just do it.

22:43 And it's an immense time saver and productivity.

22:47 And then it gets me bootstrapped to the place where now I can start to be creative.

22:52 I don't have to worry about just like digging through pages and pages and pages of docs to

22:57 figure out one little setting here or there.

23:00 That's an amazing time saver.

23:02 Yeah, that's a really good point.

23:03 Another thing that I have noticed, there might be many things for which I had a really good

23:07 mental model, but my brain can only store so much information.

23:10 So for example, I know lots about the abstract syntax tree for Python, but I forget that

23:16 sometimes.

23:16 And so it's really nice for me to be able to bring that back into my mind quickly with

23:21 an LLM.

23:22 And if it's generating code for me that's doing a type of AST parsing, I can tell whether

23:27 that's good code or not because I can refresh that mental model.

23:30 So in those situations, it's not only the docs, but it's something that I used to know

23:35 really well that I have forgotten some of.

23:37 And the LLM often is very powerful when it comes to refreshing my memory and helping me to get started and move more quickly.
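The kind of AST refresher Gregory describes can be sketched in a few lines of standard-library Python; the function name and source snippet here are purely illustrative:

```python
import ast

# Parse a tiny module and collect its top-level function names --
# the sort of quick AST mental-model refresher discussed above.
source = "def greet(name):\n    return f'hello {name}'\n"
tree = ast.parse(source)
functions = [node.name for node in ast.walk(tree) if isinstance(node, ast.FunctionDef)]
print(functions)
```

With that mental model back in place, it is much easier to judge whether generated AST-parsing code is doing something sensible.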

23:44 All right. Out of time, I think. Let's move on to Brett. What do you got, Brett?

23:48 Well, I actually originally said we should talk about AI, but Jodie had a way better pitch for it than I did, because my internal pitch was a little bit, "AI.

23:56 Do I actually have to write a paragraph explaining why?" Then Jodie actually did write the paragraph. So she did a much better job than I did.

24:01 So the other topic I had was using tools to run your Python code.

24:06 And what I mean by that is traditionally, if you think about it,

24:10 you install the Python interpreter, right?

24:13 Hopefully you create a virtual environment, install your dependencies,

24:16 and you call the Python interpreter in your virtual environment to run your code.

24:19 Those are all the steps you went through to run stuff.

24:21 But now we've got tools that will compress all that into a run command,

24:25 just do it all for you.

24:26 And it seems like the community has shown a level of comfort with that,

24:31 that I'd say snuck up on me a little bit, but I would say that I think it's a good thing, right?

24:37 It's showing us, I'm going to say us, as the junior core developer here on this call,

24:43 as to, sorry to make you two feel old, but admittedly, Barry did write my letter of recommendation

24:48 to my master's program.

24:51 So what happened was like, yeah, we had Hatch and PDM,

24:55 poetry before that, and uv as of last year, all kind of come through

24:59 and all kind of build on each other and take ideas from each other

25:02 and kind of just slowly build up this kind of repertoire of tool approaches

25:06 that they all kind of have a baseline kind of, not synergy is the right word,

25:09 but share just kind of approach to certain things with their own twists and added takes on things.

25:15 But in general, this whole like, you know what, you can just tell us to run this code

25:19 and we will just run it, right?

25:20 Like inline script metadata coming in and help making that more of a thing.

25:24 Disclaimer, I was the PEP delegate for getting that in.

25:27 But I just think that's been a really awesome trend And I'm hoping we can kind of leverage that a bit.

25:34 Like I have personal plans that we don't need to go into here,

25:36 but like I'm hoping as a Python core team, we can kind of like help boost this stuff up a bit

25:41 and kind of help keep a good baseline for this for everyone.

25:43 Because I think it's shown that Python is still really good for beginners.

25:45 You just have to give them the tools to kind of hide some of the details

25:49 to not shoot yourself in the foot and still leads to a great outcome.

25:52 Yeah, 2025 might be the year that the Python tools stepped outside of Python.

25:56 Instead of being, you install Python and then use the tools.

26:00 You do the tool to get Python, right?

26:02 Like uv and PDM and others.

26:03 Yeah, and inverted the dependency graph in terms of just how you put yourself in, right?

26:08 I think the interesting thing is these tools treat Python as an implementation detail almost, right?

26:13 Like when you just say uv or hatch run or PDM run thing,

26:17 these tools don't make you have to think about the interpreter.

26:19 It's just a thing that they pull in to make your code run, right?

26:22 It's not even necessarily something you have to care about if you choose not to.

26:26 And it's an interesting shift in that perspective, at least for me.

26:30 But I've also been doing this for a long time.

26:31 I think you're really onto something.

26:33 And what I love at sort of a high level is this, I think there's a renewed focus on the user experience.

26:40 And like uv plus the PEP 723, the inline metadata, you know, you can put uv in the shebang line of your script.

26:49 And now you don't have to think about anything.

26:52 You get uv from somewhere, and then it takes care of everything.
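The pattern Barry describes, PEP 723 inline script metadata plus uv in the shebang line, looks roughly like this; the script body and version bound are just a minimal illustrative sketch:

```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.9"
# dependencies = []  # list third-party packages here, e.g. "requests"
# ///

# With the shebang above, running ./script.py lets uv read this metadata,
# provision a matching interpreter and environment, and run the code --
# no manual virtual environment steps needed.
import sys

version = f"{sys.version_info.major}.{sys.version_info.minor}"
print(f"Running on Python {version}")
```

Sharing the single file is enough: the recipient needs uv, not a preconfigured Python setup.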

26:57 And Hatch can work the same way, I think, for developers.

27:01 But this renewed focus on installing your Python executable,

27:08 you don't really have to think about, because those things are very complicated,

27:12 and people just want to hit the ground running.

27:14 And so if you think about the previous discussion about AI,

27:18 I just want things to work.

27:20 I know what I want to do.

27:22 I can see it.

27:23 I can see the vision of it.

27:24 And I just don't want to.

27:25 An analogy is like when I first learned Python and I came from C++ and all those languages.

27:32 And I thought, oh my gosh, just to get like, hello world,

27:35 I have to do a million little things that I shouldn't have to do.

27:39 Like create a main and get my braces right and get all my variables right and get my pound includes correct.

27:46 And now I don't have to think about any of that stuff.

27:49 And the thing that was eye-opening for me with Python was the distance between vision of what I wanted and working code just really narrowed.

28:00 And I think that as we are starting to think about tools and environments and how to bootstrap all this stuff, we're also now taking all that stuff away.

28:09 Because people honestly don't care.

28:11 I don't care about any of that stuff.

28:13 I just want to go from like, I woke up this morning and had a cool idea and I just wanted to get at work.

28:18 Or you wanted to share it so you could just share the script and you don't have to say,

28:22 here's your steps that you get started with.

28:24 Exactly.

28:25 Exactly.

28:26 I want to thank the two of you for, oh, sorry, sorry, go ahead.

28:28 I'm just going to say, like, for years teaching Python that how do we get it installed?

28:34 At first, it surprised me how difficult it was for people.

28:37 Because like, oh, come on, we just got Python.

28:39 Like, what's so hard about this?

28:40 But it turns out it's a really big barrier to entry for newcomers.

28:45 And I'm very happy that JupyterLite now has solved its problems with input.

28:49 And it's like huge.

28:50 But until now, I hadn't really thought about starting with uv because it's cross-platform.

28:57 And if I say to people in the first 10 minutes of class, install uv for your platform and

29:02 then say uv init your project, bam, you're done.

29:05 It just works.

29:06 And then it works cross-platform.

29:07 This is mind-blowing.

29:08 And I'm going to try this at some point.

29:10 Thank you.

29:10 I can comment on the mind-blowing part because now when I teach undergraduate students, we

29:14 start with uv in the very first class. And it is awesome. There were things that would take students,

29:20 even very strong students who've had lots of experience, it would still take them a week to

29:26 set everything up on their new laptop and get everything ready and to understand all the key

29:30 concepts and know where something is in their path. And now we just say, install uv for your

29:36 operating system and get running on your computer. And then, hey, you're ready to go. And I don't have

29:42 to teach them about Docker containers, and I don't have to tell them how to install Python with some

29:47 package manager. All of those things just work. And I think from a learning perspective, whether you're

29:52 in a class, or whether you're in a company, or whether you're teaching yourself, uv is absolutely

29:58 awesome. I'm actually wondering whether I am the one who is newest to Python here. I taught myself

30:05 Python in 2011, so I was at, like, the Python 2.7 stage, but it was my first programming language. I was just

30:12 procrastinating during my PhD. And I was like, I should learn to program. So I just taught myself

30:18 Python. And I can tell you, you do not come from an engineering background. And you're like,

30:23 what is Python? What is Python doing? Why am I typing Python to execute this hello world? And

30:29 if you're kind of curious, you get down a rabbit hole before you even get to the point where you're

30:33 just focusing on learning the basics. And so it's exactly, I was going to say with Reuven,

30:39 And like whether you thought about it for teaching, because we're now debating for Humble Data,

30:43 which is a beginner's data science community that I'm part of, whether we switch to uv.

30:48 This was Chuck's idea because it does abstract away all these details.

30:52 The debate I have is, is it too magic?

30:55 This is kind of the problem because I also remember learning about things like virtual

30:59 environments, because again, this was my first programming language and being like,

31:02 oh, it's a very good idea.

31:04 This is best practices.

31:05 And it's also a debate we have in PyCharm, right?

31:08 Like how much do you magic away the fundamentals versus making people think a little bit, but

31:15 I'm not sure.

31:15 All right.

31:15 Like, would you even let somebody run without a virtual environment?

31:19 That's like, that's a take you, that's a stance you could take.

31:21 I used to when I first learned Python, because it was too complicated, but then I learned

31:27 better.

31:28 But yes.

31:29 The consideration here is, like, hiding the magic. Hiding the details and having

31:34 all this magic just work is great, as long as it works.

31:38 And the question is, how is it going to break down and how are people going to know how to deal

31:43 when it breaks down, if you hide all the magic? And I think virtual envs were, or let's say before

31:49 we had virtual envs, installing packages was very much in the, you had to know all the details

31:54 because it was very likely going to break down in some way right before we had virtual envs,

32:00 because you would end up with weird conflicts or multiple copies of a package installed in

32:04 different parts of the system. When we got virtual envs, we sort of didn't have to worry about that

32:10 anymore, because we were trained that you can just blow away the virtual env and it just works.

32:14 And with uv, we're back into, this looks like a single installation. We don't know what's going

32:19 to go on, but we've learned, we as a community and also the people working on uv, we have learned

32:25 from those earlier mistakes or not, maybe not mistakes, but consequences of the design.

32:32 And they have created something that is, that appears to be very stable where it's unlikely

32:38 the magic will break.

32:39 And when the magic does break, it's obvious what the problem is or, or it automatically

32:44 fixes itself.

32:45 So, like, it's not reusing broken installations and that kind of thing.

32:50 So the risk now, as it turns out, I think as is proven by the community adopting uv so

32:57 fast and so willingly, I think it's acceptable.

33:00 Well, I think it's, yeah, I think it's proven itself.

33:02 It's clear that this is, it's worth the potential of discovering weird edge cases later, both

33:09 because it's probably low likelihood, but also because the people behind uv, Astral, have proven that

33:16 they would jump in and fix those issues, right?

33:18 They would do anything they need to keep uv workable the same way.

33:23 And they have a focus that Python as a whole cannot have because they cater to fewer use

33:28 cases than Python as a whole needs to.

33:31 On the audience, Galano says, as an enterprise tech director in Python coder, I believe we

33:36 should hide the magic which empowers the regular employee to do simple things that make their

33:40 job easier.

33:41 Yeah.

33:41 This notion of abstractions, right, has always been there in computer science.

33:47 And, you know, we've used tools or languages or systems where we've tried to bring that

33:53 abstraction layer up so that we don't have to think about all these details, as I mentioned

33:58 before. The question is, that's always the happy path. And when I'm trying to teach somebody

34:04 something like, here's how to use this library or here's how to use this tool, I try to be very

34:09 opinionated to keep people on that happy path. Like, assume everything's going to work just right.

34:15 Here's how you just make you go down that path to get the thing done that you want. The question

34:19 really is when things go wrong, how narrow is that abstraction? And are you able, and even when

34:27 you're just curious, like what's really going on underneath the hood? Of course, that's not a really

34:31 good analogy today because cars are basically computers on wheels that you can't really

34:36 understand how they work. But back in your day. But back in my day, we were changing spark plugs,

34:42 you know, but and crank that window down. Exactly. So I think we always have to leave that

34:50 room for the curious and the bad path where when things go wrong or when you're just like,

34:56 you know what, I understand how this works, but I'm kind of curious about what's really going on.

35:01 How easy is it for me to dive in and get a little bit more of that background, you know,

35:07 a little bit more of that understanding of what's going on. I want the magic to decompose,

35:12 right? Like, you should be able to explain the magic path via more decomposed steps using the tool,

35:17 all the way down to what the tool does behind the scenes. Just to admit it, the reason

35:21 I brought this up, and I've been thinking about this a lot, is I'm thinking of trying to get the Python

35:26 Launcher to do a bit more. Because one interesting thing we haven't really brought up here is we're

35:31 all saying uv, uv, uv. uv is a company. They might disappear, and we haven't

35:36 de-risked ourselves from that. Now we do have Hatch, we do have PDM, but as I said, there's kind

35:41 of a baseline I think they all share that I think they would be probably okay if the Python launcher

35:45 just did because that's based on standards, right? Because that's the other thing that there's been

35:48 a lot of work that has led to this step, right? Like we've gotten way more packaging standards,

35:52 we've got PEP 723, like we mentioned. There's a lot of work that's come up to lead to this point

35:58 that all these tools can lean on to have them all have an equivalent outcome because it's expected

36:03 is how they should be.

36:05 And so I think it's something we need to consider of how do we make sure,

36:09 like, by the way, uv, I know the people, they're great.

36:12 I'm not trying to disparage them or say they're going to go away,

36:14 but it is something we have to consider.

36:16 And I will also say, Jody, I do think about this for teaching

36:20 because I'm a dad now and I don't want my kid coming home

36:24 when they get old enough to learn Python and go, hey, dad,

36:26 why is getting Python code running so hard?

36:29 So I want to make sure that that never happens.

36:32 But they fall in love with it from the start.

36:34 I realized something for the 2026 year interview.

36:37 I have to bring a sign that says time for next topic because we got a bunch of topics and we're running low on time.

36:43 So, Thomas, let's jump over to yours.

36:46 Oh, and I had two topics as well.

36:48 So I'm only going to have to pick my favorite child, right?

36:52 That's terrible.

36:53 My second favorite child is Lazy Imports, which is a relatively new development.

36:58 So we'll probably not get to that.

37:00 And just accepted.

37:00 Yes, it's been accepted and it's going to be awesome.
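(For the curious: the explicit syntax PEP 810 proposes isn't something you can run yet. As a rough illustration of the semantics, though, the standard library's documented `importlib.util.LazyLoader` recipe already defers a module's execution until first attribute access:)

```python
import importlib.util
import sys

def lazy_import(name):
    """Load `name` lazily: the module body only runs on first attribute access.

    This is the stdlib recipe from the importlib docs, not PEP 810's mechanism,
    shown here only to illustrate the idea of deferred imports.
    """
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)  # defers actual execution until first use
    return module

json = lazy_import("json")         # cheap: json's module body has not run yet
print(json.dumps({"lazy": True}))  # → {"lazy": true}  (first access triggers the import)
```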

37:03 So I'll just give that a shout out and then move to my favorite child, which is free threaded

37:06 Python.

37:07 For those who were not aware, the global interpreter lock is going away.

37:12 I am stating it as a fact.

37:13 It's not actually a fact yet, but it, you know, that's because the steering council hasn't

37:18 realized the fact yet.

37:20 It is trending that way.

37:22 Well, I was originally on the steering council that accepted the proposal to add free threading

37:28 as a, as an experimental feature, we had this idea of adding it as experimental and then making it

37:34 supported, but not the default and then making it the default. And it was all a little vague and,

37:39 and up in the air. And then I didn't get reelected for the steering council last year,

37:44 which I was not sad about at all. I sort of ran on a, well, if there's nobody better, I'll do it,

37:49 but otherwise I have other things to do. And it turns out those other things were making sure that

37:54 free-threaded Python landed in a supported state. So I lobbied the steering council quite hard,

38:00 as Barry might remember at the start of the year, to get some movement on this, like get some

38:04 decision going. So for Python 3.14, it is officially supported. The performance is great. It's like

38:10 between a couple of percent slower and 10% slower, depending on the hardware and the compiler that

38:16 you use. It's basically the same speed on macOS, which is really like it's, that's a combination of

38:23 the ARM hardware and Clang specializing things, but it's basically the same speed, which, wow.
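If you want to check what you're actually running, a small probe works on any recent version (`sys._is_gil_enabled()` only exists on 3.13+, hence the fallback):

```python
import sys
import sysconfig

# Py_GIL_DISABLED is 1 on free-threaded ("t") builds such as python3.14t.
free_threaded_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

# sys._is_gil_enabled() was added in 3.13; on older versions the GIL is always on.
gil_enabled = getattr(sys, "_is_gil_enabled", lambda: True)()

print(f"free-threaded build: {free_threaded_build}, GIL active: {gil_enabled}")
```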

38:29 And then on recent GCCs on Linux, it's like a couple of percent slower. The main problem is

38:35 really community adoption, getting third-party packages to update their extension modules for

38:40 the new APIs and the things that by necessity sort of broke, and also supporting free threading in a

38:48 in a good way and in packages for Python code, it turns out there's very few changes that

38:54 need to be made for things to work well under free threading.

38:57 They might not be entirely thread safe, but usually, almost always, in cases where it

39:02 wasn't thread safe before either, because the GIL doesn't actually affect thread safety.

39:07 Just the likelihood of things breaking.

39:09 I do think there's been a bit of a, the mindset of the Python community hasn't really been

39:14 focused on creating thread safe code because the GIL is supposed to protect us.

39:18 But soon as it takes multiple steps, then all of a sudden it's just less likely.

39:22 It's not that it couldn't happen.

39:23 Yeah, that's my point, right?

39:24 It's that the GIL never gave you thread safety.

39:27 The GIL gave CPython's internals thread safety.

39:31 It never really affected Python code and it very rarely affected thread safety in

39:36 extension modules as well.
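A minimal sketch of that multi-step point: `counter += 1` is a read-modify-write that was never atomic as a whole under the GIL either, so the lock below is what makes this correct on any build, free-threaded or not:

```python
import threading

COUNT = 200_000
counter = 0
lock = threading.Lock()

def add_safely():
    global counter
    for _ in range(COUNT):
        # 'counter += 1' is read, then add, then write back. Without the lock,
        # two threads can interleave between those steps and lose updates;
        # the GIL never prevented that interleaving, it only made it rarer.
        with lock:
            counter += 1

threads = [threading.Thread(target=add_safely) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # → 800000
```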

39:37 So they already had to take care of making sure that the global interpreter lock

39:41 couldn't be released by something that they ended up calling indirectly. So it's actually not that

39:46 hard to port most things to support free threading. And the benefits: we've seen some experimental

39:53 work because, you know, it's still new. There's still a lot of things that don't

39:56 quite support it. There's still places where thread contention slows things down a lot. But

40:02 we've seen a lot of examples of really promising, very parallel problems that now speed up by 10x or

40:09 more. And it's going to be really exciting in the future. And it's in 2025 that this all started.

40:15 I mean, Sam started it earlier, but he's been working on this for years, but it landed in 2025.

40:21 It dropped its experimental stage in 3.14, basically. Yeah. I was going to say, were we all,

40:27 the three of us on the steering council at the same time when we decided to start the experiment

40:30 for free threading? I think Barry wasn't on it. Yeah, I missed a couple of years there, but I'm

not sure.

40:36 No, I totally agree.

40:37 I think free threading is one of the most transformative developments for Python, certainly since Python 3, but even maybe more impactful because of the size of the community today.

40:49 Personally, you know, not necessarily speaking as a current or potentially former steering council member.

40:56 We'll see how that shakes out.

40:58 But I think it's inevitable.

41:00 I think free threading is absolutely the future of Python, and I think it's going to unlock incredible potential and performance.

41:08 I think we just have to do it right.

41:10 And so I talked to lots of teams who are building various software all over the community.

41:16 And I actually think it's more of an educational and maybe an outreach problem than it is a technological problem.

41:23 I mean, yes, there are probably APIs that are missing that will make people's lives easier.

41:30 There's probably some libraries that will make other code a little easier to write or whatever or to understand.

41:36 But like all that's solvable.

41:38 And I think really reaching out to the teams that are, you know, like Thomas said,

41:42 that are building the ecosystem, that are moving the ecosystem to a free threading world.

41:47 That's where we really need to spend our effort on.

41:50 And we'll get there.

41:51 It won't be that long.

41:52 It certainly won't be as long as it took us to get to Python 3.

41:56 I'm sort of curious as someone who's not super experienced with threading or, you know, basic concurrency.

42:04 I mean, I've used it, but I feel like now we have threads, especially with free threading and sub interpreters and multiprocessing and asyncio.

42:14 And I feel like for many people now it's like, oh, my God, which one am I supposed to use?

42:19 And for someone who's experienced, you can sort of say, well, this seems like a better choice.

42:24 But are there any plans to sort of try to have a taxonomy of what problems are solved by which of these?

42:31 The premise here is that everyone would be using one or more of these low-level techniques that you mentioned.

42:37 And I think that's not a good way of looking at it.

42:40 Like AsyncIO is a library that you want to use for the things that AsyncIO is good at.

42:46 And you can actually very nicely combine it with multiprocessing, with subprocesses, with

42:52 subinterpreters (just to make it clear that subprocesses and subinterpreters are two very separate

42:57 things), and with multithreading, both with and without free threading.

43:01 And it solves different problems or it gives you different abilities within the AsyncIO

43:06 framework.

43:06 And the same is true for like GUI frameworks.

43:09 I mean, GUI frameworks usually want threads for multiple reasons, but you can use these

43:14 other things as well.

43:15 I don't think it's down to teaching end users when to use or avoid all these different things.

43:22 I think we need higher level abstractions for tasks that people want to solve.

43:27 And then those can decide on what for their particular use case is a better approach.

43:33 For instance, PyTorch has multiple.

43:36 So it's used, for people who don't know, to train, not just train, but it's used in AI

43:42 for generating large matrices and LLMs and what have you.

43:46 Part of it is loading data and processing.

43:49 And the basic ideas of AsyncIO are, oh, you can do all these things in parallel

43:55 because you're not waiting on the CPU, you're just waiting on IO.

43:58 Turns out it is still a good idea to use threads for massively parallel IO

44:02 because otherwise you end up waiting longer than you need to.

44:05 So a problem where we thought AsyncIO would be the solution

44:10 and we never needed threads is actually much improved if we tie in threads as well.

44:15 And we've seen massive, massive improvements in data loaders.

44:19 There's even an article, a published article from some people at Meta

44:24 showing how much they improve the PyTorch data loader by using multiple threads.
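The combination Thomas describes, asyncio coordinating while threads do the blocking work, is roughly what `asyncio.to_thread` gives you. `blocking_read` here is a hypothetical stand-in for real blocking I/O:

```python
import asyncio
import time

def blocking_read(i):
    # Stand-in for blocking I/O (file reads, a C extension that releases the GIL).
    time.sleep(0.05)
    return i * 2

async def main():
    # asyncio coordinates the concurrency; each call runs in a worker thread,
    # so the blocking sleeps overlap instead of running back to back.
    return await asyncio.gather(
        *(asyncio.to_thread(blocking_read, i) for i in range(4))
    )

print(asyncio.run(main()))  # → [0, 2, 4, 6]
```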

44:29 But at a very low level, we don't want end users to need to make that choice, right?

44:33 I think concurrent.futures is a good point, right?

44:35 Like all of these approaches are all supported there and it's a unified one.

44:39 So if you were to teach this, for instance, you could say use concurrent.futures.

44:42 These are all there.

44:44 This is the potential tradeoff.

44:45 Like, basically, use threads.

44:46 It's going to be the fastest, unless there's some module you have that's

44:51 screwing up because of threads; then use subinterpreters.

44:53 And if for some reason subinterpreters don't work, you should move to

44:57 the process pool.

44:58 But I mean, basically, you just kind of go for the fastest stuff,

45:02 and if for some reason it doesn't work,

45:03 use the next fastest and just kind of do it that way.

45:05 After that, then you start to the lower level.

45:08 Like, okay, why do I want to use subinterpreters instead of threads?

45:11 Those kinds of things.
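Brett's "use the fastest, fall back as needed" advice works because `concurrent.futures` keeps the same submit/map API across concurrency models; a sketch (with `work` as a made-up task) of how swapping executors is a one-line change:

```python
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def work(n):
    # A small CPU-bound task; with free threading, the thread pool can run
    # these truly in parallel.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Identical API across executors, so changing the concurrency model is
    # one line: ThreadPoolExecutor -> ProcessPoolExecutor, or
    # InterpreterPoolExecutor on 3.14+.
    with ThreadPoolExecutor(max_workers=4) as pool:
        threaded = list(pool.map(work, [10, 100, 1000]))
    with ProcessPoolExecutor(max_workers=4) as pool:
        processed = list(pool.map(work, [10, 100, 1000]))
    print(threaded == processed)  # → True
```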

45:12 But I think that's a different, as I think we're all searching,

45:15 a different level of abstraction, which is a term we keep bringing up today.

45:19 It's a level that a lot of people are not going to have to care about.

45:21 I think the libraries are the ones that are going to have to care about this

45:23 and who are going to do a lot of this for you.

45:25 Let me throw this out on our way out the door to get to Reuven's topic.

45:29 I would love to see it solidify around async and await.

45:33 And you just await a thing, maybe put a decorator on something.

45:36 say this, this one, I want this to be threaded. I want this to be IO. I want this. And you don't,

45:42 you just use async and await and don't have to think about it, but that's, that's my dream.

45:46 Reuven, what's your dream?

45:48 Wow. How long do you have?

45:51 No, what's your topic?

45:52 So I want to talk about the Python ecosystem and funding. When I talk to people about Python

45:59 and how it's open source, they're like, oh, right, it's open source. That

46:01 means I can download it for free. And from their perspective, that's sort of where it starts

46:05 and ends. And the notion that people work on it, the notion that it needs funding, the notion that

46:10 there's a Python software foundation that supports a lot of these activities, the infrastructure

46:15 is completely unknown to them and even quite shocking for them to hear. But Python is in many

46:21 ways, I think, starting to become a victim of its own success, that we've been dependent on

46:27 companies for a number of years to support developers and development. And we've been

46:32 assuming that the PSF, which gives money out to lots of organizations to run conferences and

46:38 workshops and so forth, can sort of keep scaling up and that they will have enough funding. And

46:43 we've seen a few shocks to that system in the last year. The PSF announced,

46:48 about a year ago, that it was paring down what it would give

46:52 money for. And then about five or six months ago, I think it was in July or August,

46:56 they said, actually, we're not going to be able to fund anything for about a year now.

47:00 And then there was the government grant, I think from the NSF that they turned down. And I'm not disputing the reasons for that at all. It basically, it said, well, we'll give you the money if you don't worry about diversity and inclusion. And given that that's like a core part of what the PSF is supposed to do, they could not do that without shutting the doors, which would be kind of counterproductive.

47:17 And so I feel like we're not yet there, but we're sort of approaching what I'm going to term a crisis in funding Python.

47:27 The needs of the community keep growing and growing, whether it's workshops, whether it's PyPI, whether it's conferences.

47:32 And companies are getting squeezed.

47:35 And the number of people, it always shocks me every time there are PSF elections, the incredibly small number of people who vote.

47:42 Which means that, let's assume, half the people who are members, a third of the people.

47:46 Like for the millions and millions of people who program Python out there, an infinitesimally

47:50 small proportion of them actually join and help to fund it.

47:53 So I'm not quite sure what to do with this other than express concern.

47:57 But I feel like we've got to figure out ways to fund Python and the PSF in new ways that

48:01 will allow it to grow and scale as needed.

48:04 I couldn't agree more.

48:06 Obviously, the PSF is close to my heart because I was on the board for, I think, a total of

48:11 six or nine years or something over, you know, the last 25.

48:15 Also, for six months, I was the interim general manager, because Ewa left and we hadn't

48:21 hired Deb yet while I was on the board.

48:23 I remember signing the sponsor contracts for the companies that came in wanting to sponsor

48:29 Python.

48:29 And it is like, it's ridiculous how, and I can say this working for a company that is

48:35 one of the biggest sponsors of the PSF and has done so for years.

48:38 It's ridiculous how small those sponsorships are and yet how grateful we were that they

48:45 came in because every single one has such a big impact.

48:48 You can do so much good with the money that comes in.

48:52 We need more corporate sponsorships more than we need.

48:55 Like, I mean, obviously a million people giving us a couple of bucks, giving the PSF, let's

49:00 be clear.

49:00 I'm not on the board anymore.

49:01 Giving the PSF a couple of bucks would be fantastic.

49:05 But I think the big players in the big corporate players where all the AI money is, for instance,

49:12 having done basically no sponsorship of the PSF is mind-boggling. It is a textbook

49:18 tragedy of the commons right there, right? They rely entirely on PyPI and PyPI is run entirely

49:25 with community resources, mostly because of very generous and consistent sponsorship,

49:31 basically by Fastly, but also the other sponsors of the PSF. And yet very large players use those

49:38 resources more than anyone else and don't actually contribute. Georgie Kerr, she wrote this fantastic

49:45 blog post saying pretty much this straight after Europython. So Europython this year was really big

49:52 actually. And she was wandering around looking at the sponsor booths and the usual players were there,

49:57 but none of these AI companies were there. And the relationship actually between AI, if you want to

50:03 call it that. Let's call it ML and neural networks. And like some of the really big companies and

50:09 Python actually is really complex. Obviously, a lot of these companies and some of us are here,

50:15 employ people to work on Python. Companies like Meta and Google have contributed massively to

50:21 frameworks like PyTorch, TensorFlow, Keras. So it's not as simple a picture as saying cough up money

50:27 all the time. Like there's a more complex picture here, but definitely there are some notable

50:32 absences. And we talked about the volume of money going through. I totally agree with the sentiment.

50:39 When the shortfall came and the grants program had to shut down, we were brainstorming at JetBrains,

50:45 like maybe we can do some sort of, I don't know, donate some more money and call other companies

50:51 to do it. Or we can call on people in the community. And I was like, I don't want to call

50:55 on people in the community to do it because they're probably the same people who are also

50:59 donating their time for Python. Like it's just squeezing people who give so much of themselves

51:05 to this community even more. And it's not sustainable. Like Reuven said, if we keep doing

51:10 this, the whole community is going to collapse. Like I'm sure we've all had our own forms of

51:16 burnout from giving too much. I'm going to pat ourselves on the back here. Everyone on this

51:19 call who works at a company, their companies are all sponsors of the PSF. Thank goodness. But there's obviously a

51:25 lot of people not on this call who are not sponsors. And I know personally, I wished every

51:29 company that generated revenue from Python usage donated to the PSF. And I

51:35 think part of the problem is some people think it has to be, like, a hundred thousand dollars. It does

51:38 not have to be a hundred thousand dollars. Now, if you can afford that amount, please do, or more; there

51:43 are many ways to donate more than the maximum amount for getting on the website. But it's one

51:48 of these funny things where a lot of people just go, oh, it's not me, right? Like, even startups don't.

51:51 Some do, to give those ones credit, but others don't, because, like, oh, we're burning through

51:56 capital. And I was like, yeah, but we're asking for less than you'd pay a dev, right? By

52:01 a lot, per year. Right? Like, the amount we actually ask for to get to the highest tier is still less

52:07 than a common developer in Silicon Valley, if we're going to price it to a geographical

52:13 location we can kind of comprehend. I'm going to steal a Ned Batchelder observation here, and yeah, what

52:18 the PSF would be happy with is less than a medium company spends on the tips of expensed meals every

52:25 year. Yeah. Yeah. And it's a long-running problem, right? Like, I mean, I've been on the PSF for a

52:31 long time, too. I've not served as many years as Thomas on the board, but I was like executive

52:36 vice president because we had to have someone with that title at some point. It's always been a

52:40 struggle, right? Like I and I also want to be clear, I'm totally appreciative of where we have

52:45 gotten to, right? Because for the longest time, I was just dying for paid staff on the core team.

52:50 And now we have three developers in residence. Thank goodness. Still not enough, to be clear.

52:55 I want five.

52:56 And I've always said that, but I'll happily take three.

52:58 But it's one of these things where it's a constant struggle.

53:00 And it got a little bit better before the pandemic

53:03 just because everyone was spending on conferences

53:05 and PyCon US is a big driver for the Python Software Foundation.

53:08 And I know EuroPython's a driver for the EuroPython Society.

53:12 But then COVID hit and conferences haven't picked back up.

53:15 And then there's a whole new cohort of companies that have come in post-pandemic

53:19 that have never had that experience of going to PyCon and sponsoring PyCon.

53:22 And so they don't think about, I think, sponsoring PyCon or

53:25 the PSF because that's also a big kind of in your face, you should help sponsor this.

53:29 And I think it's led to this kind of lull where overall spending has gone down, new entrants

53:33 into the community have not had that experience and thought about it. And it's led to this kind

53:37 of dearth where, yeah, the PSF had to stop giving out grant money. And it sucks. And I would love

53:43 to see it not be that problem. I want to add one interesting data point that I discovered.

53:47 Keep it short. Yes. NumFOCUS has about twice the budget of the PSF. I was shocked to

53:53 discover this. So basically it is possible to get money from companies to sponsor development of

54:00 Python related projects. And I don't know what they're doing that we aren't. And I think it's

54:05 worth talking and figuring it out. We need a fundraiser and marketer in residence, maybe. Who

54:10 knows? Lauren does a great job, to be clear. The PSF has Lauren and Lauren is that. But it's still

54:18 hard. We have someone doing it full time at the PSF, and it's just hard to get companies to give

54:22 up cash.

54:23 Yeah, and what do we get in return?

54:25 Well, we already get that.

54:26 So, yeah, I know.

54:27 All right, Barry.

54:28 To just, you know, shift gears into a different area,

54:32 something that I've been thinking a lot over this past year on the steering council.

54:36 Thomas, I'm sure, is going to be, you know, very well aware,

54:39 having been instrumental in the lazy imports PEP, PEP 810.

54:45 We have to sort of rethink how we evolve Python

54:50 and how we propose changes to Python and how we discuss those changes in the community.

54:56 Because I think one of the things that I have heard over and over and over again is that authoring PEPs

55:04 is incredibly difficult and emotionally draining and it's a time sink.

55:11 And leading those discussions on discuss.python.org, which we typically call DPO,

55:18 can be toxic at times and very difficult.

55:21 So one of the things that I realized as I was thinking about this

55:25 is that PEPs are 25 years old now, right?

55:29 So we've had this, and not only are PEPs old,

55:33 but like we've gone through at least two, if not more sort of complete revolutions

55:38 in the way we discuss things.

55:40 You know, the community has grown incredibly.

55:43 The developer community is somewhat larger, but just the number of people

55:47 who are using Python and who have an interest in it has grown exponentially. So it has become

55:54 really difficult to evolve the language in the standard library and the interpreter. And we need

56:01 to sort of think about how we can make this easier for people and not lose the voice of the user.

56:09 And the number of people who actually engage in topics on DPO is the tip of the iceberg. You know,

56:14 We've got millions and millions of users out there in the world who, for example, lazy imports will affect, free threading will affect and don't even know that they have a voice.

56:24 And maybe we have to basically represent that, but we have to do it in a much more collaborative and positive way.

56:31 That's something that I've been thinking about a lot.

56:33 And whether or not I'm on the steering council next year, I think this is something that I'm going to spend some time on trying to think about, you know, talk to people about ways we can make this easier for everyone.

56:43 The diversity of use cases for Python in the last couple of years.

56:47 So complex.

56:48 Yes, exactly.

56:49 It should also be prefaced that Barry created the PEP process. He should have started with that one.

56:55 It is that old.

56:57 Yeah.

57:02 By the way, just so everyone knows, these are not age jokes to be mean to Barry.

57:02 We've all known Barry long enough that we know Barry's okay with us making these jokes.

57:07 To be very, very clear.

57:07 Also, I am almost as old as Barry, although I don't look as old as Barry.

57:12 Yeah, we're all around the same age anyways.

57:15 Yeah, Barry and I have known each other for 25 years,

57:18 and I've always made these jokes of him.

57:21 So it is different when you know each other in person.

57:26 Let's put it that way.

57:28 For the PEP process, I think for a lot of people, it's not obvious how difficult the process is.

57:35 I mean, it wasn't even obvious to me.

57:37 I saw people avoiding writing peps multiple times, and I was upset, like on the steering council, right?

57:43 I saw people making changes where I thought, this is definitely something

57:47 that should have been discussed in a PEP and the discussion should be recorded in a PEP and all that.

57:51 And I didn't understand why they didn't until, basically, until PEP 810.

57:56 So I did PEP 779, which was giving free threading supported status

58:02 at the start of the year.

58:03 And the discussion there was, you know, sort of as expected and it's already,

58:08 was already an accepted PEP.

58:10 It was just the question of how does it become supported?

58:12 That one wasn't too exhausting.

58:14 And then we got to Lazy Imports, which was Pablo, who is another steering council member,

58:20 as well as a bunch of other contributors, including me and two of my co-workers and

58:24 one of my former co-workers, who had all had a lot of experience with Lazy Imports, but

58:29 not necessarily as much experience with the PEP process.

58:32 And Pablo took the front seat because he knew the PEP process and he's done like five PEPs

58:37 in the last year or something, some ridiculous number.

58:40 And he shared with us the vitriol he got, like, offline, for just the audacity of proposing

58:49 something that people disagreed with or something. And that was like, this is a technical suggestion.

58:54 This is not a code of conduct issue where I have received my fair share of vitriol around.

59:00 This is a technical discussion. And yet he gets this, these ridiculous accusations in his mailbox.

59:06 And for some reason, only the primary author gets it as well, which is just weird to me.

59:12 But people are lazy,

59:13 Thomas, is what I think you just said.

59:15 Remember, the steering council exists because Guido got the brunt of this for PEP 572, which was the walrus operator.

59:24 Right. Which is just like this minor little syntactic thing that is kind of cool when you need it.

59:31 But like just the amount of anger and negative energy and vitriol that he got over that was enough to for him to just say, I'm out, you know, and you guys figure it out.

59:42 And that cannot be an acceptable way to discuss the evolution of the language.

59:48 Especially since apparently now every single PEP author of any contentious or semi-contentious PEP.

59:55 Although I have to say, PEP 810 had such broad support.

59:59 It was hard to call it contentious.

01:00:01 It's just there's a couple of very loud opinions, I guess.

01:00:04 And I'm not saying we shouldn't listen to people.

01:00:06 We should definitely listen to especially contrary opinions.

01:00:11 But there has to be a limit.

01:00:12 There has to be an acceptable way of bringing things up.

01:00:15 There has to be an acceptable way of saying, hey, you didn't actually read the PEP;

please go back and reconsider everything you said after you've fully digested it,

01:00:27 because everything's already been addressed in the PEP.

01:00:29 It's just really hard to do this in a way that doesn't destroy the relationship with the person you're telling this, right?

01:00:37 It's hard to tell people, hey, I'm not going to listen to you because you haven't, you know, you've done a bad job.

01:00:44 You've chosen not to inform yourself.

01:00:46 I think you make another really strong point, Thomas, which is that there have been changes that have been made to Python that really should have been a PEP.

01:00:55 And they aren't because people don't want to go through core developers, don't want to go through this gauntlet.

01:01:01 And so they'll create a PR and just land that.

01:01:03 But that's also not good because then, you know, we don't have that.

01:01:06 We don't have the right level of consideration.

01:01:11 And you think about the way that, you know, if you're in your job and you're making a change to something in your job, you have a very close relationship to your teammates.

01:01:20 And so you have that kind of respect and hopefully, right, like compassion and consideration.

01:01:26 And you can have a very productive discussion about a thing and you may win some arguments and you may lose some arguments, but the team moves forward as one.

01:01:35 And I think we've lost a bit of that in Python.

01:01:38 So that's not great.

01:01:40 I think society in general could use a little more civility and kindness, especially to strangers that they haven't met in forums, social media, driving, you name it.

01:01:50 Okay, but we're not going to solve that here, I'm sure.

01:01:54 So instead, let's do Gregory's topic.

01:01:57 Hey, I'm going to change topics quite a bit, but I wanted to call 2025 the year of type checking and language server protocols.

01:02:04 So many of us probably have used tools like mypy to check to see if the types line up in our code or whether or not we happen to be overriding functions correctly.

01:02:14 And so I've used mypy for many years and loved the tool and had a great opportunity to chat with the creator of it.

01:02:20 And I integrate that into my CI and it's really been wonderful.

01:02:24 And I've also been using a lot of LSPs, like, for example, Pyright or Pylance.

01:02:28 But in this year, one of the things that we've seen is, number one, Pyrefly from the team at Meta.

01:02:34 We've also seen ty from the team at Astral.

01:02:36 And there's another one called Zuban.

01:02:38 And Zuban is from David Halter.

01:02:41 David was also the person who created Jedi, which is another system in Python that helped with a lot of LSP tasks.

01:02:48 What's interesting about all three of the tools that I just mentioned

01:02:51 is that they're implemented in Rust, and they have taken a lot of the opportunity to make the type checker

01:02:58 and or the LSP significantly faster.

01:03:01 So for me, this has changed how I use the LSP or the type checker and how frequently I use it.

01:03:07 And in my experience, it has helped me to take things that might take tens of seconds or hundreds of seconds and cut them down often to less than a second.

01:03:17 And it's really changed the way in which I'm using a lot of the tools like ty or Pyrefly or Zuban.

01:03:24 So I have some more details if I'm allowed to share, Michael, but I would say 2025 is the year of type checkers and LSPs.
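For a tiny illustration of what all of these checkers catch (`shout` is a made-up example function): the snippet runs fine at runtime, and the commented-out call at the bottom is exactly the kind of mistake mypy, Pyright, ty, Pyrefly, or Zuban reports without ever executing the code:

```python
def shout(message: str) -> str:
    # The annotations here are what the type checkers verify statically.
    return message.upper() + "!"

print(shout("ready"))  # → READY!

# A checker flags the next line ('int' is not assignable to 'str') just by
# reading the source; uncomment it to see the report:
# shout(3)
```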

01:03:31 I think given the timing, let's have people give some feedback.

01:03:34 I personally have been using Pyrefly a ton and am a big fan of it.

01:03:38 I don't know if I'm allowed to have an opinion that isn't Pyrefly is awesome.

01:03:43 I mean, I'm not on the Pyrefly team, but I do regularly chat with people from the Pyrefly team.

01:03:49 Tell people real quick what it is, Thomas.

01:03:51 So Pyrefly is Meta's attempt at a Rust-based type checker.

01:03:56 And so it's very similar to ty.

01:03:58 Started basically at the same time, a little later.

01:04:01 Meta originally had a type checker called Pyre, which was written in OCaml.

01:04:06 They basically decided to start a rewrite in Rust.

01:04:09 And then that really took off.

01:04:11 And that's where we're going now.

01:04:13 Yeah.

01:04:14 Yeah.

01:04:14 I don't know what I can say because I'm actually on the same team as the Pylance team.

01:04:18 So, but no, I mean, I think it's good.

01:04:21 I think this is one of those interesting scenarios where some people realize like,

01:04:25 you know what, we're going to pay the penalty of writing a tool in a way that's faster,

01:04:30 but makes us go slower because the overall win for the community

01:04:33 is going to be a good win.

01:04:34 So it's worth that headache, right?

01:04:36 Not to say I don't want to scare people off from writing Rust, but let's be honest,

01:04:39 it takes more work to write Rust code than it does take to write Python code.

01:04:42 But some people chose to make that trade off and we're all benefiting from it.

01:04:46 The one thing I will say that's kind of interesting from this

01:04:48 that hasn't gotten a lot of play yet because it's still being developed,

01:04:51 But Pylance is actually working with the Pyrefly team to define a type server protocol, TSP, so that a lot of these type servers can just kind of feed the type information to a higher-level LSP and let that LSP handle things like symbol renaming and all that stuff.

01:05:05 Right. Because the key thing here and the reason there are so many different type checkers is there is a spec.

01:05:12 Right. And everyone's trying to implement it. But there are differences in terms of type inferencing.

01:05:16 And if you actually go listen to Michael's Talk Python To Me interview with the Pyrefly team,

01:05:21 they actually did a nice little explanation of the difference between Pyright's approach

01:05:25 and Pyrefly's approach.

01:05:27 And so there's a bit of variance.

01:05:28 But for instance, I think there's some talk now of trying to like, how do we make it

01:05:32 so everyone doesn't have to reimplement how to rename a symbol, right?

01:05:35 That's kind of boring.

01:05:35 That's not where the interesting work is.

01:05:37 And that's not the performance-critical part, from the perspective that you want to instantaneously get that squiggly red line

01:05:43 in whether it's VS Code or it's in PyCharm or whatever your editor is, right?

01:05:48 You want to get it as fast as possible, but the rename-

01:05:51 Jupyter.

01:05:52 No, not Emacs.

01:05:53 Everything but Emacs.

01:05:56 Just to bring things full circle, it's that focus on user experience, right?

01:06:00 Which is, yes, you want that squiggly line, but when things go wrong,

01:06:04 when your type checker says, oh, you've got a problem,

01:06:07 you know, like I think about as an analogy, how Pablo has done an amazing amount of work

01:06:14 on the error reporting, right?

01:06:15 When you get an exception, you know, now you have a lot more clues about what it is that you actually have to change to fix the problem, right?

01:06:26 Like so many times years ago, you know, when people were using mypy, for example, and they'd have some complex failure of their type annotations and have absolutely no idea what to do about it.

01:06:40 And so getting to a place where now we're not just telling people you've done it wrong,

01:06:44 but also here's some ideas about how to fix it.

01:06:49 I think this is a full circle here because honestly, using typing in your Python code

01:06:54 gives a lot of context to the AI when you ask for help.

01:06:57 If you just give it a fragment, it can't work with it.

01:06:59 That's true.

01:07:00 And also, if you can teach your AI agent to use the type checkers and use the LSPs,

01:07:06 it will also generate better code for you.
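The idea of letting an agent consult a type checker before running generated code can be sketched as a small tool wrapper. This is an illustrative assumption, not any particular agent framework's API: the `run_type_checker` helper shells out to mypy (assumed to be installed; ty, Pyrefly, or Zuban could be swapped in), and `correction_prompt` turns the diagnostics into feedback for the model.

```python
import subprocess


def run_type_checker(path: str) -> list[str]:
    # Invoke mypy (assumed installed and on PATH) on the generated file
    # and collect one diagnostic per non-empty line of output.
    result = subprocess.run(
        ["mypy", "--no-error-summary", path],
        capture_output=True,
        text=True,
    )
    return [line for line in result.stdout.splitlines() if line.strip()]


def correction_prompt(diagnostics: list[str]) -> str:
    # Turn the diagnostics into feedback the model sees before the code
    # is ever executed, short-circuiting the run-fail-retry loop.
    if not diagnostics:
        return "Type check passed; proceed to run the code."
    return "Fix these type errors before running:\n" + "\n".join(diagnostics)


print(correction_prompt(["app.py:3: error: Incompatible return value"]))
```

The agent loop would call `run_type_checker` on each generated file and feed `correction_prompt`'s result back to the model until the diagnostics list is empty.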

01:07:09 I think the one challenge I would add to what Barry said a moment ago is that if you're a developer and you're using, say, three or four type checkers at the same time, you also have to be careful about the fact that some of them won't flag an error that the other one will flag.

01:07:24 So I've recently written Python programs and even built a tool with one of my students named Benedek that will automatically generate Python programs that will cause type checkers to disagree with each other.

01:07:39 One tool will flag it as an error, but none of the other tools will flag it as an error.

01:07:45 And there are also cases where the new tools will all agree with each other, but disagree with mypy.

01:07:50 So there is a type checker conformance test suite.

01:07:53 But I think as developers, even though it might be the year of LSP and type checker,

01:07:57 we also have to be aware of the fact that these tools are maturing and there's still disagreement among them.

01:08:03 And also just different philosophies when it comes to how to type check and how to infer.

01:08:08 And so we have to think about all of those things as these tools mature and become part of our ecosystem.

01:08:12 Yeah, Greg, that last point is important.

01:08:14 Out of curiosity, how did the things where the type checkers disagree

01:08:18 match up with the actual runtime behavior of Python?

01:08:21 Was it like false positives or false negatives?

01:08:24 That's a really good question.

01:08:26 I'll give you more details in the show notes because we actually have it in a GitHub repository

01:08:30 and I can share it with people.

01:08:31 But I think some of it might simply be related to cases where mypy is more conformant to the spec, but the other new tools are not as conformant.

01:08:43 So you can import overload from typing and then have a very overloaded function.

01:08:48 And mypy will actually flag the fact that it's an overloaded function with multiple signatures, whereas Pyright and Pyrefly and Zuban will not actually flag that, even though they should.
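The overloaded-function case being described looks roughly like this; the function name and signatures are made-up illustrations, and whether a given checker flags a particular overload set varies by tool, as discussed.

```python
from typing import overload


# Two declared signatures plus the single runtime implementation that
# backs them; type checkers differ in how strictly they validate
# overload sets like this one.
@overload
def scale(value: int, factor: int) -> int: ...
@overload
def scale(value: float, factor: float) -> float: ...


def scale(value, factor):
    # The only body that exists at runtime; the @overload stubs above
    # are erased and exist purely for the type checkers.
    return value * factor


print(scale(2, 3))      # 6
print(scale(2.0, 0.5))  # 1.0
```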

01:09:00 Another big area is optional versus not optional.

01:09:03 Yes.

01:09:03 Like, are you allowed to pass a thing that is an optional string when the thing accepts a string?

01:09:09 Some stuff's like, yeah, it's probably fine.

01:09:10 Others are like, no, no, no.

01:09:11 This is an error that you have to do a check.

01:09:13 And if you want to switch type checkers, you might end up with a thousand warnings that you didn't previously have because of an intentional difference of opinion on how strict to be, I think.
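The optional-versus-required case Brett describes looks like this in practice (the names here are hypothetical): passing an `Optional[str]` where a plain `str` is expected is exactly what the stricter checkers flag, and the explicit `None` check narrows the type so all of them are satisfied.

```python
from typing import Optional


def shout(message: str) -> str:
    # Accepts only a definite string, never None.
    return message.upper() + "!"


def fetch_greeting(found: bool) -> Optional[str]:
    # May return None, so callers get an Optional[str].
    return "hello" if found else None


greeting = fetch_greeting(True)

# Passing `greeting` (an Optional[str]) straight into shout() is what
# the stricter checkers report as an error; narrowing it first with an
# explicit None check satisfies them all.
if greeting is not None:
    print(shout(greeting))  # HELLO!
```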

01:09:23 Yeah.

01:09:23 So you have to think about false positives and false negatives when you're willing to break the build because of a type error.

01:09:29 All of those things are things you have to factor in.

01:09:31 But to go quickly to this connection to AI, I know it's only recently, but the Pyrefly

01:09:37 team actually announced that they're making Pyrefly work directly with Pydantic AI.

01:09:42 So there's going to be an interoperability between those tools so that when you're building

01:09:46 an AI agent using Pydantic AI, you can also then have better guarantees when you're using

01:09:52 Pyrefly as your type checker.

01:09:53 It makes total sense, though, because then the reasoning LLM that's at the core of the

01:09:57 agent can actually have that information before it tries to execute the code and you don't get in that

01:10:04 loop that they often get in. You can correct it before it runs. Yeah, really good point. I want to

01:10:09 just sort of express my appreciation to all the people working on this typing stuff. As someone

01:10:14 who's come from many, many years in dynamic languages, I was always like, oh, typing. Those

B, I love seeing how easy it is for people to ease into it when they're in Python.

01:10:32 It's not all or nothing.

01:10:34 C, I love the huge number of tools.

01:10:36 The competition in this space is really exciting.

01:10:39 And D, guess what?

01:10:40 It really, really does help.

01:10:42 And I'll even add an E, which is my students who come from Java, C++, C#, and so forth

01:10:47 feel relief.

01:10:48 They find that without type checking, it's like doing a trapeze act without a safety

01:10:53 net.

01:10:53 And so they're very happy to have that typing in there.

01:10:58 So kudos to everyone.

01:10:59 All right, folks, we are out of time.

01:11:00 This could literally go for hours longer.

01:11:04 It was a big year.

01:11:05 It was a big year, but I think we need to just have a final word.

01:11:10 I'll start and we'll just go around.

01:11:12 So my final thought here is, we've talked about some things that are negatives

01:11:17 or sort of downers or whatever here and there, but I still think it's an incredibly exciting time

01:11:22 To be a developer, data scientist, there's so much opportunity out there.

01:11:26 There's so many things to learn and take advantage of and stay on top of.

01:11:29 And amazing.

01:11:30 Every day is slightly more amazing than the previous day.

01:11:33 So I love it.

01:11:34 Gregory, let's go to you next.

01:11:35 Let's go around the circle.

01:11:35 Yeah, I wanted to give a shout out to all of the local Python conferences.

01:11:40 I actually, on a regular basis, have attended the PyOhio conference.

01:11:44 And it is incredible.

01:11:45 The organizers do an absolutely amazing job.

01:11:49 And they have it hosted on a campus, oftentimes at Ohio State or Cleveland State University.

01:11:54 And incredibly, PyOhio is a free conference that anyone can attend with no registration fee.

01:12:01 So Michael, on a comment that I think is really positive, wow, I'm so excited about the regional

01:12:06 Python conferences that I've been able to attend.

01:12:08 Thomas.

01:12:09 Wow, I didn't expect this.

01:12:10 So I think I want to give a shout out to new people joining the community and also joining

the core developer team as triagers, or just drive-by commenters. I know we harped a little bit

01:12:22 on people, you know, giving strong opinions in discussions, but I always look to the far future

01:12:27 as well as the near future. And we always need new people. We need new ideas. We need new opinions. So

01:12:33 yeah, I'm, I'm excited that there's still people joining and signing up and even when it's

thankless work. So I guess I want to say thank you to people doing all the thankless work. Jodie.

01:12:44 Yeah, I want to say this is actually really only my third year or so really in the Python community.

01:12:52 So before that, I was just sort of on the fringes, right?

01:12:54 And after I started advocacy, I started going to the conferences and meeting people.

01:12:58 And I think I didn't kind of get how special the community was until I watched the Python documentary this year.

01:13:04 And I talked to Paul about this, Paul Everitt, afterwards, and also made fun of him for his early-2000s fashion.

01:13:11 but I think, yeah, like I'm a relative newcomer to this community and you've all made me feel so

01:13:18 welcome. And I guess I want to thank all the incumbents for everything you've done to make

01:13:24 this such a special tech community for minorities and everyone, newbies, you know.

01:13:30 Python is love. Oh, geez. How am I supposed to follow that?

01:13:37 I think one of the interesting things that we're kind of looping on here is

01:13:41 I think the language evolution has slowed down, but it's obviously not stopped, right?

01:13:45 Like as Thomas pointed out, there's a lot more stuff happening behind the scenes.

01:13:49 Lazy imports are coming, and that was a syntactic change, which apparently brings out the mean side of some people.

01:13:55 And we've obviously got our challenges and stuff, but things are still going.

01:13:58 We're still moving along.

01:13:59 We're still trying to be an open, welcoming place for people like Jody and everyone else who's new coming on over

01:14:05 and to continue to be a fun place for all of us slightly gray-bearded people who have been here for a long time

01:14:11 to make us want to stick around.

01:14:12 I think it's just more of the same, honestly.

01:14:15 It's all of us just continuing to do what we can to help out to keep this community being a great place.

01:14:20 And it all just keeps going forward.

01:14:22 And I'll just end with, if you work for a company that hasn't sponsored the PSF,

01:14:26 please do so.

01:14:26 It's rare to have, I mean, a programming language or any sort of tool

01:14:32 where it is both really, really beneficial to your career

01:14:36 and you get to hang out with really special, nice, interesting people.

01:14:40 And it's easy to take all that for granted if you've been steeped in the community.

01:14:45 I went to a conference about six months ago, a non-Python conference.

01:14:49 And that was shocking to me to discover that all the speakers were from advertisers and sponsors.

01:14:55 Everything was super commercialized.

01:14:57 People were not interested in just like hanging out and sharing with each other.

01:15:00 And it was a shock to me because I've been to basically only Python conferences for so many years.

01:15:05 I was like, oh, that's not the norm in the industry.

01:15:08 So we've got something really special going that not only is good for the people, but good for everyone's careers and mutually reinforcing and helping each other.

01:15:16 And that's really fantastic.

01:15:17 And we should appreciate that.

01:15:19 Barry, final word.

01:15:20 Thomas stole my thunder just a little bit, but just to tie a couple of these ideas together.

01:15:25 Python, and you know, Brett said this, right?

01:15:29 This is Python is the community or the community is Python.

01:15:33 There's no company that is telling anybody what Python should be.

01:15:38 Python is what we make it.

01:15:40 And, you know, as folks like myself get a little older and, you know, and we have younger people

01:15:47 coming into the community, both developers and everything else

01:15:51 who are shaping Python into their vision.

01:15:54 I encourage you, if you've thought about becoming a core dev,

01:15:58 find a mentor.

01:15:59 There are people out there that will help you.

01:16:01 If you want to be involved in the community, the PSF, you know, reach out.

01:16:06 There are people who will help guide you in this community. You can be involved. Do not let any self-imposed limitations stop you from

01:16:16 becoming part of the Python community in the way that you want to. And eventually run for the

01:16:22 steering council because we need many, many, many more candidates next year. And you don't need any

01:16:29 qualifications either because I'm a high school dropout and I never went to college or anything.

01:16:34 And look at me.

01:16:35 And I have a PhD and I will tell you, I did not need all that to become a Python developer

01:16:39 because I was the Python developer before I got the PhD.

01:16:42 I'm a bass player.

01:16:43 So if I can do it, anybody can do it.

01:16:47 Thank you everyone for being here.

01:16:49 This awesome look back on the year, and I really appreciate you all taking the time.

01:16:52 Thank you, Michael.

01:16:53 Thanks everybody.

01:16:54 Bye everybody.

01:16:57 This has been another episode of Talk Python To Me.

01:17:00 Thank you to our sponsors.

01:17:01 Be sure to check out what they're offering.

01:17:02 It really helps support the show.

01:17:04 Look into the future and see bugs before they make it to production.

01:17:08 Sentry's Seer AI Code Review uses historical error and performance information at Sentry

01:17:13 to find and flag bugs in your PRs before you even start to review them.

01:17:18 Stop bugs before they enter your code base.

01:17:20 Get started at talkpython.fm/seer-code-review.

01:17:25 If you or your team needs to learn Python, we have over 270 hours of beginner and advanced courses

01:17:30 on topics ranging from complete beginners to async code, Flask, Django, HTMX, and even LLMs.

01:17:37 Best of all, there's no subscription in sight.

01:17:40 Browse the catalog at talkpython.fm.

01:17:42 And if you're not already subscribed to the show on your favorite podcast player,

01:17:46 what are you waiting for?

01:17:47 Just search for Python in your podcast player.

01:17:49 We should be right at the top.

01:17:51 If you enjoy that geeky rap song, you can download the full track.

01:17:54 The link is actually in your podcast player's show notes.

01:17:56 This is your host, Michael Kennedy.

01:17:58 Thank you so much for listening.

01:17:59 I really appreciate it.

01:18:01 I'll see you next time.

