Just released: Talk Python in Production book

38 things Python developers should learn in 2025

Episode #524, published Mon, Oct 20, 2025, recorded Mon, Sep 22, 2025
Guests and sponsors
Python in 2025 is different. Threads really are about to run in parallel, installs finish before your coffee cools, and containers are the default. In this episode, we count down 38 things to learn this year: free-threaded CPython, uv for packaging, Docker and Compose, Kubernetes with Tilt, DuckDB and Arrow, PyScript at the edge, plus MCP for sane AI workflows. Expect practical wins and migration paths. No buzzword bingo, just what pays off in real apps. Join me along with Peter Wang and Calvin Hendryx-Parker for a fun, fast-moving conversation.

Watch this episode on YouTube
Play on YouTube
Watch the live stream version

Episode Deep Dive

Guests introduction and background

Peter Wang is the co-founder of Anaconda and currently serves as its Chief AI and Innovation Officer. He helped found the PyData community and has long focused on improving Python’s data and scientific tooling at scale. (Anaconda)

Calvin Hendryx-Parker is the co-founder and CTO of Six Feet Up, an AWS Hero and long-time leader in Python architecture and DevOps. He and his team maintain SCAF, a Kubernetes-ready blueprint for shipping Python apps. (All Things Open 2025)


What to Know If You’re New to Python

Here are a few episode-specific concepts that will help you get the most out of the breakdown below.

  • Free-threaded CPython (PEP 703): A build mode that removes the GIL so CPU-bound threads can run in parallel. Expect ecosystem work to make C extensions safe here. (Python Enhancement Proposals (PEPs))
  • uv and uvx: A faster, modern installer and environment manager that makes “clean, repeatable” installs feel instant. (Astral Docs)
  • Docker + Compose: Containers capture your app and its services so dev and prod stay in sync. (Docker Documentation)
  • Async vs threads: Async keeps the event loop free by awaiting I/O; use threads or processes for blocking or CPU-heavy work. (Python documentation)
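To make the free-threading point concrete before diving in, here is a small sketch (our own illustration, not code from the episode; the function name and counts are arbitrary) that times the same CPU-bound work on one thread and on four. On a standard GIL build the four-thread run takes roughly four times as long as the single-thread run; on a free-threaded build it can approach the single-thread time by using four cores.

```python
import sys
import time
from concurrent.futures import ThreadPoolExecutor

def count_down(n: int) -> int:
    # Pure-Python CPU-bound work: on a GIL build, threads take turns here;
    # on a free-threaded build, they can run on separate cores.
    total = 0
    while n > 0:
        total += n
        n -= 1
    return total

def run(workers: int, n: int = 2_000_000) -> float:
    """Time `workers` copies of count_down(n) running in a thread pool."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(count_down, [n] * workers))
    assert all(r == n * (n + 1) // 2 for r in results)  # sum of 1..n
    return time.perf_counter() - start

if __name__ == "__main__":
    # sys._is_gil_enabled() exists on 3.13+; assume the GIL is on elsewhere.
    gil_on = getattr(sys, "_is_gil_enabled", lambda: True)()
    print(f"GIL enabled: {gil_on}")
    print(f"1 thread:  {run(1):.2f}s")
    print(f"4 threads: {run(4):.2f}s")
```

Run it under `python3.14` and `python3.14t` and compare the two timings; the gap between the builds is the whole story of this episode's first topic.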

The Topics

Python documentary as a mirror: reflecting on Python’s past helps set priorities for where the community should push next.

Free-threaded CPython (no GIL) arriving: what it is, why Meta’s work matters, and how it changes concurrency expectations in real apps.

Why the GIL existed and what it broke: a plain-English recap of the GIL’s purpose and why multithreaded speedups often fell flat.

C-extension audit season: the free-threaded future requires libraries to rethink locking, atomicity, and internal assumptions.

Async vs threads, new mental models: how free-threading and async coexist, where deadlocks lurk, and how to avoid blocking the event loop.

Packaging that feels fast (uv / uvx): modern installs and hermetic venvs that make reproducible environments a default rather than a dream.

Nix as a toolchain anchor: using Nix to pin compilers and CLIs so dev machines stop drifting.

Ship your own Python: vendor the interpreter to escape OS Python quirks and keep apps portable.

Lock it down with pins: pip-tools style workflows for deterministic requirements while still being friendly to plain pip users.

Containers as the baseline unit: Docker images capture app plus infra versions to eliminate “works on my machine.”

Compose for local stacks: Docker Compose describes services, versions, and networks so onboarding and parity are simple.

Kubernetes when orchestration pays off: a pragmatic “when is k8s worth it vs Compose” framing rather than a default religion.

Tilt makes k8s dev humane (tilt.dev): live code sync and tight inner loops against a real cluster instead of rebuild-and-pray.

Lightweight clusters (k3s/kind): run a legit control plane on one VM or EC2 box to test orchestration without heavyweight ops.

Talos for hardened k8s nodes (talos.dev): immutable, API-managed nodes with no SSH shell remove a whole class of security risk.

GitOps cadence (Argo CD): declarative deploys that continuously reconcile cluster state with what’s in git.

ASGI server choices (Granian vs Uvicorn): understand worker models and where blocking calls can freeze a “fast” async app.

No blocking in the event loop: replace sync HTTP/file/database calls with async equivalents or offload to executors.

DuckDB as the local analytics engine: full SQL on Parquet or S3 that feels instant on a laptop and scales your thinking.

Arrow and Parquet as lingua franca: columnar formats and zero-copy interchange so tools cooperate instead of serialize.

PyScript + WebAssembly (pyscript.net): run a real PyData stack in the browser or at the edge for private, portable analytics.

Edge compute patterns (Cloudflare Workers): push logic to the data for latency, privacy, and cost control.

Run LLMs locally when it fits: a 32-GB Mac mini can handle surprising models; pick local vs cloud on cost, privacy, and latency.

Tokens and context are budgets: structure prompts and retrieval so models spend tokens on signal, not fluff.

Model Context Protocol (MCP): use tool-calling servers to constrain context, wire local resources, and keep agents on task.

  • IDE copilots as tireless juniors (Cursor): let AI draft the first 90–95 percent and plan to massage the last 5–10 percent yourself.

Agentic coding in VS Code (Cline): autonomous task runners that can reason over repos and execute multi-step plans.

Front-end refactors with LLMs: migrate a Bootstrap 2015 site to Bulma/Tailwind by letting AI do the grunt work while you review.

Rebuild velocity over ceremony: prefer stacks and CSS frameworks that minimize build steps for long-lived sites.

Sensible supply-chain hygiene: minimal base images, pinned versions, and repeatable builds to dodge transient breakage.

Compose-to-k8s migration path: start simple, adopt Tilt for the inner loop, then add GitOps when orchestration clearly pays.

SCAF blueprint as a head start (Six Feet Up): a minimal, readable template that shows how to scale the same patterns to prod.

Containers for dev parity: keep Postgres, Redis, Mongo, and friends version-matched so “minor” upgrades don’t wreck your day.

Kubernetes for data jobs: schedule batch/analytics workloads (Airflow-style) to use cluster power without special-case infra.

Bring compute to data as a habit: stop copying datasets around and query them where they live with the right engine.

Cloud plus edge plus local is normal now: mix and match deployments instead of forcing a one-size-fits-all architecture.

Community stewardship matters: lowering friction (packaging, docs, defaults) keeps Python welcoming and moving forward.

Skills to bank for 2026: concurrency fundamentals, packaging literacy, containers, and modern data formats will keep compounding.


Key points and takeaways

  • Free-threaded CPython is arriving, and it changes concurrency expectations. We discussed PEP 703’s free-threaded build that removes the GIL so CPU-bound threads can actually run in parallel. The big work now shifts to libraries: auditing C extensions for thread safety, adding internal locks where needed, and exposing clear guidance for users mixing threads and async. Teams should plan experiments in sandboxes, measure real-world speedups, and watch release notes as wheels go “FT-ready.” This isn’t a flip overnight, but it unlocks simpler parallelism for many workloads. (Python Enhancement Proposals (PEPs))
  • Packaging that feels fast with uv and uvx. The show called out how much nicer workflows get when installs are fast and environments are hermetic by default. uv provides a modern resolver and cache behavior; uvx spins up ephemeral envs to run tools without polluting your system. Teams adopting uv reduce “works on my machine” drift and speed up CI. It’s a small change with a daily pay-off. (Astral Docs)
  • Containers as the default unit, Compose for local stacks. We framed containers as the baseline for dev parity: pin your Python, Postgres, and friends so onboarding is trivial and “minor upgrades” stop breaking prod. Docker Compose then declares your local services, networks, and volumes, making the inner loop reliable. Keep Compose simple; resist adding orchestration until you truly need it. (Docker Documentation)
  • Kubernetes when it clearly pays off, and how to make it humane. We compared Compose vs k8s with a pragmatic lens: orchestration is worth it when you need scheduling, tenancy, or cluster-wide ops. For development, Tilt watches your code and syncs to a real cluster so you stop rebuild-and-pray cycles. Lightweight k3s makes running a legit control plane feasible on a single VM or edge box. Talos hardens nodes by removing SSH and managing the OS via an API, which cuts a whole class of security issues. For rollout discipline, Argo CD brings GitOps reconciliation so “what’s in git” is what’s running. (Tilt)
    • Links and tools:
      • tilt.dev/docs: Tilt guides. (Tilt)
      • docs.k3s.io: k3s docs. (K3s)
      • talos.dev: Talos Linux. (TALOS LINUX)
      • argoproj.github.io/cd: Argo CD. (Argo Project)
  • Async, threads, and the “don’t block the event loop” rule. Even as free-threading lands, async remains essential for network concurrency and low-latency APIs. The maxim stays the same: never run blocking I/O in the event loop; use asyncio.to_thread, executors, or async clients. For serving, know your ASGI server tradeoffs: Uvicorn is the battle-tested baseline; Granian is a Rust-backed option that emphasizes performance and also speaks ASGI. Choose based on your workload profile and operational needs. (Python documentation)
    • Links and tools:
      • docs.python.org/3/library/asyncio-task.html: asyncio.to_thread. (Python documentation)
      • uvicorn.dev: Uvicorn docs. (Uvicorn)
      • github.com/emmett-framework/granian: Granian repo. (GitHub)
  • DuckDB as your local analytics engine, Arrow/Parquet as the common language. We highlighted how DuckDB lets you run serious SQL on Parquet files or local data lakes, all in-process and absurdly fast for exploratory work. Arrow’s columnar memory format and Parquet on disk make tools interoperate without endless serialize/deserialize traps. This “bring compute to data” habit keeps data in place and boosts iteration speed. It’s a powerful upgrade for analysts and developers alike. (DuckDB)
  • PyScript + WebAssembly make Python at the edge feel real. Running a real PyData slice in the browser or at the edge is no longer fanciful. PyScript sits on Pyodide/MicroPython stacks and WASM to provide a private, portable execution path. Pair that with Cloudflare Workers for low-latency compute near users or data. This opens new UX patterns where lightweight analytics and visualizations execute client-side or at POPs without shipping sensitive data to big servers. (PyScript)
  • MCP: sane AI workflows via tool-calling servers. The Model Context Protocol gives you a standard way to expose tools, data, and resources to AI apps, keeping context grounded and secure. Instead of bespoke adapters per model, you run MCP servers and let compatible clients connect. Adoption is expanding across vendors and platforms, which means you can design once and reuse broadly. For Python teams, this is the missing “USB-C port” for reliable agent workflows. (Model Context Protocol)
  • IDE copilots and agentic coding: treat them like tireless juniors. The conversation positioned tools like Cursor as accelerators for scaffolding, refactors, and tests. The mindset shift is to let the tool do the first 90–95 percent, then review and polish the last mile. This keeps humans in charge of design and nuance while offloading the grind. Keep security and review practices in place, especially when agents can read and edit your repo. (Cursor)
    • Links and tools:
      • cursor.com: Cursor editor. (Cursor)
  • GitOps cadence for confidence. We emphasized reconciling declared state in git with what actually runs in the cluster. Argo CD continuously compares and syncs, giving rollback safety and auditability. The practice pairs well with SRE habits like minimal images and pinned versions to cut supply-chain noise. It’s not hype; it’s boring, reliable automation. (Argo Project)
  • SCAF blueprint as a head start. Calvin pointed to SCAF as a readable template that scales from sandbox to production with containers, CI, and Terraform to spin up Talos-backed clusters on AWS. It’s opinionated without being opaque, and it demonstrates a Compose-to-k8s journey in code. For teams who want to see “the whole thing” working end-to-end, a maintained blueprint saves weeks. (Six Feet Up)
    • Links and tools:
      • sixfeetup.com/company/scaf-a-blueprint-for-developers: SCAF overview. (Six Feet Up)
      • github.com/sixfeetup/scaf: SCAF repo. (GitHub)
  • Nix as a toolchain anchor. We talked about using Nix to pin compilers and CLIs so laptops stop “drifting.” With Nix flakes and shells, your dev env becomes declarative and reproducible, which dovetails with containers and CI. It’s a solid answer for polyglot teams where Python is only part of the stack. (Nix)
    • Links and tools:
      • nix.dev/tutorials/first-steps: Nix first steps. (Nix)
  • Documentary as mirror, community stewardship as momentum. Reflecting on where Python came from helps choose what to fix next: defaults, docs, packaging friction, and developer experience. The thread throughout the episode is stewardship over hype. We invest where the day-to-day gets tangibly better for newcomers and seniors alike, which is how Python stays welcoming and productive.
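To ground the Compose points above, here is a minimal docker-compose.yml sketch (our own illustration, not from the episode). The service names, ports, and credentials are made up; the habit worth copying is pinning exact image versions so dev and prod stay in sync.

```yaml
# docker-compose.yml -- illustrative sketch; names and credentials are placeholders
services:
  web:
    build: .                     # your app's Dockerfile
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: postgresql://app:app@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:16.4         # pinned version, never "latest"
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

`docker compose up` brings the whole stack up for a new teammate, and bumping `postgres:16.4` in one place keeps a "minor" upgrade deliberate instead of accidental.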

Interesting quotes and stories

"Free-threaded Python changes the social contract around threads. Now the hard work is making the extension ecosystem safe." -- Peter Wang

"Treat AI like a junior developer who can grind through the first draft. Your job is to direct, review, and improve." -- Peter Wang

"Tilt made Kubernetes development feel humane. You edit, it syncs, and you test against a real cluster without losing your flow." -- Calvin Hendrix-Parker


Key definitions and terms

  • GIL (Global Interpreter Lock): A CPython mechanism that historically allowed only one thread to execute Python bytecode at a time.
  • Free-threaded CPython: A build mode from PEP 703 that removes the GIL so CPU-bound threads can run in parallel; requires thread-safe libraries. (Python Enhancement Proposals (PEPs))
  • ASGI: A standard interface for async Python web apps and servers. Uvicorn and Granian are ASGI servers. (Uvicorn)
  • Docker Compose: A YAML format and CLI to define and run multi-container apps locally. (Docker Documentation)
  • K3s: A lightweight, CNCF-certified Kubernetes distribution suited for edge and dev clusters. (K3s)
  • Talos Linux: An immutable, API-managed OS for Kubernetes nodes with no SSH shell. (TALOS LINUX)
  • GitOps / Argo CD: Managing deployments by declaring state in git and continuously reconciling cluster state to match. (Argo Project)
  • DuckDB: An in-process analytical SQL engine ideal for local Parquet/CSV exploration. (DuckDB)
  • Apache Arrow / Parquet: Columnar formats for in-memory (Arrow) and on-disk (Parquet) data that enable fast, interoperable analytics. (Apache Arrow)
  • MCP (Model Context Protocol): An open standard that lets AI apps connect to tools and data through MCP servers. (Model Context Protocol)
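The “don’t block the event loop” rule discussed above can be sketched in a few lines with asyncio.to_thread. This is our own illustration, not code from the episode: the `time.sleep` stands in for any blocking call (a sync HTTP request, file read, or DB driver), and the file names are made up.

```python
import asyncio
import time

def blocking_io(name: str) -> str:
    # Stand-in for a synchronous call that would stall the event loop.
    time.sleep(0.1)
    return f"loaded {name}"

async def handler() -> list[str]:
    # Wrong: calling blocking_io("a.txt") directly here would freeze every
    # other task on the loop for the full 0.1 s.
    # Right: push each call onto a worker thread and await the results.
    return list(await asyncio.gather(
        asyncio.to_thread(blocking_io, "a.txt"),
        asyncio.to_thread(blocking_io, "b.txt"),
        asyncio.to_thread(blocking_io, "c.txt"),
    ))

if __name__ == "__main__":
    start = time.perf_counter()
    results = asyncio.run(handler())
    print(results)
    print(f"{time.perf_counter() - start:.2f}s")  # ~0.1s: the calls overlapped
```

The same shape applies inside an ASGI view: anything that can't be awaited goes through `asyncio.to_thread` (or an executor) so the server keeps serving other requests.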



Overall takeaway

Python in 2025 is about pragmatic upgrades that compound: faster installs, reproducible environments, containers as the norm, and a realistic path from Compose to Kubernetes. The concurrency story gets simpler with free-threaded CPython while async continues to shine for I/O and APIs. On the data side, DuckDB plus Arrow/Parquet let you bring compute to the data with almost silly speed, and at the edge PyScript and WASM widen where Python can run. Layer in MCP-style tooling and responsible use of IDE copilots, and you get a stack that is faster, safer, and more fun to ship. Pick two or three of these to adopt this quarter, measure the wins, and keep rolling.

Calvin Hendryx-Parker: github.com/calvinhp
Peter on BSky: @wang.social

Free-Threaded Wheels: hugovk.github.io
Tilt: tilt.dev
The Five Demons of Python Packaging That Fuel Our ...: youtube.com
Talos Linux: talos.dev
Docker: Accelerated Container Application Development: docker.com
Scaf - Six Feet Up: sixfeetup.com
BeeWare: beeware.org
PyScript: pyscript.net
Cursor: The best way to code with AI: cursor.com
Cline - AI Coding, Open Source and Uncompromised: cline.bot

Watch this episode on YouTube: youtube.com
Episode #524 deep-dive: talkpython.fm/524
Episode transcripts: talkpython.fm

Theme Song: Developer Rap
🥁 Served in a Flask 🎸: talkpython.fm/flasksong

---== Don't be a stranger ==---
YouTube: youtube.com/@talkpython

Bluesky: @talkpython.fm
Mastodon: @talkpython@fosstodon.org
X.com: @talkpython

Michael on Bluesky: @mkennedy.codes
Michael on Mastodon: @mkennedy@fosstodon.org
Michael on X.com: @mkennedy

Episode Transcript

Collapse transcript

00:00 Python in 2025 is different.

00:02 Threads are really about to run in parallel.

00:05 Installs finish before your coffee cools, and containers are the default.

00:10 In this episode, we count down 38 things to learn this year.

00:14 Free-threaded CPython, uv for packaging, Docker and Compose, Kubernetes with Tilt, DuckDB and Arrow, PyScript at the Edge, plus MCP for sane AI workflows.

00:25 Expect practical wins and migration paths.

00:28 No buzzword bingo, just what pays off in real apps.

00:31 Join me, along with Peter Wang and Calvin Hendryx-Parker, for a fun, fast-moving conversation.

00:37 This is Talk Python To Me, episode 524, recorded September 22nd, 2025.

00:43 Welcome to Talk Python To Me, a weekly podcast on Python. This is your host, Michael

01:04 Kennedy. Follow me on Mastodon, where I'm @mkennedy, and follow the podcast using @talkpython, both accounts over at fosstodon.org, and keep up with the show and listen to over nine years of episodes at talkpython.fm. If you want to be part of our live episodes, you can find the live streams over on YouTube. Subscribe to our YouTube channel over at talkpython.fm/youtube and get notified about upcoming shows. This episode is brought to you by Sentry. Don't let those errors go unnoticed. Use Sentry like we do here at Talk Python.

01:34 Sign up at talkpython.fm/sentry.

01:37 And it's brought to you by Agency.

01:40 Discover agentic AI with Agency.

01:42 Their layer lets agents find, connect, and work together.

01:45 Any stack, anywhere.

01:47 Start building the internet of agents at talkpython.fm/agency.

01:51 Spelled A-G-N-T-C-Y.

01:53 Hello, hello.

01:54 Peter and Calvin, welcome back to Talk Python.

01:57 I mean, to both of you.

01:58 It's great to be here.

01:59 Great to be here.

01:59 Thanks for having us.

02:00 Yeah, I know you both are very passionate technologists.

02:04 and Pythonistas, and we're going to dive into some really exciting things.

02:09 What do people need to know as developers and data scientists in 2025?

02:14 And I'm going to take a wild guess and bet that these trends, most of them carry over to 2026.

02:19 We're just a few months.

02:22 So let's just really quickly have both of you introduce yourselves just because not everyone has had a chance to listen to every episode.

02:29 And even if they did, they may not remember.

02:32 So Peter, welcome.

02:33 Who are you?

02:34 Hi, I'm Peter Wang. I'm a founder of Anaconda and the creator of the PyData community.

02:40 And I'm sort of leading the advocacy, at least, and been at the center of evangelism for the use of Python in the data science and machine learning world for over 12 years now. I think 13 years at this point. But my day job is at Anaconda. I'm the chief AI officer. So I work on open source community projects, innovation, looking at AI things and how that impacts our community and our users and what good could look like there for us.

03:07 I mean, there's a lot of discussion on AI, of course, good, bad, and ugly.

03:11 And I'm really trying to figure out if we as responsible open source community stewards want to have something meaningful to say here, what are the right things to do?

03:18 So that's what I spend a lot of my time focused on.

03:20 Yeah, that's really good work.

03:21 Yeah, it's really good work.

03:22 And congrats with all the access you've had at Anaconda.

03:25 Thank you.

03:26 You made a serious dent.

03:27 You were featured in or you were part of the Python documentary, right?

03:33 That's right.

03:33 Yeah, that was really great.

03:35 I really appreciated your words in there.

03:37 Thank you.

03:37 Thank you.

03:38 Yeah, that was great.

03:38 Really honor to be included in that.

03:40 Well, tell people, I haven't technically talked about it on the documentary or the documentary on the podcast very much.

03:46 So you just give people a quick rundown of what that is and why they should check it out.

03:50 Well, anyone who's listening to this podcast should absolutely watch the documentary because it has just got a cast of characters telling the story about how our favorite programming language came to be.

03:58 All of the, not all, okay, not all, but some of the travails that have

04:03 challenged us as a community over the period of time since its inception, you know, 30 years ago at this point.

04:09 And so it's just a really fun, nice, you know, I think it's weird because Python has been around forever, right?

04:16 And yet in many respects, we are still, the world is changing.

04:19 And I think there's lots of amazing new opportunities for Python as a language.

04:23 And we've been growing, growing so fast and so much and evolving as a language and as a community.

04:29 This documentary, I think, is a nice way to sort of like check in and say, oh, wow, we got to here and here's the journey we've been on.

04:36 And that gives us almost the space to then be a little bit more intentional about talking about where we want to go from here, which I think is something very important that we need to do as a community.

04:43 So anyway, I just really liked it from philosophically speaking from that perspective.

04:48 But it's also just fun just to get the perspectives like the CPython core maintainers and the BDFL and all the stuff on just the language over the years.

04:55 Yeah, I thought it was really excellent.

04:56 Yeah, I enjoyed it.

04:59 Tremendously.

04:59 Like I really love hearing all the old stories.

05:03 You know, I've been around for a long time in the community and seeing all the familiar faces.

05:06 And I feel like it gives a face and a level of empathy to the community that's needed.

05:11 Yeah.

05:11 I would say that the production quality was almost as good as Calvin's camera here.

05:17 You always look great on these streams.

05:21 Welcome.

05:21 Tell people about yourself.

05:22 Thank you, Michael.

05:23 I appreciate that.

05:26 Well, I guess I can give a quick introduction.

05:28 I'm Calvin Hendryx-Parker.

05:29 I'm CTO and co-founder of Six Feet Up.

05:31 We are a Python and AI consulting agency who helps impactful tech leaders solve the hard problems.

05:37 I've been in the Python community for ages.

05:41 I probably don't outnumber Peter in years, but at least since 2000, I've been involved.

05:45 I started with Zope and then through that, the Plone community got very involved in the governance of the open source project.

05:51 Now we do a lot of Django, a lot of other Python open source data projects like Airflow, for example.

05:58 I think that's on the list for later.

06:00 And so we just enjoy hanging out and being an awesome group of folks who love solving the hard problems.

06:06 Yeah, excellent.

06:07 Yeah, you've been doing it longer than me for sure.

06:09 I'm the baby.

06:11 Well, 2000 is about when I got involved in Python as well.

06:13 So the old man was supposed to be maybe from 99, but basically 2000.

06:18 Yeah, my first PyCon was 2003 and I think there were 250 people in the room.

06:24 It was amazing.

06:24 Yeah, you actually beat me by a couple of years.

06:26 I went to, I went to 05 was my first one at George Washington University.

06:31 I think it was.

06:31 Yeah.

06:32 In DC.

06:33 And it was about 200 something people.

06:34 They had a track in the keynote speakers.

06:37 Wow.

06:38 I've only been doing this since 2011.

06:40 So I'm just barely getting started.

06:42 That used to seem pretty recent ago, but it doesn't anymore.

06:45 Oddly.

06:45 No, it turns out it was, yeah, it's a long time ago.

06:48 We're halfway through the 2020s now.

06:50 It's crazy.

06:51 I know.

06:51 Yeah.

06:51 Yeah.

06:51 When you said 2025 things that developers should learn in 2025, I was like, is this a science fiction movie we're talking about?

06:58 Exactly.

06:58 What is this like then?

06:59 It's a dystopian science fiction movie.

07:00 This is the same crap we had to deal with in 2010.

07:03 Mostly.

07:04 Although async back then, it was interesting.

07:06 We didn't have, you know, we had Stackless, I guess.

07:09 There's a, I don't know.

07:11 2010, there's Tornado.

07:12 Yeah.

07:13 There were various async systems.

07:15 Anyway, Celery.

07:16 Yeah.

07:16 Wow.

07:17 We've got, we've got free threaded Python.

07:18 Now we do features now.

07:21 Yes.

07:21 Almost.

07:22 We almost have free-threaded Python.

07:24 Yeah.

07:24 Yeah.

07:24 Yeah.

07:24 Spoiler alert.

07:25 That may make an appearance in one of the topics.

07:28 Well, we may not get to 20 things, but they may not be 20 big, bold items, right?

07:35 Yeah.

07:35 We have a list of things we want to go through.

07:37 That's right.

07:38 Peter, we reserve the right to design the designation of the size of the buckets that define the things.

07:43 The things, that's right.

07:45 But I think the plan is we're going to just riff on some ideas we think are either emerging or current important trends or even foundational things, that people should be paying attention to in the zeitgeist right now, right?

07:59 What are things that maybe you haven't necessarily been tracking or you heard of, but you're like, ah, I haven't got time for that, or it's not for me yet.

08:06 So I think that'll be fun.

08:08 Let's start with you, Peter.

08:09 What's your first...

08:10 We all gathered up a couple of things that we think might be super relevant.

08:15 And yeah, what do you think?

08:17 So I think, well, let's just get started with it.

08:19 Let's just talk about the free threading bit.

08:20 And let's really, because this is a kind of, it touches the past, and it also really takes us into the future.

08:26 And it's this thing that has taken quite some time to emerge.

08:29 I think the GIL has been a topic of discussion since as long as I've been using Python.

08:34 And finally, we have, courtesy of the team at Meta, an excellent set of patches that delivered true free threading to Python.

08:44 And of course, this is both a blessing and a curse, right?

08:46 You should be careful what you ask for.

08:47 Because now we end up having to deal with true free threading in Python.

08:50 And for those who maybe are not so familiar with this whole topic, you know, the global interpreter lock, we call it GIL, G-I-L for short.

08:59 The global interpreter lock is how the Python virtual machine protects its innards.

09:04 And so when you use Python and you write code, even if you use threading, like the threading module in Python, ultimately the CPython interpreter itself as a system level process, it only has one real thread.

09:16 And it has this global interpreter lock that locks many of the internals of the interpreter.

09:20 The problem is that sometimes you want to have real multi-threading, and so you have to release this global interpreter lock.

09:26 And doing this is hard to get right, especially if you reach into C modules and whatnot.

09:33 The most popular C modules are pretty good at handling this kind of thing.

09:36 NumPy and others come to mind.

09:38 So we get really great performance from those when they release the GIL.

09:41 But if you want to actually do a lot of Python logic in multiple threads, you end up essentially getting no lift whatsoever by using the threading module with classic single-threaded or GIL-locked Python. With free threading, you actually now are able to have threads running in parallel, touching things like free lists and, you know, module definitions in the interpreter itself. Now, what this means is a lot of Python modules or packages which had been developed when Python was, you know, implicitly single-threaded, they now have potential of thread contention, race conditions, all sorts of weird and unexpected behavior when they're used in a free-threaded way. So we have this patch, we have this change now for free threading in the Python interpreter itself. What that means is we have to make sure that all of the rest of the package ecosystem is updated and tested to work with free-threaded Python. So in Python 3.13, it was incorporated as experimental: it was in the code base, but it was a build-time feature. So you have to compile your own Python interpreter and turn on that flag to get a version of the interpreter that would be free-threaded. In 3.14, it is now supported in the interpreter. It's still not turned on by default. And then at some indeterminate date, it will be turned on by default. The classic behavior with the global interpreter lock will still always be there as a fallback for safety and compatibility and all that. But the Python team has said, hey, we're ready to take this thing to supported mode and let the bugs flow,

11:18 right? So now if you go and install Python, a Python build with, it actually has a different

11:24 ABI tag. So it's cp313t or cp314t, the T for threading or free threading. So that's available through Python.org. There's a conda build for it as well. And so right now there's actually a page. Maybe we'll have the link for it, I think, in the show notes, right?

11:42 But there's a page that lists what the status is.

11:46 Think of the free-threaded wheels.

11:49 And right now, 105 out of 360 that are passing, basically.

11:54 The maintainers have updated them.

11:56 And this is out of the top, like, oh, there it is, great.

11:58 Yeah, out of the top 500 Python packages, something like this.

12:02 So you can see we have, as a community, a lot of work to do.

12:05 So the call to action here is not only should a Python developer learn this, because this is definitely coming and everyone has a multi-core machine now.

12:13 So this is definitely coming.

12:14 But you can also, this is a great way to give back.

12:17 You know, we talk about in the open source community oftentimes, how do we get starter bugs in there for people to start becoming contributing members of the community?

12:23 This is a great way to give back.

12:24 If there's some packages you see here that are yellow, you're like, wait, I use aiohttp.

12:28 Like, let me go and test that with free threading and see if I can bang, you know, just beat it up with my code in production and see like what fails there.

12:36 So this is a great way for the community to really give back and help us test and make sure all this works on what is certainly to be the next generation of the Python interpreter.

12:43 Yeah, there was a great talk at DjangoCon just two weeks ago by Michael Lyle.

12:49 He gave a talk about using free threading in Django.

12:52 And I think right now your mileage may vary was the answer.

12:56 Like it kind of depends.

12:58 I can only imagine going through and trying to commit and help.

13:01 Threading is hard.

13:02 It sounds like free threading is harder to wrap your brain around.

13:05 So I think it'd be tricky for someone starting and learning something new.

13:09 This may be on the more advanced edge of what someone should be learning.

13:14 It's more for the advanced crotchety, you know, senior developers.

13:18 I ain't got time to contribute to open source.

13:20 You can.

13:20 You can make your own life better.

13:22 We can all sort of, this is the sort of stone soup or good old Amish barn raising.

13:25 We should all get together and chip in.

13:27 But you're right.

13:28 Debugging async free threading issues is definitely not a beginner kind of task.

13:33 Sure.

13:33 But there's a lot of people who do have that experience from probably more from other languages or C extensions who could jump in, right?

13:40 Yeah, actually, you know, if you're a C++ developer who has been forced to use Python because of our success in driving the growth and adoption of the community, and you're really angry about this and you want to show other ways that Python is broken, here's a great way to show how Python is broken: test really gnarly async and multi-threaded use cases.

13:57 Actually, one thing about this that I will point out for the more advanced users: Dave Beazley gave a great talk years ago at PyCon about Python parallelism.

14:06 And are you IO bound? Are you CPU bound?

14:08 I think he was looking at it maybe relative to PyPy.

14:13 And it wasn't about async in particular, but it was about rolling your own distributed computing or something like this.

14:19 I forget the exact title, but he did a deep analysis of when we are IO bound and when we are CPU bound.

14:26 When we get to free threading Python like this, I think we're going to, as a community, be faced with having to up-level our thinking about this kind of thing.

14:32 Because so far we've done a lot of, like, delegating CPU-bound numeric stuff to NumPy or Pandas or Cython.

14:37 But with this, now we can really play first class in system level code.

14:41 And we have to think more deeply about how are we blocking events?

14:44 How are we handling things?

14:45 Is this, you know, an event polling kind of thing?

14:48 Or is this more of a completion port thing?

14:51 Like on Windows, you have different options.

14:52 So this is a very interesting topic.

14:53 Actually, it goes quite deep.

14:54 It goes very deep.

14:55 And I think it's going to be a big mental lift for people in the community, generally speaking.

15:01 I talk to a lot of people, as you know, from the podcast, and then also interact with a lot of people teaching.

15:07 And I don't see a lot of people stressing about thread safety or any of those kinds of things these days.

15:13 And I think in general, it's just not in the collective thinking to be really worried about it.

15:18 There are still cases in multi-threaded Python code where you need to take a lock, not because one line is going to deadlock another or something like that, but because you've got to take five steps.

15:29 And if you get interrupted somewhere in those five steps, the GIL could still theoretically interrupt you in the middle of the code, right?

15:35 It still could be in a temporarily invalid state across more than one line.

15:40 But I hardly see people doing anything about it at all.

15:44 And when we just uncork this on them, it's going to be, it's going to be something.

15:49 And I don't think we're going to see deadlocks as a problem first.

15:52 I think we're going to see race conditions because deadlocks require people already having locks there that get out of order.

15:57 And I just think the locks are not there.

15:59 Then people are going to put the locks there and they're like, whoa, it's just stopped.

16:02 It's total chaos.

16:05 Yeah.

16:05 It's not using CPU anymore.

16:06 What is it doing?

16:07 Well, now you found the deadlock.

16:08 You added the deadlock, right?
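The race-then-deadlock progression described here, a multi-step update that loses writes until you add a lock, can be seen in a few lines. `Counter`, `LockedCounter`, and `hammer` are illustrative names, not anything from the episode:

```python
import threading

class Counter:
    """Increment via separate read / modify / write steps: the 'five steps'
    case where a context switch between them can lose updates."""
    def __init__(self) -> None:
        self.value = 0

    def incr_unsafe(self) -> None:
        v = self.value   # step 1: read
        v += 1           # step 2: modify -- another thread can interleave here
        self.value = v   # step 3: write back, possibly clobbering its update

class LockedCounter(Counter):
    """Same counter, but the whole read-modify-write happens under a lock."""
    def __init__(self) -> None:
        super().__init__()
        self._lock = threading.Lock()

    def incr_safe(self) -> None:
        with self._lock:
            self.value += 1

def hammer(counter, method, n_threads=8, n_incr=10_000):
    """Call `method` n_incr times from each of n_threads threads."""
    def worker():
        for _ in range(n_incr):
            method()
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter.value

unsafe = Counter()
safe = LockedCounter()
print("unsafe:", hammer(unsafe, unsafe.incr_unsafe))  # may come up short
print("safe:  ", hammer(safe, safe.incr_safe))
```

Even under the GIL, the unsafe version can lose increments because a thread switch can land between the read and the write; the locked version is always 80,000 here. Getting the locks wrong in the other direction, two locks taken in opposite orders, is where the deadlocks come from.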

16:10 So it's going to be a challenge, but the potential on the other side of this, if you can get good at it, is going to be amazing. You know, even on my little cheapo Mac mini, I've got 10 cores. If I run Python code, unless I do really fancy tricks or multiple processes, the best I can get is like 10%. Yeah. And I know this might be a little bit

16:31 of a spicy take, but there was, I think, a line being held by the CPython core team: we will accept a GIL removal patch, or a gilectomy as it was called, when it doesn't negatively impact single-core performance, right? And when that first came out, I think the first time I heard that was in the 2005, '06, '07 time frame. Back then, that was almost a defensible position. Nowadays, you can't find a smartphone with a single core. A five-dollar Raspberry Pi has a dual core. So I get the general gist of that, but come on, John Carmack's on Twitter talking about 96-core Threadripper performance with Python.

17:12 We, you know, we sort of need to lean into that, right? So I'm really, really bullish on this.

17:17 Cause as you know, like I'm very close to the data science and machine learning and the AI use cases.

17:21 And those are all, you know, looking for whatever language gives them the best performance. Right now, it happens to be Python. If we as a community, and we as evangelists of that community, don't lean into that, those users will happily go somewhere else. I mean,

17:34 that is bonusing people a hundred million dollars to start. They're not going to wait for your language to catch up. They'll make a new language, right? But I think something in 2025 that these developers should be learning, along these lines, would be just async programming and when it should be used. That's a really tactical maneuver today. Yeah, I agree. I think

17:54 the async and await keywords are super relevant and the frameworks, I think, will start to take advantage of it. We're going to see what comes along with this free threading, but there's no reason you couldn't await a thread rather than await an IO operation.

18:07 You know what I mean?

18:08 My background is C++ and C#, and C# is actually where async and await came from, from Anders Hejlsberg, I believe.

18:15 And over there, you don't care if it's IO or compute bound.

18:19 You just await some kind of async thing.

18:21 It's not your job to care how it happens.

18:23 So I think we're going to start to see that, but it's going to take time for that, those foundational layers to build for us to build on.
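One hedged sketch of what "awaiting a thread" looks like with today's stdlib: `asyncio.to_thread` runs blocking work in a worker thread behind the same `await` syntax you'd use for an IO-bound coroutine, so the caller doesn't care which it is. `blocking_work` is an illustrative stand-in for any blocking or compute-heavy call:

```python
import asyncio
import time

def blocking_work(seconds: float) -> str:
    # Simulates blocking work that would otherwise stall the event loop.
    time.sleep(seconds)
    return f"done after {seconds}s"

async def main() -> list[str]:
    # Awaiting thread-backed work with the same syntax as any other awaitable:
    # both calls run in worker threads concurrently.
    return await asyncio.gather(
        asyncio.to_thread(blocking_work, 0.1),
        asyncio.to_thread(blocking_work, 0.1),
    )

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(results, f"{elapsed:.2f}s")  # the two sleeps overlap
```

Today the threads behind `to_thread` still share the GIL, so this mostly helps with blocking calls; with free threading, the same pattern starts paying off for CPU-bound work too.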

18:30 Yeah.

18:32 This portion of Talk Python To Me is brought to you by Sentry's Seer.

18:36 I'm excited to share a new tool from Sentry, Seer.

18:40 Seer is your AI-driven pair programmer that finds, diagnoses, and fixes code issues in your Python app faster than ever.

18:48 If you're already using Sentry, you are already using Sentry, right?

18:52 Then using Seer is as simple as enabling a feature on your already existing project.

18:57 Seer taps into all the rich context Sentry has about an error.

19:01 Stack traces, logs, commit history, performance data, essentially everything.

19:05 Then it employs its agentic AI code capabilities to figure out what is wrong.

19:10 It's like having a senior developer pair programming with you on bug fixes.

19:15 Seer then proposes a solution, generating a patch for your code and even opening a GitHub pull request.

19:21 This leaves the developers in charge because it's up to them to actually approve the PR.

19:26 but it can reduce the time from error detection to fix dramatically.

19:30 Developers who've tried it found it can fix errors in one shot that would have taken them hours to debug.

19:36 Seer boasts a 94.5% accuracy in identifying root causes.

19:41 Seer also prioritizes actionable issues with an actionability score, so you know what to fix first.

19:49 This transforms sentry errors into actionable fixes, turning a pile of error reports into an ordered to-do list.

19:56 If you could use an always-on-call AI agent to help track down errors and propose fixes before you even have time to read the notification, check out Sentry's Seer.

20:06 Just visit talkpython.fm/seer, S-E-E-R.

20:11 The link is in your podcast player's show notes.

20:13 Be sure to use our code TALKPYTHON.

20:16 One word, all caps.

20:17 Thank you to Sentry for supporting Talk Python To Me.

20:20 Pamela Fox out in the audience.

20:22 Throws out that the last time she really used locks was in her code for operating systems class in college.

20:27 It doesn't come up much in web dev.

20:28 That's true.

20:29 A lot of the times the web, it's at the web framework, the web server level, the app server level, right?

20:35 It's Granian or it's Uvicorn or something like that.

20:37 That thing does the threading and you just handle one of the requests.

20:41 I literally just deadlocked and I guess probably broke the website for a couple people at Talk Python today because I have this analytics operation that's fixing up a bunch of stuff.

20:51 And it ran for like 60 seconds.

20:53 Even though there's multiple workers, something about the fan out, it still sent some of the traffic to the one that was bound up.

20:59 And then those things were timing out after 20 seconds.

21:01 I'm like, oh, no, what have I done?

21:03 And if that was true threading, it wouldn't have mattered.

21:05 It would have used up one of my eight cores and the rest would have been off jamming along.

21:09 It would have been fine, you know?

21:10 Well, sort of, right?

21:12 And I'm really glad Pamela brought this up, because when we're focused on just a particular worker thread, it's like, okay, what am I doing?

21:21 You know, pull this, run that, and then push this thing out.

21:25 But as you start getting to more: anytime you have either value-dependent or heterogeneous workloads and time boundaries for these tasks, you start having to think about thread contention.

21:39 You start, you know... I mean, to your point, Calvin, I think it's not so far that you have to go before you quickly find yourself thinking about things like Grand Central Dispatch, like macOS has, or IO completion ports. And like, oh crap, I'm actually slamming... Under certain cases, you know, to your point about the analytics, maybe you're not doing a GPU-based analytics thing, but maybe you're slamming a bunch of stuff to disk or loading a bunch of stuff up from disk, and you start getting all these cases where at some point the bottleneck is the CPU, or the code itself, or the disk, or the network, and you're just

22:13 slamming your code into one of these different boundaries stochastically. And as a developer,

22:18 maybe as an entry-level developer you don't have to think about it too much. But as any kind of mid to senior developer, you're going to be running into these problems. And they are going to be stochastic, they are value dependent, you're going to hit them in production, and you have to sort of know

22:31 what could bite you, even if it's not biting you all the time in dev, right? You remove one bottleneck and it starts to slam into a different part.

22:38 Maybe you push that to the database and it's even worse contention there.

22:42 You never know, right?

22:43 We're going to see.

22:44 It's going to be interesting.

22:44 But thinking about that in production, you've got new challenges there because you may have containers and you're running in Kubernetes and you've got pods and resource limits and other kinds

22:54 of constraints that are happening that aren't on your local machine.

22:56 All of a sudden you're saturating your local machine.

22:58 You're like, this is great.

22:59 I'm using all the resources.

23:00 Look at it go.

23:01 And now you release that to production and watch calamity and chaos.

23:04 They get killed off because you've set some.

23:07 Yeah.

23:08 Like my websites and APIs and databases all have production level, like RAM limits and kind of things like that.

23:15 So that if they go completely crazy, at least it's restricted to that one thing dying.

23:20 Yeah.

23:20 Everything.

23:21 Yeah.

23:21 Speaking of which, maybe you've got some ideas on what's next, Calvin.

23:25 Sure.

23:26 I mean, I've been a big believer in containers.

23:29 I really got turned onto this in 2020 and went down the path.

23:33 And now we're finally arrived where I believe developers should be learning Kubernetes, even for local development.

23:40 I feel like that whole front to back story is not as complicated.

23:44 The tooling has really come up to date.

23:47 And so being able to use containers to get reliable, repeatable builds, being able to use tools like Tilt.dev, for example, as a developer locally with my Kubernetes clusters, I can now have file systems syncing, use all my local tools.

24:03 This literally just takes the pain out of, as they say, microservice development.

24:07 I think that's a little bit of a buzzwordy explanation there.

24:11 I will say that it's good for Django development.

24:14 So if you check out the SCAF full stack template,

24:17 are you going to change it for me?

24:18 Perfect, that's perfect.

24:21 This is exactly where we can use the same tools in production that we use in development so that it's much easier to track down issues.

24:30 Containers obviously unlocked a lot of those.

24:32 I feel like the true superpower of Kubernetes, I think a lot of people love it for orchestration or claim it's for orchestration.

24:39 I really love the fact that it's got a control plane and a URL and an API so you can do things like continuous deployment.

24:46 So being able to deliver your code, build an image, update a manifest and have things just deploy without you having to think twice about it and be able to roll back with a click of a button and using tools like Argo CD.

24:58 Argo CD is a great CI/CD tool.

25:01 So we leverage it very heavily.

25:02 If you want a good example of how to do that, you can check out that same full stack template.

25:07 We have all the pieces put in there for you in GitHub to understand how that works.

25:13 So I think it's real.

25:14 I think developers should be embracing the container world, especially if you have more than one developer.

25:20 As soon as you have a second developer, this becomes an immediate payoff in the work it took to put it in place.

25:28 And so I think it hits all the environments too, like not just web dev.

25:32 I think the data folks benefit from containers,

25:35 especially if you look at tools like Airflow, be able to deploy that into containers, be able to manage workers that are, you know, Kubernetes-based tasks.

25:45 So you can like natively handle asynchronous tasks in a cluster and leverage all that power you've got under the covers and scalability of being able to scale out all the nodes.

25:54 You get a lot of win for adopting a tool that I think a lot of people, me included, used to consider overkill.

26:01 Yeah. Well, let's put some layers on this. First of all, preach on. But you say containers and you said Kubernetes and some other things. Do you have to know Docker and containers? Is Docker synonymous with containers for you? Do you have to know that before you're successful with Kubernetes? There are a couple of layers of architecture here. Where are you telling people they should pay attention?

26:29 I think you have to start with containers. Start with Docker. Oh, the dog wants me to play with

26:33 the toy over here. If you start with the container, because you have to have a good

26:37 container strategy, even to be able to build and work with containers inside of any kind of a, you know, whether it's Docker Compose or Swarm or, you know, using Fargate or some kind of

26:48 container app service, like on DigitalOcean. Yeah, count me down as Docker Compose, by the way. Yeah, that's where we started.

26:56 I really enjoyed the ability to have Compose describe my whole stack and be able to run the exact right version of Redis, the exact right version of Postgres, the exact right version of whatever my other dependent pieces are, because that matters.

27:12 I don't know if folks remember the Redis 6.0.8 to 6.0.9 change, where a very minor release introduced a change that broke backward compatibility.

27:26 So you want to be able to pin these things down so you aren't chasing ghosts and weird edge cases and containers enable that.

27:33 And whether it's Compose or Kubernetes, it doesn't matter.

27:36 You get that benefit.
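The version pinning Calvin describes can be sketched as a minimal Compose file. The service names and image tags here are illustrative, not from the episode; pick the exact versions your app actually depends on:

```yaml
# docker-compose.yml -- pin exact service versions so every developer
# and every environment runs the same stack.
services:
  web:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
      - cache
  db:
    image: postgres:16.4        # exact tag, never "latest"
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data
  cache:
    image: redis:7.2.5          # a minor bump once broke things -- pin it
volumes:
  pgdata:
```

With pinned tags, "works on my machine" ghosts caused by mismatched Redis or Postgres patch versions simply can't happen; upgrading becomes a deliberate, reviewable diff.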

27:38 I feel like the Kubernetes piece just takes that to the next level and gives you a lot of the programmability of the web with an API and the fact that I'm not logging in.

27:48 Our preferred way to deploy Kubernetes onto servers is actually to use Talos Linux, which has no SSH shell.

27:54 There is not a way to shell into that box.

27:56 It eliminates a whole class of security vulnerabilities because there is no shell on the box whatsoever.

28:02 You have to interact with it programmatically via APIs.

28:06 And even the upgrades happen via the same API backplanes.

28:11 And just that level of control, security, reliability, and comfort helped me sleep really well at night knowing where I've deployed these things.

28:21 But you do need containers first.

28:23 I think if you don't understand the layers of the containers, but I think that's a quick read.

28:27 There's some really good resources online.

28:30 TechWorld with Nana does a really good job of describing containers and Kubernetes.

28:35 And she does an awesome job of bringing that down to an every person, most every person level

28:40 who would even care to want to touch it.

28:42 I have some thoughts about containers and compose and stuff

28:45 that I want to throw in.

28:46 But I do especially want to hear your take, Peter. You'd sort of say the same thing, but for data scientists: do you need to pay attention to containers in data science? Is that different? I interviewed Matthew Rocklin from Coiled recently, and they've got a really interesting way to ship and run your code without containers that you directly interact with. There are options, but what do you think?

29:09 Yeah, I think, I mean, I think containers are just part of the technical landscape now, So it's good to know them.

29:16 I think if we were to remove the capabilities of data science from everyone who doesn't know about containers, that we would end up with a deeply impoverished user base, right?

29:25 The truth of the matter is that there are a lot of people out there today who... If you think about what containers really do from a software development and a DevOps perspective, it is a mechanism for... Your dog knows you're about to say something spicy.

29:38 No, I'm not trying to be controversial.

29:40 Just thinking about it on first principles, a container is a way for us to sort of format and rationalize the target deployment environment within the boundaries of the box, within the boundaries of a particular compute node with an IP address or something like this.

29:55 And then Kubernetes takes it the next level up, which is: oh, if you have the classic microservices sort of example, if your application is architected in such a way that you need a lot of services to be running.

30:07 Well, to format that, you need to actually create a cluster of services configured with particular network configuration and various kinds of things.

30:15 So you're actually shipping a cluster as the first thing you land and then you land, you deploy the airport, then you land the plane.

30:22 So if you need to do that, if the thing you're doing is that big... I think about the U.S. Air Force and Army: the reason why the American military has the dominance it has is because of the logistics chain.

30:34 They can land just hundreds and hundreds of tons of military hardware and food and personnel into any location on the earth inside of 24 hours.

30:43 And this is sort of what Kubernetes gives you is that ability to format at that level.

30:46 But at the end of the day, if you have a Jupyter Notebook, well-known data set, you know how many CPU cores, what kind of GPU you need to run a particular analytic, that can seem like overkill.

30:56 Because you could say, spin up the EC2 box, get me in there, spin up JupyterHub, copy the thing over, and now it's running.

31:03 You know, yay.

31:04 So I don't think that containers are necessary, but in life, we don't just do what's necessary, right?

31:09 I think it is useful to know something about how to ship and work with modern IT environments and cloud-native kinds of environments.

31:18 So it's a useful thing to know.

31:20 But then again, like I said, the goal for us as technologists should be empowering those who are less technically inclined than us.

31:28 And so removing the complexity for them should be the thing that we should be trying to do.

31:31 And this, then, is the spirit of what I think Matt Rocklin speaks to, and what we on the Anaconda, data-science-oriented side also hope for, right? To make as much of this disappear into the background as possible for people who don't want to learn it, who don't necessarily need to know it.

31:45 Yeah. I think we want to get, we all want to score well on the plays well with others scorecard. And so, you know, if we can deploy and use containers, that means it's much easier to

31:54 onboard the next dev. Yeah. And a lot of this, not everyone has to be an expert at it. Correct.

32:00 A couple of people set up a cluster or some Docker compose system together, and then you all get to use it.

32:07 It's a little bit like the people that work on Jupyter have to do a lot of JavaScript and TypeScript, so the rest of us don't have to do so much.

32:14 Right.

32:14 Although you just whipped out a little HTML editing, so that was pretty slick.

32:19 Yeah, I think here's a good question from a little bit earlier from Pamela, but I think especially this one goes out to you, Calvin.

32:26 I think you've walked this path recently.

32:28 How much harder is Kubernetes versus Docker Compose to learn for a web dev?

32:32 I think if you have a good template to start from, that's where this becomes a no-brainer.

32:38 If you were to try and go learn all the things about the Kubernetes stack orchestrators, the storage bits, all these kind of pieces, that could be really overwhelming.

32:49 Whereas Docker Compose, it's one file, it lists your services, it feels fairly readable, it's just YAML.

32:57 Kubernetes is going to have a few more things going on under the covers.

33:00 But again, I'll point to our SCAF example as a minimal, as little as you needed to get going version of being able to do Kubernetes locally and in a sandbox and production environment.

33:12 So it scales up to all those pieces.

33:14 So as a web dev, you just develop your code locally.

33:17 You use your IDE.

33:18 You're in PyCharm.

33:19 You're in VS Code.

33:20 You're editing your files locally.

33:22 Tools like Tilt are kind of hiding a lot of that complexity out under the covers for you and synchronizing files two-way.

33:30 So if things happen in the container, for example, you probably want to be able to build, compile your dependencies with the hashes in the target container environment that you're going to release to.

33:40 Because if you did it locally and you're on Windows or on Mac or on Linux, you're going to get potentially different hashes, different versions of different dependencies.

33:48 So those kinds of things need to write back from the container to your local file system and Tilt enables that and takes that whole pain away.

33:55 I think Tilt was the big changing point for me, the inflection point for me when I moved over and fully embraced Kubernetes for local web dev.
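A minimal, illustrative Tiltfile along the lines described, with the two-way file sync handled for you. The image name `my-app` and the manifest path are hypothetical; Tilt's `docker_build`, `k8s_yaml`, and `k8s_resource` are its real core functions:

```python
# Tiltfile (Starlark) -- local Kubernetes dev with live rebuilds.
# Rebuild the image whenever source files change.
docker_build('my-app', '.')

# Apply the app's Kubernetes manifests to the local cluster.
k8s_yaml('k8s/deployment.yaml')

# Forward the pod's port so the app is reachable at localhost:8000.
k8s_resource('my-app', port_forwards=8000)
```

Run `tilt up` against a local cluster and you edit files in your IDE as usual while Tilt keeps the in-cluster containers in sync.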

34:03 Interesting.

34:04 Over at Talk Python, I've got, I think last time I counted, 23 different things

34:10 running in Docker containers, managed by a handful of Docker Compose files that group them by what they're related to.

34:16 And it's been awesome.

34:17 It's been, it really lets you isolate things.

34:20 The server doesn't get polluted with installing this version of Postgres or that version of Mongo.

34:25 I think I've got two versions of Postgres, another version of MongoDB, and a few other things.

34:30 Yeah, and it just doesn't matter.

34:31 Do I have RAM and CPU for it?

34:33 Plenty, okay, good.

34:34 And you can run it on one CPU or one server node.

34:38 You don't need to have five machines running with a control plane and all the pieces.

34:43 You will still have the control plane, but you can use something like K3s, a minimal Kubernetes project, to deploy, for example, on a single EC2 instance.

34:52 Spin that up, deploy your containers.

34:54 Now you can hook it up to your GitHub actions, which I think we should also talk about as something people should learn.

34:59 You hook that up and away you go.

35:01 You're now releasing without logging into a server and typing git pull, and without potentially pulling in unintended changes from your version control.

35:12 I mean, it's a peace of mind to be able to know and audit and know what you've released is exactly what you expected to get released.

35:20 So I want to wrap up this container side of things with two thoughts.

35:24 First, I'm a big fan of dogs.

35:26 I don't know if you guys know, but I kind of understand what dogs say a little bit.

35:29 It's a little weird.

35:30 I believe Calvin's dog, I don't know, Peter, back me up here.

35:33 I believe Calvin's dog said, "Forget containers, I edit in production."

35:37 I think that's what the dog said when it barked.

35:39 I'm not entirely sure.

35:40 I mean, he is a black dog.

35:42 you can only. Yeah, you never know. You never know. They're known for being rebels. That's right.

35:48 Exactly. Not the black sheep, but the black lab. The black dog. And then the second one I want to kind of close this out with: See For Yourself out on YouTube says, I like Python for low-code ML with PyCaret. The problem is that Python is now up to 3.13.3, and very soon 3.14.0, folks, while PyCaret only supports up to 3.11. And I think this is a good place to touch on reproducibility and isolation, right?

36:11 Like you could make a container that just runs 3.11 and it doesn't matter what your server has, right, Peter?

36:16 Yeah, I mean, the, is the, sorry, if you could pop up the question again, I was, I think it was just that PyCaret, yeah.

36:24 So this, I guess I don't really see the, I don't see the problem.

36:31 Like this is a statement of fact, right?

36:33 The PyCaret only supports 3.11.

36:35 Are there features that you really want to see in 3.13 or that you really need to use in 3.13 or, I mean, there's...

36:43 It could be that they work.

36:45 Yeah, but it could be they get a new Linux machine that's Ubuntu

36:48 and it's got 3.12 on it.

36:49 Yep.

36:50 I mean, but you never...

36:53 Okay, this might be where the dog barks again, but you never use the system Python.

36:57 Well, right.

36:58 It doesn't matter what the system ships with.

37:00 What does macOS ship with?

37:02 I don't know.

37:03 You install...

37:04 You either install a distribution like Anaconda or Miniconda, or something like uv using python-build-standalone, and the virtual environments there ship their own Python.

37:14 Now, because I am who I am, on the Anaconda side of things, we've known that in order to really isolate your Python installation,

37:23 You really have to have the interpreter itself be built with the same tool chain and the same versions of the tool chain as all the libraries within it.

37:31 And so this is what the Conda universe, Conda Forge, BioConda, we've been doing this forever.

37:36 And then with uv, I think uv has really spearheaded the whole install-a-separate-Python bit.

37:42 I know that pyenv has been there, but I don't think it was a standard part of

37:47 what was considered best practice, right?

37:50 For people. But I'm hoping that, you know, uv helps to change minds in this way as well.

37:55 But ultimately, if you actually do all the bits right, you can have a perfectly isolated Python install without needing to use containerization.

38:08 Not that there's anything wrong with containerization, but just sort of saying like, this is a solvable problem.

38:13 It's just so darn complicated to try to give anyone best practices in the Python packaging world because some guidance can be wrong for somebody, right?

38:20 But in this case, yes, you could absolutely use containers to isolate that or look to use Konda or uv to create an isolated install with just that version of Python just to run then PyCaret inside it.
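One way to sketch the container route for the PyCaret case: pin the interpreter inside the image so it doesn't matter what the host runs. The file names, the pinned dependency, and the entry point below are illustrative, not from the episode:

```dockerfile
# Dockerfile -- run an app that needs Python 3.11 regardless of the host.
FROM python:3.11-slim

WORKDIR /app

# requirements.txt would pin pycaret and friends to known-good versions.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "train.py"]   # hypothetical entry point
```

The host can be on 3.13, 3.14, or have no Python at all; inside the container it's always 3.11. The uv equivalent is a one-liner: create a venv pinned to 3.11 and install PyCaret into that.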

38:34 Yeah, I feel like containers are a pure expression of an isolated environment where you can't get it messed up. If you do anything, just know that the system Python is not your Python. You shouldn't be allowed to use it. It should almost be in a user-private bin path that's not usable by people.

38:53 Calvin, I've been on a journey. It's a failed journey, but it was a long, solid attempt. I've been trying to remove Python from my system as a global concept, period. But I'm a big fan of Homebrew, and too many things in Homebrew want it. And I know something's gone wrong when my app falls back to using Python 3.9. I'm like,

39:12 no, Homebrew. I deleted all my local Pythons, pyenv and Homebrew and any packages that depended on it.

39:18 And I went fully uv and uvx for any tools that would rely on it. And we've also moved to Nix.

39:24 We've started using Nix for our package management instead of Homebrew for that reason.

39:31 This portion of Talk Python To Me is brought to you by Agency.

39:34 Build the future of multi-agent software with Agency, spelled A-G-N-T-C-Y.

39:40 Now an open-source Linux Foundation project, Agency is building the Internet of Agents.

39:45 Think of it as a collaboration layer where AI agents can discover, connect, and work across any framework.

39:52 Here's what that means for developers.

39:54 The core pieces engineers need to deploy multi-agent systems now belong to everyone who builds on Agency.

40:00 You get robust identity and access management, so every agent is authenticated and trusted before it interacts.

40:06 You get open, standardized tools for agent discovery, clean protocols for agent-to-agent communication, and modular components that let you compose scalable workflows instead of wiring up brittle glue code.

40:19 Agency is not a walled garden.

40:21 You'll be contributing alongside developers from Cisco, Dell Technologies, Google Cloud, Oracle, Red Hat, and more than 75 supporting companies.

40:30 The goal is simple.

40:32 Build the next generation of AI infrastructure together in the open so agents can cooperate across tools, vendors, and runtimes.

40:39 Agency is dropping code, specs, and services with no strings attached.

40:44 Sound awesome?

40:45 Well, visit talkpython.fm/agency to contribute.

40:49 That's talkpython.fm/agntcy. The link is in your podcast player show notes and on the episode page. Thank you as always to Agency for supporting Talk Python To Me. Maybe the next one I want to throw out there to talk about is uv. Yeah, it was compelling when it was uv pip install and uv venv, but I think Peter really hit the nail on the head: once it sort of jujitsu'd Python and said, okay, now here's the deal. We manage Python. Python doesn't manage us. It just uncorked the possibilities, right? Because you can say uv venv and specify a Python version, and it's not even on your machine. Two seconds later, it's both installed on your machine and you have a virtual environment based on it. And you don't have to think about it. You know, Peter, you talked about pyenv. It's great, but it compiles Python on your machine, which is super slow and error-prone. Because if the build tools on your machine aren't quite right, then, well,
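For anyone following along at home, the workflow Michael describes looks roughly like this. This is a sketch, assuming uv is installed; the Python version and package names are just examples:

```shell
# Create a virtual environment on a Python you may not even have yet;
# uv downloads the interpreter for you in seconds.
uv venv --python 3.13

# Install into it with the familiar pip-style interface.
uv pip install httpx

# Projects stay pip-compatible: anyone without uv can still do
#   pip install -r requirements.txt
```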

41:44 Oh, yeah.

41:44 Compiling Python is no joke.

41:46 No, it isn't.

41:46 I used to do it for a while for Talk Python in production.

41:49 It was like a 10-minute deal.

41:51 I had it automated.

41:52 It was fine, but it took 10 minutes.

41:53 There was no rush in it.

41:54 And you don't need to spend.

41:56 The great irony of this is that, again, we in the data science world have spent years trying to convince certain folks in the sort of non-data-and-science Python world that you can't solve the Python packaging problem without including the management of Python itself in it. And we just got nowhere. We were just repeatedly told, PyCon after PyCon, packaging summit after packaging summit: the scope of a Python package manager is to manage the things inside site-packages, and anything outside of that, system libraries, libpng, libtiff, you know, OpenCV, those things are outside of scope. And, you know, there are many distributions. There's Linux distros like Debian or Red Hat.

42:42 There's distribution vendors like us at Anaconda that are cross-platform.

42:46 We're trying to make the case for this, but we just kept not landing that argument.

42:50 UV comes along and does it.

42:51 And everyone's like, oh, this is totally the way to do it.

42:54 It's like, well, I guess the users have finally spoken. I think we can pave that cow path.

42:59 And I agree, it is utterly the way to do it.

43:01 And then what we're going to learn, I think, on the other side of that is, oh, not only is it great to manage Python as part of the whole thing, But now we actually should care how we build that Python, because your choice of clang, GCC, your choice of what particular libraries you link in, that determines the tool chain for compiling everything else as well.

43:19 And especially to talk about data, AI, ML kinds of libraries, there's incompatibilities that will emerge as you try to install this, install that.

43:27 So I gave a talk at PyBay, sorry to sort of toot my own horn a little bit, but I gave a

43:32 talk at PyBay last fall about the five demons of Python packaging, where I try to unravel why is this perennial problem so gnarly and horrible?

43:42 And it's because there's many dimensions of it.

43:44 And most users only care about one and a half of those dimensions.

43:47 They just really want to install the damn package and just use it.

43:51 But if you're a maintainer, that's right.

43:53 You got to have the obligatory Blade Runner.

43:56 And anyway, so I put that talk together just to sort of get everyone on the same page to understand why we have different tools, why distribution vendors, whether it's an Anaconda or an Ubuntu, a Red Hat or Nix, right? Why homebrew? These things do matter. And there's a reason people depend on these tools.

44:14 And anyway, I hope that people who care about Python packaging or want to understand it more deeply go and look at this talk, because I do try to give time to each of the topics that make this so complicated. And for Python in particular, because I hear a lot of people talking about, why isn't it as easy as Rust? Or, oh, npm is so nice. Well, I don't hear that very

44:34 often. Is it? No, no, no. Actually, I don't hear a lot of praise for npm. Well, like, why doesn't

44:39 JavaScript have this problem? It's like, well, JavaScript doesn't have a pile of Fortran involved in it, right? Many people don't know, but, you know, there's a fun thing in there. I talk about the fact that if you want to use nbconvert to convert a notebook into a PDF, you need a Haskell compiler because Pandoc depends on Haskell. So there's just things like that. Our ecosystem is replete with these things. And most users don't have to see it if the upstream folks or the distribution folks and packaging people are doing their jobs right. But that doesn't mean that it's not hard. It doesn't mean that it's not

45:09 real labor that goes into making it work. Yeah. Look in the chat: Pamela points out that even using uv, there are now multiple ways, which is tricky. And I would refer to myself as one of the old-school people. I still use uv in kind of an agnostic way. Like, if people don't want to use uv and they take one of my projects, they can still use pip and they can still use pip-tools. And things like uv venv or uv pip install or, you know, pip compile, especially to build out the pinned requirements. But if you don't like it, you just pip install -r requirements.txt instead of using what I was doing, right? And then there's this other way of embracing it: let it manage your pyproject.toml entirely and create it and so on. So I think there is a little bit

45:55 of confusion, but I think, yeah, it's probably a good step forward for the Python packaging community, for sure. It's good they made that compatibility path, though. It helps people be comfortable, because change is hard. As humans, we don't like change, but this is a really good change. That speed-is-a-feature thing that Charlie talks about, I'm a hundred percent on

46:13 board with. Yeah, I agree. And it's changed even my Docker stuff now. One of my Docker layers is just uv python install, or really, I think it just creates a virtual environment, which also installs the Python. And that's a two-second, one-time deal. And then it's off to the races. It's really, really nice. All right. We probably have time for a few more topics. However, if I put this one out into the world, it may consume all of the time, as it does pretty much all of the GPUs.
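A sketch of the kind of Docker layer described here; the base image, uv tag, and paths are illustrative, not a recommendation:

```dockerfile
FROM debian:bookworm-slim

# Copy the static uv binary in from Astral's official image.
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /usr/local/bin/

# One fast layer: uv fetches the interpreter and builds the venv together.
RUN uv venv --python 3.13 /app/.venv
ENV PATH="/app/.venv/bin:$PATH"
```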

46:41 What are your thoughts on agentic coding? What are your thoughts on LLMs and agentic coding AI and that whole soup of craziness? I'm shocked how many people are not diving in

46:54 headfirst on this. I literally started talking to some developer last week and I was like, hey, we tried Claude Code. And they were like, no, what's that? I was like, oh my.

47:05 What?

47:05 Yeah, exactly. Well, we've got Copilot. I think the issue is in the enterprise, a lot of people have opted to purchase Copilot because it's a checkbox and a one-click purchase.

47:16 So it's easy, but they're not giving them Copilot Studio, which is the agentic version of it.

47:21 They're just like, yeah, you've got your LLMs now.

47:23 Go have fun.

47:24 I think they're really missing out on the true power of like a tool that can inspect your file system, a tool that can like look at things and do actions.

47:31 Now, obviously that introduces risk.

47:33 So a lot of these security people in these environments are not excited about that level of risk.

47:38 I don't have a good answer for that, other than: if you're a developer and you're going to turn on agentic coding, you kind of have to sign up and be accountable for what it's going to do.

47:46 I've got some ideas and some concrete recommendations for you.

47:49 But Peter, I want to hear what you have to say first.

47:51 So first of all, I think vibe coding is simultaneously oversold.

47:56 At the same time, I'm very bullish on where this can go.

48:02 Ultimately, the Transformers models and that style of current era AI has some structural mathematical limitations.

48:11 The recent OpenAI paper about how hallucinations are inevitable, and part of the math, sort of shows that, yeah, we're going to end up there.

48:17 It is, to some extent, glorious high-dimensional autocomplete.

48:21 But oh my God, it's glorious when it's right.

48:23 So it is steerable.

48:24 It's like trying to fly a very, very awkward airplane before we've really figured out aerodynamics.

48:29 But it kind of still does work.

48:31 So people should absolutely 100% be looking at what this can do for them.

48:36 And really, right now, I would say you should actually be grateful for the known, visible limitations of vibe coding, because that gives us time and space to think about how we would design projects.

48:52 Because I know for myself, the way I code is I write doc strings and comments and sort of class structures first.

48:59 And then I think about what needs to play with what and you're writing documentation.

49:03 And if I can just have the code itself just get filled out with that, like, holy crap, like, of course, right?

49:08 So everyone should be doing this so they can think about it and really think about where this stuff will go because it's definitely going to get better.

49:15 But if you're worried about the data leakage and the compliance and all this other stuff, use local models.

49:21 Go buy, or expense, a couple of GPUs.

49:24 3090s actually work fine with the newer, smaller models.

49:26 If you work for a richer employer, maybe you can get a couple of 5090s.

49:31 Sacrifice a gaming PC. Come on.

49:34 It's also a gaming PC.

49:35 It's also a gaming PC.

49:36 An M4 Mac with 64.

49:38 I have an M4 Mac with 64 gig of RAM.

49:40 And it's wonderful.

49:41 I've got Devstral running.

49:42 I've got GPT-OSS running.

49:45 All those tools run on just a base-model Mac.

49:49 I have Mac.

49:51 Yeah.

49:51 I have a 32-gig Mac mini running here, and I'm running the 20-billion-parameter OpenAI model on it, just to be shared with all my computers, my laptop.

50:01 Yeah.

50:01 And there's also, you know, the Chinese models are really freaking good, you know?

50:07 And, I mean, I don't know, we'll see what happens with CES next year, but I feel like this year was the year of small models. I mean, we started the year with DeepSeek, right? And so it's not just Chinese labs saying, we don't need your stinking whatever,

50:21 but over the course of the year, we got Kimi, we got Qwen, we got GLM. We're just going to

50:26 keep getting these. And that's not even, that's just on the code and the text prompting side.

50:30 That's not even on image generation. So the Chinese image and video generation models are

50:34 just jaw-droppingly good. So I think what we're going to see here is by the beginning of next year, well, this is a 25 slash 26 podcast, right?

50:41 So in '26, you probably have no excuse. You know, professional CAD and engineering people have big workstations.

50:51 As a dev, maybe you just have a big workstation now or a fat 128 gig, you know, unified memory for Mac.

50:57 But like, you're just going to have that as your coding station and everything is local.

51:01 You're going to be careful with tool use, of course, but still like you just run all locally.

51:05 But I think as a developer, the key, one of the key skills you should learn is going to be context engineering

51:11 and using sub processes.

51:14 The models now support basically spinning off parallel instances of themselves.

51:18 And you can spin off parallel instances with a limited amount of context to kind of really shape how they understand things.

51:25 Because Google introduced the Gemini with like a 1 million token context window limit.

51:31 So what?

51:32 What are you going to do with that?

51:33 It's really not useful to just feed a million tokens into it, because it can't use them, just as much as you can't stuff your brain.

51:39 Well, it tapers off at the end as well.

51:41 It's not really a million tokens.

51:43 Right, you don't get a million tokens.

51:44 And it's also, it's just going to be thoroughly confused by all the context you just threw at it.

51:48 But if you can give a really narrow focus context, small diffs, that's one of the things I liked about Aider Chat.

51:53 If you've not checked out Aider Chat, it has a diff mode

51:56 that really limits the amount of tokens it consumes.

51:58 So actually it's a little more efficient on tokens than, like, Claude Code, even if you're using the Anthropic models the same way, because it'll do diffs and send smaller context.

52:07 And if you can leverage that with, like, sub-models or sub-prompts... Goose, the chat agent from Block, has recipes that actually operate in, like, a sub-model.

52:17 So it's basically like you're building your own little tools that are just descriptions of like what MCP pieces it should use, what tools should be available and use this context and only pass me back that bit and throw away the extra context once you're done.

52:29 So you're not polluting your context window with a whole bunch of unneeded operation.

52:34 and now you get back really what's needed for whatever you're trying to work on.
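The "give the sub-agent only a narrow slice of context" idea above can be sketched in plain Python. This is a toy illustration: the whitespace split is a stand-in for a real tokenizer, and the budget numbers and chunk names are made up.

```python
def trim_context(chunks: list[str], budget: int) -> list[str]:
    """Keep the most recent chunks that fit within a naive token budget."""
    kept: list[str] = []
    used = 0
    # Walk newest-first so the freshest context survives the cut.
    for chunk in reversed(chunks):
        cost = len(chunk.split())  # stand-in for real token counting
        if used + cost > budget:
            break
        kept.append(chunk)
        used += cost
    return list(reversed(kept))

history = [
    "repo layout notes " * 10,               # 30 "tokens"
    "failing test output " * 5,              # 15 "tokens"
    "current task: fix the pagination bug",  # 6 "tokens"
]
# With a budget of 25, the oldest chunk gets dropped.
print(trim_context(history, budget=25))
```

A real agent would do this with the model's tokenizer and summarize what it drops, but the shape of the idea is the same: a bounded, deliberately chosen slice of context rather than everything you have.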

52:37 Yeah.

52:38 So I want to kind of take it a little bit higher level back real quick.

52:41 How about I'm with you?

52:42 If you have not seen this... I've talked to a lot of really smart devs who are like, yeah, I tried Copilot or one of these things, and their experience is largely, I think, with the multi-line autocomplete.

52:54 And to me, that, I just turned that off.

52:56 That's like garbage.

52:57 I mean, it's not garbage, but let me put it more kindly.

53:01 Like half of the time hitting tab is glorious.

53:04 And the other half, I'm like, I want the first half, but the second half is wrong.

53:08 So do I hit tab and then go down and delete it again?

53:11 Like, you know what I mean?

53:11 I got to like, it's giving me too much and it's not quite right.

53:14 But the agentic tool using part is stunning.

53:18 Not with the cheap models, but with the models that cost like $20 a month.

53:22 It's a huge difference from the very cheap model to like.

53:24 Which is like, that's not even a latte a week, right?

53:27 Like just like we're talking to an audience of probably mostly professional developers, right?

53:32 Yes.

53:32 You know, a hundred bucks a month, $200 a month for what literally is transforming the future of your entire industry is worth it.

53:39 Like, why would you not subscribe to your and your employer should be paying for this?

53:42 Like they should be handing you all.

53:44 Well, but if they do actually.

53:45 So here's the thing.

53:46 I'm actually two minds of this.

53:47 I think every dev, for their own purposes, for their own application, should be paying for their own, because the employers will have limitations on what they're allowed to use.

53:54 They may have to sign up for an enterprise thing, which then has, you know, data retention policies, yada, yada, yada. And you want to just go full blast on what's absolute cutting edge being released by various labs. But I would still, again, my little, you know, nerdy open-source heart would not be sated unless I made the comment here: please play with local models. Please do work in a data-sovereignty mode.

54:19 Because this is actually the closest, the first time I think we've had real tech that could potentially move people away from a centralized computing model, which has been, I think, so deleterious to our world, actually.

54:33 And the last thing, which we don't have time for, but the one I was going to throw a shout-out for, is for people to check out BeeWare, because that is the way that we can build Python mobile applications and really ship applications. Like, we should be deploying to mobile.

54:46 So many Python developers are web devs storing state in a Postgres somewhere.

54:50 And we're part of that data concentration, data gravity problem.

54:53 Yeah.

54:53 Whereas if we flip the bit and just learn for ourselves, how do we vibe code an actual iOS platformer?

54:59 Like let's go do that.

55:00 Right.

55:00 Or an Android thing, which is a little bit easier to deal with.

55:02 These are things that we can actually do.

55:04 Yeah.

55:04 Yeah.

55:05 Totally.

55:05 I want to give a shout-out to you, Peter, and Anaconda in general, for all the support for BeeWare and some of the PyScript and some of those other projects.

55:12 Those are important ones.

55:14 And yeah, good work.

55:15 Yeah.

55:15 Thank you.

55:16 fight the good fight. Yeah, for sure. Thank you. I do want to, I'm not quite done with this AI

55:20 thing though. I do want to point out this thing called Cline that recently came out. That's really pretty interesting. Have you guys heard of this? Yep. Yep. Yep. Yeah. So it's open source. It's kind of like Cursor, but the big difference is they don't charge for inference.

55:35 You just put in an API key, or you put in a URL to a local model. So you can use local models with it. Yeah. And I recommend, if you're using local models, and you

55:44 really want to go all in on the data-sovereignty piece, use tools like Little Snitch on your Mac to know if it's sending something someplace you didn't request it to. You can be totally eyes wide open, and maybe exercise a little more reckless abandon, if you know that a tool like that can catch an outbound connection you didn't expect. Yeah. I think I'm gonna

56:05 see how much time I have. I don't want to burn it all, but I will give you guys an example. If you've done a lot of web development and web design, this will probably catch your attention.

56:19 So I want to add some new features to talkpython.fm.

56:23 I got some cool whole sections coming and announcements.

56:27 But talkpython.fm was originally created and designed in 2015 on Bootstrap.

56:33 Do you know how out of date 2015 Bootstrap is with modern day front end frameworks?

56:37 A lot.

56:39 But there's like 10,000 lines of HTML designed in Bootstrap, early Bootstrap.

56:46 It still renders great on my phone, though.

56:48 And the LLMs are very aware of old Bootstrap documentation and issues.

56:53 Peter, it looks great and it works well.

56:55 But here's the thing.

56:56 I want to add a whole bunch of new features and sections to it.

56:58 I've got to design that from scratch.

57:00 I'm like, oh, I can't do this in Bootstrap 3.

57:02 I just don't have the willpower for it.

57:04 It's going to make it so hard, you know?

57:07 And so I'm like, well, I really should redesign it, but that's got to be weeks of work.

57:11 And one evening around four o'clock, I'm just hanging out, you know, enjoying the outside, sitting, working on my computer, trying to take in a little more summer before it's gone.

57:19 And I'm like, you know what?

57:20 I bet Claude Sonnet and I could do this. Less than two hours later, the entire site, 5,000 lines of CSS, 10,000 lines of template HTML files, all rewritten in Bulma. Modern, clean, doesn't look at all different, except for the few parts where I'm like, oh, I don't like that, rewrite that. Actually, to the point where you just take a screenshot of what you do want, throw it in there and go, make it look like this. Oh yeah, okay, I see the picture. Let's make it look like that. And it's just a couple hours for what would be pulling-your-hair-out, the most tedious, painful work, for a week or two. And now, if I want to add something to the site, it's just, oh yeah, it's just modern Bulma, off it goes. Or I could have chosen Tailwind or whatever. I think Bulma works a little better with AIs because it doesn't have build steps and all that kind of stuff. It's a little more straightforward. But those are the kinds of things where, literally, I wrote down a markdown plan.

58:08 I said, here's what we're going to do.

58:10 And I planned it out with AI.

58:11 Then I said, okay, step one, step two.

58:13 And then we just worked it through till it was done.

58:14 There's a few little glitches.

58:16 I'm like, this looks weird.

58:17 Here's a screenshot, fix it.

58:18 Okay.

58:18 AI is really good at these kinds of tasks.

58:20 Yeah.

58:21 And if people have not seen this in action, I think it just doesn't.

58:24 They're like, I tried to use ChatGPT and it gave me an answer, but it doesn't help that much.

58:28 I could write that.

58:29 Or I used a free cheap model and it got it wrong and I had to fix more than it helped me.

58:34 There are these neural nets that are crazy.

58:36 There's something that people don't, I think, have an intuitive feeling for because they're encountering a cognitive reactive system for the first time.

58:45 I'm not saying sentient or conscious, by the way, but just cognitive.

58:48 And so it's sort of like it's going to be as deep as how you probe it.

58:55 So if you ask it a dumb, shallow thing, it will give you a dumb, shallow response.

58:59 But if you get really deep or nerdy, and I was using early incarnations.

59:04 actually a couple of years back. I remember when I first figured out this effect, I was reading some philosophy books, as one does. And I was thinking, well, I could use this as a co-reading tutor.

59:13 And I noticed, I would just ask it for, you know, some summaries. I'm like, well, that's reasonable, but, you know, okay, whatever. But then as I got deeper into some of the content, and I was asking for contrasting opinions from different perspectives and some critiques and all this stuff, and I started getting into it, it would go very deep. And this is like GPT-4o-just-came-out kind of timeframe. So I think the same thing is true now, especially with, like, GPT-5 research. I've had feedback from friends who are like, yeah, some people say 5 is a nothing burger, but 5 with research is a thing, because, I'm able to do this... this is this other person, not me, but this other person saying, quote, I'm able to get graduate-level feedback, like stuff that is deeply researched in arcane parts of mathematics.

59:54 And I check it. I mean, I use Claude to check the GPT-5 output, and it basically is correct, as far as I can tell. So I think the thing to go to these people with is, like, if you're not getting

01:00:04 anything out of it, it's because you're not squeezing hard enough, right? Approach it as

01:00:07 if it were a super intelligence and see how little it disappoints you. Because it will not disappoint

01:00:13 you that often if you really get into it. Yeah. I want to take a slightly different take, but I a hundred percent agree. I think you should treat it a little bit like a junior developer who knows 80% of what you want, but is kind of guessing that last 20%. And if you gave this work to a junior dev and they got it 95% right, and there's a little mistake, and you had to go and say, hey, really good, but this part you've got to fix up a little bit, that would be a success for that junior developer. I don't know why we expect 100% perfection, where if there's any kind of flaw whatsoever from such a creation process, it's like, well, it's broken, it's junk. It's expected to make a few mistakes, and you've got to be there to guide it, but the huge amount it gets right

01:00:53 is so valuable. This doesn't negate the standard software development lifecycle process and code review. Like, you still need to have those kinds of things in place, and to do the code reviews with your junior developer, who's the LLM now. Well, yeah, the SDLC isn't negated,

01:01:07 but the thing I think that's deeply counter to it is... I mean, think about the modality, how this manifests: we're typing things still into a text window.

01:01:16 Right. And so we as developers are used to that being a very precise, predictable input-output transformational process. We're not used to the idea of coding with a semantic paintbrush. Right? Like, a Chinese or Japanese calligrapher doesn't care exactly which horsehair got ink on which part of the paper. They've got a brush and they're doing their calligraphy. And I think we have to get over ourselves and think: I'm painting with a semantic paintbrush, splattering it. Certainly using my fingers with a keyboard, but soon it'll be dictation, right? And so we're really splattering ideas onto this canvas, and it's auto-rendering the stuff for us into a formal system. And I think just the modality... wow, you can see the clouds are going over the sun, and my temperature changes on video.

01:01:59 It's the AI doing it.

01:02:01 The AI is doing it because I'm getting passionate about this, right?

01:02:04 So, no, but I think that's the key thing.

01:02:06 So we are used to this modality of fingers on keyboard textual input being an input to a formal system, not an informal probabilistic system, which is what these things are.

01:02:15 So once you make that mental bit flip, then it's like you just learn to embrace it, right?

01:02:20 Yeah.

01:02:20 I think voice is a great option here.

01:02:23 We use Fireflies for our meeting recording bot.

01:02:27 You can also just open up your phone and launch the Fireflies app and start talking to it.

01:02:30 And it has an MCP server.

01:02:32 So you can go into Claude Code and be like, grab the last transcript where I was just talking about this and pull it in or have a discussion about the specifications, about the journey, the epic, the customer's story, and bring those in as artifacts really, really quickly now.

01:02:47 Yeah.

01:02:48 Whole other ballgame.

01:02:49 It is a crazy ballgame.

01:02:49 That's something I learned.

01:02:50 It's a whole new ballgame.

01:02:51 Yeah.

01:02:53 All right. Anything else that is burning on your list of topics that we should do a lightning round?

01:02:57 Because we're out of time on.

01:02:58 We should lightning round on DuckDB.

01:03:00 I think I agree.

01:03:02 You two riffed on it because I'm knowledgeable, but you all are the ones who use it.

01:03:06 If you've not played with it, it is an incredible little, you know, embedded, you know, like kind of SQLite, but way more.

01:03:14 And if you've got files on a disk someplace, they're now your database.

01:03:18 If you've got stuff in an S3 bucket someplace, that's now your database.

01:03:22 Like it's incredibly flexible.

01:03:24 It's got so many like cool extensions built into it.

01:03:26 Like it can do geospatial stuff.

01:03:28 It's got JSON capabilities that are like really incredible.

01:03:31 I mean, the speed is a little bit mind blowing.

01:03:34 It's kind of like the first time you use uv or rough.

01:03:36 Like how is that so fast?

01:03:37 And then you use DuckDB and it's really, I think folks should go check it out and learn a little more because it may change how you think about deploying an at edge thing or a little local thing or even a big data analysis piece.

01:03:51 you may actually be able to fit that into memory on your machine and DuckDB and get some incredible results out of it.

01:03:56 I'm sure Peter has way more to talk about this than I do, but I don't use it that much.

01:04:01 But man, if I had a use case for it, I would be 100% picking that tool up.

01:04:05 Yeah, DuckDB is a fantastic little piece of technology.

01:04:08 I don't mean little in a pejorative sense here, but at a technical level, I would say it is a highly portable, very efficient and very versatile database engine.

01:04:19 So the name is almost wrong, because it liberates you from databases.

01:04:25 We are used to thinking of databases as places where data goes to, well, not die, but to be housed at rest and have an extreme amount of gravity attracted to it.

01:04:33 And then DuckDB takes the opposite of that, says any data representation you have should be searchable or queryable if only you had the right engine.

01:04:44 And it's sort of like it inverts the whole thing, which is the brilliant piece of it.

01:04:50 And again, what is data but representation? It's somewhere on a disk, or over a network, or in memory.

01:04:57 So it pairs very nicely with the PyData stack of tools. And I know one of the topics we had on here as well was Arrow.

01:05:04 So if you care about representation, for a variety of reasons,

01:05:07 then Arrow is great. If you want a query interface, a SQL-style query interface that's agnostic as to representation, that's your DuckDB. And of course, the fact that it plays so well with WebAssembly means edge: Cloudflare Workers or whatever, or PyScript and WebAssembly workers. We have some demonstration examples using PyScript where you have an entire analytics stack running entirely within the browser. Full on: you've got Pandas and scikit-image, scikit-learn, Matplotlib stuff going on, and you're hitting S3 buckets with full-blown SQL queries using DuckDB, because it all runs on WebAssembly. And this is just a taste. I mean, none of this is mainstream yet. I think some of these use cases are a little bit on the edge. But the vision this takes us to is a world where things really are much more portable. Your apps can just move. Give someone a web page, a static web page, and it's a full-blown app. And actually, if you look at WebGPU and Transformers.js, WebLLM kinds of stuff, you can fit a little tiny model in there, and you have a totally local, entirely client-side experience with AI in it.

01:06:15 So I'm very excited about this.

01:06:17 And DuckDB is really part of that equation.

01:06:19 Yeah, bring your query engine to where your data is.

01:06:22 Exactly.

01:06:22 That way around, which always takes time.

01:06:25 Yeah, excellent.

01:06:27 I know people are very excited about it.

01:06:28 It's got the built-into-your-program,

01:06:32 You don't have to run another server aspect, which I think is good as well.

01:06:35 And with the WebAssembly stuff, maybe there won't be local DBs and Web SQL and all those things; we'll just do DuckDB in the browser with WebAssembly.

01:06:46 Be nice.

01:06:48 So very interesting.

01:06:49 We barely scratched the surface, you guys.

01:06:51 Like there's more people need to know, but I think these are probably some of the hotter topics.

01:06:57 We may have to do a part two, but a 2026 edition that's just a continuation.

01:07:01 But if people take the time, invest in putting some energy into these things, it's going to make a big difference, I think.

01:07:08 Thanks for being on the show.

01:07:09 And yeah, it's been great.

01:07:10 Yeah, this was awesome.

01:07:11 Thank you so much for having us.

01:07:12 Yeah, thanks, Michael.

01:07:13 I enjoy talking about all the cool new tech and tools.

01:07:15 Yep.

01:07:15 Bye, guys.

01:07:17 This has been another episode of Talk Python To Me.

01:07:20 Thank you to our sponsors.

01:07:21 Be sure to check out what they're offering.

01:07:23 It really helps support the show.

01:07:25 Take some stress out of your life.

01:07:26 Get notified immediately about errors and performance issues in your web or mobile applications with Sentry.

01:07:32 Just visit talkpython.fm/sentry and get started for free.

01:07:37 And be sure to use the promo code talkpython, all one word.

01:07:41 Agency. Discover agentic AI with agency. Their layer lets agents find, connect, and work together, any stack, anywhere. Start building the internet of agents at talkpython.fm/agency, spelled A-G-N-T-C-Y. Want to level up your Python? We have one of the largest catalogs of Python video courses over at Talk Python. Our content ranges from true beginners to deeply advanced topics like memory and async. And best of all, there's not a subscription in sight.

01:08:09 Check it out for yourself at training.talkpython.fm.

01:08:12 Be sure to subscribe to the show, open your favorite podcast app, and search for Python.

01:08:16 We should be right at the top.

01:08:18 You can also find the iTunes feed at /itunes, the Google Play feed at /play, and the direct RSS feed at /rss on talkpython.fm.

01:08:27 We're live streaming most of our recordings these days.

01:08:30 If you want to be part of the show and have your comments featured on the air, be sure to subscribe to our YouTube channel at talkpython.fm/youtube.

01:08:38 This is your host, Michael Kennedy.

01:08:40 Thanks so much for listening.

01:08:41 I really appreciate it.

01:08:42 Now get out there and write some Python code.

