
Web Frameworks in Prod by Their Creators

Episode #533, published Mon, Jan 5, 2026, recorded Wed, Dec 17, 2025
Today on Talk Python, the creators behind FastAPI, Flask, Django, Quart, and Litestar get practical about running apps based on their frameworks in production. Deployment patterns, async gotchas, servers, scaling, and the stuff you only learn at 2 a.m. when the pager goes off. For Django, we have Carlton Gibson and Jeff Triplett. For Flask, we have David Lord and Phil Jones, and on team Litestar we have Janek Nouvertné and Cody Fincher, and finally Sebastián Ramírez from FastAPI is here. Let’s jump in.

Watch this episode on YouTube
Watch the live stream version

Episode Deep Dive

Guests and Background

This episode brings together an exceptional panel of Python web framework creators and maintainers for a deep dive into production deployment. Here are the guests:

Carlton Gibson - A former Django Fellow who spent five years working on Django itself. Carlton now builds production applications with Django and is part of the Django Steering Council. He maintains several packages in the Django ecosystem and recently contributed template partials to Django 6.0.

Jeff Triplett - Based in Lawrence, Kansas, Jeff is the newly elected president of the Django Software Foundation and a consultant at Revolution Systems. With 20 years of Django experience, he maintains Django Packages, produces the Django News newsletter, and has used nearly all the frameworks discussed in this episode.

Sebastian Ramirez - The creator of FastAPI, one of Python's fastest-growing web frameworks. Sebastian is now building FastAPI Cloud, a deployment platform designed to make deploying FastAPI applications as simple as running a single command.

David Lord - Lead maintainer of Pallets, the organization behind Flask, Jinja, Click, Werkzeug, ItsDangerous, and MarkupSafe. David has been the lead maintainer since 2019 and has recently created several new Flask extensions including Flask-SQLAlchemy-Lite and Flask-Email-Simplified.

Phil Jones - Creator of Quart (Flask with async/await support), author of the Hypercorn ASGI server, and a contributor to the Pallets organization. Phil uses Quart at work and has been instrumental in exploring HTTP/3 support in Python.

Janek Nouvertne - A Litestar maintainer for three years who works with Django, Flask, FastAPI, Quart, and Litestar deployments professionally. He brings a practitioner's perspective, recommending frameworks based on use case rather than allegiance.

Cody Fincher - A Litestar maintainer for about four years who currently works at Google. He previously worked on cloud migrations for enterprises and has contributed many of Litestar's "optional batteries" that work across multiple frameworks.


What to Know If You're New to Python

Before diving into this episode's production-focused discussion, here are some foundational concepts that will help you follow along:

  • WSGI vs ASGI: WSGI (Web Server Gateway Interface) is the traditional Python web server protocol for synchronous apps, while ASGI (Asynchronous Server Gateway Interface) supports async/await patterns. Understanding this distinction is crucial for choosing the right server and deployment strategy.

  • async/await in Python: The async programming model allows handling many concurrent connections efficiently, but requires understanding when code is "blocking" versus "non-blocking." This episode heavily discusses the gotchas around this topic.

  • Process vs Thread Scaling: Python web apps can scale by running multiple processes (separate memory spaces) or multiple threads (shared memory). The GIL (Global Interpreter Lock) historically limited thread-based scaling, but free-threaded Python is changing this.

  • ORM and N+1 Queries: Object-Relational Mappers like Django ORM and SQLAlchemy can accidentally generate many database queries when iterating over related objects. Learning to spot and fix these patterns is essential for production performance.


Key Points and Takeaways

1. The Foundation: Choose Your Server Stack Wisely

The panel unanimously agreed that production deployments should start simple and scale as needed. Carlton Gibson articulated the "old school" approach that remains highly effective: Nginx as a reverse proxy, a WSGI server with pre-fork workers for the core application, and an ASGI sidecar for long-lived connections like WebSockets or Server-Sent Events. The key insight is that you do not need to overcomplicate your stack - a single well-configured server can handle substantial traffic. Sebastian Ramirez noted that most developers should not be dealing with Kubernetes and hyperscaler complexity for typical applications.
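
As a concrete starting point, the classic stack described here might pair Nginx with a Gunicorn pre-fork config along these lines. This is a sketch with illustrative values; tune worker counts and timeouts for your hardware and workload:

```python
# gunicorn.conf.py -- Gunicorn reads this Python file at startup.
import multiprocessing

bind = "127.0.0.1:8000"   # Nginx reverse-proxies public traffic to here
workers = multiprocessing.cpu_count() * 2 + 1  # common pre-fork heuristic
worker_class = "sync"     # plain WSGI workers for the core application
timeout = 30              # recycle workers stuck longer than 30 seconds

# Long-lived connections (WebSockets, Server-Sent Events) would live in a
# separate ASGI sidecar process (e.g. uvicorn on another port), routed by
# path in the Nginx config.
```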

2. Database Performance Is Your Biggest Lever

Multiple panelists emphasized that database interactions dominate most web application performance bottlenecks. Cody Fincher highlighted two critical issues: N+1 query problems where SQLAlchemy or Django ORM accidentally executes hundreds of queries for what should be one, and oversized connection pools that consume database CPU and RAM just managing connections. Carlton Gibson recommended using Django Debug Toolbar to identify duplicate queries and running SQL EXPLAIN to find missing indexes. A proper index on a frequently filtered column can turn a full table scan into an instant lookup - potentially a 100x improvement.
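
The N+1 shape is easy to demonstrate with nothing but the standard library. This sketch counts the statements SQLite actually executes; the SQL is plain here, but the pattern is exactly what ORM helpers in the select_related / joinedload family fix:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT,
                       author_id INTEGER REFERENCES author(id));
    INSERT INTO author VALUES (1, 'A'), (2, 'B'), (3, 'C');
    INSERT INTO book VALUES (1, 't1', 1), (2, 't2', 2), (3, 't3', 3);
""")

queries = []
con.set_trace_callback(queries.append)  # record every SQL statement issued

# The N+1 pattern: 1 query for the books, then 1 more per book for its author.
books = con.execute("SELECT id, title, author_id FROM book").fetchall()
for _, _, author_id in books:
    con.execute("SELECT name FROM author WHERE id = ?", (author_id,)).fetchone()
n_plus_one = len(queries)   # 1 + N queries (4 here)

# The fix: fetch everything in one JOIN.
queries.clear()
rows = con.execute("""
    SELECT book.title, author.name FROM book
    JOIN author ON author.id = book.author_id
""").fetchall()
batched = len(queries)      # a single query
```

With 3 books that is 4 queries versus 1; with 10,000 rows the same loop issues 10,001.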

3. The Critical Async Gotcha: Never Block in Async Code

Janek Nouvertne identified the single most common mistake in async Python applications: accidentally running blocking code in async functions. When you mark a function as async but call blocking operations inside it, your entire application server stops handling requests until that blocking call completes. Sebastian Ramirez seconded this, noting that frameworks like FastAPI, Litestar, and others automatically handle sync functions by running them in thread workers. The practical advice is to use regular def functions unless you are absolutely certain your code is fully non-blocking - let the framework handle the thread pool execution.
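
A tiny sketch makes the failure mode visible: two handlers that each "wait" 0.2 seconds finish in roughly 0.2 seconds when they await properly, but roughly 0.4 seconds when a blocking call freezes the event loop (the handler names are invented):

```python
import asyncio
import time

async def blocking_handler():
    time.sleep(0.2)           # blocks the whole event loop: nothing else runs

async def good_handler():
    await asyncio.sleep(0.2)  # yields to the loop; other tasks keep running

async def main():
    start = time.perf_counter()
    await asyncio.gather(good_handler(), good_handler())
    concurrent = time.perf_counter() - start   # ~0.2s: the waits overlap

    start = time.perf_counter()
    await asyncio.gather(blocking_handler(), blocking_handler())
    serial = time.perf_counter() - start       # ~0.4s: the waits serialize
    return concurrent, serial

concurrent, serial = asyncio.run(main())
```

Frameworks that run plain def handlers in a thread pool get the same effect as wrapping the blocking call in asyncio.to_thread, which is why the panel's advice is to default to sync functions.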

4. Django 6.0's Game-Changing Background Tasks API

Django 6.0 introduced a pluggable task framework that provides a standard API for background tasks. Carlton Gibson explained that this is like "an ORM for tasks" - third-party library authors can now use Django's task API without tying themselves to Celery, Django-Q2, or any specific queue implementation. Application developers then choose their preferred backend. Jeff Triplett recommended Django-Q2 for smaller projects because it can use the database itself as the queue, eliminating the need for Redis or other infrastructure. David Lord also emphasized the pattern of deferring non-urgent work to background tasks to keep request/response cycles fast.
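
This is not Django's new tasks API (which is backend-pluggable); it is just a minimal stdlib illustration of the deferral pattern described here: respond immediately, let a worker drain the slow work. All names are invented:

```python
import queue
import threading

tasks: queue.Queue = queue.Queue()

def worker():
    # Drain jobs forever; a None job is a shutdown sentinel.
    while True:
        job, args = tasks.get()
        if job is None:
            break
        job(*args)
        tasks.task_done()

threading.Thread(target=worker, daemon=True).start()

sent = []
def send_welcome_email(address):
    sent.append(address)      # stand-in for slow I/O (SMTP, external API)

def signup_view(address):
    # ... create the user row synchronously ...
    tasks.put((send_welcome_email, (address,)))  # defer the slow part
    return "201 Created"                         # respond immediately

status = signup_view("new@user.example")
tasks.join()  # in a real server the worker simply keeps running
```

Swapping the in-process queue for a database table or Redis is exactly the choice the pluggable backend leaves to the application developer.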

5. HTMX and Template Partials Are Transforming Web Development

The panel expressed strong enthusiasm for HTMX as a way to add interactivity without full JavaScript frameworks. Carlton Gibson shared that his startup has "hardly a JSON endpoint in sight" after adopting HTMX - it changed how he writes websites entirely. Django 6.0 now includes template partials (originally Carlton's django-template-partials package), which allow defining reusable template fragments that work perfectly with HTMX partial updates. Janek Nouvertne noted that HTMX fills the gap between static HTML and full SPAs, making it possible to add reactivity with minimal overhead.

6. Upgrade Your Python Version for Free Performance

Sebastian Ramirez shared striking benchmarks: running FastAPI on Python 3.14 versus Python 3.10 shows nearly double the performance. This improvement comes from the Faster CPython initiative's continuous optimizations. David Lord almost convinced himself to drop the C extension from MarkupSafe because pure Python on modern versions got so much faster. Beyond CPU speed, newer Python versions also use significantly less memory, which compounds when running multiple worker processes.

7. Coolify: Self-Hosted Platform-as-a-Service

Jeff Triplett introduced Coolify as a "boring service" that simplifies deployment significantly. It provides one-click installs for Postgres, automatic backups, and easy Docker container orchestration. Once you have one Django or Flask site working with it, duplicating that setup for new projects becomes trivial. Coolify can run as open-source self-hosted or as a managed service for around five dollars per month. The key value is abstracting away the rsync-files-and-hope deployment pattern while remaining simpler than Kubernetes.

8. Free-Threaded Python: The Exciting (and Cautious) Future

The entire panel expressed excitement about free-threaded Python removing the GIL, while acknowledging real challenges ahead. Carlton Gibson noted that Django's sync-first nature means proper threads will help tremendously. Janek Nouvertne is "super excited" but cautious - msgspec only recently gained full free-threading support after significant work from core developers. The consensus is that third-party C extension libraries will be the sticking point. Phil Jones believes WSGI apps may benefit more than ASGI apps initially. David Lord ran Flask's test suite with pytest-freethreaded successfully, suggesting Flask applications may adapt well due to years of emphasizing thread-safe patterns.
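
The workload free-threading targets looks like this sketch: pure-Python, CPU-bound work split across threads. It runs correctly on any build; under the GIL the two threads execute serially, while a free-threaded build can run them in parallel:

```python
import threading

def count_primes(lo, hi):
    # Simple CPU-bound work with no I/O to release the GIL around.
    def is_prime(n):
        if n < 2:
            return False
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return False
        return True
    return sum(is_prime(n) for n in range(lo, hi))

results = [0, 0]
def run(i, lo, hi):
    results[i] = count_primes(lo, hi)

t1 = threading.Thread(target=run, args=(0, 2, 5000))
t2 = threading.Thread(target=run, args=(1, 5000, 10000))
t1.start(); t2.start(); t1.join(); t2.join()
total = sum(results)  # number of primes below 10,000
```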

9. Simplify Deployment with Container-Based Approaches

David Lord emphasized that for many applications, a single Docker container running Flask is dramatically more performant than legacy systems. His projects often have fewer than 100 users, and clients are surprised how little infrastructure is needed. Phil Jones runs Hypercorn behind AWS load balancers in ECS, noting that it is usually the database that needs scaling, not the application servers. Sebastian Ramirez pointed to pythonspeed.com for excellent Docker optimization guidance - following their patterns results in a 20-line Dockerfile that performs well.

10. Performance Quick Wins: JSON Serializers and uvloop

Phil Jones mentioned two low-effort performance improvements. First, swapping the JSON serializer to a faster alternative like orJSON can provide noticeable speedups since JSON serialization is common in API responses. Second, using uvloop as the event loop provides measurable performance gains for async applications. David Lord added that Flask now has a pluggable JSON provider, making it easy to substitute faster serializers without changing application code.
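
Flask's real hook is its JSON provider class; the sketch below only illustrates the seam such a hook creates, with an invented App class. The point is that once serialization goes through one pluggable object, swapping in an orjson-backed provider requires no changes at call sites:

```python
import json

class JSONProvider:
    """Default serializer (illustrative pattern, not Flask's actual API)."""
    def dumps(self, obj) -> str:
        # Compact separators are a small free win over the defaults.
        return json.dumps(obj, separators=(",", ":"))

class App:
    # Swap this attribute for an orjson-backed provider to change
    # serialization everywhere at once.
    json_provider = JSONProvider()

    def jsonify(self, obj) -> str:
        return self.json_provider.dumps(obj)

app = App()
out = app.jsonify({"ok": True, "items": [1, 2, 3]})
```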

11. CDNs: Make Scaling Someone Else's Problem

Jeff Triplett emphasized that the best way to handle scale is to not handle it at all. Putting a CDN like Cloudflare or Fastly in front of your application means cached content never hits your servers. This is particularly valuable for content-heavy sites. Learning proper cache headers and vary headers is an investment that pays dividends - once configured correctly, traffic spikes become the CDN's problem rather than yours.
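
The cache headers in question are ordinary response headers. This sketch builds a typical CDN-friendly set; the numbers are illustrative, not a recommendation for any particular site:

```python
def cache_headers(max_age=300, s_maxage=86400, vary=("Accept-Encoding",)):
    """Build response headers that let a shared cache (CDN) serve this page."""
    return {
        # Browsers may cache for max_age seconds; the CDN (a shared cache)
        # may keep it for s-maxage seconds, so spikes never reach the origin.
        "Cache-Control": f"public, max-age={max_age}, s-maxage={s_maxage}",
        # The CDN stores a separate variant per listed request header,
        # e.g. gzip vs. brotli bodies.
        "Vary": ", ".join(vary),
    }

headers = cache_headers()
```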

12. SQLite in Production: Yes, Really

David Lord revealed that the Pallets website runs Flask with SQLite, inspired by Andrew Godwin's "static dynamic sites" concept. Markdown files are loaded into an in-memory SQLite database at startup for fast querying. Janek Nouvertne mentioned running SQLite in the browser with DuckDB for analysis - deploying just static files to Nginx. The panel agreed that SQLite is perfectly viable for many production use cases when concurrency requirements are modest.

  • Links and Tools:
    • sqlite.org - The most deployed database engine
    • duckdb.org - Analytical database that can query SQLite files
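
A minimal version of the "static dynamic" idea looks like this: load content into an in-memory SQLite database at startup, then answer every request with fast queries against it. The page data here is a hard-coded stand-in for parsed Markdown files; this is a sketch of the concept, not the Pallets site's actual code:

```python
import sqlite3

# Stand-in for Markdown files parsed at startup: (slug, title, body).
PAGES = [("about", "About Us", "We build things."),
         ("blog/hello", "Hello", "First post.")]

# One in-memory database, shared across the app's request handlers.
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE page (slug TEXT PRIMARY KEY, title TEXT, body TEXT)")
db.executemany("INSERT INTO page VALUES (?, ?, ?)", PAGES)

def get_page(slug):
    # Returns (title, body) or None; a real handler maps None to a 404.
    return db.execute(
        "SELECT title, body FROM page WHERE slug = ?", (slug,)).fetchone()

title, body = get_page("about")
```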

Interesting Quotes and Stories

"I'm literally having the time of my life. I spent five years as a Django Fellow working on Django and I just built up this backlog of things I wanted to do. And every day I sit down on my computer thinking, oh, what's today? And every day, a delight." -- Carlton Gibson on building with Django instead of building Django

"If you maintain a framework yourself, you tend to always recommend it for everything. But I noticed it's not actually true. There's actually quite a few cases where I don't recommend Litestar. I recommend, you know, just use Django for this or use Flask for that or use FastAPI for this because, well, they are quite different after all." -- Janek Nouvertne on framework recommendations

"We suffer all the cloud pains so that people don't have to deal with that. And yeah, it's painful to build, but it's so cool to use it." -- Sebastian Ramirez on building FastAPI Cloud

"I almost convinced myself that I can drop a C extension for just a Python upgrade instead. That was pretty impressive." -- David Lord on Python performance improvements

"If you make something an async function, you should be absolutely sure that it's non-blocking. Because if you're running an ASGI app and you're blocking anywhere, your whole application server is blocked completely." -- Janek Nouvertne on the critical async gotcha

"The best way to scale something is just to not do it, avoid the process completely." -- Jeff Triplett on CDNs and caching

"It'll run on a potato." -- David Lord on Flask's minimal resource requirements

"The other day I had to change the account email in one of the AWS accounts. I think I spent four hours." -- Sebastian Ramirez on hyperscaler complexity

"HTMX really changed the way I write websites. We're three years in, we've hardly got a JSON endpoint in sight." -- Carlton Gibson on the impact of HTMX

"I would rather somebody start with HTMX than I would start with React if you don't need it. Because React can be total overkill." -- Jeff Triplett on choosing the right frontend approach


Key Definitions and Terms

WSGI (Web Server Gateway Interface) - The traditional standard interface between Python web applications and web servers. Synchronous by design, it processes one request at a time per worker.

ASGI (Asynchronous Server Gateway Interface) - The async counterpart to WSGI that supports long-lived connections, WebSockets, and concurrent request handling within a single worker.

N+1 Query Problem - A database anti-pattern where fetching N records results in N+1 queries because related objects are fetched one at a time in a loop instead of in a single batch query.

Connection Pooling - Managing a pool of database connections that are reused across requests. Oversized pools waste database resources; undersized pools create bottlenecks.

Pre-fork Workers - A scaling pattern where the main server process forks multiple worker processes at startup, each handling requests independently with its own memory space.

GIL (Global Interpreter Lock) - A mutex in CPython that allows only one thread to execute Python bytecode at a time. Free-threaded Python removes this limitation.

Free-Threaded Python - Python builds without the GIL, enabling true parallel execution of Python code across multiple threads. Available experimentally starting in Python 3.13.

Template Partials - Reusable named fragments within templates that can be rendered independently, enabling efficient partial page updates with HTMX.

Hypermedia - An architectural style where the server returns HTML (or other hypermedia) instead of JSON, letting the browser handle rendering and reducing client-side JavaScript.


Learning Resources

If you want to go deeper on the topics covered in this episode, here are some courses from Talk Python Training that can help build your foundation:


Overall Takeaway

This episode delivers a powerful message: production Python web development in 2026 is remarkably accessible, performant, and flexible. The creators behind FastAPI, Flask, Django, Quart, and Litestar demonstrated both deep expertise and refreshing pragmatism - they recommend competing frameworks when appropriate and emphasize simplicity over complexity.

The recurring theme was "start simple and scale as needed." A single Docker container, a Postgres database, and a well-chosen WSGI or ASGI server can handle far more traffic than most applications will ever see. The biggest performance gains come not from exotic infrastructure but from fundamentals: proper database indexes, avoiding N+1 queries, caching aggressively, and understanding when async helps versus hurts.

Looking ahead, free-threaded Python promises to unlock new levels of performance for synchronous code, potentially eliminating the need for multiple worker processes. The panel's measured optimism - excited yet cautious about third-party library compatibility - reflects mature engineering judgment.

Whether you are deploying your first web app or optimizing a high-traffic production system, the message is clear: Python's web framework ecosystem has never been stronger, the tools have never been better documented, and the community of maintainers represented in this episode is actively working to make your deployments simpler and faster. Trust the fundamentals, measure before optimizing, and remember that sometimes the best scaling strategy is simply getting a bigger box.

Carlton Gibson - Django: github.com
Sebastian Ramirez - FastAPI: github.com
David Lord - Flask: davidism.com
Phil Jones - Flask and Quart (async): pgjones.dev
Janek Nouvertné - Litestar: github.com
Cody Fincher - Litestar: github.com
Jeff Triplett - Django: jefftriplett.com

Django: www.djangoproject.com
Flask: flask.palletsprojects.com
Quart: quart.palletsprojects.com
Litestar: litestar.dev
FastAPI: fastapi.tiangolo.com
Coolify: coolify.io
ASGI: asgi.readthedocs.io
WSGI (PEP 3333): peps.python.org
Granian: github.com
Hypercorn: github.com
uvicorn: uvicorn.dev
Gunicorn: gunicorn.org
Daphne: github.com
Nginx: nginx.org
Docker: www.docker.com
Kubernetes: kubernetes.io
PostgreSQL: www.postgresql.org
SQLite: www.sqlite.org
Celery: docs.celeryq.dev
SQLAlchemy: www.sqlalchemy.org
Django REST framework: www.django-rest-framework.org
Jinja: jinja.palletsprojects.com
Click: click.palletsprojects.com
HTMX: htmx.org
Server-Sent Events (SSE): developer.mozilla.org
WebSockets (RFC 6455): www.rfc-editor.org
HTTP/2 (RFC 9113): www.rfc-editor.org
HTTP/3 (RFC 9114): www.rfc-editor.org
uv: docs.astral.sh
Amazon Web Services (AWS): aws.amazon.com
Microsoft Azure: azure.microsoft.com
Google Cloud Run: cloud.google.com
Amazon ECS: aws.amazon.com
AlloyDB for PostgreSQL: cloud.google.com
Fly.io: fly.io
Render: render.com
Cloudflare: www.cloudflare.com
Fastly: www.fastly.com

Watch this episode on YouTube: youtube.com
Episode #533 deep-dive: talkpython.fm/533
Episode transcripts: talkpython.fm

Theme Song: Developer Rap
🥁 Served in a Flask 🎸: talkpython.fm/flasksong

---== Don't be a stranger ==---
YouTube: youtube.com/@talkpython

Bluesky: @talkpython.fm
Mastodon: @talkpython@fosstodon.org
X.com: @talkpython

Michael on Bluesky: @mkennedy.codes
Michael on Mastodon: @mkennedy@fosstodon.org
Michael on X.com: @mkennedy

Episode Transcript


00:00 Today on Talk Python, the creators behind FastAPI, Flask, Django, Quart, and Litestar

00:05 get practical about running apps based on their frameworks in production.

00:10 Deployment patterns, async gotchas, servers, scaling, and the stuff that you only learn

00:15 at 2 a.m. when the pager starts going off.

00:17 For Django, we have Carlton Gibson and Jeff Triplett.

00:21 For Flask, we have David Lord and Phil Jones.

00:23 And on Team Litestar, we have Janek Nouvertné and Cody Fincher.

00:28 And finally, Sebastian Ramirez from FastAPI is here as well.

00:32 Let's jump in.

00:33 This is Talk Python To Me, episode 533, recorded December 17th, 2025.

00:55 Welcome to Talk Python To Me.

00:57 the number one Python podcast for developers and data scientists.

01:01 This is your host, Michael Kennedy.

01:02 I'm a PSF fellow who's been coding for over 25 years.

01:07 Let's connect on social media.

01:08 You'll find me and Talk Python on Mastodon, Bluesky, and X.

01:11 The social links are all in your show notes.

01:14 You can find over 10 years of past episodes at talkpython.fm.

01:18 And if you want to be part of the show, you can join our recording live streams.

01:21 That's right.

01:22 We live stream the raw uncut version of each episode on YouTube.

01:26 Just visit talkpython.fm/youtube to see the schedule of upcoming events.

01:30 Be sure to subscribe there and press the bell so you'll get notified anytime we're recording.

01:35 Hey, before we jump into the interview, I just want to send a little message to all the companies

01:39 out there with products and services trying to reach developers.

01:44 That is the listeners of this show.

01:46 As we're rolling into 2026, I have a bunch of spots open.

01:50 So please reach out to me if you're looking to sponsor a podcast or just generally sponsor

01:56 things in the community and you haven't necessarily considered podcasts, you really should.

02:00 Reach out to me and I'll help you connect with the Talk Python audience.

02:05 Thanks everyone for listening all of 2025. And here we go into 2026. Cheers.

02:11 Hello, hello, Carlton, Sebastian, David, Cody, Yannick, Phil, Jeff, welcome back to Talk Python,

02:19 all of you. Thanks for having us. Thank you for having us. Happy to be here again. We're here for

02:22 what may be my favorite topic for sure. Something I spend most of my time on is web API stuff,

02:30 which is awesome. So excited to have you here to give your inside look at how people should

02:36 be running your framework, at least the one that you significantly contribute to, depending on

02:42 which framework we're talking about, right? It's going to be a lot of fun, and I'm really excited

02:47 to talk about it. However, there's an interesting fact that I've been throwing out a lot lately is

02:51 that fully half of the people doing professional Python development have only been doing it for two

02:56 years or less. And some of you been on the show, it was maybe two years longer than that. Let's just

03:01 do a quick round of introductions for people who don't necessarily know you. We'll go around the

03:06 squares here in the screen sharing. So Carlton, you're up first. Oh, I get to go first. Brilliant.

03:11 Well, I'm Carlton. I work on the Django REST framework mostly. I'm a former Django fellow.

03:16 I maintain a number of packages in the ecosystem. And the last few years I've been back to building

03:20 stuff with Django rather than working on it. So I run a build startup that's, well, we're still

03:25 going. So I'm quite excited about that. Awesome. How is it to be building with Django than building

03:30 Django? Oh, I'm literally having the time of my life. Like I spent five years as a Django fellow

03:36 working on Django and I just built up this backlog of things I wanted to do and I had no time and no

03:42 capacity and no, no sort of nothing to work on with them. And it's just, it's just a delight.

03:46 And every day I sit down on my computer thinking, oh, what's today?

03:50 I look at the background.

03:51 Oh, yes.

03:52 And every day, a delight.

03:54 So I'm still just loving it.

03:56 That's awesome.

03:57 So more often you're appreciating your former self than cursing your former self

04:01 for the way you built.

04:04 Yeah, that's an interesting one.

04:05 I think we should move on before.

04:07 All right.

04:08 All right.

04:09 Speaking of building with and for Sebastian, FastAPI.

04:14 Hello.

04:14 Hello.

04:15 So, okay, intro for the ones that don't know me.

04:18 I'm Sebastian Ramirez.

04:19 I created FastAPI.

04:21 Yeah, that's pretty much it.

04:23 And now I have been building a company since the last two years, FastAPI Cloud, to deploy

04:27 FastAPI.

04:28 So, I get to drink from funny cups, as you can see.

04:33 The world's best boss.

04:35 Amazing.

04:36 So, I think you deserve to give a bit of a shout out to FastAPI Cloud.

04:39 That's a big deal.

04:40 Thank you.

04:40 Thank you very much.

04:41 Yeah, it's super fun.

04:42 And the idea is to make it super simple to deploy FastAPI applications.

04:47 The idea with FastAPI was to make it very simple to build applications, build APIs,

04:52 and get the idea from idea to product in record time.

04:57 That was the idea with FastAPI.

04:59 But then deploying that, in many cases, is just too cumbersome.

05:02 It's too complicated.

05:03 There are just so many things to that.

05:05 So I wanted to bring something for people to be able to say, like,

05:09 hey, just one command FastAPI deploy, and we take care of the rest.

05:12 And then we and the team, I have an amazing thing that I've been able to work with.

05:17 We suffer all the cloud pains so that people don't have to deal with that.

05:21 And yeah, it's painful to build, but it's so cool to use it.

05:25 You know, like that's the part when I say like, yes, this was worth it.

05:29 When I get to use the thing myself, that is super cool.

05:32 MARK MANDEL: Yeah, I'm assuming you build FastAPI Cloud with FastAPI somewhat.

05:35 FRANCISCO MOLIN: Yes, yes, yes, exactly.

05:37 FastAPI Cloud runs on FastAPI Cloud.

05:40 And I get just like now random things in there and like, yes.

05:44 Congrats to that again.

05:45 That's super cool.

05:46 David Lord, welcome.

05:47 Welcome back.

05:48 Yeah.

05:48 Hello.

05:49 I'm David Lord.

05:49 I'm the lead maintainer of Pallets, which is Flask, Jinja, Click, Werkzeug,

05:55 ItsDangerous, MarkupSafe.

05:56 And now Pallets Eco, which is a bunch of the famous extensions for Flask that are getting

06:02 community maintenance now.

06:04 I've been doing that since, I think I've been the lead maintainer since like 2019, but a

06:08 maintainer since like 2017.

06:09 So it's been a while.

06:10 That's been a while.

06:11 We're coming up on seven, eight years.

06:14 That's crazy.

06:15 Time flies.

06:15 It's always funny because I always feel like I've been doing it for way, way longer.

06:18 And then I look at the actual date that I got added as a maintainer.

06:21 I'm like, well, it couldn't have been that late.

06:22 I was doing stuff before that, right?

06:24 Well, I'm sure you were deep in flask before you got added as a maintainer of it, right?

06:28 Yeah.

06:28 Phil Jones, since you are also on the same org, next.

06:32 Hey, welcome back.

06:32 Hello.

06:33 Yeah, I'm Phil Jones.

06:34 I am the author of Quart, which is also part of Pallets.

06:37 I also work on Werkzeug and Flask and help out there.

06:42 And I've done a server called Hypercorn as well.

06:44 So a bit of interest in that part of the ecosystem.

06:47 What is Quart for people who don't know?

06:50 Quart is basically Flask with async await.

06:53 And that was the idea behind it really to make it possible to do async await.

06:57 So yeah, that's pretty much it.

06:58 If we, when we manage to merge them, we will.

07:00 And the goal now with Quart as part of Pallets is to eventually have it be one code base with Flask.

07:07 But given that we both have small children now, we're moving a lot slower.

07:13 Having kids is great.

07:14 I have three kids.

07:15 Productivity is not a thing that they are known to imbue on the parents, right?

07:20 Especially in the early days.

07:21 I want to say, Phil, thank you.

07:23 I've been running Quart for a couple of my websites lately, and it's been amazing.

07:26 Nice.

07:27 Yeah, I also use it at work.

07:29 We've got all our stuff in Quart, which is, yeah, it's really good fun.

07:31 A bit like Carlton.

07:32 So when people, if they get, if they listen to the show or they go to the website of the show and they're not on YouTube, then that somehow involves Quart.

07:39 Janek, welcome.

07:40 Hey.

07:41 Yeah, I'm Janek Nouvertné.

07:42 I work on Litestar.

07:44 I just looked up how long it's been because I was curious myself.

07:48 I also had the same feeling that it's been a lot longer than, it's actually only been three years.

07:53 Yeah.

07:53 And I also, I noticed something with all you guys here in the room.

07:57 I use almost all of the projects you maintain at work,

08:01 which is quite nice.

08:04 We have a very big Django deployment.

08:06 We have some Flask deployments.

08:08 We have a few FastAPI deployments.

08:10 I think we have one Quart deployment and we also have two Litestar deployments,

08:15 which obviously is a lot of fun to work with.

08:17 And I find it really, really nice actually to work with all these different things.

08:23 It's super interesting also because like everything has its own niche that it's really good at.

08:29 And even, you know, you think if you maintain a framework yourself,

08:33 you tend to always recommend it for everything.

08:36 But I noticed it's not actually true.

08:38 There's actually quite a few cases where I don't recommend Litestar.

08:42 I recommend, you know, just, you know, use Django for this or, you know,

08:47 use Flask for that or use FastAPI for this because, well, they are quite different after all.

08:52 And I find that really, really interesting and nice. And I think it's a good sign of a healthy ecosystem if it's not just, you know,

09:00 the same thing, but different, but it actually brings something very unique and different to

09:04 the table. I think that's a great attitude. And it's very interesting. You know, I feel like

09:08 there's a lot of people who feel like they've kind of got to pick their tech team for everything.

09:13 I'm going to build a static site. Like, well, I've got to have a Python-based static site builder.

09:17 Like, well, it's a static site. Who cares what technology makes it turn? You're writing Markdown,

09:21 and out comes HTML.

09:22 Who cares what's in the middle, for example, right?

09:24 And, you know, I feel like that's kind of a life lessons learned.

09:28 Absolutely, yeah.

09:29 Yeah, that's awesome.

09:30 Cody, hello, hello.

09:31 Yeah, hey guys, I'm Cody Fincher.

09:32 I'm also one of the maintainers of Litestar.

09:34 I've been there just a little bit longer than Yannick.

09:37 And so it's been about four years now.

09:40 And Yannick actually teed this up perfectly because I was going to say something very similar.

09:43 I currently work for Google.

09:44 I've been there for about three and a half years now.

09:46 And we literally have every one of the frameworks you guys just mentioned,

09:50 and they're all in production.

09:51 And so one of the things that you'll see on the Litestar org and part of the projects

09:56 that we maintain are that we have these optional batteries

09:59 and most of the batteries that we have all work with the frameworks for you guys.

10:03 And so it's nice to be able to use that stuff, you know, regardless of what tooling you've got

10:08 or what project it is.

10:10 And so, yeah, having that interoperability and the ability to kind of go between the frameworks

10:14 that work the best for the right situation is crucial.

10:16 And so I'm glad you mentioned that, Yannick.

10:18 But yeah, nice to see you guys on the show. Cody, tell people what Litestar is. I know I had both you guys and Jacob on a while

10:25 ago, but it's been a couple of years, I think. Litestar at its core is really a web framework

10:30 that kind of sits somewhere in between, I'd say, Flask and FastAPI and Django. So whereas, you know,

10:36 Flask doesn't really, you know, bundle a lot of batteries. There's a huge amount of, you know,

10:41 third-party libraries and ecosystem that's built around it that people can add into it, but there's

10:44 not really like, for instance, a database adapter or a database plugin or plugins for

10:50 Vite or something like that, right, for front end development. And so what we have been doing

10:54 is building a API framework that is very similar in concept to FastAPI that is also extensible.

10:59 So if you want to use the batteries, they're there for you. But if you don't want to use

11:03 them, you don't have to, right? And so a lot of the tooling that we built for Litestar

11:07 was birthed out of a startup that I was in prior to joining Google. And so having all

11:12 this boilerplate, really, it needed somewhere to go.

11:15 And so a lot of this stuff ended up being plugins, which is what we bundled into Litestar

11:19 so that you can kind of add in this extra functionality.

11:22 And so I know I'm getting long-winded.

11:24 It's somewhere between Django and Flask, if you were to think about it in terms of a spectrum,

11:29 in terms of what it gives you in terms of a web framework.

11:32 But in short, it does everything that all the other guys do.

11:35 Very neat. It's definitely a framework I admire.

11:37 Jeff Triplett, so glad you could make it.

11:39 Yeah, thanks for having me.

11:40 Yeah, I'm Jeff Triplett. I'm out of Lawrence, Kansas.

11:42 I'm a consultant at a company called Revolution Systems.

11:45 I was on, some people know me from being on the Python Software Foundation board.

11:48 I've been off that for a few years.

11:50 As of last week, I'm the president of the Django Software Foundation.

11:53 So I've been on that board for a year.

11:54 I'm kind of a Django power user, I guess.

11:56 I've used it for about 20 years.

11:58 And I've kind of not really worked on, I don't even think I have a patch anymore in Django.

12:02 But I've done a lot with the community.

12:04 I've done a lot with contributing through conferences and using utilities.

12:09 I try to promote Carlton's applications like Neapolitan.

12:12 And if I like tools, Python tools in general, I try to advocate for it.

12:16 I've also used all of these applications.

12:18 Litestar, I haven't, but I have a friend who talks about it a lot.

12:21 And so I feel like I know a lot from it.

12:23 As a consultant, we tend to go with the best tool for the job.

12:25 So I've done a little bit of FastAPI.

12:27 I worked with Flask a lot over the years, even though we're primarily a Django shop.

12:31 It just depends on what the client needs.

12:32 And you see a lot of different sizes of web app deployments.

12:36 So I think that's going to be an interesting angle for sure.

12:38 Yeah, absolutely.

12:39 Small ones to hundreds of servers.

12:42 We don't see it as much anymore the last four or five years, especially with like CDNs and caching.

12:46 We just don't see load like we did, you know, 10 years ago or so.

12:50 And then I also do a lot of like small, I kind of call them some of them little dumb projects, but some are just fun.

12:55 Like I've got a FastAPI web ring that I wrote a year ago for April Fool's Day.

13:00 And for some reason that kind of took off and people liked it, even though it was kind of a joke.

13:03 So I started like peppering it on a bunch of sites and I maintain like Django packages.

13:08 I do a newsletter, Django News newsletter, just kind of lots of fun stuff.

13:11 Definitely looking forward to hearing all of your opinions.

13:14 So I've got a bunch of different your app in production topics

13:17 I thought we could just work around or talk over.

13:20 So I thought maybe the first one is what would you recommend,

13:24 or if you don't really have a strong recommendation, what would you choose for yourself to put your app in your framework in production?

13:32 I'm thinking app servers, reverse proxies like Nginx or Caddy.

13:36 Do you go for threaded?

13:37 Try to scale out with threads,

13:39 or do you try to scale out with processes? Docker, no Docker, Kubernetes. What are we doing here,

13:44 folks? Carlton. I think we'll just keep going around the circle here. So you may get the first

13:49 round of everyone. No, I'll try to mix it up, but let's do it this time.

13:52 I do the oldest school thing in the book. I run Nginx as my front end. I'll stick a

14:00 WSGI server behind it with a pre-fork, a few workers, depending on CPU size, depending on

14:05 the kind of requests I'm handling. These days, in order to handle long-lived requests,

14:10 like server-sent events, that kind of thing, or WebSocket-type things, I'll run an ASGI server as a kind

14:14 of sidecar. I've been thinking about this a lot, actually. But yeah, this is interesting.

14:18 If you're running a small site and you want long-lived requests, just run ASGI. Just use

14:22 ASGI. Because any of the servers, Hypercorn, Uvicorn, Daphne, Granian is the new hot kid on

14:29 the block, right? All of those will handle your traffic, no problem. But for me, the scaling

14:34 patterns in WSGI are so well known, and I can do the maths on the back of a pencil. I know

14:39 exactly how to scale it up, having done it for so long. For me, for my core application, I would still

14:45 rather use the WSGI server, and then limit the async stuff to just the use cases where it's

14:51 particularly suited. So I'll do that. Process manager: I deploy using systemd. If I

14:58 want a container, I'll use Podman via systemd. It's as old school as it gets. I'll very often run

15:03 a Redis instance on localhost for caching, and that will be it.

15:08 And that will get me an awful long way.

15:09 If I have to scale, I just get a bigger box.

15:12 And a bigger box.

15:13 Yeah, yeah, yeah.

15:13 If I really, really, really need multiple boxes,

15:16 Well, then we'll talk.
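Carlton's back-of-the-pencil prefork sizing can be sketched as a Gunicorn config file. This is an illustrative sketch, not his actual setup; the (2 * cores) + 1 worker count is the common Gunicorn starting heuristic, and the bind address is just a placeholder for whatever Nginx proxies to.

```python
# gunicorn.conf.py -- a minimal sketch of the "back of the pencil" prefork math.
# The (2 * cores) + 1 heuristic is Gunicorn's own suggested starting point;
# tune down for CPU-heavy requests, up for I/O-heavy ones.
import multiprocessing

workers = multiprocessing.cpu_count() * 2 + 1  # classic prefork sizing rule
bind = "127.0.0.1:8000"   # Nginx sits in front and proxies here
timeout = 30              # recycle workers stuck on a request
```

With a config like this, `gunicorn -c gunicorn.conf.py myproject.wsgi` is the whole deployment story, which is exactly why the scaling math stays easy to do in your head.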

15:16 I feel like you and I are in a similar vibe.

15:18 But one thing I want to sort of throw out there to you,

15:20 but also sort of the others is, what are we talking with databases?

15:24 Like, who is bold enough to go SQLite?

15:26 Anyone's going SQLite out there?

15:28 Yeah, it depends, right?

15:30 It just depends on what you're doing, right?

15:31 And how many concurrent users you're going to have.

15:33 It really is amazing there.

15:34 The Pallets website is running on Flask, which I wasn't doing for a while.

15:38 I was doing a static site generator.

15:39 Then I got inspired by Andrew Godwin's static dynamic sites.

15:43 And so it loads up all these markdown files, static markdown files into a SQLite database at runtime

15:50 and then serves off of that because you can query really fast.

15:53 Oh, that's awesome. I love it.

15:54 So I am using SQLite for the Pallets website.
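A rough sketch of the static-dynamic pattern David describes: load static markdown files into an in-memory SQLite database at startup, then serve queries from it. The schema, paths, and function names here are invented for illustration, not the Pallets site's actual code.

```python
# Sketch of the "static-dynamic" site idea: markdown on disk, SQLite at runtime.
# Everything below (table name, glob pattern, helper name) is illustrative.
import sqlite3
from pathlib import Path

def load_pages(content_dir: str) -> sqlite3.Connection:
    """Read every *.md file once at startup into an in-memory database."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE pages (slug TEXT PRIMARY KEY, body TEXT)")
    for md in Path(content_dir).glob("*.md"):
        db.execute("INSERT INTO pages VALUES (?, ?)", (md.stem, md.read_text()))
    db.commit()
    return db

# At request time the app just queries the in-memory table, e.g.:
# db.execute("SELECT body FROM pages WHERE slug = ?", (slug,)).fetchone()
```

The appeal is that deployment stays static (files in Git, no database server) while the app still gets real query speed at runtime.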

15:56 Yeah, I also do have a few small apps that use SQLite.

16:00 And one recently that's Cody's fault because he put me on that track

16:05 where it's running a SQLite database in the browser because nowadays it's quite easy to do that.

16:12 And then you can do all sorts of stuff with it, like hook into it with DuckDB and perform some analysis.

16:19 So you don't actually need to run any sort of server at all.

16:23 You can just throw some files into Nginx and serve your data.

16:26 And as long as that's static, you have a super, super simple deployment.

16:30 So yeah, definitely SQLite.

16:32 If you can, I like it.

16:34 I agree.

16:35 It's interesting.

16:36 The database probably won't go down with that, probably.

16:38 Let's do this by framework.

16:40 So we'll do vertical slices in the visual here.

16:42 So Jeff.

16:43 Yeah, Django, Postgres, pretty old school stack.

16:46 I think putting a CDN in front of anything is just a win.

16:49 So whether you like Fastly or Cloudflare, you get a lot of mileage out of it.

16:52 You learn a lot about caching because it's kind of hard to cache Django by default.

16:56 So you get to play with curl and kind of figure out why Vary headers are there.

16:59 And it's a good learning experience to get through that.
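A hedged sketch of the caching headers Jeff is alluding to. Django tends to add `Vary: Cookie` once sessions come into play, which stops a CDN from sharing one user's cached page with another, so a CDN only really pays off once responses are explicitly marked cacheable. The middleware and values below are illustrative, not any specific framework's API.

```python
# Illustrative WSGI middleware: stamp anonymous responses as CDN-cacheable.
# The max_age value and function names are made up for the example.
def cacheable(app, max_age=300):
    """Wrap a WSGI app so responses carry a public Cache-Control header."""
    def middleware(environ, start_response):
        def _start(status, headers, exc_info=None):
            # A CDN like Fastly or Cloudflare can only share this response
            # across users because it is explicitly marked public.
            headers.append(("Cache-Control", f"public, max-age={max_age}"))
            return start_response(status, headers, exc_info)
        return app(environ, _start)
    return middleware
```

Checking the result with `curl -I` and watching for `Cache-Control` and `Vary` is exactly the learning exercise Jeff describes.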

17:02 I also like Coolify, which is kind of new, at least new to me and new to Michael.

17:06 We talk about this in our spare time a lot.

17:08 It's just kind of a boring service that'll launch a bunch of containers for you.

17:12 There's a bunch of one-click installs, so Postgres is a one-click.

17:15 It also does backups for you, which is really nice to have for free.

17:18 I run a couple dozen sites through it and really like it.

17:21 You can either do the hosted version, I don't get any money from it, or you can run the open-source version.

17:26 I do both.

17:27 I've got like a home lab that I just run stuff using the open-source version.

17:30 And for five bucks a month, it's worth it to run a couple servers.

17:33 And like Carlton said, you can just scale up.

17:35 Yeah, it's got a bunch of one-click deploy for self-hosted SaaS things as well.

17:40 Like I want an analytics stack of containers that run in its own isolated bit.

17:44 Just click here and go.

17:46 Yeah, one-click, it's installed and you're up.

17:48 And then once you get one Django, Flask, FastAPI site working with it,

17:53 and it uses like a Docker container.

17:54 Once you get that set up, it's really easy to just kind of duplicate that site,

17:58 plug it in to GitHub or whatever your Git provider is.

18:01 And it's a nice experience for what normally is just rsyncing files

18:05 and life's too short for that.

18:06 Sebastian, I want to have you go last on this one because I think you've got something pretty interesting

18:12 with FastAPI Cloud to dive into.

18:14 But let's do Litestar next. Cody.

18:16 I have actually bought all the way in on Granian.

18:19 So for the ASGI server, I've actually been running Granian now

18:22 for I'd say a year in production.

18:24 It's worked pretty well.

18:26 There's a couple of new things that I'm actually kind of experimenting

18:28 with. I don't know how well they're going to work out. So I'm going to go ahead and throw this out

18:31 there. But Granian is one of the few ASGI servers that supports HTTP/2. And it actually can do HTTP/2

18:38 clear text. And so this is part of the next thing I'm going to say. Because I work for Google, I'm

18:42 actively using lots of Kubernetes and Cloud Run mainly. And so most of the things that I deploy

18:47 are containerized on Cloud Run. And I typically would suggest if you're not using something like

18:53 SystemD and deploying it directly on bare metal, then you are going to want to let the container

18:57 or whatever you're using to manage your processes, manage that and spin that up.

19:01 And so I typically try to allocate, you know, like one CPU for the container

19:05 and let the actual framework scale it up and down as needed.

19:09 Cloud Run itself has a, like an ingress, like a load balancer that sits in front

19:13 that it automatically configures.

19:14 And you're required to basically serve up cleartext traffic when you run Cloud Run.

19:19 And because now Granian supports HTTP/2 and Cloud Run supports HTTP/2 cleartext,

19:25 you can now serve HTTP/2 traffic from Granian.

19:28 The good thing about that is that you get an unlimited upload size.

19:31 And so there are max thresholds to what you can upload into the various cloud environments.

19:35 HTTP/2 usually circumvents that or gets around it because of the way the protocol works.

19:39 And so you get additional features and functionality because of that.

19:42 So anyway, that's what I typically do.

19:44 And most of my databases are usually Postgres, AlloyDB if it needs to be something that's on the analytical side.

19:50 Yeah, I'm on Team Granian as well.

19:52 I think that's a super neat framework.

19:53 I had Giovanni on who's behind it a while ago.

19:57 It seems like it's not as popular, but it's based on Hyper from the Rust world,

20:02 which has like 130,000 projects based on it or something.

20:05 So, you know, at its core, it's still pretty battle-tested.

20:11 This portion of Talk Python To Me is brought to you by our course Just Enough Python for Data

20:16 Scientists. If you live in notebooks but need your work to hold up in the real world, check out Just

20:21 Enough Python for Data Scientists. It's a focused, code-first course that tightens the Python you

20:26 actually use and adds the habits that make results repeatable. We refactor messy cells into functions

20:33 and packages, use Git on easy mode, lock environments with uv, and even ship with Docker.

20:39 Keep your notebook speed, add engineering reliability.

20:42 Find it at Talk Python Training.

20:44 Just click courses in the navbar at talkpython.fm.

20:47 Janek, how about you?

20:49 You've got a variety, it sounds like.

20:50 Yeah, definitely.

20:53 There's a pretty clear split between what I do at work and what I do outside of that.

20:59 So at work, it's Kubernetes deployments.

21:01 And we managed that pretty much the same way that Cody described.

21:05 So it's one or two processes per pod max.

21:10 So you can have Kubernetes scaled or even manually easily scale that up.

21:14 You can just go into Kubernetes and say, OK, do me one to five more pods or whatever.

21:20 And don't have to worry.

21:21 Don't have to start calculating whatever.

21:23 Most of the stuff we run nowadays runs with Uvicorn. Our Django deployment,

21:28 up until I think three months ago or so, was running under Gunicorn,

21:34 but we switched that actually.

21:35 And it's been a really great experience.

21:38 I think we tried that a year ago and it didn't work out quite so well.

21:42 There was some things that didn't work as expected or didn't perform great

21:47 or Django was throwing some errors or Uvicorn was throwing some errors.

21:52 And then apparently all of that got fixed because now it runs without any issue for the production.

21:58 Yeah, for people who don't know, the vibe used to be run Gunicorn, but with Uvicorn workers, if you're doing async stuff.

22:06 And then Uvicorn kind of stepped up its game and said, you can actually treat us as our own app server.

22:12 We'll manage lifecycle and stuff.

22:14 And so that's the path you took, right?

22:16 Yeah, exactly.

22:16 Before that.

22:17 Well, no, actually, before that, we didn't because our Django is fully synchronous.

22:22 It doesn't do any async.

22:24 So it was just bare metal Gunicorn.

22:26 And it's still synchronous with just running it under Uvicorn.

22:30 But interestingly, still quite a bit faster in a few cases.

22:34 We tried that out and we load tested it in a couple of scenarios

22:38 and we found that it makes a lot of sense.

22:41 But outside of that, I do have a lot of, well, very simplistic deployments that are also just systemd

22:48 and a couple of Docker Compose files and containers that are managed through some old cobbled-together Ansible things.

22:59 But I think the oldest one that I have still running is from 2017.

23:03 And it's been running without a change for like four or five years.

23:07 That is awesome.

23:08 I don't see a reason to do anything about it because the app works.

23:11 It's being used productively.

23:14 So why change anything about that?

23:16 No need to introduce.

23:17 Just don't touch it.

23:18 Yeah, I was actually looking into Coolify that you two guys mentioned.

23:22 I was thinking about, you know, maybe upgrading it to that, but I played around with it and I thought, well, why?

23:28 You know, if I have to look into that deployment maybe once a year.

23:31 So that's really nothing to gain for me to make it more complicated.

23:36 David, Team Flask.

23:38 I mentioned this before the show started, but I think I'm pretty sure I've said this the last time I was on Talk Python,

23:45 but the projects I do for work typically have less than 100 users.

23:51 And so my deployment is usually really simple.

23:54 And usually they've chosen like Azure or AWS already.

23:58 So we just have a Docker container and we put it on the relevant Docker container host

24:03 in that service and it just works for them.

24:05 We have a Postgres database and we have like Redis.

24:08 But I never really had to deal with like scaling or that sort of stuff.

24:13 But the funny thing is like, at least for my work, I'm always, we're often replacing older systems.

24:19 And so even a single Docker container running a Flask application is way more performant and responsive than anything they're used to from like some 20 year old or 30 year old Java system.

24:32 Right. And it can just respond on a small container with like a little bit of CPU and a little bit of memory.

24:38 They're always shocked at like, how much do we need to pay for?

24:41 Oh, just like, it'll run on a potato.

24:44 You know, there's only 100 users and they're like, that's a lot of users.

24:48 So my recommendation is always start small and then scale up from there.

24:52 Don't try to overthink it ahead of time.

24:55 Yeah, for my personal stuff, I'm using like Docker containers now and fly.io.

24:59 I haven't gotten in.

25:00 So I do want to look into Granian and Coolify, but I haven't gotten there yet.

25:04 And for the Docker container, I can definitely recommend pythonspeed.com.

25:09 I don't remember off the top of my head who writes that, but it's somebody in the Python

25:13 ecosystem.

25:14 And they have a whole series of articles on how to optimize your Docker container.

25:18 And that sounds really complicated, but you end up with a Docker file that's like 20 lines

25:22 long or something.

25:23 So it's not like there's crazy things.

25:26 It's just you have to know how to structure it.

25:28 And then I just copy and paste that to the next project.

25:30 Nice.

25:30 Yeah.

25:31 I resisted doing Docker for a long time because I'm like, I don't want that extra complexity.

25:34 But then I realized the stuff you put in the Docker file is really what you just type in

25:38 the terminal once and then you forget.

25:41 I mean, always using Postgres, Redis probably if I need some background

25:44 tasks, just a plain SMTP server for email. For all three of those things, I wrote new

25:51 extensions in the Flask ecosystem that I'm trying to get more people to know about now. So Flask

25:56 SQLAlchemy Lite, L-I-T-E, instead of Flask SQLAlchemy, takes a much more lightweight approach to

26:02 integrating SQLAlchemy with Flask. And then Flask Redis, I revived from like 10 years of

26:08 non-maintenance. And then I wrote this whole system, this whole pluggable email system called

26:12 Email-Simplified, kind of inspired by Django's pluggable system. And so there's

26:18 like Flask email simplified to integrate that with Flask. But unlike Django, you can use email

26:23 simplified in any library you're writing, in any Python application you're writing. It doesn't have

26:27 to be a Flask web framework. It's pluggable as the library itself. And then you can also integrate

26:32 it with Flask or something else. So Flask email simplified. I get like three downloads a month

26:38 right now. So it needs some popularity. Awesome. I've been doing the non-simplified email lately.

26:43 So I'm happy to hear that there might be a better way. Yeah. I think people do underappreciate just

26:48 how much performance you get out of Python web apps. You know, they're like, oh, we're going to

26:53 need to rewrite this in something else because of the GIL or whatever. Like, I decided just to make

26:59 a point to pull up the tail of my log running Quart, by the way. And each one of these requests is

27:04 doing like multiple DB calls, and it's like 23 milliseconds, 6 milliseconds, 3 milliseconds,

27:10 you know, 9 milliseconds. It's like, that's good enough. That's a lot of requests per second

27:16 per worker until, you gotta have a lot of traffic. Speaking of Quart, Phil, what's your take

27:21 on this one? I think it's very similar. I also build Docker containers, with a Postgres database

27:27 on the back end, and I run Hypercorn as the ASGI server and put them behind an AWS load balancer

27:34 and just run them in ECS.

27:36 And I think it's pretty simple, but I guess it depends on your biases.

27:39 But yeah, that's all we do really.

27:41 And it goes a long way.

27:42 There are multiple ECS tasks, mostly because if one falls over rather than scaling,

27:47 it's usually the database that you need to scale, I find.

27:50 But yeah, that's how we run it.

27:52 The nice thing for me about Hypercorn is that I can play with HTTP/3.

27:56 So that's what we're doing at times.

27:57 Oh, HTTP 3, okay.

28:03 I've just been getting my HTTP/2 game down, so I'm already behind the game.

28:03 What's the deal with HTTP/3?

28:05 It's obviously a totally new way of doing it over UDP now rather than TCP.

28:10 Although at the application level, you can't tell any difference really.

28:13 But I mean, I just find it interesting.

28:15 I'm not really sure it will help too much.

28:17 And it's probably best if you've got users who have not that

28:21 great a network connection.

28:22 But for most other cases, I don't think it matters too much.

28:25 Just keep blasting packets until some of them get through.

28:29 OK, fine.

28:30 We'll give you a page eventually.

28:31 There's three pages, actually.

28:32 All right, Sebastian, you are running not just FastAPI from your experience, but you're running FastAPI for a ton of people through FastAPI Cloud at, I'm sure, many different levels.

28:43 This probably sounds like a shameless plug, and it kind of is, but it's sort of expected.

28:48 I will deploy FastAPI on FastAPI Cloud.

28:51 Just because, well, the idea is just to make it super simple to do that.

28:58 You know, like, if you are able to run the command FastAPI run.

29:03 So FastAPI run has like the production server that is using Uvicorn underneath.

29:03 And if you can run that, then you can run also FastAPI deploy.

29:06 And then like, you know, like it will most probably just work.

29:10 And, you know, we just wrap everything and like deploy,

29:13 build, install, deploy, handle HTTPS, all the stuff without needing any Docker file or anything like that.

29:19 And I think for many use cases, it's just like simpler being able just to do that.

29:23 There are so many projects that I have been now building,

29:25 like random stuff that is not really important, but now I can.

29:30 And before it was like, yeah, well, I know how to deploy this thing like fully with like

29:34 all the bells and whistles, but it's just so much work that, yeah, maybe later.

29:38 So for that, I would end up just like going with that.

29:41 Now if I didn't...

29:42 Well, what I was going to ask is how much are you willing to tell us how things run inside

29:47 FastAPI Cloud?

29:48 Oh, I can't, it's just so much stuff that is going on.

29:52 And it's also, it's fun that nowadays that they're like,

29:57 we have Docker and we have Docker Swarm and there was Nomad

30:00 and Kubernetes and oh, Kubernetes won.

30:02 And then we have the cloud providers and there's AWS and Google and Azure.

30:08 And you would expect that all these things and all this complexity, now that it's like, okay,

30:14 these are the clear winners,

30:15 it's like a lot of complexity to take on, but once you do it, it all works. But it doesn't.

30:21 And it's just like so much work to get things to work together, to work correctly.

30:27 And the official resources from the different providers and things,

30:32 in many cases, it's like, oh, the solution is hidden in this issue somewhere in GitHub

30:36 because the previous version was obsolete, but now the new version of this package or whatever is like, it's just, it's crazy.

30:43 But like, yeah, so if I didn't have FastAPI Cloud, I will probably use containers.

30:50 I will probably use Docker.

30:51 If it's like something simple, I will deploy with Docker Compose,

30:55 probably try to scale minimum replicas.

30:57 I don't remember if Docker Compose has that.

30:59 I remember that Docker Swarm had that, but then Docker Swarm sort of lost against Kubernetes.

31:05 I will put a Traefik load balancer in front to handle HTTPS and, yeah, well, like regular load balancing.

31:12 And, yeah, just regular Uvicorn.

31:14 Like some of the folks were saying before, at some point we needed to have Gunicorn on top of Uvicorn

31:22 because Uvicorn wouldn't be able to handle workers.

31:25 But now Uvicorn can handle its own workers and everything.

31:27 The main thing was managing zombie processes, reaping the processes and handling that stuff.

31:34 Now it can just do that.

31:35 So you can just run plain Uvicorn.

31:37 So if you're using FastAPI and you say FastAPI run, that already does that.

31:41 So if you're deploying on your own, you can just use the FastAPI run command.

31:44 Then, of course, you have to deal with the scaling and HTTPS and load balancing

31:48 and all the stuff, but the core server,

31:51 you can just run it directly.

31:53 If going beyond that, then there will probably be

31:56 some cluster Kubernetes and trying to scale things,

32:00 figure out the ways to scale things based on the load of the requests,

32:05 like scaling automatically.

32:08 Having normally one container per process to be able to scale that more dynamically

32:13 without depending on the local memory for each one of the servers

32:15 and things like that, I'm probably saying too much.

32:17 But yeah, actually, you know, like if I didn't have FastAPI Cloud,

32:20 I will probably use one of the providers that abstract those things a little bit away,

32:31 you know, like Render, Railway, Fly, like, I don't know.

32:31 Like, I don't really think that a regular developer should be dealing with,

32:36 you know, like the big hyperscalers and like Kubernetes

32:39 and like all that complexity for a common app.

32:42 Most of the cases, I think it's just really too much complexity to deal with.

32:47 It's kind of eye-watering to open up the AWS console or Azure or something.

32:52 Whoa.

32:52 Oh, the other day, you know, like the other day I had to, in one of the AWS accounts, I had to change the account email.

32:59 I think I spent four hours.

33:01 I know.

33:01 Because I had to create the delegate account that has the right permissions role.

33:05 And they're like, oh, no, this is, you know, like, sometimes it's just overwhelming the amount of complexity

33:11 that needs to be dealt with.

33:13 And, yeah, I mean, it's great to really have, like, you know, like the infra people that I have working with me

33:20 at the company, that can deal with all that mess and, like, can make sure that everything is just running perfectly

33:26 and it just works.

33:27 So it's like, you know, like, sort of SRE as a service,

33:30 DevOps as a service for everyone.

33:32 It's like a cloud product that provides DevOps as a service,

33:36 I spent a number of years doing nothing but cloud migrations to these hyperscalers for enterprises.

33:42 And I can tell you that when you mentioned the eye-watering comment about the network and all that stuff,

33:48 it's so incredibly complicated now, right?

33:49 There's literally every kind of concept that you need to know to deploy these enterprises now,

33:55 move them from on-prem to the cloud.

33:56 So it does get incredibly complicated.

33:58 Having something simple like what Sebastian is talking about, I think, is super helpful

34:02 when you're just trying to get started and get something up and running quickly.

34:06 I've got a lot of questions and I realize that we will not be getting through all of them.

34:10 So I want to pick carefully.

34:12 So let's do this one next.

34:15 Performance, what's your best low effort tip?

34:18 Not like something super complicated, but I know there's a bunch of low hanging fruit

34:23 that people maybe missed out on.

34:26 And this time let's start with Litestar.

34:28 Cody, back at you.

34:29 I'm going to stick to what I know, which is databases because I deal with that.

34:32 every single day. There's a couple of things that I see as like gotchas that I constantly see over

34:38 and over. One, SQLAlchemy kind of obfuscates the way it's going to execute things and what kind of

34:45 queries it's going to actually execute. So it's really easy if you're not kind of fluent in how

34:49 it works to create N plus one types of issues. And so when people start talking about sync or async,

34:55 it's really, in my mind, it's less of that because you're going to spend more time waiting on the

34:59 network and database and those kinds of things, than you're going to spend serializing just

35:04 generally, right? And, or, processing things on the web framework. So, one, make sure that you have

35:10 your relationships dialed in correctly so that you don't have N+1 queries. The other thing is

35:15 oversized connection pooling into Postgres and just databases in general, because what people don't

35:21 tend to know is that each of those connections takes up CPU cycles and RAM of the database.

35:26 And so when you slam the database with hundreds of connections, you're just taking away processing power that can be done for other things, right?

35:33 And so you end up ultimately slowing things down.

35:35 So I've seen databases that have had so many connections that all of the CPU is actually just managing connections and can't do any database work.

35:44 And so what about this socket?

35:45 Is it busy?

35:46 What about this socket?

35:46 Is it busy?

35:47 It's just round robin that, right?

35:48 Paying attention to the database is kind of my first kind of rule of thumb.
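The N+1 shape Cody warns about can be made concrete with plain sqlite3 and a query counter; an ORM's lazy-loaded relationships issue the same pattern invisibly, and in SQLAlchemy the usual fix is eager loading such as `selectinload` or `joinedload`. The schema and data below are made up for illustration.

```python
# N+1 made visible with plain sqlite3: the count is what an ORM hides.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'Ann'), (2, 'Bo');
    INSERT INTO book VALUES (1, 1, 'A'), (2, 1, 'B'), (3, 2, 'C');
""")

queries = 0
def run(sql, *params):
    global queries
    queries += 1                     # count every round trip to the database
    return db.execute(sql, params).fetchall()

# N+1: one query for the authors, then one more per author for their books.
queries = 0
for author_id, name in run("SELECT id, name FROM author"):
    run("SELECT title FROM book WHERE author_id = ?", author_id)
n_plus_one = queries                 # 3 queries for just 2 authors

# Eager: one JOIN fetches everything, regardless of how many authors exist.
queries = 0
run("SELECT a.name, b.title FROM author a JOIN book b ON b.author_id = a.id")
eager = queries                      # always 1 query
```

With thousands of rows the lazy version scales linearly in round trips, which is exactly the latency people blame on the framework.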

35:52 100%.

35:52 I like that one a lot.

35:53 I'll throw in identifying work that doesn't need to be done immediately for the user, and putting it in a background task.

36:02 Having a background worker defer things till later.

36:05 So sending email is an example, although there's nuances there about knowing that it's sent and everything.

36:10 But yeah, if your user kicks off some process and you do that process in the web worker, you're holding that worker up, which is more relevant in WSGI than ASGI.
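The record-it-and-defer-it tip reduces to a queue plus a worker. A minimal stdlib sketch, with made-up names, standing in for what a real deployment would hand to Celery, RQ, Dramatiq, or a similar task queue:

```python
# Illustrative only: a bare thread plays the role of the background worker.
import queue
import threading

jobs: queue.Queue = queue.Queue()
sent = []

def worker():
    while True:
        job = jobs.get()
        if job is None:                     # sentinel: shut the worker down
            break
        sent.append(f"emailed {job}")       # the slow part happens off-request
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def signup(address: str) -> str:
    jobs.put(address)        # record what the user wanted, defer the work
    return "202 Accepted"    # the page comes back immediately
```

The request handler's only job is to enqueue; the user sees the status later instead of waiting on the send.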

36:20 And you're making them wait for their page to load, versus record what they wanted to do,

36:26 send it off to the background, let them see the status of it, but let the background worker handle

36:30 it. All right, yeah, like I said, you guys go for it. I'm not sure if that's some sort of,

36:35 it's not really a trick or a tip, it's more like, I think, the most common mistake I see. It

36:41 is ASGI-specific, but when I look at ASGI apps that people have written, who are maybe not as

36:46 familiar with ASGI or async Python at all, if you make something an async function, you should be

36:53 absolutely sure that it's non-blocking. Because if you're running an ASGI app and you're blocking

36:59 anywhere, your whole application server is blocked completely. It doesn't handle any other requests

37:04 at the same time. It's blocked. I don't think I've seen any mistake more times when I've looked through

37:11 some apps that someone has written, or that I've come across somewhere. So this is really, it's super,

37:18 super common, and it has such a big impact on the overall performance, in every metric

37:26 imaginable. So I would say, and that's nowadays what I tell people, unless you're 100%

37:33 sure that you know what you're doing and you know it's non-blocking, don't make it async. Put it

37:39 in a thread pool, execute it in a thread, whatever.

37:42 All of the ASGI frameworks and Django give you a lot of tools at hand to translate your stuff

37:49 from sync to async so you can still run it.

37:52 Do that unless you're very sure that it actually fully supports

37:57 async.
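The failure mode described here, and the standard escape hatch, fit in a few lines: a blocking call inside an async handler stalls the whole event loop, while `asyncio.to_thread` moves it off the loop. The handler names are invented for the example; `time.sleep` stands in for any blocking call like a sync HTTP client or database driver.

```python
# Blocking inside async vs. pushing the blocking work to a thread.
import asyncio
import time

def blocking_io():
    time.sleep(0.2)          # pretend this is a sync HTTP or DB call
    return "done"

async def bad_handler():
    # Calling blocking_io() directly here would freeze the event loop for
    # 0.2s: no other request gets serviced while it sleeps.
    return blocking_io()

async def good_handler():
    # to_thread runs the blocking call in a worker thread; the loop stays free.
    return await asyncio.to_thread(blocking_io)

async def main():
    # Ten "good" handlers overlap in threads; ten "bad" ones would serialize.
    return await asyncio.gather(*[good_handler() for _ in range(10)])

print(asyncio.run(main()))
```

This is the same idea the frameworks automate: Django has `sync_to_async`, AnyIO has `to_thread.run_sync`, and FastAPI does it for plain `def` endpoints.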

37:57 Yeah, that's good advice.

37:58 Sebastian.

37:59 Hey, I'm actually going to second Janek.

38:02 I think, yeah, like it's--

38:04 And it's maybe counterintuitive that one of the performance tips is to try to not optimize performance that much at the beginning.

38:13 You know, like, I think the idea with async is like, oh, you can get so much performance

38:17 and throughput, in terms of concurrency, whatever.

38:20 But the thing is, in most of the cases, you know, like, till apps grow so large, they

38:26 actually don't need that much extra throughput, that much extra performance.

38:30 And in a framework like, you know, like, as Janek was saying, well, in my case, I know

38:35 FastAPI, but like, you know, like also many others.

38:38 If you define the function with async, it's going to be run async.

38:41 If you define it as a regular, non-async def, it's going to be run on a thread worker automatically.

38:46 So it's just going to do the smart thing automatically.

38:50 So it's like fair, you know, like it's going to be good enough.

38:54 And then you can just start with that and just keep blocking code everywhere.

38:58 You know, like just not use async until you actually know

39:01 that you really need to use async.

39:03 And once you do, you have to be, as Janek was saying,

39:05 you know, like 100% sure that you are not running blocking code inside of it.

39:11 And if you need to run blocking code inside of Async code,

39:14 then make sure that you are sending it to a thread worker.

39:17 Sending it to a thread worker sounds like its own whole thing, but yeah, like, you know, like Django has tools,

39:23 AnyIO has tools.

39:23 I also built something on top of AnyIO called Asyncer,

39:27 that is just to simplify these things, to asyncify a blocking function,

39:31 keeping all the type information so that you get autocompletion and inline errors and everything.

39:35 even though it's actually doing all the stuff of sending the thing to the thread worker.

39:40 So the code is super simple.

39:42 You keep very simple code, but then underneath it's just like doing

39:45 all the stuff that should be done.

39:46 But you know, like that's normally when you actually need to hyper-optimize things.

39:51 In most of the cases, you can just start with just not using async at first.

39:55 Also, now that you're going to have Python multi-threaded,

39:58 then suddenly you're going to have just so much more performance out of the blue

40:02 without even having to do much more.

40:05 So, yeah, actually that's, you know, like, sorry, I kept speaking so much, but here's a tip for improving performance.

40:12 Upgrade your Python version.

40:14 I was just chatting today with Savannah.

40:17 She was adding the benchmarks to, you know, the official Python benchmarks that they run for CPython, the Faster CPython program.

40:29 And the change from Python 3.10 to Python 3.14 when running FastAPI is like almost double the performance

40:40 or something like that.

40:40 It was like, it was crazy.

40:42 It was just crazy improvement in performance.

40:44 So you can just upgrade your Python version.

40:46 You're gonna get so much better performance just out of that.

40:50 - Yeah, that's an awesome piece of advice that I think is often overlooked.

40:53 And it's not only CPU speed, it's also that memory usage gets a lot lower.

40:57 Whoever's gonna jump in, go ahead.

40:58 Last year, I was looking at MarkupSafe, which is an HTML escaping library that we use and

41:04 has a C extension for speedups.

41:05 And I almost convinced myself that I can stop maintaining the C extension because just Python

41:11 itself got way faster.

41:13 But then it turned out that I could do something to the C extension to make it faster also.

41:17 So I'm still maintaining it.

41:18 But just the fact that I almost convinced myself like, oh, I can drop a C extension for just

41:23 a Python upgrade instead was pretty impressive.

41:26 They've done a lot, especially with string handling, you know, which you're going to use

41:30 for templating for web apps.

41:32 Phil.

41:32 Yeah, well, I definitely echo looking at your DB queries, because by and large, that's always where

41:38 our performance issues have been.

41:39 It's either badly written query or we're returning most of the database when the user just wants to know

41:44 about one thing or something silly like that.

41:46 I was thinking about low-hanging ones, which I think you asked about.

41:48 So I'd say uvloop, which is still a noticeable improvement.

41:53 And also, because I think it's likely a lot of us are returning JSON, often changing the

41:59 JSON serializer to one of the faster ones can be noticeable as well and obviously quite easy to do.

42:04 So yeah, that's my tip.

42:05 That's really good advice.

42:06 I didn't think about the JSON serializer.

42:08 What one do you recommend?

42:09 I think, is it ujson?

42:11 Or is it orjson?

42:12 I can't remember which one was deprecated.

42:15 But yeah, if you look at the TechEmpower benchmarks, everyone's changing the JSON serializer

42:21 to get that bit extra speed.

42:22 But yeah, you're like, our framework looks bad because our JSON serializer is like a third

42:27 of the performance.

42:28 We changed, well, David added a JSON provider to Flask.

42:31 And yeah, you could see it make a difference in the TechEmpower benchmarks.

42:35 So that was really good.

42:36 Yeah, cool.

42:36 Yeah, it's pluggable now.

42:37 But if you're installing Flask, orjson, I mean, I don't know what other JSON library

42:43 you'd be using at this point, unless you're already using one.

42:45 But orjson is very, very fast.

42:47 Okay, this is something I'm going to be looking at later.

42:49 So over to Django, Jeff, David talked about running stuff in the background and was it Django 5 or Django 6 that got the background task thing?

42:57 Yeah, Django 6 just came out a couple of weeks ago.

43:00 And I'll hand that off to Carlton in a second because I think Carlton's had more to do with the actual plumbing being on the steering council.

43:07 My advice to people is the best way to scale something is just to not do it, avoid the process completely.

43:12 So like I mentioned with CDNs earlier, for content-heavy sites, cache the crap out of stuff.

43:16 It doesn't even have to hit your servers.

43:17 You can get a long way, as we mentioned earlier, too, just by doubling the amount of resources a project has.

43:22 Django is pretty efficient these days, especially with async views.

43:25 Like everybody else has said, too, any blocking code, move off to threads, move off to a background queue.

43:31 Django Q2 is my favorite one to use because you can use a database.

43:35 So for those little side projects where you just want to run one or two processes, you can use it.

43:39 It works great.

43:40 And Carlton, if you want to talk about Django internals.

43:43 Yeah, OK.

43:43 So the new task framework I just mentioned, the main thing, the main sort of bit about it is that it's, again, this pluggable Django API.

43:51 So it gives a standard task API.

43:53 So if you're writing a third-party library and, I don't know, you need to send an email.

43:57 It's the canonical example, right?

43:58 You need to send an email in your third-party library.

44:01 Before, you'd have had to tie yourself to a specific queue implementation, whereas now Django is providing a kind of like an ORM of tasks.

44:07 Right, right.

44:08 You got to do Redis, you got to do Celery, and you got to manage things and all that.

44:11 You don't have to pick that now as the third-party package author.

44:14 You can just say, right, just use Django, wrap this as a Django task and queue it.

44:18 And then the developer, when they come to choose their backend,

44:22 if they want to use Celery or they want to use Django Q2

44:25 or they want to use the Django task backend, which Jake Howard, who wrote this for Django, provided as well,

44:30 you can just plug that in.

44:32 So it's a pluggable interface for tasks, which is, I think, the really nice thing about it.

44:37 In terms of quick wins, everybody's mentioned almost all of mine.

44:40 I'm going to, Cody and Phil, they mentioned the database.

44:43 That's the big one.

44:44 Django, the ORM, because it does lazy related lookups,

44:48 it's very easy to trigger an N-plus-one, where, you know,

44:51 the book has multiple authors and suddenly you're iterating through the books

44:55 and you're iterating through the authors, and each one is a lookup.

44:57 So you need to do things like prefetch_related, select_related.

45:00 You need to just check that you've got those.
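The N-plus-one pattern, and the JOIN that `select_related()` effectively performs, can be shown framework-free with the standard library's sqlite3 module; the schema here is invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT,
                       author_id INTEGER REFERENCES author(id));
    INSERT INTO author VALUES (1, 'Ursula'), (2, 'Terry');
    INSERT INTO book VALUES (1, 'Earthsea', 1), (2, 'Mort', 2);
""")

def n_plus_one():
    # One query for the books, then one MORE query per book for its
    # author -- the pattern lazy related lookups quietly fall into.
    pairs = []
    rows = conn.execute("SELECT id, title, author_id FROM book ORDER BY id")
    for _book_id, title, author_id in rows:
        (name,) = conn.execute(
            "SELECT name FROM author WHERE id = ?", (author_id,)
        ).fetchone()
        pairs.append((title, name))
    return pairs

def joined():
    # What select_related() effectively does: a single JOIN query.
    return list(conn.execute(
        "SELECT b.title, a.name FROM book b"
        " JOIN author a ON a.id = b.author_id ORDER BY b.id"
    ))

# Same answer, but joined() is 1 query instead of 1 + N.
assert n_plus_one() == joined()
```

With two books the difference is invisible; with ten thousand rows it's the gap between one round trip and ten thousand and one.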

45:02 Django Debug Toolbar is a great thing to run in development

45:05 where you can see the queries and it'll tell you where you've got the duplicates.

45:08 And then the slightly bigger one is to just check your indexes.

45:11 The ORM will create the right indexes

45:14 if you're going through primary keys or unique fields.

45:16 But sometimes you're doing a filter on some field, and then there's not the right index there,

45:21 and that can really slow you down.

45:22 So again, you can do the SQL explain on that and find that.
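Here's the index effect in miniature with stdlib sqlite3, where `EXPLAIN QUERY PLAN` plays the role of Postgres's `EXPLAIN`; the table is made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE event (id INTEGER PRIMARY KEY, kind TEXT, payload TEXT)"
)
conn.executemany(
    "INSERT INTO event (kind, payload) VALUES (?, ?)",
    [("click", f"row {i}") for i in range(1000)],
)

QUERY = "SELECT * FROM event WHERE kind = 'click'"

def plan() -> str:
    # EXPLAIN QUERY PLAN reports how SQLite will execute the query;
    # the last column of each row is the human-readable detail.
    rows = conn.execute("EXPLAIN QUERY PLAN " + QUERY).fetchall()
    return " ".join(row[-1] for row in rows)

before = plan()  # full table scan: "SCAN event"
conn.execute("CREATE INDEX idx_event_kind ON event (kind)")
after = plan()   # "SEARCH event USING INDEX idx_event_kind (kind=?)"

print("before:", before)
print("after: ", after)
```

The same before/after check with `EXPLAIN` on your real database is exactly the "SQL explain" step Carlton mentions.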

45:26 And then the thing I was going to say originally was caching,

45:30 is get a Redis instance, stick it next to your Django app,

45:34 and as Jeff said, don't do the work.

45:36 If you're continually rendering the same page and it never changes,

45:40 cache it and pull it from the cache rather than rendering.

45:43 Because DB queries are one of your biggest things.

45:45 The second one's always going to be serialization.

45:47 It's either serialization or template rendering.

45:49 So if you can avoid that by caching, you can save an awful lot of time on your account.
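The "don't do the work" idea is the classic cache-aside pattern. Here's its shape in plain Python, with an in-process dict and TTL standing in for Redis, and a made-up `render_page` standing in for template rendering plus DB queries.

```python
import time

CACHE: dict[str, tuple[float, str]] = {}
TTL_SECONDS = 60.0
RENDER_COUNT = 0  # counts how many times we did the expensive work

def render_page(slug: str) -> str:
    # Stand-in for expensive template rendering + DB queries.
    global RENDER_COUNT
    RENDER_COUNT += 1
    return f"<html>page for {slug}</html>"

def get_page(slug: str) -> str:
    # Cache-aside: serve from cache if fresh; otherwise render,
    # store, and serve. With Redis this is GET / SETEX instead.
    now = time.monotonic()
    hit = CACHE.get(slug)
    if hit is not None and now - hit[0] < TTL_SECONDS:
        return hit[1]
    html = render_page(slug)
    CACHE[slug] = (now, html)
    return html

get_page("about")
get_page("about")  # second call is served from cache; no re-render
```

Django's cache framework (`django.core.cache`) gives you the same pattern with Redis or Memcached behind it, plus per-view and template-fragment caching on top.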

45:53 Yeah.

45:54 I was wondering if somebody would come back with database indexes,

45:56 because that's like a 100x multiplier for free almost.

46:01 It's such a big deal.

46:03 It really can be.

46:03 If you're making a particular query and it's doing a full table scan

46:06 all of a sudden you put the index in, it's instant. It's like, oh, wow. You don't have to be a DBA or

46:12 master information architect sort of thing. I don't know about Postgres. I'm sure it has it.

46:16 Somebody can tell me. But with Mongo, you can turn on in the database, I want you to log all

46:21 slow queries, and slow for me means 20 milliseconds or whatever. Like, you put a number in, and then

46:27 you run your app for a while and you go look at what's slow, sorted by slowest. And then you can see,

46:31 well, maybe that needs an index, right? Like, just let your app tell you what you've got to do.

46:35 Yeah, there is a post.

46:37 I'm just trying to see if I can quickly look it up now.

46:38 There's a Postgres extension, which will automatically run explain

46:42 on the slow queries and log them for you.

46:44 So it'll...

46:45 There you go.

46:46 See if I can find...

46:47 It's pg_stat_statements, I think, is what you're thinking about.

46:49 Right, okay.

46:49 If you're unsure about your database indexes, do this, or at least go back and review your queries.

46:55 Yeah, I agree.

46:56 Very good.

46:57 All right, I can see we're blazing through these questions.

47:00 I had one more.

47:01 If I can mention one.

47:01 No, please go ahead.

47:02 Yeah, go ahead, David.

47:03 If you want to get some more responsive parts of your website, like make your website a little more responsive or interactive with the user: HTMX or Datastar, especially if you're using Quart or another ASGI framework where you can do SSE, server-sent events, or WebSockets. Streaming little bits of changes to the web front end, and then rendering them with the same HTML you're already writing, can make things a lot more responsive.

47:28 We had a talk about that from Chris May at FlaskCon last year, which you can find on YouTube.

47:33 This is not one of the questions, but let me just start out for a quick riff on this, folks.

47:38 Out in the audience, someone was asking, what about HTMX?

47:41 And I think more broadly, I am actually a huge fan of server-side-based, template-based apps.

47:48 I think it just keeps things simpler in a lot of ways, unless you need a lot of interactivity.

47:51 But things like HTMX or a little bit of JavaScript can reduce a lot of the traffic and stuff.

47:57 Where do people land on those kinds of things?

47:59 I absolutely love HTMX, not just because you don't have to write a lot of JavaScript or whatever,

48:06 but mostly because, if I'm just building a simple app that needs a bit more than to just be a static HTML page,

48:14 it needs some interactivity, a little bit of reactivity.

48:18 I feel like having the whole overhead of building an SPA or whatever tools you need for the whole JavaScript, TypeScript, whatever stack, it's just so much work to get a little bit to make a simple thing a little bit nicer, a little bit more reactive.

48:34 And I feel like HTMX just fits right in there.

48:37 It's super great.

48:39 I've built a couple of things with it now, a few of my own projects, a few things that work.

48:45 And it makes things so much easier, where the work probably wouldn't have been done

48:50 otherwise, just because it's too much.

48:52 If you're doing a whole front end thing that you have then to deploy and build and whatever,

48:57 or it would have been less nice.

48:59 So it's an amazing, really amazing thing.

49:02 As the maintainer and author, though, one of the things that is not frustrating, but it's understandable is that HTMX is not for everybody, right?

49:10 It's just that you can't use HTMX, or Datastar, on all occasions, right?

49:15 And so there are people that are always going to want to use React and there's going to be people that want to use all these other frameworks.

49:20 And so having some cohesive way to make them all talk together, I think, is important.

49:24 I don't have that answer yet, but I just know that like I can't always say HTMX is it, right?

49:29 And then you'll have a great time because I'll inevitably meet somebody that says I need to do this.

49:33 And they're right, and a single-page application or something is more appropriate for that.

49:37 And so it's obviously the right tool for the right job when you need it.

49:41 But, you know, I want to make something that is cohesive depending on whatever library you want to use.

49:45 I would throw one thing in there, though.

49:47 I would rather somebody start with HTMX than start with React if they don't need it.

49:51 Because React can be total overkill. It can be great for some applications.

49:54 But oftentimes, as consultants, we see people having an about page and they throw React at it.

49:59 Like, why do you need that?

50:00 Like, especially for small things with partials.

50:02 Do you mean you don't want to start with Angular?

50:03 You know, it's fine if you need it, but I don't think you really need it.

50:07 Like, introduce tools as you need them.

50:10 Django 6.0 just added template partials, and I guess my job here is to hand off to Carlton

50:14 because this is his feature.

50:15 Yeah, I was happy to see that come in there, Carlton.

50:17 Nice job.

50:17 No, it's okay.

50:19 Plug the new feature.

50:20 So, I mean, I stepped down as a Fellow in 2023 into a new business,

50:25 and I read the essay about template fragments on the htmx website, where it's these named, reusable bits in the templates. And I was

50:34 like, I need that. So I built django-template-partials, released as a third-party package, and it's now just

50:38 been merged into core for Django 6.0. And I have to say about HTMX, it's really changed the way I write

50:44 websites. Before I was the Fellow, I used to write mobile applications, and do the front

50:49 end of the mobile application, then the back end in Django using Django REST Framework. And

50:52 that's how I got into, you know, open source, was via Django REST Framework. And since starting the business,

50:58 we're three years in, we've hardly got a JSON endpoint in sight. It's like two, three, four of

51:02 them in the whole application, and it's, oh, it's just a delight again. You know, you asked me at the

51:08 beginning, Michael, am I having fun? Yeah, I really am having fun, and HTMX is the reason. I do grant

51:12 there are, you know, these use cases. Awesome. All right, let's talk about our last topic, and we have

51:18 five-ish minutes to do that. So we've got to stay on target quick. But let's just go around

51:24 real quick here. We talked about upgrading the Python version, getting better performance out of

51:29 it. I mentioned the lower memory side. But I think one of the underappreciated aspects of this, you

51:36 know, the Instagram team did a huge talk on it a while ago, is the memory that you run into

51:43 when you start to scale out your stuff on the server. Because you're like, oh, I want to have four

51:47 workers so I can have more concurrency because of the GIL. So now you've got four copies of

51:51 everything that you cache in memory, and just like the runtime, and now you need eight gigs instead of

51:56 what would have been one, or who knows, right? But with free-threaded Python coming on, which I've

52:02 seen a couple of comments in the chat, like, hey, tell us about this, we could have true

52:08 concurrency and we wouldn't need to scale as much on the process side, I think, giving us both better

52:13 performance and the ability to say, well, you actually have four times less memory, so you could

52:17 run smaller servers or whatever. What's the free-threaded story for all the frameworks? Carlton,

52:22 let's go back to you, or do it in reverse. I'm really excited about it. I don't know how it's

52:26 going to play out, but I'm really excited about it. All it can do is help Django. The async story in

52:31 Django is nice and mature now, but still, most of it's sync. Like, you know, you're still going to

52:36 default to sync, you're still going to write your sync views, you've still got template rendering, you

52:39 know, Django's a template-based kind of framework, really. You're still going to want to

52:43 run things synchronously, concurrently, and proper threads are going to be, yeah, they can't but help.

52:50 I don't know how it's going to roll out. I'll let someone else go because I'm getting locked up.

52:53 Yeah, let me just elaborate on that for people out there before we move on. You could set up your

52:59 worker process to say, I want you to actually run eight threads in this one worker process.

53:05 And when multiple requests come in, they could both be sent off to the same worker to be processed.

53:10 And that allows that worker to do more unless the GIL comes along and says, stop, you only

53:15 get to do one thing in threads in Python.

53:17 And all of a sudden, a lot of that falls down.

53:19 This basically uncorks that and makes that easy all of a sudden.

53:22 Even if you yourself are not writing async, your server can be more async.

53:26 Yeah.

53:26 And this is the thing that we found with ASGI, is that you dispatch it, you know, using

53:31 sync_to_async, or you dispatch it to a thread, a thread pool executor, but Python doesn't run

53:36 that concurrently.

53:37 And so it's like, or in parallel.

53:39 So it's like, ah, it doesn't actually go as fast as you want it to.

53:42 And so you end up wanting multiple processes still.
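That limitation is easy to sketch with the standard library: pure-Python CPU-bound work in a thread pool serializes under the GIL, whereas on a free-threaded (PEP 703) build the same code can use multiple cores. The sketch below checks correctness only; it deliberately avoids timing assertions since those depend on your build and machine.

```python
import sys
from concurrent.futures import ThreadPoolExecutor

def cpu_work(n: int) -> int:
    # Pure-Python CPU-bound work: under the GIL, threads running this
    # take turns; on a free-threaded build they can truly overlap.
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_threaded(jobs: int, n: int) -> list[int]:
    # Same code either way -- only the interpreter build decides
    # whether these threads actually run in parallel.
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        return list(pool.map(cpu_work, [n] * jobs))

# Python 3.13+ free-threaded builds expose this probe; on other
# builds we assume the GIL is on.
gil_enabled = getattr(sys, "_is_gil_enabled", lambda: True)()
print(f"GIL enabled: {gil_enabled}")
print(run_threaded(4, 10_000)[0])
```

Timing `run_threaded` on a standard build versus a free-threaded build is a quick way to see the difference the panel is describing.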

53:45 All right, let's keep it with Django.

53:46 Jeff, what do you think?

53:47 I'm going to defer to the others on this.

53:48 I have the least thoughts.

53:49 All right, right down the stack, Sebastián.

53:51 Right down to the, the not-website web framework.

53:55 I think it's going to be awesome.

53:56 This is going to help so much, so many things.

53:59 The challenge is going to be third-party libraries used by each individual application

54:05 and if they are compatible or not.

54:07 That's where the challenge is going to be.

54:08 But other than that, it's just going to be free extra performance for everyone.

54:13 Just, you know, like just upgrading the version of Python.

54:15 So that's going to be us.

54:16 Cody.

54:16 Yeah, I'm going to echo what Sebastian just said.

54:18 The third party libraries, I think, are going to be the big kind of sticky point here.

54:21 I'm looking forward to seeing what we can do.

54:23 I'm going to kind of hold my thoughts on that and let

54:24 Janek speak a little bit on it, because I know that he's looked at msgspec specifically

54:28 and some of the other things that might, you know, give some better context here.

54:31 But yes, the third party libraries are going to be the kind of the sticky issue.

54:36 but I'm looking forward to seeing what we can make happen.

54:38 I'm super excited, actually, specifically about async stuff,

54:43 because for most of the time, it's like, if you can already saturate your CPU,

54:50 async doesn't help you much.

54:51 Well, now, if you have proper threads, you can actually do that in async as well.

54:56 And I think it's going to speed up a lot of applications

55:00 just by default, because almost all async applications out there

55:06 use threads in some capacity because, well, most things aren't async by nature.

55:12 So they will use a thread pool and it will run more concurrently.

55:16 And so that's going to be better.

55:18 But I'm also a bit scared about a few things, mainly, as a few others have said now,

55:26 third-party libraries, specifically those that are Python C extensions.

55:33 Just recently, I think like three weeks ago, we got msgspec released for Python 3.14

55:40 and proper free threading support.

55:42 And that took a lot of work.

55:44 Fortunately, a few of the Python core devs chimed in and contributed to PRs

55:49 and helped out with that.

55:52 And all around the ecosystem, the last few years,

55:55 there's been a lot of work going on.

55:57 But especially for more niche libraries that are still here and there,

56:02 I think there's still a lot to do and possibly also quite a few bugs lurking here and there

56:09 that haven't been found or are really hard to track down.

56:13 I'm curious, and maybe a bit scared it's too hard of work, but I'm cautious.

56:18 It's going to be a little bit of a bumpy ride as people turn that on

56:22 and then see the reality of what's happening.

56:24 However, I want to take Cody's warning and turn it on its head

56:28 about these third-party libraries, because I think it's also an opportunity for regular Python developers who are not async fanatics to actually capture some of that capability.

56:39 Say some library says, hey, we realize that if we actually implement this lower-level thing, whose implementation you don't see, in true threading,

56:48 then you can use it, but you don't actually do threading.

56:50 You just call even a blocking function.

56:52 You might get a huge performance boost, a little bit like David was talking about with MarkupSafe.

56:57 And you could just, all of a sudden, with doing nothing

57:00 to your code, it goes five times faster on an eight-core

57:04 machine or something, in little places where it used to matter.

57:06 I'm super excited for--

57:09 we're currently focused on the things that are out there

57:12 right now and that might need to be updated.

57:15 But I'm super excited for what else might come of this,

57:19 new things that will be developed, or stuff that we are currently not thinking about,

57:25 or that hadn't been considered for the past 30 years or so,

57:29 because it just wasn't feasible, or wasn't possible, or didn't make sense at all.

57:33 I think it would pay off definitely.

57:36 All right. Team Flask.

57:37 You guys got the final word.

57:39 I think it will probably be more advantageous for WSGI apps

57:42 than it will for ASGI apps.

57:44 And when I've been playing with it, it's mostly on the WSGI Flask side

57:47 where I'm quite excited about it.

57:48 At the same time, like the others, I'm a bit worried, because it's not clear to me,

57:52 for example, that green threading is going to work that well

57:55 with free threading.

57:56 And that may have been fixed, but I don't think it has yet.

57:58 And that might then break a lot of WSGI apps.

58:02 So, mixed, I think.

58:03 But yeah, very excited for Flask in particular.

58:06 Thanks for bringing up green threading.

58:07 I added that to my notes of mention right now.

58:11 So Flask already has emphasized for years and years and years

58:16 that don't store stuff globally, don't have global state,

58:19 bind stuff to the request response cycle if you need to store stuff,

58:23 look stuff up from a cache otherwise.

58:24 And my impression is that that emphasis is pretty successful.

58:27 I don't think there are any well-known extensions using global state or anything like that.

58:31 It's helped that the dev server that we have is threaded by default.

58:36 Like it's not going for performance, obviously, it's just running on your local machine, but

58:39 it's already like running in a threaded environment, running your application in a

58:42 threaded environment, not a process-based one by default.

58:45 I don't know if anybody even knows that you can run the dev server as process-based.

58:49 And we also already had for a decade or more than a decade,

58:53 Gevent to enable the exact same thing that free threading is enabling for Flask,

58:59 which is concurrent work and connections.

59:03 And so plenty of applications are already deployed that way

59:06 using gevent to do what ASGI is kind of enabling.

59:10 I've run all the test suites with pytest-freethreaded, which checks that your tests can run concurrently in the free-threaded builds.

59:18 So go check that out, it's by Anthony Shaw.

59:20 And I'm pretty sure Granian already supports free-threading.

59:23 Not sure though, I haven't looked into Granian enough.

59:25 But like-

59:26 You know, I'm not sure either.

59:27 It does have a runtime threaded mode but I don't know if that's truly free-threaded or not.

59:32 All of those things combined make me pretty optimistic that Flask will be able to take advantage of this

59:38 without much work from us.

59:40 I mean, I know that's a big statement right there and I haven't tested it

59:43 but the fact that we've emphasized all these different parts for so long already

59:47 makes me confident about it.

59:48 I'm also super excited about it.

59:49 And just one final thought I'll throw out there before we call it a show,

59:52 because we could go on for much longer, but we're out of time.

59:55 I think once this comes along, whatever framework out of this choice you're using out there,

01:00:01 there's a bunch of inner working pieces.

01:00:04 One of them may have some kind of issue.

01:00:06 And I think it's worth doing some proper load testing

01:00:08 on your app, you know, point something like locust.io at it

01:00:12 and just say, well, what if we gave it 10,000 concurrent users

01:00:14 for an hour?

01:00:15 Does it stop working?

01:00:16 Does it crash?

01:00:17 Or does it just keep going?

01:00:18 So that seems like a pretty good thing to do before you deploy

01:00:22 your first free threaded version.

01:00:23 Yeah.

01:00:24 All right, everyone.

01:00:25 I would love to talk some more.

01:00:26 This is such a good conversation, but I also want to respect your time and all that.

01:00:31 So thank you for being here.

01:00:32 It's been an honor to get you all together and have this conversation.

01:00:35 Thank you very much for having us.

01:00:37 Thank you.

01:00:37 Yeah.

01:00:37 Thanks for having us all.

01:00:38 Thanks, everybody.

01:00:39 Yeah.

01:00:39 It's nice being here.

01:00:40 Yeah.

01:00:40 Thanks for having us.

01:00:41 Thanks for having us all.

01:00:42 Bye.

01:00:42 Bye-bye.

01:00:44 This has been another episode of Talk Python To Me.

01:00:47 Thank you to our sponsors.

01:00:48 Be sure to check out what they're offering.

01:00:49 It really helps support the show.

01:00:51 If you or your team needs to learn Python, we have over 270 hours of beginner and advanced courses

01:00:57 on topics ranging from complete beginners to async code,

01:01:00 Flask, Django, HTMX, and even LLMs.

01:01:03 Best of all, there's no subscription in sight.

01:01:06 Browse the catalog at talkpython.fm.

01:01:09 And if you're not already subscribed to the show on your favorite podcast player, what are you waiting for?

01:01:14 Just search for Python in your podcast player.

01:01:16 We should be right at the top.

01:01:17 If you enjoyed that geeky rap song, you can download the full track.

01:01:20 The link is actually in your podcast player's show notes.

01:01:23 This is your host, Michael Kennedy.

01:01:24 Thank you so much for listening.

01:01:26 I really appreciate it.

01:01:27 I'll see you next time.

01:01:38 I started to meet.

01:01:40 And we're ready to roll.

01:01:43 Upgrade the code.

01:01:45 No fear of getting whole.

01:01:48 We tapped into that modern vibe over King Storm.

01:01:53 Talk Python To Me, async is the norm.

