
Talk Python in Production

Episode #531, published Thu, Dec 18, 2025, recorded Wed, Nov 26, 2025
Have you ever thought about getting your small product into production, but are worried about the cost of the big cloud providers? Or maybe you think your current cloud service is over-architected and costing you too much? Well, in this episode, we interview Michael Kennedy, author of "Talk Python in Production," a new book that guides you through deploying web apps at scale with right-sized engineering.


Episode Deep Dive

Guest Introduction and Background

Christopher Trudeau serves as the guest host for this special episode, turning the tables to interview Michael Kennedy about his new book. Christopher is a well-known figure in the Python community, active on Bluesky at trudeau.dev, and has authored books and video courses himself. He brings a Django-centric perspective to the conversation, offering thoughtful counterpoints to Michael's Flask and FastAPI experiences. His background includes working with various hosting providers and teaching Python to students, making him an ideal interviewer to draw out practical insights about production deployments.

This episode features an unusual format where the regular host becomes the guest, with Michael Kennedy discussing his journey from a math PhD student to running Talk Python as a full-time business, and the lessons learned deploying Python web applications over the past decade.


What to Know If You're New to Python

Before diving into this episode's production-focused content, here are some foundational concepts that will help you get the most from this analysis:

  • Virtual environments are isolated Python installations that keep project dependencies separate - this episode discusses how they compare to Docker containers for development
  • Web frameworks like Flask, Django, FastAPI, and Pyramid are tools for building web applications - the episode covers how to choose between them
  • Docker containers package applications with their dependencies for consistent deployment - a major topic in this episode
  • Understanding basic Linux command line and SSH will help contextualize the deployment discussions

Key Points and Takeaways

1. Right-Sized Engineering: Rejecting Over-Architecture

The central thesis of Michael's book "Talk Python in Production" is that most developers over-engineer their deployments based on aspirational thinking rather than actual needs. Michael observed that much advice in the tech space is "a little bit of a flex" - recommending seven different services, auto-scaling, and geo-distributed databases when a simple $30 server would suffice. This "resume-based architecture" mentality leads developers to build for Netflix-scale traffic when they have a small product that just needs to work. The book advocates for understanding your actual requirements and choosing infrastructure that matches them, not what might theoretically be needed if your product goes viral.

2. The One Big Server Philosophy

Rather than managing multiple small servers (Michael once had eight), the book advocates consolidating to one appropriately-sized server running multiple Docker containers. This approach provides isolation between applications without the operational overhead of managing separate machines. Michael's current production server has 8 cores, 16GB RAM, costs $30/month, and includes roughly 20 terabytes of free bandwidth, which would cost approximately $1,700 on AWS. The key insight is that modern hardware is so powerful that a single well-configured server can handle substantial traffic, and the included bandwidth at providers like Hetzner or DigitalOcean eliminates surprise bills.

3. Docker as Infrastructure Abstraction

Michael initially resisted Docker but came to appreciate it as "just writing down in a file what I would normally have to type in the terminal." Docker provides isolation between applications running on the same server without the complexity of managing separate VMs. A key insight is that for self-hosted applications (not distributed libraries), larger container images with developer-friendly tools like Oh My Zsh are perfectly acceptable - disk space is cheap, and comfort during debugging is valuable. The book recommends installing terminal tools that make you comfortable rather than optimizing for minimal image size when you're the only one running the container.

  • Links and Tools:
    • docker.com - Container platform
    • ohmyz.sh - Zsh framework for enhanced terminal experience
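
As a rough sketch of the "Dockerfile as written-down terminal session" idea (not taken from the book; the base image, packages, and entry point are illustrative, and the Oh My Zsh unattended-install flag should be checked against its install script docs):

    # A Dockerfile is the terminal session written down, so it becomes repeatable.
    FROM python:3.13-slim

    # Comfort tools for when you shell into the container to debug; image size
    # is a non-issue when you are the only one pulling this image.
    RUN apt-get update && apt-get install -y --no-install-recommends \
        zsh git curl ca-certificates && rm -rf /var/lib/apt/lists/*
    # Oh My Zsh, installed unattended
    RUN sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)" "" --unattended

    WORKDIR /app
    COPY requirements.txt .                 # COPY instead of cp
    RUN pip install -r requirements.txt     # RUN in front of the command you would type
    COPY . .
    CMD ["python", "main.py"]               # illustrative entry point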

4. Monitoring Docker Containers in Production

When running 27+ containers on a single server, the cloud provider's basic metrics (CPU, memory) become insufficient - you need to know which specific container is causing issues. Michael recommends two complementary tools: Glances provides a dashboard with container-specific performance data including memory, CPU, and IO per container. BTOP offers moving graphs over time showing network traffic, per-core CPU usage, and process details. Both tools can run as Docker containers themselves and surface individual process names, making it critical to set meaningful process names in your Python applications.
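
A rough sketch of running both tools on such a server; the Glances image name and flags below follow its published Docker instructions but should be treated as assumptions to verify against the current README:

    # Glances in a container, with the Docker socket mounted read-only so it can
    # report per-container CPU, memory, and IO:
    docker run --rm -it \
      --pid host \
      -v /var/run/docker.sock:/var/run/docker.sock:ro \
      nicolargo/glances:latest-full

    # btop is typically installed straight from the distro packages on the host:
    sudo apt install btop
    btop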

5. CDNs and the True Cost of Cloud Bandwidth

One of Michael's motivations for leaving major cloud providers was bandwidth pricing - AWS charges approximately $100 per terabyte while providers like Hetzner include 20TB free with a $30 server. Talk Python ships about a terabyte of XML monthly just for podcast RSS feeds, plus video courses and MP3s. CDNs like Bunny.net not only reduce origin server load but provide geographic distribution with 119+ locations including coverage across Africa. The key lesson is that bandwidth-heavy applications can see dramatic cost reductions by choosing providers with generous included bandwidth rather than paying per-gigabyte.

6. Mixing Static and Dynamic Content with Nginx

A powerful architectural pattern is using a front-end web server like Nginx to route different URL paths to different backends - static sites, Python apps, or CDN-served content can all appear as one cohesive site. This approach lets you choose the best technology for each component: Hugo for a blog, Django or FastAPI for dynamic features, and static file serving for documentation. The insight is that you don't need to force everything through one framework - tools like Nginx, Caddy, or Traefik can seamlessly stitch together disparate technologies under a single domain.
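
A minimal sketch of that routing pattern in nginx; the domain, paths, and port are made up for illustration, and TLS termination is omitted (it is often handled by the CDN or a certificate tool in front):

    server {
        listen 80;
        server_name example.com;

        # Static blog built with Hugo
        location /blog/ {
            alias /var/www/blog/;
        }

        # Everything else goes to the Python app (FastAPI, Flask, Django, ...)
        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }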

7. Choosing Tools Beyond Your Language Identity

Developers often choose tools based on what language they're written in rather than fitness for purpose - "I'm a Python developer, so I'll use the Python static site generator." Michael challenges this by using Hugo (written in Go) for his blog because it's excellent at its job. The operating system isn't written in Python, nor is your word processor - why should your static site generator be? This pragmatic approach extends to choosing the best monitoring tools, CDNs, and other infrastructure components regardless of their implementation language.


8. Web Framework Selection: Context Matters

Chapter 13 of the book provides a nuanced comparison of Python web frameworks that Christopher called "one of my favorites." Michael's framework journey went from Pyramid (chosen in 2015 because Flask didn't properly support Python 3) to Quart/FastAPI as async support and Pydantic became important. The key factors include: async support for modern ORMs like Beanie, active maintenance and security updates, typing support, and community momentum. The chapter avoids religious wars by presenting pros and cons in context rather than declaring winners.

9. AI-Assisted Development: A Pragmatic View

Michael draws a parallel between modern AI coding tools and earlier developer aids like autocomplete - using tools that help you code faster doesn't make you less of a programmer. He shares a concrete example: converting 8,000+ lines of Bootstrap 3 to Bulma CSS in four hours using Claude, a task that would have taken weeks manually. The key distinction is between using AI as a productivity tool versus using it to generate content that falsely represents expertise you don't have - the book carries a "Made by Humans" badge because Michael wrote every word himself.

10. Self-Publishing with Python and Markdown

Michael wrote the entire book in plain Markdown files (one per chapter) and built a Python program to assemble them into various ebook formats via Pandoc. This approach enabled unique features like "galleries" - separate sections that extract and list all code samples, figures, and links by chapter for easy reference. The Python tooling also generates high-resolution images for a web gallery since Kindle compresses embedded images. This demonstrates how Python's text processing capabilities can solve real publishing workflow problems.
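
A minimal sketch of the idea, not the book's actual tooling: assemble the per-chapter Markdown, pull the links out into a gallery file, then hand the combined document to Pandoc. The chapters/ and build/ layout is assumed.

    import re
    import subprocess
    from pathlib import Path

    build = Path("build")
    build.mkdir(exist_ok=True)

    # One Markdown file per chapter, e.g. chapters/01-intro.md, chapters/02-....md
    chapters = sorted(Path("chapters").glob("*.md"))
    combined = "\n\n".join(ch.read_text(encoding="utf-8") for ch in chapters)
    (build / "book.md").write_text(combined, encoding="utf-8")

    # Links gallery: every Markdown link, in document order.
    links = re.findall(r"\[([^\]]+)\]\((https?://[^)\s]+)\)", combined)
    gallery = "\n".join(f"- [{text}]({url})" for text, url in links)
    (build / "links-gallery.md").write_text(gallery, encoding="utf-8")

    # Pandoc turns the assembled Markdown into an EPUB (PDF and Kindle formats work similarly).
    subprocess.run(["pandoc", str(build / "book.md"), "-o", str(build / "book.epub")], check=True)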


11. Database Security: The Case Against Public Endpoints

Michael points out that some managed database services expose databases on the public internet with only password protection. His preferred approach keeps databases on a private Docker network where they're not even accessible from the host - only containers in the same Docker network can connect. This defense-in-depth strategy means a password compromise alone isn't sufficient for access. The tradeoff is that you become responsible for backups and maintenance, which requires discipline but provides better isolation.
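
A hypothetical docker-compose sketch of that setup (service and image names are illustrative): the database publishes no ports and sits on an internal-only network, so only containers on that network can reach it.

    services:
      web:
        image: my-python-app:latest    # illustrative image name
        ports:
          - "8000:8000"                # only the app is reachable from outside
        networks: [frontend, backend]
      db:
        image: mongo:7
        # no "ports:" entry: not reachable from the host or the public internet
        volumes:
          - db-data:/data/db
        networks: [backend]

    networks:
      frontend: {}
      backend:
        internal: true                 # no route out of this network at all

    volumes:
      db-data: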

12. Coolify: Self-Hosted PaaS Alternative

For those who find direct Docker management too complex, Coolify provides a self-hosted alternative to Heroku, Vercel, or Railway. It offers a web UI for deploying Docker Compose applications, automatic SSL certificate generation, and database backup configuration to S3-compatible storage. Michael notes it's "two steps forward, 1.8 steps back" - easier in some ways but with quirks like awkward environment variable management. It's positioned as a stepping stone for those not ready to manage Docker directly.

13. UV for Python Environment Management

Michael highlighted UV as a game-changer, especially in Docker contexts. Instead of requiring a Python-specific Docker base image, you can use any base image and simply run uv venv --python 3.14 to install Python in seconds. UV unifies multiple tools (pip, virtualenv, pip-tools) into one fast package, with cached downloads making repeated builds nearly instant. This flexibility means you can choose Docker base images optimized for other requirements without sacrificing easy Python installation.
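
A hypothetical Dockerfile along those lines; the uv binary copy pattern and image tag follow uv's Docker guidance but are assumptions to verify, and the app entry point is made up:

    FROM debian:bookworm-slim

    # Bring in the uv binary from its published image
    COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/

    # uv downloads and installs the requested Python in seconds, then makes a venv
    RUN uv venv --python 3.14 /opt/venv
    ENV PATH="/opt/venv/bin:$PATH"

    WORKDIR /app
    COPY requirements.txt .
    RUN uv pip install --python /opt/venv/bin/python -r requirements.txt
    COPY . .
    CMD ["python", "main.py"]          # illustrative entry point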

14. The setproctitle Library for Process Identification

Michael's "notable library" pick was setproctitle - a package with one function that sets your Python process name in the operating system. This seemingly simple capability becomes invaluable when monitoring shows "Python, Python, Python" and you need to identify which process is consuming resources or needs to be killed. It's especially useful when AI coding tools spawn runaway servers that claim ports. The library works in development and production, making Activity Monitor and task managers actually useful for Python applications.


Interesting Quotes and Stories

"A lot of the advice out there is a little bit of a flex in the sense that, oh, look at this, we're using these seven services and then this technology, and then we're auto scaling it with that... a lot of people who just have an app they just want to go from it works on my machine to look what I made. They see this complexity and think I can't do that." -- Michael Kennedy

"It's sometimes referred to as resume-based architecture, right? Like, do we need this or is this because I'm trying to learn it?" -- Christopher Trudeau

"I realized actually, Docker is just writing down in a file what I would normally have to type in the terminal to make things happen. Except I put RUN in front of the command or COPY instead of cp and I get repeatability." -- Michael Kennedy

"I think there's a lack of recognition of the cost of both in energy and money - energy as in human effort - of the next 1%. So getting from 90% uptime to 95% uptime costs something. Getting from 95 to 96 costs more than that." -- Christopher Trudeau

"The reason technology is interesting is not just because there's an API I can call, but it's a journey and there is a story." -- Michael Kennedy

"Does it justify me rewriting that much UI so that I can feel a little bit better and use a little bit nicer UI tooling? No, it definitely doesn't. And then I was sitting on my porch... I thought, you know what? I bet Claude could do it. Four hours later, the whole thing was redone, 20,000 lines of edits." -- Michael Kennedy on converting Bootstrap 3 to Bulma

"Pandora is on the loose, right? So given that, putting your head in the sand is not going to make it go away. Should you use it or not? It's a very powerful tool." -- Michael Kennedy on AI coding tools

"Setting a process name might save you a reboot." -- Christopher Trudeau on the setproctitle library

Story: The Vercel Bill Horror Story

Michael recounts the story of a photographer in South Korea who created a service to detect AI-generated art. When it reached #6 in the app store, her Vercel cloud bill hit $96,000 in a single week - not as a business, but as someone who built something fun as a side project. This illustrates the danger of pay-per-use cloud pricing without hard spending limits.

Story: Cloud Outage Season

In the weeks before recording, AWS, Azure, GitHub, and Cloudflare all experienced significant outages. The AWS outage was caused by a DynamoDB DNS problem that cascaded to Lambda and other services - illustrating how complexity creates interdependencies where one failure brings down everything.


Key Definitions and Terms

  • PaaS (Platform as a Service): A cloud service model where the provider manages infrastructure and you just deploy code. Examples include Heroku, Railway, and Vercel.

  • CDN (Content Delivery Network): A geographically distributed network of servers that cache and serve content from locations closer to users, reducing latency and origin server load.

  • Docker Compose: A tool for defining and running multi-container Docker applications using a YAML configuration file.

  • Reverse Proxy: A server that sits in front of web applications and forwards client requests to them. Nginx, Caddy, and Traefik are popular choices.

  • ODM (Object Document Mapper): Similar to an ORM but for document databases like MongoDB. Beanie is an async ODM for MongoDB.

  • Agentic AI: AI systems that can take actions autonomously, like coding agents that can edit files and run commands rather than just generating text.

  • Right-Sized Engineering: Choosing infrastructure and architecture complexity that matches your actual needs rather than theoretical future scale.


Overall Takeaway

This episode delivers a refreshingly honest perspective on deploying Python web applications: you probably don't need as much infrastructure as the internet tells you. Michael Kennedy's decade-long journey from Python Anywhere to managing 27 Docker containers on a $30 server demonstrates that right-sized engineering - choosing tools and architecture that match your actual needs - leads to both cost savings and operational simplicity.

The conversation challenges the prevailing wisdom that more services, more redundancy, and more complexity equals better infrastructure. Instead, it advocates for understanding your requirements deeply, starting simple, and adding complexity only when genuine need arises. Whether you're deploying your first web app or reconsidering an over-engineered stack, the message is clear: a single well-configured server with Docker, a CDN, and monitoring tools can take you remarkably far.

Most importantly, this episode reminds us that behind every technical choice is a human story - the midnight debugging sessions, the surprise cloud bills, and the gradual accumulation of hard-won knowledge. Michael's willingness to share not just what he built but why he made each decision, including the mistakes along the way, makes this book and episode invaluable resources for anyone serious about running Python in production.

Christopher Trudeau - guest host: www.linkedin.com
Michael's personal site: mkennedy.codes

Talk Python in Production Book: talkpython.fm
glances: github.com
btop: github.com
Uptime Kuma: uptimekuma.org
Coolify: coolify.io
Talk Python Blog: talkpython.fm
Hetzner (€20 credit with link): hetzner.cloud
OpalStack: www.opalstack.com
Bunny.net CDN: bunny.net
Galleries from the book: github.com
Pandoc: pandoc.org
Docker: www.docker.com

Watch this episode on YouTube: youtube.com
Episode #531 deep-dive: talkpython.fm/531
Episode transcripts: talkpython.fm

Theme Song: Developer Rap
🥁 Served in a Flask 🎸: talkpython.fm/flasksong

---== Don't be a stranger ==---
YouTube: youtube.com/@talkpython

Bluesky: @talkpython.fm
Mastodon: @talkpython@fosstodon.org
X.com: @talkpython

Michael on Bluesky: @mkennedy.codes
Michael on Mastodon: @mkennedy@fosstodon.org
Michael on X.com: @mkennedy

Episode Transcript


00:00 Have you ever thought about getting your small product into production,

00:02 but are worried about the cost of the big cloud providers?

00:05 Or maybe you think your current cloud service is over architected

00:08 and costing you too much?

00:10 Well, in this episode, we interview Michael Kennedy, author of Talk Python

00:14 in Production, a new book that guides you through deploying web apps

00:17 at scale with right sized engineering.

00:20 This is Talk Python To Me, episode 531, recorded November 26, 2025.

00:44 Welcome to Talk Python To Me, a weekly podcast on Python.

00:48 This is your guest host, Christopher Trudeau.

00:50 Follow me on Bluesky, where I'm trudeau.dev.

00:53 You can follow the podcast or this week's guest on Mastodon, @talkpython for the show

00:59 and @mkennedy for the guest, both on fosstodon.org.

01:03 And keep up with the show and listen to over nine years of episodes at talkpython.fm.

01:09 If you want to be part of our live episodes, you can find the live streams over on YouTube,

01:14 subscribe to our YouTube channel at talkpython.fm/youtube and get notified about upcoming shows.

01:21 Look into the future and see bugs before they make it to production.

01:25 Sentry's Seer AI code review uses historical error and performance information at Sentry

01:30 to find and flag bugs in your PRs before you even start to review them.

01:35 Stop bugs before they enter your code base.

01:37 Get started at talkpython.fm/seer-code-review.

01:42 And it's brought to you by Agency.

01:44 Discover agentic AI with Agency.

01:47 Their layer lets agents find, connect, and work together.

01:50 Any stack, anywhere.

01:51 Start building the internet of agents at talkpython.fm/agency spelled A-G-N-T-C-Y.

01:58 Michael, welcome to Talk Python To Me.

02:00 You know, I looked it up.

02:01 You've been on the show more than anyone else.

02:03 But in case there's new listeners, tell us a bit about yourself.

02:06 Incredible.

02:07 Good to be here with you, Christopher.

02:09 A bit of a turn of the tables, I would say.

02:13 And it's, you know, long time listeners, I'm sure they know all the little details because

02:18 I work them in here and there.

02:20 I think that's kind of fun to just make things personal.

02:22 But I've also said this on the show, and I'm sure it's a surprising fact that you know as well,

02:27 but over half of the people in the Python space have only been here for two years.

02:31 Yeah, I keep seeing that stat.

02:33 Yep.

02:34 That's crazy, right?

02:34 So even if people, you know, I told this story about my background five years ago,

02:39 like those people weren't here, like half of them.

02:41 So crazy, crazy stuff.

02:43 All right, so my backstory, I was, I thought I would be a mathematician.

02:48 I studied math.

02:49 stuff like that in college, was working on my PhD, started doing a bunch of work with Silicon Graphics,

02:57 mainframe, supercomputer type stuff so that I could do my math research.

03:00 And I realized, wow, this programming stuff is way more fun than math.

03:04 How do I change gears?

03:05 And so that was like 1998, 99.

03:09 Haven't looked back.

03:10 I've been programming since then and super fun, a couple of languages and around 10, 11 years ago,

03:16 started Talk Python, year after that, quit my job.

03:19 Made Talk Python my full-time job.

03:22 Started offering courses as well.

03:23 That's something that people don't necessarily know.

03:26 That sometimes they'll ask, well, what do you do for your job, actually?

03:28 I'm like, well, we're doing it.

03:31 So anyway, that's me.

03:34 A super, super fan of Python.

03:36 Super fan of programming.

03:38 Every day I wake up just like, wow, how awesome is today?

03:40 Look at all the new stuff that came out that we learned that we can do.

03:43 Like new libraries, AI stuff these days.

03:47 Yeah, there's always plenty to talk about.

03:49 It's incredible times.

03:50 It's incredible times.

03:51 And you've added a new trophy to the mantle, I guess.

03:57 You've written a book.

03:58 I have written a book.

04:00 You know, that's a little bit, I'll put it this way.

04:02 It's not something I ever saw myself doing, but I'm really excited that I did.

04:07 And yeah, it took, I spent a couple of months properly writing it.

04:12 You know, I really put my energy in, and like all projects, you think you're about done.

04:18 Yeah, that first 80% is nothing like the last 80%.

04:22 No, and the last 5% is long.

04:25 You know, and it's not just the book.

04:28 It's like, okay, well, where am I going to sell it?

04:30 Okay, well, Amazon, and then I'll self-publish, or do I use a publisher?

04:34 You end up self-publishing it, but then you're like, how do I?

04:37 You know, all these things you learn for the first time.

04:39 Like, how do I get it into Amazon to sell even?

04:42 And there's a bunch of decisions.

04:44 I can tell you, even with having taken the publishing route, it's no easier.

04:48 It's just that it goes dark for two months and then all of a sudden it's like, you need

04:52 to do this by yesterday.

04:53 So yeah, it's not necessarily an advantage either way, I think.

04:57 Yeah.

04:57 Yeah.

04:58 Yeah.

04:58 I was really on the fence and I thought, look, let me just try this myself.

05:03 I got a few podcast listeners.

05:04 I can let them know about it.

05:06 An audience helps.

05:08 I honestly think it was probably the right choice for me.

05:11 And for those who haven't come across it yet, do you want to give us the one paragraph version?

05:17 Yeah.

05:17 So the book is called Talk Python in Production.

05:20 There are other books that are, I'm pretty sure one is called Python in Production

05:24 or other things about how do you get your Python app into production.

05:27 But this is Talk Python in Production because it's sort of a dual role storytelling vehicle.

05:33 Obviously it's nonfiction, it's a technical book.

05:35 But the idea was, let me tell not a generic story of how you might run some of your Python code in production,

05:42 mostly APIs, web apps, databases, like that kind of stuff, right?

05:46 Not a general story of that, but my story, right? I've been on this journey of not complete noob,

05:55 but pretty significantly lost, getting my original Python app out into the world, to pretty confident,

06:03 running stuff in, I think, a simpler than usual way, in a good way, right? One of the things I really

06:11 liked about the book is, it's not quite changing gears, but you do a nice mix of sort of the

06:19 decision making process versus the "here's exactly what I did." And so you get a little bit of both.

06:25 And honestly, the decision making process is something I find often isn't there in a lot of

06:30 work. You know, your standard blog post is always, well, and then add exactly this to

06:36 exactly this file. But I think I really, really sort of enjoyed the "and this is what I tried,

06:41 and this is why I changed." And you're very kind of humble about it. Like, a lot of

06:47 folks who write this kind of content, it's "thou shalt do this," and you're like, "this is the way

06:51 I've told you." Yeah, yeah, this worked for me. And, uh, yeah. So I really like the fact that you've

06:57 kind of blended that in. What made you decide to do this? So you said

07:03 you didn't really have the itch to go down the path, so, uh, what? I'm not sure I didn't have the

07:08 itch, I just didn't think that it was something I was capable of. Ah, I see, okay. You know what I mean?

07:14 Not that I didn't think, if I literally took two years of my life and went into like a cabin,

07:20 Thoreau style or something, I could come out with a book, I'm pretty sure. But given all the

07:25 constraints of, like, I have a family and I gotta keep Talk Python running, like, in that sense,

07:30 I didn't think I would be able to do it, but.

07:32 - Yeah, it's perseverance more than anything else, I think.

07:35 Yeah, yeah, for sure.

07:36 - Exactly.

07:38 So, yeah, go ahead.

07:39 - Sorry, go ahead.

07:39 No, no, go ahead.

07:40 - You know, so why did I write it?

07:43 Two reasons.

07:44 One, I think it's an interesting story, and I thought people would enjoy hearing it,

07:48 like the personal side that you mentioned a little bit.

07:50 I thought people would appreciate that.

07:52 And maybe more significantly, I feel like a lot of the advice out there

07:58 in the tech space in general, but for now we're focused on like,

08:02 how do I deploy my app sort of like Python plus DevOps

08:05 type of thing.

08:05 But I think a lot of the advice out there is a little bit of a flex in the sense that,

08:12 oh, look at this, we're using these seven services and then this technology,

08:17 and then we're auto scaling it with that.

08:18 And then we have these logs pumped over to this other thing.

08:21 You're like, whoa, okay, that's kind of cool.

08:23 But a lot of people who just have an app, they just want to go from "it works on my machine" to "look what I made." They see that and go,

08:32 "I can't do that." You know what, it's not for me. It's just like, I can't spend $500 on this

08:38 infrastructure, and I don't feel worthy if I don't have, you know, like completely geo-distributed,

08:46 redundant databases. And like, you don't need that, you know what I mean? And people keep asking me,

08:50 like, hey Michael, can you give me some advice? I'm like, well, not that. And finally I'm like, let me

08:55 just tell the story, you know? And so that was a big motivation. You see it in industry a lot.

09:01 It's sometimes referred to as, you know, resume-based architecture, right? Like, it's a, do we need

09:06 this, or is this because I'm trying to learn it? Um, and I think there's always that, oh, some of it's

09:12 aspirational, right? I, we will be Netflix, and so, you know, we need to be on every continent and all

09:19 the rest of it. And, uh, right, right. It's very aspirational. It's like, I'm going to build this app,

09:23 And the reason I'm building it is it's going to take off.

09:26 And that day when the hockey stick hits, I'm ready.

09:30 Yeah.

09:30 You know what I mean?

09:31 There's also a, you know, I think there's a lack of recognition of the cost of both in

09:40 energy and money, energy as in human effort.

09:42 I'm not talking about electricity, of the next 1%, right?

09:47 So like getting from 90% uptime to 95% uptime costs something.

09:52 Getting from 95 to 96 costs more than that, and getting from 96... like, and once you're getting into

09:58 like the four nines, five nines thing, and then, you know, Cloudflare goes down and you're all screwed

10:03 anyways, right? So it's so ironic, it is 100% ironic, that you take all these steps and you employ

10:11 all these services, and it's the complexity of those services that went down. Like, yeah. You know,

10:16 this show will come out in a couple of weeks, but we're just on the eve of basically three weeks

10:22 of important things going down.

10:24 First AWS, then Azure, and then GitHub.

10:28 And also, and then Cloudflare, so let's put that as four,

10:30 within three weeks, right?

10:32 And the AWS one was like, the reason it went down is DynamoDB had some sort of DNS problem.

10:40 Even if you're not using that, the thing you're using,

10:42 Like Lambda depends upon DynamoDB for itself to work.

10:46 So it was just like a cascade of kabang, right?

10:49 And that's a little bit of this complexity.

10:51 Like the more complexity you buy in, even if it's not yours, it is yours in a sense.

10:56 Yeah, yeah.

10:56 And there's always humans involved, right?

10:58 So there's always fallibility somewhere, right?

11:03 Although one of the arguments I have seen recently

11:05 in response to the Cloudflare outages, the good news is if you're, you know,

11:10 I saw some articles that were like, well, you shouldn't be dependent on Cloudflare.

11:13 And I saw the counter articles were basically, you know what, when half the internet's down,

11:17 no one's hassling you that your app is down because half the internet's down.

11:21 So there is an excuse when it isn't your fault.

11:25 So yeah, anyways.

11:26 That is true.

11:27 And you don't see what Cloudflare saved people.

11:31 Yes.

11:31 Right?

11:32 I'm not using Cloudflare.

11:34 I actually use bunny.net.

11:35 But CDNs make it possible for your app to survive these spikes in ways

11:40 that they very well may not without, and certainly the DDoS type of stuff that they protect

11:45 against. Well, and I use it simply for certificates. Like, Google decided everyone shall be HTTPS, even

11:52 my sites that don't need it. And rather than try to figure out automation for... Let's Encrypt has

11:58 gotten a lot better, but when I first started it, it was like, and I need this, and I need this, and I

12:03 need this, and then the cron job could go down. Or, it's like, or I can stick Cloudflare in front of it

12:07 and I never have to think about it ever again, right?

12:09 So yeah, there's a little bit of value that way.

12:12 Yeah, there definitely, definitely is.

12:14 Another thing I want to kind of bring back a little bit

12:16 is that you opened this segment on.

12:19 You said, like I shared the human side of the story in kind of a humble way.

12:24 Like that was certainly something, that was one of the main goals, like I said.

12:27 I think it's just a continuation of the podcast, right?

12:30 I started the podcast 10 years ago and I'm like, when I got into Python, there were no Python podcasts.

12:36 There had been, but there were none at the time. And I'm like, there's all these cool libraries, I want

12:40 to hear the stories and the humanity. And you go to the documentation and you're like, cool

12:45 technology, sterile as can be. And the reason technology is interesting is not just because

12:52 there's an API I can call, but it's like, it's a journey and it's a story. And so I just

12:58 wanted to do that again in the book. It's all problem solving, right? And there has to

13:02 have been a problem for someone to want to solve it, which means there's going to have been people

13:06 involved in trying to figure out what that is. Yeah. I think a lot of people maybe, I don't know,

13:10 I shouldn't speak for people. It seems to me though, like a lot of people, they look at a

13:15 technology and they think, they just assess it as a dry, sterile sort of thing on its own. That was

13:22 created in a context, right? Why was celery created? Not just so I can send events to it and

13:30 like, you know, add more complexity and asynchronous, it solved a real problem.

13:35 And if you, if you hear and you understand and you, you follow that journey, you're like,

13:39 I see this is where this come from and why it exists.

13:42 Then you can decide, is it for me?

13:45 Right.

13:45 Well, and, and I think doubly so in the, in the open source space, because like this is

13:51 all volunteer work.

13:52 And so knowing a little bit about who's doing what and, you know, humanizing that a little

13:58 bit.

13:59 Right.

13:59 Their motivation.

14:01 Yeah.

14:01 And it's also easier to be grateful, right?

14:03 Like this isn't some soulless corporate machine.

14:05 There was a reason behind this and a driver behind it.

14:09 This portion of Talk Python To Me is brought to you by Sentry.

14:13 Let me ask you a question.

14:15 What if you could see into the future?

14:17 We're talking about Sentry, of course.

14:19 So that means seeing potential errors, crashes, and bugs before they happen.

14:24 Before you even accept them into your code base.

14:26 That's what Sentry's Seer AI code review offers.

14:30 You get error prediction based on real production history.

14:34 AI Seer Code Review flags the most impactful errors your PR is likely to introduce before merge using your app's error and performance context, not just generic LLM pattern matching.

14:46 Seer will then jump in on new PRs with feedback and warning if it finds any potential issues.

14:52 Here's a real example.

14:53 On a new PR related to a search feature in a web app, we see a comment from Seer by Sentry bot in the PR.

15:02 And it says, potential bug, the process search results function can enter an infinite recursion when a search query finds no matches.

15:10 As the recursive call lacks a return statement and a proper termination condition.

15:15 And Seer AI code review also provides additional details which you can expand for further information on the issue and suggested fixes.

15:23 And bam, just like that, Seer AI Code Review has stopped a bug in its tracks without any

15:29 devs in the loop.

15:30 A nasty infinite loop bug never made it into production.

15:33 Here's how you set it up.

15:34 You enable the GitHub Sentry integration on your Sentry account, enable Seer AI on your

15:40 Sentry account, and on GitHub, you install the Seer by Sentry app and connect it to your

15:45 repositories that you want it to validate.

15:47 So jump over to Sentry and set up Code Review for yourself.

15:50 Just visit talkpython.fm/seer-code-review.

15:54 The link is in your podcast player show notes and on the episode page.

15:58 Thank you to Sentry for supporting Talk Python and me.

16:02 Inside the book, you've added a couple of things that are a little sort of non-standard,

16:08 like the audio reader briefs and the galleries.

16:11 You want to give a quick rundown?

16:13 And speaking of motivation, what motivated you to include those things?

16:18 - So let me describe what they are first, 'cause they are weird,

16:21 but they're weird in a good way, I think.

16:23 So if you go to the book, my vision was somebody's gonna be reading this,

16:29 very likely on a Kindle, and if I go and put really nice diagrams, pictures, whatever,

16:36 how good is that gonna look in a Kindle paper white, black and white, you know what I mean?

16:40 Like how hard is that going to be to read?

16:42 I think it's gonna be hard is what I decided.

16:44 And so what I ended up doing is I said, okay, How can I make it better for people so that when they want to work with code, it's not trapped inside your Kindle or your iPad, Apple Books or wherever you read it, but it's super accessible, right?

17:00 So what I did is I created some things I called galleries, and there's a code gallery, a figure gallery, and a links gallery.

17:07 And these are just like, they're kind of like an index of those things.

17:11 So like the links one just says, hey, here's all the URLs that we talked about in chapter 10 or chapter 11.

17:17 and just the sentence that contains them.

17:19 So instead of trying to go back through and flipping through the book,

17:22 like, where was that thing they talked about, right?

17:24 Like, no, you just go to the gallery and you click on the chapter

17:27 or you just do command F.

17:29 There it is, you know what I mean?

17:30 And also for, especially for the figures, like it has like 2,000 or 4,000

17:37 by whatever level pictures that you're not even allowed to put into

17:42 like a Kindle book.

17:42 They're like, no, we're going to redo those, rescale those images for you

17:46 down to something fuzzy, right?

17:47 So if you want to read like little tiny texts, I put it there.

17:51 So that's the galleries.

17:51 And I was just maybe a little bit more backstory here is when I wrote this,

17:56 I've worked with other type of editing things, any tools.

17:59 I'm just like, I need to write this and I need to get this done in a super fluid

18:04 way.

18:05 So I'm just going to write in Markdown.

18:07 Right.

18:07 Just writing in Markdown.

18:09 And so what I did is I, of course there's book publishing things that you can put

18:13 Markdown into and so on.

18:14 But I'm like, I'm just going to write one markdown file per chapter and then write some Python program to assemble that in interesting ways.

18:23 Right.

18:23 And then turn that into an EPUB or PDF or Kindle or whatever through something called Pandoc.

18:29 Are you familiar with Pandoc?

18:30 I've heard of it.

18:31 Yeah.

18:31 If you go and look, for people who don't know what Pandoc is, if you go look at Pandoc, it has right on the web page, it has this like fuzzy thing on the right.

18:40 It's like gray fuzzy.

18:41 You know, what is that?

18:43 This thing on the right shows you all the formats that go in and all the formats that could come out,

18:49 and it's insane. Like, you can't even... the lines connecting the graphs of these

18:55 things, it's just a black blob. Like, I could put a Haddock, uh, a Haddock document, whatever that is,

19:02 and convert that to DocBook 4, right? I mean, it's insane. Okay, so what I did is I built this

19:07 Python, simple Python thing that reassembles Markdown in certain ways and then feed the

19:13 final thing that it built into Pandoc to get the different ebook formats.

19:16 Right, right.

19:17 Okay.

19:17 But then it occurred to me, like, so I didn't start out with these gallery type things or

19:22 other stuff, but I'm like, well, this is just Python against Markdown.

19:27 Surely I can start pulling out all the links and pulling out the images and then writing

19:30 them somewhere else and then just committing that to GitHub.

19:33 So once, you know, it's kind of just the standard story of Python or programming in general,

19:38 but I think it's extra common in Python.

19:40 It's like, I started solving the problem with Python.

19:42 And once that was in place, it's like the next block and the next thing is just like,

19:47 that's easy now.

19:47 And that's easy.

19:48 And these three things are also easy.

19:50 Let's just do that and just keep adding to it.

19:52 So that's where they came from is one, wanting to make sure people had a good experience

19:57 with like code resources, pictures, and so on.

20:00 But also it's just kind of following the lead of like, hey, let's just keep going.

20:04 Well, and it's one of the beauties of an ebook.

20:08 If dead tree copies, those things cost money.

20:11 And so it's like, oh, I've got a great idea for six more appendices.

20:15 And that's when you start going, oh, wait a second.

20:18 I'm not going to add 300 pages to a 200 page book.

20:21 Yeah, exactly.

20:22 With an ebook, you can go, oh, yeah, here, we can make this referenceable in a couple

20:28 different ways.

20:29 Right.

20:29 Yeah.

20:29 It's like it duplicates the images into, you know, maybe 20 more pages or something, but it's an ebook.

20:35 Who cares?

20:36 Exactly.

20:36 Yeah.

20:36 Yeah.

20:37 So, you know, over the history of the show, I think I've become familiar with your AI journey.

20:45 And recently, it sounds like you've bought.

20:48 It's your fairly big proponent.

20:51 That being said, there's still a Made by Humans logo inside.

20:55 Yeah.

20:57 So I'm going to put you on the spot.

21:01 Do you believe in it or not?

21:04 Do I believe in it or not?

21:06 So why made by humans?

21:09 Yeah, it's a really good question.

21:11 So I think there's a weariness of content generated by AI or assisted by AI meant to attract attention and build authoritative feelings about a thing.

21:26 when that authority or that skill set is not necessarily yours.

21:31 And that I still very much do not like.

21:34 If I wanted to create a blog, I mean, I guess I could do it.

21:37 It'd be a fun experiment, I guess, in sort of an evil way.

21:40 Like, what if I just go create a completely random blog

21:43 and I just have chat just bust out an article twice a day, every day,

21:48 of the thing that's on the top of Hacker News or something?

21:50 You know, just like, you could do that.

21:52 And actually, it might even be interesting.

21:54 I don't really even know.

21:56 but I don't want it. I don't want that. Right. And for this, I wanted to share my story,

22:01 not have AI create a book that has bullet points of my story. Right. Right. Yeah. So for me,

22:07 it was important to like write this. I wrote it a hundred percent by Michael, right. It took me

22:13 a lot of work. People, I know it got posted on like Reddit and I think Hacker News somewhere.

22:19 There were a bunch of comments and they're like, oh, this thing is definitely just AI generated. I'm like,

22:25 it felt not AI generated to me. If it makes you feel any better, I've actually had comments pop up on

22:32 some of my video courses claiming that my voiceover was AI. So that's just the world we live in now.

22:39 It's the world we live in and there's not a lot you can do with it. So just kind of put a little

22:43 bit of a pushback against that. I did put like a prefix sort of thing and a label that says

22:50 made by humans. And you know, what's really funny is I don't know if I can actually find that section.

22:55 I don't think I can on just the web part, but I made a picture.

22:59 Maybe I did.

23:00 Humans?

23:01 No.

23:01 Anyway, I made a picture that I drew.

23:03 I literally, I'm a big fan of Pixelmator Pro.

23:07 I went into Pixelmator Pro and I drew it.

23:09 And they said, proof that this is AI generated.

23:12 Look at that stupid made by humans graphic.

23:14 It's clearly AI generated.

23:15 It would be way better if it wasn't generative.

23:17 Yes.

23:19 Okay.

23:20 So how do I square that with me actually being quite a fan of AI stuff these days?

23:24 Like I'm, let's do like a looking back and then looking forward.

23:28 So let's go back 30 years.

23:30 I'm also a fan of using editors that when I hit dot, tell me what I can do with that function, class, variable, et cetera.

23:38 So I'm not constantly in the documentation, right?

23:42 Does that make me not a programmer?

23:44 I don't think so.

23:45 I'm still writing code.

23:46 I'm still thinking architecture.

23:47 I'm just not in the documentation constantly.

23:50 And honestly, I maybe don't memorize every little function and field of everything in the standard library, right?

23:57 It's fine.

23:57 That's not where our time is best spent.

23:59 And I feel that way about AI programming.

24:01 I think there's a lot of, there are pitfalls and there are worrisome aspects of it.

24:06 But you can use some of these agentic AI tools these days to think in so much higher level building blocks.

24:13 Think of like, I'm working in a function and I'm writing this part.

24:16 or I'm working in design patterns.

24:19 And I can think of these big sort of concepts.

24:22 Well, with this AI stuff, you can just say, what if we had a login page?

24:25 Oh, we have a login page.

24:27 Now, what other building block do I need?

24:29 Like the building blocks are so big and sort of non-critical software,

24:35 non-super important software becomes so much cheaper than before.

24:40 You're like, I wish I had a little utility or app that would do this,

24:43 but it just definitely doesn't justify two weeks of work to have it. Like, what if it was a couple of prompts and half an hour? Like,

24:50 yeah, well then I'll have it. You know what I mean? And you can, that is transformative

24:55 for how we work. Yeah. So much of coding is boilerplate, right? So if we can figure out how

24:59 to make that easier, then why not? Right. And I haven't got there with it myself. I don't know

25:05 whether I will. I'm definitely a little more suspicious of it than you are, but I copy and

25:11 paste code all the time.

25:12 And it's not like I'm like, oh, I have to hand tune that.

25:16 No, it's like, well, I got to copy something that does that.

25:18 - Yeah, let me give you a concrete example because I think it's easy to talk in generalizations

25:24 and people are like, well, that's not for me, a bunch of AI slop, which is fair.

25:28 But I'll give you an example of one thing I'm like, this was just such a nuisance and I'm gonna fix it.

25:33 So when I first built Talk Python, the website 10 years ago,

25:37 Bootstrap was all the rage, not modern Bootstrap, like old Bootstrap, right?

25:41 Which they've completely redone the way that you have to structure your HTML and CSS

25:48 completely, incompatibly, several times since then.

25:51 And until very recently, until this summer, every time I wanted to add a feature or an aspect,

25:56 like for example, this whole section that hosts the book,

25:59 I wanna add that, well, you know what I had to do?

26:01 I had to go write like 10-year-old Bootstrap.

26:03 And I'm like, I hate it so much.

26:04 There's so much nicer tools I could be doing this.

26:07 but there's 8,000 lines of HTML and almost as many CSS.

26:12 Does it justify me rewriting that much UI so that I can feel a little bit better

26:18 and use a little bit nicer UI tooling?

26:20 No, it definitely doesn't.

26:22 And so for a couple of years, I'm like, oh, I wish I could do something else,

26:27 but it's not worth it.

26:28 And then I was sitting on my porch, little back area with my computer in the summer,

26:31 hanging out, I'm like, you know what?

26:32 I bet Claude could do it.

26:34 Hey, Claude, rewrite all of this site.

26:37 make a plan and rewrite it, move it from Bootstrap 3 to Bulma, which is like simple tailwind,

26:43 and just completely do it. Four hours later, the whole thing was redone, like 20,000 lines of edits.

26:49 Wow.

26:49 Done. And it wasn't perfect. I had to go, you messed up a little bit here. And actually,

26:53 that was right, but that doesn't look good anymore. So could you actually just make it look,

26:57 you know what I mean? But I mean, it was like half a day. That work was done.

27:00 Right.

27:00 And that is a different level.

27:04 No, it would have been weeks to a month.

27:07 And it's the worst kind.

27:08 It's like, okay, here's how the grid system used to work.

27:11 Let me restructure the HTML.

27:12 Oh, you lost a div?

27:14 Whoopsie.

27:15 Now how are you going to untangle this?

27:16 You know what I mean?

27:16 Like really, really not good stuff.

27:19 And you can just turn these tools on it.

27:20 And I'm like, you know, love it or hate it.

27:22 That is a skill and a power and a tool that is unlike things we've had before.

27:27 And so when I started having some of those kinds of experiences, I'm like, all right,

27:31 I need to pay attention to this.

27:32 I honestly think a lot of these AIs and LLMs, they're kind of copyright theft and other types of things.

27:39 And there's the environmental aspect and all that.

27:41 But the thing is out of the box.

27:44 Pandora is on the loose, right?

27:46 So given that, putting your head in the sand is not going to make it go away.

27:51 Should you use it or not?

27:52 It's a very powerful tool.

27:53 And so that is what I'm excited about, but I'm not excited about when I go to YouTube

27:58 and I see a video and you can just tell that it's a voiceover plus some general,

28:02 or I go to read a blog and you can tell that it's like,

28:05 they didn't even put enough energy into like, they spent less time writing than I have to read it.

28:10 That's not right.

28:10 There's something going wrong here.

28:12 - Well, as with all tools, we'll figure out what works and what doesn't work.

28:16 Those 8,000 files that you're talking about though,

28:19 those are 8,000 files you have and built over time.

28:25 I suspect- - Lines, by the way, not files.

28:27 - Lines, I'm sorry.

28:28 - I think it's a couple hundred files probably.

28:31 so, but, so that might be something that folks listening aren't really aware of,

28:36 right?

28:37 Like, you're not just the, you know, the podcasts and the courses, but you're

28:42 the guy behind the engineering behind all of it.

28:44 So why, you know, why do that?

28:47 Why not, you know, Squarespace or something along those lines didn't exist when you came

28:51 out, but you get the idea.

28:52 Like what, what, what, why spin it up yourself?

28:55 How did you, how did you get there?

28:56 So it's a good question.

28:58 When I started, there were places I could have gone and hosted the podcast.

29:05 You know, they were very unappealing.

29:08 Not in the sense, like, as a consumer of it, like, they put your show there.

29:13 They were really ugly.

29:15 And they would do things like, next to your show, here's some other shows you might like.

29:18 You're like, no.

29:20 No, I don't.

29:21 I just got people to come to my page.

29:23 You're sending them away.

29:24 Like, don't do that.

29:25 Right.

29:25 But those sites are like Squarespace or whatever, and they're hosting a bunch of them.

29:29 And so they want engagement on their platform broadly.

29:33 They're not for you.

29:34 So initially I thought, well, plus I don't have a ton of experience writing Python code.

29:38 And if I'm going to do the podcast, the more I can get my hands on this stuff, get experience.

29:43 So I just sat down and really in like three days, I wrote the Talk Python website.

29:48 I'm like, I'm doing it this weekend.

29:50 You know what I mean?

29:50 I had a job at the time.

29:51 So I'm like, I got to do it.

29:52 It's a long weekend.

29:53 We're doing it.

29:53 And so I just sat there and cranked it out and really got a really good appreciation for building all the way to the end, you know, like not 60% or following a demo, but like, no, here's a fully working app and all the little nuances.

30:08 But then honestly, that's like the genesis of this, the story that is the book is, well, now how do I get it out there?

30:15 I built it.

30:16 It's fine.

30:17 It works great here locally.

30:19 Now, like, where do I take it?

30:21 Right.

30:21 And a lot of places said, well, you just fire up your Linux terminal and your SSH.

30:24 And I'm like, these words are making me very nervous.

30:27 I need them to not do that.

30:28 I need you to stop saying that.

30:31 Don't forget to swing the chicken over your head.

30:35 Exactly.

30:37 So I actually started in Python Anywhere, even before Anaconda owned them, which was the

30:45 selling point was you go to your browser.

30:47 You, I think you give it a get URL, or maybe, maybe you go into a terminally do a get plot.

30:53 I can't remember how it worked 10 years ago, but it was basically you go to the webpage,

30:57 you type in your domain, you get a terminal, which is basically an SSH connection, but

31:02 in the terminal, and then you give it some commands and then they manage it for you.

31:05 And I'm like, okay, I don't really have to know any Linux.

31:08 I just have to do the two things that says in the terminal.

31:10 And then they keep it running and they, they do the SSH key, the SSH certificates, the DNS,

31:17 that i'm like this i can do this and i got it going there and i was really proud and i ran

31:21 i ran talk python on basically python anywhere and sqlite for like six months first six months

31:27 but then it occurred to me that python anywhere is not really intended to host production level

31:33 applications and it occurred to me when i got an email from them one day again this is pre-anaconda

31:39 and what it said was we're going to be down for four hours as we do some maintenance that's not

31:46 going to be the best look for my podcast which is just now starting to gain some traction and getting

31:50 a lot of people talking on social media and saying hey there's a new podcast you should check

31:54 it out i'm like the four hours are not making me psyched like i understand that things might have

31:59 to reboot they might be down for 30 seconds but hours and hours seems a little like this is not

32:06 really what they intend this for it's like for hobbyists to put up a thing and i probably don't

32:10 belong there anymore and once you shifted why not aws or azure or something like that so

32:16 So I looked around and I went to DigitalOcean.

32:19 So I'd done stuff with both Azure and AWS.

32:23 A little part that I left out about this web journey is I had actually run some pretty big websites

32:30 in both AWS and Azure, but they were Windows-based.NET type things, right?

32:36 So literally GUI sort of configuration hosting them or platform as a service.

32:42 And I don't know, I looked at both of them, especially Azure at the time,

32:46 I'm like, "Well, this is complicated," and like unnecessarily so,

32:49 and I'm afraid I'm trading one level of complexity for another, and also really expensive,

32:55 like no joke expensive.

32:57 The podcast, like in terms of people viewing the pages,

33:00 is nothing insane, I mean, it's certainly popular, but it's nothing like, how are you gonna handle that?

33:05 But the amount of traffic the podcasts and courses do

33:07 in terms of video and MP3s and even XML. Like, I think Talk Python ships about a gigabyte of,

33:15 no, a terabyte of XML.

33:17 Think about a terabyte of XML every month.

33:18 Like it's basically a distributed denial, a DDoS, and a welcome DDoS attack,

33:25 because you'll think how many podcast players are out there

33:27 going, got a new one, got a new one, got a new one, got a new one.

33:30 And each one of those requests is like a meg of XML or more.

33:35 You know, it's like, got a new one, got a new one

33:42 Certainly the courses. And my bill was well over a thousand bucks, and just bandwidth, right? And then

33:47 I looked at DigitalOcean and I'm like, oh, you mean all that bandwidth is free? It's included. You

33:54 get, like, terabytes of free traffic with DigitalOcean or Hetzner or some of these smaller ones. And I'm

34:01 just like, yeah, this is better. Like, I don't know what, I don't know what this stuff is for here,

34:05 where they charge, you know, a hundred dollars a terabyte, yeah, just to ship stuff around. It's crazy.

34:10 I have found, particularly professionally, it's like, oh, you're going to charge me for how many DNS lookups there are.

34:16 That doesn't seem like something I can predict.

34:20 Yes, I know.

34:22 And quite frankly, if it reaches a certain level, I need you to turn it off because something's gone horribly awry.

34:31 This portion of Talk Python To Me is brought to you by Agency.

34:34 The Agency, spelled A-G-N-T-C-Y, is an open source collective building the Internet of Agents.

34:40 We're all very familiar with AI and LLMs these days, but if you have not yet experienced the

34:46 massive leap that agentic AI brings, you're in for a treat. Agentic AI takes LLMs from the world's

34:53 smartest search engines to truly collaborative software. That's where agency comes in. Agency is

34:59 a collaboration layer where AI agents can discover, connect, and work across frameworks.

35:05 For developers, this means standardized agent discovery tools, seamless protocols for interagent

35:11 communication, and modular components to compose and scale multi-agent workflows.

35:16 Agency allows AI agents to discover each other and work together regardless of how they're

35:21 built, who built them, or where they run.

35:24 And they just announced several key updates, including interoperability with Anthropic's

35:29 Model Context Protocol, MCP, a new observability data schema enriched with concepts specific to

35:35 multi-agent systems, as well as new extensions to the OpenAgentic Schema Framework, OASF.

35:43 So are you ready to build the internet of agents? Get started with Agency and join

35:47 Crew AI, LangChain, Llama Index, BrowserBase, Cisco, and dozens more. Visit talkpython.fm

35:54 slash agency to get started today.

35:56 That's talkpython.fm/agency.

35:58 The link is in your podcast player's show notes and on the episode page.

36:02 Thank you to the Agency for supporting Talk Python and me.

36:06 There's certainly areas where the cloud can go like sideways.

36:10 In the book, I mentioned a story about Cara, I believe.

36:13 And that was this project that this woman, I think in Hong Kong or South Korea,

36:19 I'm afraid I can't remember which, I think it's South Korea.

36:22 Anyway, created this.

36:23 She's a photographer and really hates AI-generated art, so she created this service that would say, hey,

36:29 give me a piece of art and I'll tell you if it's AI-generated or not, or something vaguely like this.

36:35 And her thing took off in the App Store and was like number six, and her cloud bill at Vercel was

36:41 ninety-six thousand dollars in a week. Right? Not as a business, just as a human who built something

36:47 fun as a side project. Like, oh my. Yeah. And in fairness to a lot of those tools, they tend to

36:52 have ways of saying, please limit this and do that.

36:55 Yeah.

36:55 They're not the default.

36:57 And if you're not thinking about that problem and protecting yourself, you know, you'd, you'd

37:03 kind of hope that it would be the other way around.

37:06 It should be, yeah, you know, here's your cap.

37:08 And if you want more than that, you need to do something about it.

37:11 Yeah, absolutely.

37:12 And in fairness to them, they did send her a message saying your bill is going way higher

37:17 than you might expect.

37:18 And she didn't look at her email or something, but still.

37:22 And so one of the things that really appeals to me is when you choose something like a

37:27 Hetzner or a DigitalOcean or something, and you say, I'm going to pay for the server.

37:31 You're like, okay, that's $40 I'm committing to.

37:34 Maybe double that, you know, whatever.

37:37 But the bandwidth is basically free or it's included, right?

37:40 But for 40 bucks, it feels free.

37:42 And then it's only going to cost as much as it costs.

37:44 You might have to go, oh my gosh, it's too much traffic.

37:47 We're going to have to deal with it.

37:48 but the upper bound of those systems these days is so high.

37:53 It is so high that we, you know, back to that aspirational thing that you mentioned, right?

38:00 Like, by the time you blow past what a $50 server can handle,

38:05 you're going to be really, really popular with that SaaS or something.

38:10 You were joking earlier about SQLite.

38:13 It's gotten so much better as well.

38:14 And I'm not saying it's the answer to everything,

38:17 But it could probably come pretty close to running your site now too.

38:21 Like it's scary between the processor improvements and the improvements in the software.

38:27 It's made a big difference for that kind of stuff.

38:30 It takes very, very little hardware to handle something that's pretty, pretty impressive.

38:36 Yeah.

38:38 So I think the title of chapter four is one of my favorites.

38:41 It gives a little hint as to what approach you took.

38:45 The title is Docker, Docker, Docker.

38:48 So what approach did you take? You know what, I was really not wanting to do Docker. Oh, genuinely, I mean

38:53 that. And so what I did when I originally switched over to some VMs, I'm like, okay, the story I'm told

38:59 of what the cloud is, you know, I bought the AWS, the EC2 story: well, we've got all this extra capacity,

39:05 oh, instead of getting like really expensive heavy metal, you know, big metal sort of servers, you get

39:11 a bunch of small ones, and they're kind of like cheap, and you just make them, throw them away,

39:15 whatever, right? So I went and made a bunch of small servers in DigitalOcean. I

39:20 think I had eight servers at one point, and I thought, this is going to give me lots

39:22 of isolation. If I've got to work on this one thing, it won't mess with that. And

39:25 what I realized is they're interconnected enough that really I

39:30 end up just having to reboot eight servers in an orchestrated way rather than

39:34 managing one. I'm like, this is just worse. I've got to patch eight servers instead of one

39:38 now, because this is not better. So how do I end up with Docker, Docker, Docker?

39:42 I realized that it would be better to just have one server and basically stepping back just a little bit.

39:48 Like, what if you could completely isolate yourself from almost all the complexity of the cloud and all of their services and all that junk and just say, I have a place where I can run apps that's got enough capacity that I can just keep throwing more apps in there if I want.

40:04 And it doesn't have any tie in with the specific APIs and services of a particular provider.

40:10 So I said, well, what if I just get a big server and I just run all my apps in there?

40:15 And if I want a database, I put the database there.

40:17 If I want like a Valkey cache, I can put a Valkey cache there, and things like that.

40:22 And that's sort of as much autonomy as I can exert on running something in the cloud.

40:27 It's almost like I went and got a big machine and stuck it in my closet.

40:31 But that would be insane, because in a data center you get million-dollar networking equipment and, you know, failover.

40:36 But that doesn't mean you have to go fully PaaS: managed database, this other service.

40:41 Like, you could just say, just give me a Linux machine where I can then go do

40:46 my hosting and all my apps, and let them party and talk to each other and stuff in there.

40:52 Right.

40:52 So then I thought, well, I don't have all these little servers for isolation.

40:57 I'm not really sure I want to throw all this random stuff together, like completely just

41:02 in the same soup in that one big server.

41:05 And by the way, the big server right now that it's running

41:07 has eight cores, 16 gigs of RAM, and costs $30.

41:11 Right.

41:11 It comes with two terabytes or four terabytes of traffic,

41:13 something like that.

41:14 Lots.

41:15 $400 of included bandwidth for $30.

41:18 So I said, well, what if I took that over?

41:19 I think autonomy is a big motivator of this whole journey as well.

41:24 Like, I don't want to be tied into all these different things.

41:27 I just want a space where I have reliable compute and networking

41:30 and Linux, and I can just do whatever.

41:32 So then I said, all right, well, I better figure out some of the stuff with Docker just so that there is some

41:38 isolation of all the different pieces living in the same space.

41:41 So I forced myself to learn Docker, and what occurred to me was, oh,

41:45 Docker is just writing down in a file what I would normally have to type in the

41:50 terminal to make the thing happen.

41:52 Except I put RUN in front of the command, or COPY instead of cp, and I

41:58 get repeatability. And someone else has packaged a bunch of this stuff,

42:02 so you don't have to do it yourself.

42:04 Exactly.

42:05 And I'm like, okay, I don't know what all my concern was about

42:08 because it's not much more complicated.
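
(A minimal sketch of the idea Michael describes here: a Dockerfile is mostly the commands you would have typed by hand, written down so they repeat. The base image, package, paths, and entry point below are illustrative, not from the book.)

```dockerfile
# What you would normally type in the terminal, written down in a file.
FROM python:3.13-slim

# RUN in front of the command you would have typed.
RUN pip install --no-cache-dir flask

# COPY instead of cp.
COPY . /app
WORKDIR /app

# The command you would have run by hand to start the thing.
CMD ["python", "app.py"]
```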

42:10 One of my concerns was sort of monitorability.

42:13 Like if I just go there and I just create a bunch of virtual environments

42:16 and run my code, I can actually go and see the source code.

42:19 I can see the config files.

42:20 I can see where the logs are being written and I can sort of manage it through SSH.

42:25 And I thought, well, if I put a bunch of disconnected Docker things together,

42:28 that's going to be challenging.

42:29 And I realized actually, not really.

42:32 Like, if you set it up the same, you could still tail all the logs, and you can SSH into

42:38 the containers if you really have to, you know, look at something running inside them. Like, what is the

42:44 process actually, I don't know, what does it do, how much memory is it using relative to other stuff?

42:48 And I also talk about a bunch of tools for monitoring them. Yeah, so how has that changed over

42:53 time? Like, you started with some fairly bare bones, and you've got some extra tools now.

42:59 What does that evolution look like?

43:01 Well, I used to rely more on whatever the cloud provider,

43:07 DigitalOcean or Hetzner, offered.

43:09 You know, they always have like a metrics section.

43:11 So I can go see, well, what's the CPU doing?

43:13 What's the memory looking like over time?

43:16 And that works okay.

43:19 And if you've got one app that you're running there, you're like, okay, well, that must be how much memory the app is using.

43:24 But right now, if I go to Talk Python and I ask, I think there are 27 different containers running,

43:31 which means you can't just ask, how's the server doing?

43:34 I know very much.

43:35 You know, it really matters much more.

43:36 Well, it's busy.

43:37 I get it.

43:38 But which one is the problem?

43:39 Which one is busy?

43:40 Which one's using all of them?

43:41 So I started to look around and there's actually a bunch of recommendations

43:45 that I have for the book.

43:47 So one of them, the first one I used was this thing called Glances

43:51 and Glances is excellent.

43:52 And by the way, Glances, the way they often talk about getting it,

43:56 I think, where do they talk about installing it here?

43:59 It's probably often like apt install glances or something like that, right?

44:04 But a lot of these tools even have Docker versions.

44:07 If you share the volumes and sockets just right, they function just the same.

44:11 So you could say Docker run glances XYZ and it doesn't even install,

44:16 it doesn't even touch your one big server that is kind of like your building space.

44:20 So it leaves it a little more pure, right?
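
(A hedged sketch of what running Glances as a container can look like; the image name and flags follow the pattern in the Glances documentation, so check the current docs before copying. Mounting the Docker socket read-only is what lets it see the other containers, and --pid host lets it see the host's processes.)

```bash
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  --pid host \
  nicolargo/glances:latest

# The command is long, so alias it once in your shell config and forget the details:
alias glances='docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock:ro --pid host nicolargo/glances:latest'
```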

44:22 So glances is super cool.

44:23 And what it does is it gives you this really nice dashboard of what's going on with your app, like your server.

44:33 How much memory is being used?

44:34 How much CPU is being used?

44:36 How has that been over time?

44:37 Has there been like extended spikes and so on?

44:40 And one of the things that's new to Glances, and I don't think it's in this picture that's on their home screen.

44:46 I'm pretty sure it is not.

44:48 Oh, no, it is.

44:48 It just, mine is inverted because I have so many.

44:51 It has a container section.

44:52 So when you run it, it actually shows you not just the processes, but also gives you a performance, memory, IO, etc.

45:00 for all the running containers and their uptimes and those kind of things.

45:04 So this is super cool.

45:05 So you construct a certain Docker command and then you have this running dashboard that just goes.

45:11 So this is the first thing that I started with and I really like that.

45:14 But then I also found something called BTOP.

45:16 Are you familiar with BTOP?

45:18 No, I haven't used this one.

45:19 Oh my gosh, BTOP is incredible.

45:21 This is so good.

45:22 it's really something.

45:25 Zoom in on it.

45:26 So this gives you moving graphs over time of all sorts of things.

45:31 It shows you graphs of network, inbound, outbound traffic.

45:36 It shows you the CPUs.

45:37 It gives you a breakdown of like, here's all the different distributed work

45:41 across your eight CPU cores and over total.

45:44 It's really something else.

45:45 And so this one is really nice.

45:47 You can configure the UI a lot to show and zoom in on disk activity or whatever.

45:53 This is really a nice way to view it.

45:55 And again, when you run all these Docker containers, they feel like they're super isolated and tucked away.

46:00 And from their perspective, they are.

46:02 But when you look in the process list here, it just shows the process that Docker is running.

46:06 So I have all my web apps and APIs and stuff setting a process name.

46:11 So instead of just Python, Python, Python, Python, Python,

46:13 it'll say, like, Talk Python Granian worker 1, Talk Python Granian worker 2.

46:19 Versus indexing service daemon.

46:22 And then when you look into any of these tools, you can see, oh, exactly what is busy.

46:26 And those are actually the names inside of Docker, but they still surface exactly like

46:31 that to all these tools.

46:32 One of the things you said kind of hit home for me, like it was subtle and it kind of

46:37 moved on, which was like, if you interconnect it correctly, right?

46:41 Like if you get the files and sockets going, this goes smoothly.

46:45 And I think it's one of the things you've done very, very well in the book is sort of

46:49 walking through that, like, as you talk about the different Docker configurations, like, okay,

46:53 well, this is why we're putting this here rather than in the container, this is going to be shared.

46:57 And, you know, there's a reason for this. I assume some of that was experimental.

47:03 You just sort of over time, you kind of went, oh, okay, wait, I need that somewhere else.

47:08 Yeah.

47:08 Or was it, you know, was there somebody else's knowledge that you depended

47:14 on a lot there?

47:14 How did you get there?

47:16 How, how organic was the journey?

47:18 I would say half and half. Like, some of it, for example the Glances stuff, I just found

47:25 when I went looking for it that there were ways to install it. And it just said, oh,

47:29 you could just install it by running this Docker thing. And it's like a big, long command. And I'm

47:33 like, oh, that's cool, because it doesn't matter how long the command is. What I would do is I'll

47:39 go into my .zshrc and say alias glances equals, paste. And then I saved that somewhere, and I

47:46 couldn't tell you what it is at all. I just know it has to, like, do a... yeah, it has a few things

47:51 so it can look back into the host to see, you know, what's running and so on. Yeah, so a lot of

47:57 it was like that. And then some of it was definitely, you know, two whole days of, why

48:03 won't that talk to that, let me build it again, let me devote some more time, you know what I mean? And

48:07 eventually, okay, all right. But once you get it kind of dialed in, once you get a little bit

48:11 of it working, it's a blueprint. You just do it again, again, again. So you seem to have taken a bit of

48:16 a heavier-weight approach here. It's everything and the kitchen sink. That

48:23 implies that it's not the right amount, but it's counter to some of the advice that's out there.

48:28 Sometimes folks talk about, you know, wanting to have things as minimal as possible. Why? What

48:34 you've done versus the other, how are they wrong? Can we start a flame war on the internet?

48:40 Let's do it. Let's see how many "Michael, you're wrong" replies I can get in the YouTube comments.

48:45 Actually, please, that's not a challenge. So here's the deal. I want, especially at the

48:53 beginning of this journey, when I was like, I want as much comfort and visibility as I can get

48:59 in these containers and other areas. You know what I mean? And I wanted to make it as close to,

49:04 if I just set up a VM and just, you know, uv venv, and just roll from there, right? So what I did

49:10 is I said, okay, I could try to go for like the super slim Docker image, or I could just get like

49:17 a basic Docker image, but then put, you know, Oh My Zsh and zsh on it, right?

49:23 Does it need that? No, you could use sh. But do you know what happens when you use sh and you go

49:28 in there? It's a brand new start. It doesn't remember anything, any command you've

49:32 ever run, it doesn't give you any help. You know, you hit tab, it doesn't tell you nothing, right?

49:37 You're like, oh gosh. But if you use, like, Oh My Zsh, it'll show you, hey, what version of Python

49:42 is your virtual environment activated in? And I can just hit Command-R and, you know, filter all

49:48 my history, and I can type git, hit tab, and it'll autocomplete all the git commands that I forgot what

49:53 I was supposed to use because I'm freaked out because the site is down and how do I fix this?

49:56 I mean, I wouldn't actually be in the container for that, but a lot of times you're in there kind

50:01 of exploring because you're like, it's been fine for six months, but I need to see something.

50:05 And so in the book, at least in the beginning, I recommended to people that they install

50:11 some of these tools that you might install into your own terminal to make you feel more comfortable.
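
(A hedged sketch of that recommendation: a deliberately non-slim image for an app only you will build and run. The base image and tool list are illustrative.)

```dockerfile
FROM python:3.13-bookworm

# Comfort tools you would never ship in a public image, but that make it much nicer
# when you exec into your own container to poke around.
RUN apt-get update \
    && apt-get install -y --no-install-recommends zsh git curl less procps \
    && rm -rf /var/lib/apt/lists/*

# Optionally layer Oh My Zsh on top (see its docs for the unattended install script)
# so history search and tab completion work the way they do on your own machine.
SHELL ["/bin/zsh", "-c"]
```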

50:15 So that my assumption is you're kind of new to Docker. You're feeling a little uncomfortable.

50:20 Like who cares if it's another hundred megs on your hard drive? You're not shipping your app

50:27 to Docker Hub. You're not going to take your web app, probably. It's not a reusable component.

50:33 You've got your source code and you want the thing to just run here. You're not shipping it. So

50:38 whoever wants to run, you know, indeed.com can just Docker pull that and run it. Like it's,

50:43 that's not what it is. And in that context, you're not so worried about the space. And there's a

50:48 couple of tips that you can use for like really, really fast build times, right? So I mean, like

50:53 container build times for me are like seconds, a few seconds, even though, you know, there's 250

50:59 pip-installed packages for Talk Python Training. You know, build times... and build time is also

51:06 installing Python, right? You can make these things fast, so it's not like a huge impediment,

51:11 but I think for people who are new to it, having something other than just sh, not even bash,

51:17 you're a lot better off. So that's what I promoted. And I think it's, you know, it kind of comes back

51:21 to sort of the thesis of the book as well, right?

51:23 Which, which is right for you.

51:25 so, you know, if you are going to be running a thousand of these spread

51:30 across a whole bunch of different cores, then yeah, if you optimize this, that

51:34 might change your cost framework and everything else.

51:37 Well, right.

51:37 Or if, you know, your 27 containers on eight CPUs works fine, then, you

51:44 know, go for it.

51:45 You know, why, why, why get in your own way?

51:48 And that advice is not like, this is why I really emphasize the

51:51 context sort of thing, right?

51:53 This advice is bad if your goal is to ship a container to Docker Hub so that people can

51:58 self host your open source thing.

52:00 You don't want that to have extra crap that they don't need.

52:03 But when there's only one of them for your machine and you're building it and you're managing

52:08 it, you know, make it as comfy and helpful as possible.

52:12 That was my philosophy.

52:13 The structure of your site, it has a lot of different pieces to it, using

52:20 different technology. You spend some time talking about like static sites and using static sites for

52:25 part of it versus, you know, Python applications and those kinds of parts. How did you end up here?

52:33 Like oftentimes the answer when you're looking at this kind of stuff is, well, I need a CMS for

52:37 everything. And then I will try to figure out how to square peg my round hole of a CMS or whatever.

52:43 So how did you end up with a collection? Well, you know, like many things that start simple,

52:48 And you're like, well, just one more thing.

52:50 So I tried, I'd kept pretty much the same web framework across all my different sites thinking,

52:55 okay, that's, I'm going to just pick one and go with it.

52:57 I think a lot of people do that.

52:59 You know, there's people who are like, I use Django.

53:00 There's people, I use Flask and so on.

53:02 And then just slowly over time, you're like, really, this is, this part is really a hassle.

53:08 I'd be a lot better off if I made that part served through the CDN or why am I, you know,

53:14 One of the things that I see a lot, and it doesn't drive me crazy, but I'm just like,

53:19 yeah, it's probably not necessary, is a lot of people in technology X.

53:24 For us, that's Python.

53:25 It could be JavaScript.

53:26 It could be.NET, whatever, right?

53:28 People who work extensively in that and have a lot of their identity tied into that, like

53:32 I do and others.

53:34 Like, I'm a Python developer, right?

53:36 So if I'm going to choose a tool, like, let's say, a CMS or a static site generator or something

53:41 like that, I'm going to choose the Python one.

53:43 I'm a Python person. Like, okay, but are there better options out there than the Python ones for what

53:48 you're trying to do? Because are you going to extend this tool? No? Then what do you care

53:52 what it's written in, right? Your operating system is not written in Python, it's written in, yeah, yeah,

53:57 yeah, C. Or, I'm not going to use this word processor because it's not written in Python.

54:02 Exactly. So I have to go back... no, I need a new service. Like, you don't see it, you don't

54:07 have to work with it, you don't care. And so I ended up a little bit with a mishmash of just trying

54:11 to say, like, what are the best tools? Like, for example, for the blog and some of the other static

54:16 elements I've used Hugo, which is written in Go. It's like, okay, I type the command hugo,

54:22 you know, and it does its thing. I don't really care what it's written in. The templating extension

54:26 is a little bit annoying, but I kind of just went around and said, okay, well, what do I think

54:31 would be the best fit to make my life easy, not to reinforce my identity as this type of

54:37 of developer or that type of developer, you know. Yeah. One of the things, you know, I'll show

54:45 my own stripes here, and you can defend your beloved Flask if you like, but having come from

54:51 the Django side, some of the things that you've kind of learned organically here are forced on you

54:57 in Django. So when you, like, the instructions for putting together a

55:03 production site are: and you will run this command and it will move all of your static content over

55:07 here. So your mental model, when you come from that side, is, oh, my site is actually built

55:12 of at least these two different things. And I think, coming from Flask,

55:18 that discovery might've been a little more organic. You might not have been forced into it

55:24 immediately, but once you've come to that realization of, oh, wait, I have these pieces

55:29 and I can use something like Nginx to tie it all together,

55:33 that means, well, then I don't have to figure out

55:35 how to use a CMS for this thing that's very unnatural for a CMS.

55:39 I can just mount it under slash blog and it'll work fine, yeah.

55:44 Yeah, Django is very powerful.

55:46 It definitely is.

55:46 And I actually talked a lot about that in the book, which evaluating web frameworks.

55:52 But I would say, before we go to that, I think your point about using Nginx

55:57 to piece things together, or Caddy or Traefik or whatever, it doesn't matter, like some front-end web server,

56:03 they all do it, yep, yeah, is that so often people think, I have this Python app, let's say I have a Django

56:10 app, so I want to add a CMS to it. What could I possibly add? Is it static content? Well, maybe what

56:15 you should add is Hugo. I don't know, I'm just making this up, right? Like, it might actually be a bad

56:19 option. But, well, Hugo is not a Python thing, so how do I put it into my Django app? I mean, they're very,

56:24 very different in the way they work, so they don't really go that super well together if you were to

56:28 ask, like, how does one literally, source-code-wise, go into another? But if you just made, like, a Hugo site

56:35 or other static site, however you make it, and then put it on the same server, and then in Nginx

56:40 you say, if you go to this domain slash docs, it goes completely over here, and if it goes anywhere

56:46 else, it goes just to Django, all of a sudden from the outside it looks like a very cohesive

56:52 single thing with just different sub-URLs. But you get to choose the best technology for the static

56:56 bits and you get to choose the best technology for your dynamic, data-driven bits, and that is all just

57:02 done by configuring the front-end web server that you don't even have visibility to in Python. And I

57:08 think that's a big mental shift, but it's those kinds of things that bring both the

57:13 flexibility to make these choices and the simplicity to not try to jam them together.
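
(A hedged sketch of the front-end routing Michael describes, using Nginx as the example; the domain, paths, and port are illustrative, and the TLS configuration is omitted.)

```nginx
server {
    listen 80;
    server_name example.com;

    # Anything under /blog is served straight from the static build output (Hugo or similar).
    location /blog/ {
        root /srv/static;            # files live at /srv/static/blog/...
        try_files $uri $uri/ =404;
    }

    # Everything else goes to the Python app (Django, Flask, FastAPI, whatever it is).
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```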

57:17 There's a third-party library for Django that I use once in a while, which is called

57:22 distill, and it's a static site generator based on Django. So say your URL was like books,

57:29 you know, it's books slash one, books slash two, books slash three. Well, you tell distill, I want book, and

57:35 I want all the possible queries of this number, and it will generate the results as static. So even

57:42 when you've got a dynamic site, you can actually carve off the static portion and then have that

57:47 fed straight out of Nginx. If there's no actual dynamic content on the page, and

57:53 it only updates when the database updates or something like that, you can do it nightly.

57:57 This gives you all sorts of other options.
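
(A rough sketch of how the django-distill URL registration Christopher describes can look; the app, model, and view names are hypothetical, and the project's documentation is the authority on the exact API.)

```python
# urls.py
from django_distill import distill_path

from myapp import views          # hypothetical app
from myapp.models import Book    # hypothetical model


def get_all_books():
    # One dict of URL kwargs per page that should be rendered out as a static file.
    for pk in Book.objects.values_list("pk", flat=True):
        yield {"pk": pk}


urlpatterns = [
    # "manage.py distill-local" writes /books/1/, /books/2/, ... out as plain HTML,
    # which Nginx can then serve with no Python in the request path.
    distill_path(
        "books/<int:pk>/",
        views.book_detail,
        name="book-detail",
        distill_func=get_all_books,
    ),
]
```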

58:02 And, you know, to come back to your eight-processor whatever: the static sites are almost free. You don't even need that,

58:09 it's nothing. So you can scale way down and have an absolutely mammoth site just by properly

58:15 fine-tuning what's dynamic and what's static. Yeah, you could go to millions and millions of requests

58:20 if you just converted all that stuff to static and then put the extra resources, CSS, images,

58:28 JavaScript, et cetera, on a CDN. Yeah, like, I mean, that is almost, that's like web on easy mode right

58:34 there, because it can't go down unless the server literally... well, almost can't. Yeah, I mean,

58:41 let's not say that. Are we on the eve of a fifth one going down?

58:46 But I mean, like it can't go down because the code is wrong or there's a race condition

58:51 or you're out of memory.

58:52 Like, it's really close to: if the web server is up and you put a CDN

58:57 in front of it, then it's not even necessarily that, it's like the CDN has to go.

59:02 You've got to have a Cloudflare-level incident.

59:04 Yeah.

59:04 And, and fully distributed in often cases.

59:07 Right.

59:07 So people in, you know, in continents, other than where you are based are getting fantastic load times because it's cached locally for them.

59:17 Yeah, I just want to give a little shout out.

59:19 I'm going to give a little shout out to Bunny.net.

59:21 Like, I know people are all about Cloudflare, but this is a cool European company that focuses on privacy, has some really nice features.

59:29 The pricing is great.

59:30 And they have, you go here, go to the CDN.

59:33 They've got somewhere way down here, you know, like 119 different places, including all over Africa.

59:41 And this is just super, super cheap for traffic.

59:45 Nice.

59:45 Yeah.

59:46 So I wasn't, earlier there, I wasn't intending to force a fistfight.

59:52 And, you know, we're on opposite sides of the continent, so that would be a challenge when I say, Django versus Flask, go.

01:00:00 But, you know, I think one of my favorite chapters was actually chapter 13, which was titled Picking a Python Web Framework.

01:00:07 I really liked the nuance of this.

01:00:11 It's unusual for folks to sort of reveal their reasoning.

01:00:16 And honestly, I think, like, because I had no intention of tomorrow going and using Docker, the Docker chapters were interesting because I like to see how other people do things.

01:00:26 But like I could grab the picking a Python web framework, pull that chapter out and hand it to almost any of my students.

01:00:33 Right. Like it's this, you know, how do I make these kinds of decisions?

01:00:37 How is this different? Why do I think about these things?

01:00:40 And so often the content on this is really just religious war.

01:00:44 And you've done like a really, really good job there of just sort of conveying this, you know, hey, here are pros and cons for each.

01:00:51 And this is why I picked this.

01:00:53 And so I really, really liked how you, how you covered that.

01:00:58 Thank you so much.

01:00:59 What you, I guess it's maybe the answer is obvious, but, but why?

01:01:05 Like you were fine.

01:01:07 You were just doing configuration file after configuration file and then a little bit editorial.

01:01:12 What caused you, what was the impetus for spicing it up a little?

01:01:18 Well, I mean, I think an important part of the journey is picking a technology to run your code on. So there's actually a couple of places where I do that kind of thing. Like, I'm trying to create a term, because I don't think we do a good job of disambiguating these from something like Nginx: Python production app servers, like where your code runs. I think these need a little more disambiguation. I'm talking like Granian, Gunicorn, those kinds of things.

01:01:46 Hypercorn, Uvicorn now, all those places you run your Python code.

01:01:50 So I kind of went through a debate on those from Michael's context, right?

01:01:55 And then I did the same for Python web frameworks.

01:01:57 And it was, you know, I told the story of the bootstrap and how I just, every time I have

01:02:02 to write new code, I'm like, here we go.

01:02:04 I'm in the relic, right in the relic code.

01:02:08 I kind of felt the same way for, so everything was based on Pyramid and I loved Pyramid and

01:02:13 I still have a lot of respect for it.

01:02:15 The reason I chose Pyramid in 2015 was when I went to the Flask page, it said, you may potentially be able to use Python 3, but we are not supporting it and we don't recommend it.

01:02:27 And I'm like, wait a minute, didn't Python 3 come out in 2008?

01:02:31 That's like seven years later.

01:02:33 You know what?

01:02:34 No, I'm not doing that.

01:02:37 I'm starting this project beyond this problem and I'm not going back to be in the, you know what I mean?

01:02:44 And they've since obviously moved on from that.

01:02:46 So Flask was out.

01:02:48 I looked at Django and I thought, I'm really like a microservice guy.

01:02:51 I really want to use Mongo.

01:02:52 A lot of things were not quite good fits.

01:02:54 They actually would be better fits now, right?

01:02:56 Even then.

01:02:57 Yeah, no, if you want to do Mongo, that's, yeah, that's almost a deal breaker.

01:03:02 Yeah.

01:03:02 Yeah, I know.

01:03:03 Almost.

01:03:03 And so I'm like, all right, well, maybe not Django.

01:03:06 Well, then you had Pyramid.

01:03:07 They're like, we are trying to embrace the latest standards.

01:03:11 We're Python 3 first, et cetera, et cetera.

01:03:13 And I'm like, all right, I'm gonna give this a chance,

01:03:16 even though it wasn't as popular, like this is great.

01:03:17 And I used it for eight years, seven years, something like that, it was really good.

01:03:22 But things evolved over time, right?

01:03:24 Like Pydantic came out and Pydantic was great.

01:03:27 What's a really nice way to talk to the database with Pydantic, Beanie, okay?

01:03:32 So I can do Beanie and I can do Pydantic and wow, what a really nice, clever way to write databases.

01:03:38 And oh, Beanie's all async only, Pyramid's synchronous only.

01:03:43 When was the last commit to Pyramid?

01:03:45 Oh, it was two and a half years ago.

01:03:47 Chances that it gets async support are low 'cause that was just like a minor bug fix.

01:03:51 You know what I mean?

01:03:52 It's just like, it's fine.

01:03:54 Open source projects, they ebb and they flow and they come and they go.

01:03:57 But I'm just like, I should really move this forward to something that feels like it's active, right?

01:04:03 I mean, stuff in the web makes me nervous.

01:04:05 I'm always just, did you put a port open on the internet?

01:04:08 Well, that's scary.

01:04:09 - Yep.

01:04:10 And so a framework that felt like things were not as on top of it as they could have been made

01:04:16 me nervous.

01:04:17 To be fair, I don't know that they had any security incidents or very, very few because

01:04:21 it did so little, right?

01:04:23 It's not like it had a bunch of admin pages or something where there's like accepting

01:04:27 input, but still, still same reason.

01:04:29 So I'm like, I really want to use these more modern tools, typing, async, Pydantic, et cetera.

01:04:37 And I kind of would not like to continue building on something that feels like it's no longer being supported.

01:04:42 And similarly, you, with chapter 13, sort of that, you know, the different thought process there.

01:04:49 You also provide chapter 15, which is a retrospective on Hetzner, which is the hosting provider that you chose.

01:04:55 and again, I think it's pretty clear.

01:04:59 I think I've said it three different ways.

01:05:01 My favorite stuff in the book really is sort of this, you know, the little insight into Michael's brain, right?

01:05:08 Like how did he make this decision and how happy is he with these decisions?

01:05:12 Right.

01:05:13 I think that's the stuff that's, that's, globally applicable to a reader, which is nice.

01:05:18 so you've, it's now even a few months further on with Hetzner.

01:05:22 So you, you still happy?

01:05:24 Any regrets yet?

01:05:26 Yeah, no, no regrets.

01:05:28 It hasn't been absolutely a hundred percent smooth.

01:05:30 Let's see.

01:05:31 I could tell you how long it's been if I can get past all the ads.

01:05:34 There we go.

01:05:35 So I actually blogged about this.

01:05:38 And yeah, so it's been about a year, I guess.

01:05:42 No regrets.

01:05:43 I would say, if people are out there looking around,

01:05:48 and you want to follow the philosophy of Michael, like, carve yourself out a space in a multimillion-dollar data center

01:05:54 that you don't have to have anything to do with.

01:05:55 And you just run your code in your own space.

01:06:01 DigitalOcean and Hetzner are the two main ones.

01:06:01 And I did DigitalOcean for a long time.

01:06:03 When Hetzner came out, I thought they just had some really interesting appeal.

01:06:07 I started seeing a lot of people talking about them.

01:06:09 And they are a German company.

01:06:12 And they were just in Europe.

01:06:14 And I'm like, I love Europe, but the majority of my customers are in the U.S.

01:06:17 So what is the best place for my server?

01:06:21 Probably the east coast of the United States, because that serves the U.S. really well.

01:06:25 But then it's like one hop across the ocean to all of Europe as well.

01:06:28 So it's still really fast from there and so on.

01:06:32 So I didn't want to move my server to Europe when I felt like being closer to the US was more important.

01:06:38 Not so much because I needed to manage it.

01:06:40 I could SSH to wherever, but just East Coast to the US.

01:06:43 And then they're like, hey, we have two new US data centers.

01:06:47 One near Virginia, right by the US East 1, the infamous AWS US East 1.

01:06:53 And the other one actually in Hillsboro, Oregon, just down the street from me, which is funny.

01:06:58 Yeah, it's like I could drive to it in like 20 minutes,

01:07:02 which of all places in the world is relatively close.

01:07:04 So I went and looked at it and I said, let me just check it out.

01:07:07 And the prices are super cheap.

01:07:09 You get a little bit less support and I think a little bit less top tier data center

01:07:16 than DigitalOcean, but the prices are like insane there.

01:07:19 Like I said, eight core server for 30 bucks.

01:07:23 You know, that's insane.

01:07:25 And when I first signed up, that came with 20 terabytes of free traffic.

01:07:30 Wow.

01:07:30 Which is about $1,700 out of AWS.

01:07:36 Right.

01:07:36 Included in your $30 bill.

01:07:38 You know what I mean?

01:07:39 Like, oh my gosh.

01:07:41 Yeah.

01:07:41 So yeah, I talk a lot about it in the book, but yeah, I went over and moved my stuff over there

01:07:46 and it's been good.

01:07:48 I've had one incident where the machine that it was on died.

01:07:52 The one big server, wherever it was, it died and they had to move it,

01:07:57 which blows my mind, they were able to hot relocate it to another server.

01:08:01 But the problem is it has an external, like a 300 gig external drive,

01:08:06 and that didn't move location.

01:08:07 So all of a sudden, a lot of the IO operations were much slower 'cause they weren't close

01:08:12 to the server anymore.

01:08:13 - Right, right.

01:08:14 - Why, why did my Docker builds take two minutes?

01:08:17 They used to take about three or four seconds.

01:08:18 I cannot figure it out.

01:08:20 And I wrote them, they're like, no, we've tested it.

01:08:22 There's no problem.

01:08:22 I don't care what you say, there's a huge problem.

01:08:25 Eventually they're like, we moved it again, it's fine. And then it was fine, right? So, you know, if

01:08:31 folks are looking for something slightly lighter weight, and this is going to sound like a commercial,

01:08:37 I'm just a happy customer, no sponsorship or whatever, but I use Opal Stack with a lot of my

01:08:42 clients. What do you... Opal Stack? Opal? Yeah. And you wouldn't go full Docker with it,

01:08:49 but they do give you access full SSH to the box.

01:08:54 And they've got a neat little sort of packaging thing.

01:08:56 They don't support a lot of things, but if you've got like Django or Flask

01:09:00 or static files for Nginx or whatever, you hit a couple of buttons in the dashboard

01:09:04 and it spawns it up.

01:09:06 But a lot of tools like that, it spawns it up and then you're not allowed to touch it.

01:09:10 What they do is create all the entries in the directories

01:09:12 and then you can SSH into the box and get at the files themselves.

01:09:15 So I find it's a nice little compromise between the two.

01:09:19 It would not scale to what you're doing.

01:09:22 But if folks are looking for a relatively inexpensive thing to experiment with, I find it's a nice little stopgap.

01:09:28 Yeah, that's awesome.

01:09:29 I'm always interested in finding those types of things.

01:09:33 This one is new to me.

01:09:34 This is cool.

01:09:34 Yeah, this is... I'm trying to remember, there was a site I used to use.

01:09:39 It got bought.

01:09:40 Half of the founders went, we don't want to be bought, and took their baseball bat and created Opal Stack.

01:09:46 So I used to be a client of the original and followed them along.

01:09:50 So yeah.

01:09:51 Cool.

01:09:51 And very happy with like the service as well.

01:09:54 They're like you open a ticket and things are very, very human, which is nice in this day and age.

01:09:59 You usually, I'm usually expecting to talk to a bot.

01:10:02 You're getting about as much support as you get out of Gmail.

01:10:05 Yeah, exactly.

01:10:06 Google Docs, which is none.

01:10:08 Another thing worth a shout out here is sort of an alternate way of working with Docker and Docker Compose directly

01:10:15 that I propose in the book is something called Coolify.

01:10:18 Are you familiar with Coolify?

01:10:20 No, this one I don't know.

01:10:21 Yeah, this is super interesting.

01:10:22 So what this does is it knows how to run Docker, Docker Compose,

01:10:28 but it also gives you all sorts of easier ways.

01:10:31 So if people look at what I'm proposing, they're like, no, Michael, too complicated.

01:10:35 This is interesting because what it gives you is it basically gives you your own private Heroku

01:10:40 or Netlify or Vercel or Railway, or you can go in, I don't know how to find it,

01:10:45 from here, but you can also go in and say, let me find any self-hosted app.

01:10:50 - Okay.

01:10:51 - And they've got hundreds of them in there.

01:10:52 And then you just type in the name and say, install this set of Docker containers

01:10:57 as a Docker compose file into my server.

01:10:59 So you could create the one big server, which is your own space in someone's cloud.

01:11:04 And then you can install this or you can pay them five bucks a month

01:11:07 and they'll actually manage the server, manage the deployments,

01:11:11 do like rollouts of new versions of your app.

01:11:15 stuff like that, right? It sounds like it makes it way easier, right? It actually makes it,

01:11:22 it's like two steps forward, 1.8 steps backwards, right? Because, you know, instead of using .env

01:11:29 files, you've got this UI to enter a bunch of environment variables, and the saving of them

01:11:35 is weird, and you're like, oh, I forgot to press save on these three even though I saved the page. I mean,

01:11:39 there's just... Right, right. It promises more ease than you would think, and I'm not necessarily

01:11:45 going to switch. I do like it, I've played with it some. I'm not saying I would switch to it, given a choice,

01:11:50 but it does ease you in. It's a little bit like PythonAnywhere. Like, I'm sure when I started,

01:11:55 there were things that could have gotten in my way, but the support that it gave me made it

01:12:00 possible for me to feel comfortable and get going. I feel like this might be an option for

01:12:05 people who care.

01:12:05 Right.

01:12:06 But let me give you an example.

01:12:07 For example, I could go install an app that has Postgres, Valkey, and the web app.

01:12:15 Then I just click install that from wherever the self-hosted definition comes

01:12:20 from, it creates those three containers.

01:12:22 And then on the container settings, or through the image settings, I don't really know how

01:12:26 you think of it.

01:12:26 I mean, I guess it's the image, sort of. You can go to the web part and say,

01:12:31 just use this URL and it'll automatically do the SSL generation as part of that.

01:12:36 Then you go to the database, the Postgres thing, you say, oh, and make backups for

01:12:39 me daily and store them in this S3 compatible storage.

01:12:43 And that kind of stuff is a lot of extra when you're doing it yourself and you

01:12:47 just go check those boxes.

01:12:48 So that's the, that's the two steps forward, but then there's the

01:12:52 step back.

01:12:52 Yeah.

01:12:52 Yeah.

01:12:52 Well, and that tends to be also what makes people nervous, right?

01:12:56 So like that, and that's, you know, I still use managed database simply because I don't want to

01:13:02 have to think about it, right? Like it's like, yeah, okay. I'm perfectly fine with pointing my

01:13:07 app at a managed database and let somebody else think about backing it up and all the rest of it.

01:13:12 Yeah. Yeah. You know, one thing about managed databases that I don't like, and I can't speak

01:13:16 to all potential hosts of them, but certainly some of them, some well-known ones, some names I've

01:13:21 already said, if you get a managed database there, that database server is listening on the public

01:13:26 internet. I very much do not espouse having a database listening on the internet. Yeah, it has a password,

01:13:34 but I mean, that's the database. I'm always worried about what is in the database.

01:13:38 That's interesting. I've never thought to even check that.

01:13:41 And on my setup, not only is it not listening on the internet, it's not even listening on the

01:13:46 host. There's a private Docker network, and only things on that shared Docker

01:13:52 network can even know about or see the database. You know what I mean?
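
(A hedged sketch of that arrangement in Docker Compose terms: the database joins only a private network and publishes no ports, so neither the internet nor the host can reach it directly. Service and image names are illustrative.)

```yaml
services:
  web:
    image: my-web-app              # illustrative
    ports:
      - "127.0.0.1:8000:8000"      # only the front-end web server on the host talks to this
    networks: [frontend, backend]
  db:
    image: postgres:17             # illustrative; note there is no "ports:" section at all
    networks: [backend]

networks:
  frontend: {}
  backend:
    internal: true                 # traffic on this network stays between these containers
```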

01:13:58 So there are fewer holes. But I have to make the backups, and if I don't, it's bad. If it goes down, it's real

01:14:04 bad. And it did go down one time this year. I was down for like 10 minutes. Ah, there it goes.

01:14:09 - That was your sixth nine, yes.

01:14:11 - Exactly, I know.

01:14:13 So the problem was just for people who wanna benefit from my suffering and not suffer themselves,

01:14:19 is I did not, on the Docker pull for the database image,

01:14:23 I didn't pin it to a major version.

01:14:26 And so it upgraded and then it said, well, you have old data in your file system

01:14:29 and we're not gonna upgrade it for you automatically.

01:14:31 So we're not gonna run.

01:14:32 I'm like, why is the database server not running?

01:14:33 It just, and it was like a weird update.

01:14:36 It was like 8.2.1 that broke.

01:14:39 Well, why?

01:14:40 Point one.

01:14:41 Surely, surely it needs a bigger number to be like, this will never run again.

01:14:49 Anyway, you know, you find the stuff out the hard way, but yes.
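
(The lesson from that outage, sketched in Compose terms: pin the database image to a major version so a routine pull cannot silently jump to an incompatible release. The image name here is illustrative.)

```yaml
services:
  db:
    # image: mongo:latest    # risky: a pull may bring a new major version your data files can't use
    image: mongo:8           # pinned to a major; bump it deliberately, with an upgrade plan
```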

01:14:51 Well, yeah, that's the negative of not using a managed database.

01:14:56 Yes.

01:14:56 Yeah.

01:14:57 Yeah.

01:14:57 Cause you have to deal with some of that kind of stuff yourself.

01:14:59 Yeah.

01:15:00 So I thought we would wrap up by reviving an old tradition.

01:15:04 I have two questions for you.

01:15:06 What is your development environment?

01:15:09 and what library are you excited about?

01:15:12 So the development environment right now is a mix of Cursor and PyCharm for sort of editing.

01:15:21 And despite this very detailed conversation about Docker,

01:15:24 I don't use Docker very much locally for development.

01:15:28 I just use virtual environments.

01:15:29 And I want to give a shout out to Hynek, who I had some back and forth about

01:15:33 when I was writing some of this stuff that gave me some really good ideas.

01:15:36 And he has a really good article, which I referenced in the book,

01:15:38 about you just use virtual environments.

01:15:41 Keep everything consistent, right?

01:15:42 That's an interesting debate that we don't have time for,

01:15:45 but it's very fun.

01:15:46 So uv, I'm a huge fan of uv.

01:15:50 - Particularly in Docker, that makes things that much faster.

01:15:53 - Yeah, because you can just say it in your Dockerfile. It used to be, you're like, okay, well,

01:15:57 I gotta use Docker and I need to use Python.

01:15:59 So let me use the official Python distribution for Docker

01:16:01 because I need to have Python.

01:16:03 And then, well, that excludes 99.9% of all the other images you could build upon

01:16:10 that already have something that's harder to manage set up for you, right?

01:16:14 But in your Dockerfile, you just say, RUN uv venv --python 3.14, and

01:16:20 you've installed Python 3.14 in two seconds.

01:16:23 And it's cached, right?

01:16:24 It's like, yeah, it just, it makes it so much faster and so powerful, but also just in general, right?

01:16:30 Like it's unified so many tools that I like that are just, it's all there together.
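
(A hedged sketch of using uv inside a Dockerfile instead of starting from the official Python image; the base image, Python version, file names, and entry point are illustrative, while the COPY --from pattern for the uv binary follows uv's documentation.)

```dockerfile
FROM debian:bookworm-slim

# Bring in the uv binary from Astral's published image.
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /usr/local/bin/

WORKDIR /app

# uv fetches a managed CPython and creates the virtual environment in seconds,
# and the layer is cached, so rebuilds barely notice it.
RUN uv venv --python 3.14

COPY requirements.txt .
RUN uv pip install -r requirements.txt

COPY . .
CMD [".venv/bin/python", "main.py"]
```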

01:16:34 And then library, oh my goodness.

01:16:37 - Now you know how it feels. I know how it feels. And I didn't warn you on purpose. I love it, I love it. So there

01:16:45 are a bunch of ones I've been playing with lately, and I'm trying to think which one I've

01:16:49 used. I don't really have a great answer to this, Chris, I'm afraid to say. I would say,

01:16:54 let's keep it flowing with some of the vibes that we had here. I would say,

01:16:59 let me give a shout out to setproctitle, which, there you go, sounds insanely silly. Like, the goal

01:17:06 of that is, in your process, in your Python process, and I actually use this

01:17:10 on a bunch of different things, in your Python process you can say setproctitle

01:17:14 dot setproctitle and you give it the name of whatever you want your process

01:17:18 to be. So why does that matter? When you pull up all these tools like Glances,

01:17:24 btop, or others, anything that looks at processes, basically instead of seeing

01:17:28 Python, Python, Python, node, node, node, postgres, postgres, postgres, at least the

01:17:32 Python ones now have meaningful names. And you might be thinking, well, Michael,

01:17:36 that's so production-focused, useless to me.

01:17:38 No, it's good for development too.

01:17:40 Have you ever had the idea, like I wanna know how much memory my process is using.

01:17:46 Is it using a lot or a little?

01:17:47 So you pull up, you know, activity monitor, task manager, whatever, you see Python, Python, Python,

01:17:52 you're like, oh man, I know my editor's using one of these

01:17:55 or whatever, but which one is it?

01:17:57 - And if you're using the right terminal, it'll change the terminal's title too,

01:18:02 because most terminals respond to the proc name.

01:18:05 Oh, that's a very nice touch. Yeah. Okay. Yeah. So, if you do that in development,

01:18:10 if you just set the name of your process to be, like, you know, my utility or whatever the heck

01:18:15 you call it, right, then when you go into process management tools, even just for Mac or

01:18:19 Windows or whatever, you'll see it, and you can see, how much CPU is it using? Is it using a lot of RAM?

01:18:25 If you've got to end-task it... Like, we now have another reason that this is something we've

01:18:29 got to do all the time: sometimes the agentic AI things go mad and they start a bunch of

01:18:35 servers, and then they lose track of them, and then you can't run anymore because it says, um, port is

01:18:39 in use. You're like, but where? Like, something in that stream of text that shot by for five minutes,

01:18:45 it started one and then it left it going. But then you pull it up, it says Python, Python, Python, and

01:18:51 you're like, well, I don't want to kill the other thing that's running, you know what I mean? And so it also

01:18:56 gives you a way to kill off your AI-abandoned stuff that it went mad on. So there you go: setting

01:19:01 a process name might save you a reboot. There's your little nugget to take away from the podcast.

01:19:06 Exactly. It's a package with one function, but it's a good one.
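
(A minimal sketch of that one function: give each Python process a meaningful name so btop, Glances, Activity Monitor, or Task Manager show more than a row of identical "python" entries. The name format is just an illustration.)

```python
import os

import setproctitle  # pip install setproctitle

# From here on, process tools (and many terminals' title bars) show this name
# instead of a generic "python", so workers, utilities, and the odd server an
# AI agent left running are easy to tell apart.
setproctitle.setproctitle(f"my-web-app worker {os.getpid()}")
```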

01:19:11 Excellent. Well, thank you for having me on. This has been fun to sort of reverse the tables on you.

01:19:16 It's been great.

01:19:18 Yeah. Chris, thank you so much. I really appreciate it. And always great to catch up with you. Bye.

01:19:21 It's been fun to be here.

01:19:23 This has been another episode of Talk Python To Me. Thank you to our sponsors. Be sure to check

01:19:27 out what they're offering. It really helps support the show. Look into the future and see bugs before

01:19:32 they make it to production. Sentry's Seer AI code review uses historical error and performance

01:19:38 information at Sentry to find and flag bugs in your PRs before you even start to review them.

01:19:44 Stop bugs before they enter your code base. Get started at talkpython.fm/seer-code-review.

01:19:51 Agency. Discover agentic AI with Agency. Their layer lets agents find, connect, and work together,

01:19:57 any stack, anywhere. Start building the internet of agents at talkpython.fm/agency spelled

01:20:04 A-G-N-T-C-Y. If you or your team needs to learn Python, we have over 270 hours of beginner and

01:20:10 advanced courses on topics ranging from complete beginners to async code, Flask, Django, HTMX,

01:20:17 and even LLMs. Best of all, there's no subscription in sight. Browse the catalog at talkpython.fm.

01:20:23 And if you're not already subscribed to the show on your favorite podcast player,

01:20:27 What are you waiting for?

01:20:29 Just search for Python in your podcast player.

01:20:30 We should be right at the top.

01:20:32 If you enjoyed that geeky rap song, you can download the full track.

01:20:35 The link is actually in your podcast player show notes.

01:20:37 This is your host, Michael Kennedy.

01:20:39 Thank you so much for listening.

01:20:40 I really appreciate it.

01:20:42 I'll see you next time.

01:20:55 Thank you.

Talk Python's Mastodon Michael Kennedy's Mastodon