Learn Python with Talk Python's 270+ hours of courses

Agentic AI Programming with Python

Episode #517, published Fri, Aug 22, 2025, recorded Tue, Jul 29, 2025

Agentic AI programming is what happens when coding assistants stop acting like autocomplete and start collaborating on real work. In this episode, we cut through the hype and incentives to define “agentic,” then get hands-on with how tools like Cursor, Claude Code, and LangChain actually behave inside an established codebase. Our guest, Matt Makai, now VP of Developer Relations at DigitalOcean, creator of Full Stack Python and Plushcap, shares hard-won tactics. We unpack what breaks, from brittle “generate a bunch of tests” requests to agents amplifying technical debt and uneven design patterns. We also discuss a sane git workflow for AI-sized diffs. You’ll hear practical Claude tips, why developers write more bugs when typing less, and where open source agents are headed. Hint: The destination is humans as editors of systems, not just typists of code.

Watch this episode on YouTube
Play on YouTube
Watch the live stream version

Episode Deep Dive

Guest introduction and background

Matt Makai is VP of Developer Relations at DigitalOcean, creator of Full Stack Python, and builder of Plushcap, a Django-based analytics project that tracks developer-focused companies and their content output. He has a long history working in Python, web frameworks, and developer tooling, and he’s been hands-on with agentic AI for day-to-day software work, especially Claude Code running inside his existing workflows. He previously worked at AssemblyAI and Twilio, and still codes nights and weekends on Plushcap to stay close to real-world problems.


What to Know If You're New to Python

Agentic AI tools work best when they can read and operate inside a real Python project with tests, a formatter, and a clear structure. Here are a few quick references mentioned in the episode to help you follow along and get value from agentic workflows:

  • djangoproject.com: Batteries-included web framework that gives agents clear conventions to follow.
  • fastapi.tiangolo.com: Modern API framework using type hints; great for agent-generated endpoints.
  • ruff.rs: Ultra-fast Python linter/formatter; use it to keep agent changes consistent.
  • htmx.org: Add interactivity to server-rendered Python apps without a heavy JS framework.

Key points and takeaways

1) Agentic AI programming: from autocomplete to a collaborator embedded in your repo
Agentic AI is not just code completion; it is a loop where the tool reads your code, plans changes, runs commands, edits files, and explains its reasoning. The workflow feels like pairing with a junior dev who can try things, recover, and iterate. In editors like Cursor or in Claude Code, you can ask for a plan first, approve steps, and then let it modify the codebase. Used well, this shifts you from typist to editor and system shaper, especially for scaffolding, repetitive glue code, and rote refactors. The trick is to constrain it with your conventions and keep changes reviewable with small diffs and commits.
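
To make that loop concrete, here is a minimal sketch of the plan/act/observe cycle in Python. Everything here is hypothetical: ask_model() and the action format are stand-ins, not any product's real API; this only illustrates the shape of an agent, not a working tool.

    # Minimal agentic-loop sketch: the model proposes an action, the harness
    # executes it, and the observed result feeds the next turn.
    # ask_model() and the action schema are hypothetical placeholders.
    import subprocess

    def ask_model(history: str) -> dict:
        """Hypothetical call to an LLM that returns a structured action."""
        raise NotImplementedError

    def agent_loop(goal: str, max_steps: int = 10) -> None:
        history = [f"Goal: {goal}"]
        for _ in range(max_steps):
            action = ask_model("\n".join(history))
            if action["type"] == "done":          # model says the task is finished
                break
            if action["type"] == "run":           # run a command, e.g. the test suite
                result = subprocess.run(action["cmd"], shell=True,
                                        capture_output=True, text=True)
                history.append(f"$ {action['cmd']}\n{result.stdout}{result.stderr}")
            elif action["type"] == "edit":        # apply a file edit the model proposed
                with open(action["path"], "w") as f:
                    f.write(action["content"])
                history.append(f"Edited {action['path']}")

The point of the sketch is the feedback loop: each command's output goes back into the context, which is what separates an agent from one-shot code generation.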

2) Terminology matters: AI model vs LLM vs generative AI vs agents vs autonomous agents
We explicitly separated these terms. An AI model is the broad category; an LLM is a subset focused on text. Generative AI refers to models producing novel outputs. Agents are systems that use models to take actions in a loop (plan, act, observe), often with tools like shells or editors; autonomous agents minimize human-in-the-loop guidance. Keeping these distinct helps you see through marketing and decide what a product really does.

3) Read-only first: use agents to analyze, not just to write
A powerful early win is “read-only mode”: have the tool summarize architecture, flag hotspots, suggest caching points, or identify expensive DB queries. Asking for a plan or critique before edits reduces risk and teaches you about your own code. Once trust builds, let it implement small, targeted changes. This pattern works well across Django apps, FastAPI services, and data pipelines.
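
As a minimal sketch of read-only usage with the Anthropic Python SDK (the model ID and file path below are assumptions; substitute whatever you actually use), you can send a file for critique without ever letting anything write to disk:

    # Read-only sketch: have a model critique code without editing anything.
    # Assumes `pip install anthropic` and ANTHROPIC_API_KEY in the environment;
    # the model name is an example, check Anthropic's docs for current IDs.
    from pathlib import Path
    import anthropic

    client = anthropic.Anthropic()
    source = Path("app/views.py").read_text()  # hypothetical file to review

    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumption: pick a current model
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "Review this Django view for N+1 queries, missing caching, "
                       "and security issues. Critique only; do not rewrite it.\n\n"
                       + source,
        }],
    )
    print(message.content[0].text)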

4) Prompt like you’d brief a junior dev
“Refactor this function” is vague. Provide the same context you’d give a new teammate: the goal, design patterns, constraints (ORM, framework), and an acceptance checklist. Ask for a step-by-step plan, then implement it in stages. This reduces flailing and keeps the model inside your architectural guardrails; an example brief follows the links below.

  • Links and tools:
    • ruff.rs (codify style so agents conform automatically)
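
As an illustration, a brief might look like the following. The file names and details are hypothetical; the point is the level of specificity, mirroring what you'd hand a new teammate:

    Goal: Move the payment-retry logic out of views.py into a service module.
    Constraints: Django 5.x, keep the existing ORM queries, no new dependencies.
    Pattern: Follow the service-layer style already used in orders/services.py.
    Acceptance: all existing tests pass; ruff check reports no new issues; the
    view shrinks to request parsing plus one service call.
    Process: produce a step-by-step plan first and wait for approval before editing.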

5) Model choice and drift: don’t judge AI by a tiny local model
Complaints like “AI slop” often come from trying small free models with tiny contexts. Higher-end models like Anthropic’s Sonnet/Opus behave very differently on large codebases. Also, hosted models evolve and may drift. Build a quick “vibe check” suite of prompts or tasks to re-evaluate models and keep notes on which workflows each model handles well.
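
A vibe check doesn't need to be fancy. Here's a minimal sketch, where complete() is a hypothetical wrapper around whichever API or local model runner you're evaluating, and the prompts are trimmed placeholders:

    # Vibe-check sketch: a fixed battery of prompts you rerun whenever a model
    # updates, saving the outputs so you can diff behavior over time.
    # complete() is a hypothetical stand-in for your model call of choice.
    import datetime
    import json

    CHECKS = [
        ("urls", "Add a route /reports/<int:year>/ to this Django urls.py: ..."),
        ("orm", "Rewrite this queryset to avoid N+1 queries: ..."),
        ("tests", "Write two happy-path pytest tests for this function: ..."),
    ]

    def complete(prompt: str) -> str:
        raise NotImplementedError  # call your hosted API or local runner here

    def run_vibe_check(model_name: str) -> None:
        results = {name: complete(prompt) for name, prompt in CHECKS}
        stamp = datetime.date.today().isoformat()
        with open(f"vibecheck-{model_name}-{stamp}.json", "w") as f:
            json.dump(results, f, indent=2)  # keep notes; diff runs over time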

6) Git discipline for AI-sized diffs
Let the agent finish a small task, then commit. Avoid “let it run wild for an hour” because reverting becomes painful and you risk losing useful partial progress. Many teams split repos by function and have agents work against a stable API boundary, which limits cross-cutting damage. Colored terminals or separate tmux sessions can help you track which agent is working on which area.
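
In practice this can be a habit, or a tiny helper you run after reviewing each finished task; a purely illustrative sketch:

    # Git-discipline sketch: snapshot after each agent-sized task so any bad
    # run is one revert away. Most people just type these two commands.
    import subprocess

    def commit_checkpoint(message: str) -> None:
        subprocess.run(["git", "add", "-A"], check=True)
        # check=True raises if there is nothing to commit
        subprocess.run(["git", "commit", "-m", f"agent: {message}"], check=True)

    # e.g. after reviewing a finished task:
    # commit_checkpoint("add csv export for company leaderboard")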

7) Agents amplify whatever is in your codebase: fix technical debt first
Agents pattern-match your existing code. If your project has inconsistencies or outdated patterns, the agent will clone them faster than you can say “tech debt.” Use the tools to help identify duplication and over-coupling, then refactor so future generations follow a better pattern. Sentry and similar tools catch new issues quickly when you begin mixing human and agent edits.

  • Links and tools:
    • ruff.rs (format/lint to a single standard)
    • sentry.io (monitor production errors)

8) Concrete win: cross-repo Git refresher script built by an agent
We discussed using an agent to write a utility that recursively finds all Git repos under a folder, runs git pull in each, and reports changes. This is a perfect agent task: well-bounded, testable, and saves recurring time across multiple machines. The key is describing the environment and desired behavior, then letting the agent set up little test directories before running against your real repos.
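
The episode describes the behavior rather than the code, but a sketch of that utility might look like this:

    # Sketch of the cross-repo refresher: find every Git repo below the current
    # directory, pull each one, and report which repos actually changed.
    import subprocess
    from pathlib import Path

    def head(repo: Path) -> str:
        out = subprocess.run(["git", "-C", str(repo), "rev-parse", "HEAD"],
                             capture_output=True, text=True)
        return out.stdout.strip()

    def git_pull_all(root: str = ".") -> None:
        for git_dir in sorted(Path(root).rglob(".git")):
            repo = git_dir.parent
            before = head(repo)
            pull = subprocess.run(["git", "-C", str(repo), "pull"],
                                  capture_output=True, text=True)
            if pull.returncode != 0:
                print(f"FAILED  {repo}")
            elif head(repo) != before:
                print(f"UPDATED {repo}")

    if __name__ == "__main__":
        git_pull_all()

Run it from the folder that holds your checkouts; comparing the HEAD commit before and after the pull is what makes the "which ones changed" report cheap.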

9) Keep frontend simple so agents don’t fight your toolchain
We saw faster success when using simple, declarative frontends that avoid complex build chains. For example, using Bulma for CSS and SimpleMDE for markdown editing kept the agent focused on Python and templates rather than wrestling with NPM pipelines. Charts via Tabler and ApexCharts were quick to drop in and easy to debug.

10) Opinionated backends help agents stay consistent
Django’s conventions and project structure, or a typed FastAPI app, give the agent strong signals for where things belong. Matt’s Plushcap keeps scrapers and data importers in a separate project that talks to the Django app via a stable REST API, which limits blast radius when an agent changes code. That separation also lets you run agents in parallel on different parts without stepping on each other.
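
On the scraper side, that boundary can be as plain as an HTTP call to a stable endpoint, so an agent editing scraper code never imports the Django app's internals. The endpoint shape, fields, and environment variables below are hypothetical examples:

    # Sketch of a scraper posting results through a stable REST boundary.
    import os
    import requests

    API_BASE = os.environ.get("CONTENT_API", "http://localhost:8000/api")

    def submit_posts(company_slug: str, posts: list[dict]) -> None:
        resp = requests.post(
            f"{API_BASE}/companies/{company_slug}/posts/",
            json={"posts": posts},
            headers={"Authorization": f"Bearer {os.environ['CONTENT_API_TOKEN']}"},
            timeout=30,
        )
        resp.raise_for_status()  # the scraper never touches Django models directly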

11) Cost models and usage patterns: treat the agent like a teammate
We talked frankly about cost. A flat monthly subscription that you actually use can be a bargain if it replaces hours of repetitive work. The mindset shift is to keep a prioritized queue of agent-suitable tasks and let it churn while you handle meetings or deep work. Track usage to learn where it saves the most time and where it struggles.

12) Testing with agents: ask for happy, unhappy, then edge cases
“Write tests for this project” rarely helps. Ask the agent to generate a few happy-path tests first, review them, then request unhappy-path and edge cases. This staged approach keeps the test suite meaningful and maintainable. It also keeps you in control of what gets verified and prevents low-value “one-plus-one-is-two” tests from creeping in.
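
For instance, the staged requests might yield tests like these; the function under test and its module are hypothetical placeholders, and the staging order is the point:

    # Staged-testing sketch: happy path first, then unhappy path, then edges,
    # each batch reviewed before asking the agent for the next.
    import pytest
    from myapp.text import slugify_title  # hypothetical import

    def test_basic_title():  # stage 1: happy path
        assert slugify_title("Agentic AI in Python") == "agentic-ai-in-python"

    def test_empty_title_raises():  # stage 2: unhappy path
        with pytest.raises(ValueError):
            slugify_title("")

    def test_unicode_is_transliterated():  # stage 3: edge case
        assert slugify_title("Café Día") == "cafe-dia"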

13) Simplicity, speed, and local dev
Agent workflows benefit from fast feedback. Use a formatter, type hints, and quick-running tests so each agent cycle converges. A lean stack, minimal external steps, and clear errors reduce hallucinations and retries. Speed matters for human patience and for giving the model crisp signals about success or failure.
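
One cheap way to give both humans and agents a crisp pass/fail signal is a single fast check script. The tool choices mirror the episode (Ruff, quick tests); the script itself is a hypothetical sketch:

    # Fast-feedback sketch: one command that runs the lint, format, and quick
    # test checks, so every agent cycle ends with a crisp success/failure.
    import subprocess
    import sys

    CHECKS = [
        ["ruff", "check", "."],               # lint
        ["ruff", "format", "--check", "."],   # formatting drift
        ["pytest", "-q", "-x"],               # quick tests, stop on first failure
    ]

    def main() -> int:
        for cmd in CHECKS:
            if subprocess.run(cmd).returncode != 0:
                print(f"FAILED: {' '.join(cmd)}")
                return 1
        print("All checks passed.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())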

14) Mindset: most developers are lifelong learners, so treat agents as another tool to master
A decade ago we added Django or Flask to our toolbelts; now we’re adding agents and local model runners. The same experimental, open-minded posture applies: try ideas, keep what works, discard what doesn’t. The destination isn’t “no coding,” it’s developers as editors and architects of systems that write more of their own scaffolding.


Interesting quotes and stories

"Agentic tools are not just autocomplete. It’s like pair programming with a junior dev that can plan, try, and iterate." -- Matt Makai

"If you are going to pattern match against something you're doing in your code base, make sure it's not copying the area that you know is a problem." -- Matt Makai

"I started by using models in read-only mode: analyze my code, point out security or performance hotspots, then I choose what to change." -- Matt Makai

"Developers write more bugs when typing less. Let the agent review your hand-written code, too." -- Matt Makai

"You will never write another line of code again. I don't actually really believe that is true." -- Matt Makai

"I feel like Olama is one of those tools that I've added to my tool belt like Django or Vim or Tmux." -- Matt Makai

"Over a beer, we built an app in an hour that would have taken a week before. That’s the mind shift." -- Michael Kennedy

"Use plan mode. Make it tell you what it's going to do before it touches anything." -- Matt Makai


Key definitions and terms

  • AI model: Any trained machine learning model; LLMs are a subset focused on text.
  • LLM (Large Language Model): A text-focused model, often transformer-based, used for chat, coding, and reasoning.
  • Generative AI: Models that produce new content (text, code, images) rather than just classifying inputs.
  • Agent: A loop around a model that plans actions, uses tools (shell, editor, web), observes results, and iterates.
  • Autonomous agent: An agent with minimal human oversight that can run many steps on its own.
  • Plan mode: An agent feature that produces a step-by-step plan before making edits, letting you approve or adjust.
  • Technical debt: Suboptimal design or code that slows future development; agents will clone this unless you fix it.
  • Read-only usage: Using a model to analyze or explain code without permitting it to modify files.
  • Vibe check: A quick, repeatable set of tests or prompts to evaluate a model’s current behavior and drift.

Learning resources

Here are curated Talk Python Training courses to go deeper. Links include a simple tracking code for this episode.


Overall takeaway

Agentic AI isn’t magic and it isn’t the end of software jobs. It’s a power tool that, when given context, conventions, and boundaries, becomes a capable collaborator for planning, scaffolding, and refactoring real Python projects. The teams that win will treat agents like junior engineers: give clear briefs, review small diffs, and keep the architecture clean so the machine amplifies your best patterns instead of your worst habits. Start in read-only mode, standardize formatting and testing, and iterate toward more autonomy where it truly pays off.

Matt Makai: linkedin.com

Plushcap Developer Content Analytics: plushcap.com
DigitalOcean Gradient AI Platform: digitalocean.com
DigitalOcean YouTube Channel: youtube.com
Why Generative AI Coding Tools and Agents Do Not Work for Me: blog.miguelgrinberg.com
AI Changes Everything: lucumr.pocoo.org
Claude Code - 47 Pro Tips in 9 Minutes: youtube.com
Cursor AI Code Editor: cursor.com
JetBrains Junie: jetbrains.com
Claude Code by Anthropic: anthropic.com
Full Stack Python: fullstackpython.com
Watch this episode on YouTube: youtube.com
Episode #517 deep-dive: talkpython.fm/517
Episode transcripts: talkpython.fm
Developer Rap Theme Song: Served in a Flask: talkpython.fm/flasksong

--- Stay in touch with us ---
Subscribe to Talk Python on YouTube: youtube.com
Talk Python on Bluesky: @talkpython.fm at bsky.app
Talk Python on Mastodon: talkpython
Michael on Bluesky: @mkennedy.codes at bsky.app
Michael on Mastodon: mkennedy
Episode #517 deep-dive: talkpython.fm/517

Episode Transcript

Collapse transcript

00:00 Agentic AI programming is what happens when coding assistants stop acting like autocomplete and start collaborating on real work. In this episode, we cut through the hype and incentives to define agentic, then get hands-on with how tools like Cursor, Claude Code, and LangChain actually behave inside an established code base. Our guest, Matt Makai, now the Vice President of Developer Relations at DigitalOcean, the creator of FullStackPython, and PlushCap, shares hard-won tactics.

00:29 We unpack what breaks, from brittle "generate a bunch of tests" requests to agents amplifying technical debt and uneven design patterns.

00:38 Plus, we also discuss a sane Git workflow for AI-sized diffs.

00:43 You'll hear practical Claude tips, why developers write more bugs when typing less, and where open source agents are headed.

00:52 Hint, the destination is humans as editors of systems, not just typists of code.

00:58 This is Talk Python To Me, episode 517, recorded July 29th, 2025.

01:20 Welcome to Talk Python To Me, a weekly podcast on Python.

01:23 This is your host, Michael Kennedy.

01:25 Follow me on Mastodon where I'm @mkennedy and follow the podcast using @talkpython, both accounts over at fosstodon.org and keep up with the show and listen to over nine years of episodes at talkpython.fm. If you want to be part of our live episodes, you can find the live streams over on YouTube. Subscribe to our YouTube channel over at talkpython.fm/youtube and get notified about upcoming shows. This episode is sponsored by Posit Connect from the makers of Shiny. Publish, share, and deploy all of your data projects that you're creating using Python.

01:58 Streamlit, Dash, Shiny, Bokeh, FastAPI, Flask, Quarto, Reports, Dashboards, and APIs.

02:05 Posit Connect supports all of them. Try Posit Connect for free by going to talkpython.fm/posit, P-O-S-I-T. Matt, great to have you back on the show. Welcome back to Talk Python.

02:17 Thanks, Michael. Been a little while. Good to be back.

02:19 It has been a little while.

02:21 As I was entering your show details into the backend system for Talk Python to say what episodes are coming up in what order and so on, I have to enter the guest ID into that thing.

02:33 Yes, it could be a drop-down list, but then there's so much scrolling.

02:35 I just can't do it.

02:37 And it's multi-select.

02:38 So your guest ID is 29.

02:41 We first did an episode together about Fullstack Python when I was sitting on the couch in Stuttgart, Germany, back when I lived there.

02:49 How about that?

02:50 Wow.

02:50 All right.

02:51 Yeah, it's been a while.

02:52 I feel like that's like that credibility.

02:54 You have that like only a double digit guest ID.

02:59 Exactly.

02:59 That's a low double digits.

03:01 Yes.

03:02 This is like a pro number in racing or something.

03:05 Indeed.

03:06 Well, that was the past.

03:08 We talked about some awesome things then.

03:11 The idea of full stack Python, I believe, was a lot of the focus, but it's honestly been 10 years.

03:16 I don't remember exactly what we covered in detail.

03:19 However, I can tell you, this is definitely not a backward looking episode.

03:24 This is cutting edge stuff.

03:25 Yeah.

03:26 I mean, so much of this has changed over even three months.

03:30 I mean, we'll talk about it all, but there's like the philosophy behind all this, the holy religious wars around what developers should be doing with it.

03:40 And then let's dig into the details because that's the fun stuff.

03:44 Absolutely.

03:45 Absolutely.

03:46 I would say it's certainly one of the most controversial topics in dev space these days.

03:52 Well, it's almost like, if you remember when Django and Ruby on Rails came out and they were the new frameworks, even before all the JavaScript frameworks and everything like that, it was backend server, server-side code, and you could do so much more.

04:05 I remember this was back when I was an early professional developer.

04:08 I was working in Java with servlets in a framework called Tapestry and everything was so hard to do.

04:15 And they tried to add so many layers of abstraction.

04:17 It was like peak enterprise, Java development.

04:20 How many layers of dependency injection were at play?

04:22 Oh, yeah.

04:23 And so then I would go home at night and I'm like learning Django.

04:26 And I was like, I can't believe how much more I can get done in an hour than I did eight, nine, 10 hours a day working in the Java code.

04:34 Not because there's literally nothing wrong with Java, but it was just that the frameworks were so well built for what we were trying to do.

04:43 So that's the only parallel I personally have to like what is happening right now, which, and this is like 10X, 100X of what that is.

04:50 It's like, if you use the tools properly for the right purposes at the right time period, because like a year ago versus today is very different, you can be wildly more productive.

05:03 There's also the downsides, but there is substantial improvement to be had in certain areas of software development.

05:09 And I feel like that is actually really the big takeaway.

05:15 Among all the hype, break everything out.

05:16 It's like, there's tactical things you can use this, agentic tools for LLMs that will 10X, 100X certain things that are super annoying about software development right now

05:26 in a way that was previously impossible.

05:28 Right. Summarize my architecture.

05:30 Have you tried that compared to like, I have a hundred thousand lines of code and I've got to like study the crap out of it just to figure out what piece connects to what.

05:38 I love the roast my code.

05:40 Like, don't just tell me about it.

05:43 Like, give it to me straight here.

05:45 Like, how could this function be improved?

05:48 Roast it?

05:49 Okay, now tell me how I can improve it.

05:51 Okay.

05:53 And at the end, just give me a little bit of like ego boost because I need it after all the roasting.

05:58 Yes, exactly.

05:59 Well, it's like, okay, you roast me.

06:01 Anytime you talk me down, just include a little spirit lifting comment as well.

06:06 Well, the best part is like, okay, roast me.

06:09 Okay, if you're so smart, go fix it for me.

06:11 Yeah, you're like, oh, that's right.

06:13 It did work.

06:14 Sometimes it does actually.

06:15 Sometimes.

06:16 Which is insane.

06:17 Yeah, I think probably the best way I could say we should frame, people should frame this conversation to at least to get started with in their mind is a balanced take, right?

06:28 It's not necessarily vibe coding.

06:31 while that's hilarious and has spawned many amazing YouTube videos, right?

06:37 But it's also, it doesn't necessarily make sense to imagine that these tools don't exist.

06:43 And just out of principle say, well, I'm never going to touch them because I'm not going to replace coders with this.

06:49 It's like saying, I was just playing a bunch with Ruff, you know, the formatting tool, right?

06:54 And it's ripping through 500 files at a time on these projects I'm working on.

06:59 ruff format, ruff check --fix, bam, 37 files repaired.

07:04 I could have done that by hand and flexed my dev skills, but should I?

07:09 No, of course I shouldn't.

07:10 It takes microseconds.

07:10 Would that be the best use of your time?

07:12 It certainly does not.

07:14 And so if these tools exist, well, then I think it comes down to like, well, what is the right way to use them?

07:19 How do they go well?

07:20 How do they go poorly?

07:21 Yeah.

07:21 Are they too expensive?

07:23 That's spot on.

07:23 I mean, what I find fascinating about this shift is like most professional developers have spent their entire adult lives just like learning new technologies.

07:35 I actually don't know why this is any different.

07:37 Like the whole point of being a developer is like keeping an open mind, an open philosophy of like, what can I learn?

07:45 How should I be learning?

07:47 Just that is what appeals to me so much about being a software developer is like this mindset of I go into this year not even knowing what tools I may be using by the end of the year because they may not exist yet.

08:00 And I want to go learn them. And some of them I am going to throw immediately in the wastebasket

08:05 and other ones are going to stick with me for potentially decades. And in fact, when it comes to LLMs, I had put out a post on X and I was like, I feel like Ollama is one of those tools that I've added to my tool belt, like Django or Vim or Tmux, that will be with me potentially for a couple of decades because it is just so straightforward and it makes it so easy to use these open-weighted models. And so that's incredible. 18 months ago, I hadn't even heard of this tool. And actually, it didn't exist before, I don't know, 18, 24 months ago.

08:40 And here's something that I'm like, wow, this is going to stick with me. So I think that's what's maybe getting lost in a little bit of the hype cycle is we're developers. All we do is learn new stuff. Why is this any different from learning an open source project that you just found on

08:54 GitHub? A hundred percent. A hundred percent. I'm definitely here with you on that. I'd see in other areas like education, I see it, you know, like write this essay for me. That's very problematic if you're in 10th grade and you're supposed to learn how to write an essay, but your job is to create software that works, add features, make it awesome. Like there's not a test. Yeah.

09:15 Other than shipping.

09:45 You, if you're not a senior enough developer, you're not experienced enough, or you're not willing to dig deep enough, you suddenly are stopped. There's nothing you can do from there. You're like, no, fix it,

09:54 fix it. It doesn't matter how many times you tell the LLM to fix it or the tool to fix it, please,

09:58 or you go to jail. Yeah, exactly. Yes, there's people on the train tracks. You must fix this right now.

10:05 My grandma's life depends upon it, right? You gotta make this database query work. Yeah. Well,

10:11 I think one thing that maybe is preventing people from embracing some of this stuff is, I actually don't even think that a lot of the terminology is clear. So if you'll allow me to be pedantic for a minute, I actually think that this is often the most insightful thing that I work with people on or tell people about, which is just like, what is the difference between an AI model, an LLM, an AI agent, and some of these other things? Because actually, people use these interchangeably.

10:38 People will say Gen AI, generative AI, when they mean an agent or vice versa or an LLM.

10:44 These are not drop and replace terminology for each other.

10:48 They have very specific meanings.

10:50 And I think particularly when I've seen the traditional software companies try to all of a sudden AI all the things, this is part of why developers get so annoyed with the AI ecosystem.

11:02 Because it's like saying Django is not a...

11:07 You wouldn't say web framework.

11:08 you would say like, oh, it's an AI agent coding tool.

11:11 It's like, no, it's not.

11:12 What are you talking about?

11:12 Immediately you lose your credibility, right?

11:14 It's an AI generative web framework builder.

11:20 Right, right.

11:20 Thank goodness it's not.

11:21 Thank goodness Django, the creators and the maintainers know what they're good at.

11:26 And they're not trying to be a part of the hype cycle.

11:30 Ironically, I think Simon Willison, one of the original Django folks, is one of the leading people in AI.

11:38 I've learned so much from his writing.

11:39 Yeah, absolutely.

11:40 It's unreal.

11:40 It's amazing.

11:41 I didn't think that, go ahead.

11:42 Yeah.

11:43 I don't know how, I don't know how, I don't understand how he can be so prolific.

11:46 He is incredible.

11:47 Like he's, he's just like a true gift, to software development, having been on the leading edge of so many things.

11:54 It's amazing.

11:55 Yeah, absolutely.

11:56 So if I could just lay out real quick, maybe for folks, what an AI model is, typically, when people are talking about that.

12:04 Now, again, like there can be a little bit of nuance and gray areas with some of these definitions, but typically an AI model is something that is trained on some sort of training set.

12:13 And it's been typically trained on the transformer architecture, which is a fairly recent, last eight years kind of development. And so this is really what's allowed the breakthrough when we talk about AI now versus AI a couple of decades ago when it was like complete AI winter and no one wanted to talk about it, is we've had this breakthrough in architecture and it's not just like one breakthrough. It is a tremendous number of breakthroughs around attention and the way that things are weighted and things like that. But essentially, to boil that down, you have these AI models. Now, an AI model is not equivalent to a large language model, an LLM. An LLM is one type of AI model. So when I was at AssemblyAI, they were training, or they are training, state-of-the-art speech-to-text models, and a speech-to-text model is not an LLM. So a lot of times people will say like AI model, LLM, but those are not equivalent. An AI model is the superset, and the subset is, like, an LLM, which is one type, typically working on text modality. Although there are things called multimodal models, which could have different image inputs and image outputs and text inputs, text outputs, and stuff like that. But I think that's one thing where a lot of companies and maybe even developers who are learning about the space get confused. It's like, an AI model and an LLM, are they the exact same thing? No, there's a relationship there, but they're not the same. So then you have like generative AI, which a lot of companies just kind of like sprinkle that into everything that they talk about. It's like generative AI is really using AI models, typically LLMs, but also image generation and some other forms, to create some sort of output. So it's the generative component of it. So you have some sort of input and then there's a non-deterministic set of outputs to come out the other side.

13:52 So it'll be, for example, like draw me a red dragon breathing fire for the image generation.

13:57 And that generative AI is basically using an AI model to produce that image out the other side.

14:04 So those are some of the common terms.

14:06 And then you have AI agents, which is a lot of what we talk about or we're going to talk about, which is it is using typically an LLM, but it's typically using some sort of AI model.

14:16 And that is kind of almost think about it as like the core.

14:19 There's inputs into the system and non-deterministic outputs that come out the other side.

14:22 So you'll say something like, write me a bunch of unit tests and in Claude Code or in Cursor or in Windsurf, and then it will interpret those inputs and then produce code or some sort of output out the other side.

14:35 So I think for developers who are trying to get into, how do I even understand kind of the AI space?

14:40 It's actually really important to get that terminology correct, because otherwise you won't even know when you're reading, particularly for companies or people that aren't as familiar with the space, like what they're even talking about.

14:53 So I always like to kind of like- A lot of the companies, yeah, their job, it's to their benefit to obscure

15:00 and make it just seem like it's everything.

15:01 Yeah, they want to seem like-

15:03 Yeah, totally.

15:05 There's financial incentives by some companies to like obscure what they're doing and make it seem much more complicated so that they can sell this solution.

15:13 Oh, it takes this really complicated problem and streamlines it down to like some sort of simple solution.

15:17 And that's like often not the case.

15:18 Like when you peel it back as a developer, like, you're not really doing that.

15:22 Right.

15:23 So I think that's often where I think a lot of developers would get frustrated with the current state of the industry because you're like, no, it's not what you're doing.

15:32 You know, like you're not, you're not doing generative AI.

15:34 You're not doing actual agents because agents are, you know, and then there's like autonomous agents, which are like operating independently.

15:40 So that's one thing I think developers can like take away from the conversation is just like, is the company accurately describing what they are doing?

15:49 Yeah, I a hundred percent agree. And let me do a little sidebar rant and I'd love to get your

15:55 thoughts on this. Okay. Yeah. So when I hear people say, I've tried AI, it's a bunch of AI

16:02 slop. It just makes up a bunch of mistakes. I noticed a couple of things that are often the case when I hear those and have people avoid that, right? So one, a lot of times I feel like people are not getting the best experience. They say like, I tried this. It really wasn't for me.

16:20 It just messes up more than it provides. They're using the cheapest free models that they can find, right? If you use, you know, Claude's Opus model versus some free model, you know, like a 3 billion parameter local model, they're not even in the same category. The type of accuracy, and like, insight, and like, the context: do they understand everything, rather than, well, they only understand the last file they read? Like, that is like one half of the problem. And then I think the other, for people who are not being successful yet with this kind of stuff, it has to do with not providing enough information and context and stuff in the prompt. So I'll see, like, oh, refactor this function. It's like, well, hold on. What, where, where do you even want it to go? Right. It's just going to like, start randomly doing stuff. Here's a function. It is similar to these. And I need to move this to this kind of design pattern, keeping in mind that I'm using this ORM. And like, give it like the same amount of description you would to a junior developer who's not super familiar with your project. So you want to give that thing like possibly a multi-page write-up of what it needs to do and then ask it to plan it out and then start working through the plan, not just refactor

17:36 to be, you know, to do this or whatever.

17:39 And I think those two things are both problems.

17:42 They often go together because the cheaper models can't accept that kind of information and keep it working.

17:48 Right.

17:48 Yes.

17:49 What do you think about that?

17:50 I mean, I totally agree.

17:51 I think also to like these models, you cannot just like mix and match and replace things.

17:56 So you may have an experience with one model and also recognize that these models, even though Claude for Opus is the public name of that model, they are tweaking and tuning that model on the backend to figure out how they can more, maybe not profitably, but serve this model up at scale in a window of resources.

18:20 So even within a given model, unless it is an open weighted model that you've downloaded, like you have under complete control, these models are changing over time.

18:29 So this is like, I think that's actually been one of the most concerning parts for me as a developer.

18:35 I was like, what if I rely on a tool and it changes?

18:39 And I wake up one day.

18:40 The tests passed last week.

18:41 They don't pass this week.

18:42 Right.

18:42 I have no control or visibility.

18:43 Right.

18:44 I was productive last week or I was productive yesterday and I can no longer be productive today.

18:49 And that's setting aside any downtime or API stuff, right?

18:52 So I think the thing is, that's why I very much appreciate the vibe check, the whole concept of a vibe check, which is you get a new model or get access to a new API.

19:03 And what are the individual components that you want to test?

19:06 So even to your example of refactor, here's the ORM, that sort of thing.

19:11 I started very, when I have a model, and I use a lot of Claude Code.

19:16 I use a lot of Opus and so on now, but I'm sure this will evolve.

19:19 I would love to try it with some open weighted models soon.

19:22 And I will say something like in this URLs.py, because I work a lot with Django on the backend, I'll say update just the, I'm trying to add a new page.

19:34 It's going to be at this route.

19:35 Here is the, please update the URLs.py file for me.

19:40 And like, here's roughly what I expect, right?

19:42 Super specific.

19:42 And if it can't even get that part right, the chances of you saying, okay, now build a new page, now build a new view, now do X, Y, and Z are very small. So you kind of have like the smallest unit that you would work with and then start building up. And it is a two-way street. Like you have to build confidence in that model over time. Like what are its true capabilities? And I will say like it's a lot of work, but it's a lot of work being a software developer and learning a new open source project.

20:09 So it's actually not that different from just like, okay, pip install, new library, reading the

20:14 documentation, all those things. But it's a different format of going back and forth with

20:19 the computer. I think it encourages people to be less serious because it feels like a little chat, a little back and forth. Just, hey, I just asked a little question and it came with a cool answer.

20:28 It doesn't speak, you need to study this and you need to really, this is a skill. It's like I'm having just a chat with someone you know or whatever. Well, it does open up, even if you set

20:41 aside code generation. It opens up new capabilities that I would argue are useful for every single developer, regardless of how senior you are. So just being able to run Claude in plan mode, ask it questions about like, where might I have security vulnerabilities in my code? Or where could I refactor database queries in order to compress the number of database queries within this view, or where could I add caching that would be most impactful, and don't even have it touch your code other than to read the code. That was actually like my big kind of breakthrough with LLMs was I was like, I'm just going to use them in read only mode. I don't need them to modify my code for me.

21:22 I'm comfortable doing that myself as a developer. But once I got confident in the models being able to read the code, I was kind of like, eh, just like dip my toe in the water, like maybe modifying some things. And especially in Python, I've read a lot of scripts. I'm like updating data and I just, I don't know. It's not my favorite part of coding. So having models that can write for me,

21:41 even if it's not perfect, and then I can modify it. I need to export this from the database in that format. And if it works at all, it's perfect. If it's not going to work at all, right? It's a little risk. Yeah. And an export from a database is,

21:53 as long as you're not accidentally dropping a table, it is just a read-only, kind of like, tell me about my code. There's all these things out there that in software development are like, you have to do the analysis yourself, but if you can shortcut it with an LLM, that actually seems like a big win to me.

22:08 And I don't actually see any downside to doing that.

22:11 Like if, again, if it's, if it's a read only and you're not touching the code, that to me feels like only a win.

22:17 It was, it's time that I've saved that otherwise I would have to invest myself.

22:23 This portion of Talk Python To Me is brought to you by the folks at Posit.

22:27 Posit has made a huge investment in the Python community lately.

22:31 Known originally for RStudio, they've been building out a suite of tools and services for Team Python.

22:36 Today, I want to tell you about a new way to share your data science assets, Posit Connect Cloud.

22:42 Posit Connect Cloud is an online platform that simplifies the deployment of data applications and documents.

22:48 It might be the simplest way to share your Python content.

22:51 Here's how it works in three easy steps.

22:53 One, push your Python code to a public or private GitHub repo.

22:58 Two, tell Posit Connect Cloud which repo contains your source code.

23:02 Three, click Deploy.

23:04 That's it.

23:04 Posit Connect Cloud will clone your code, build your asset, and host it online at a URL for you to share.

23:10 Best of all, Posit Connect Cloud will update your app as you push code changes to GitHub.

23:16 If you've dreamed of Git-based continuous deployment for your projects, Posit Connect Cloud is here to deliver.

23:23 Any GitHub user can create a free Posit Connect Cloud account.

23:26 You don't even need a special trial to see if it's a good fit.

23:29 So if you need a fast, lightweight way to share your data science content, try Posit Connect Cloud.

23:34 And as we've talked about before, if you need these features, but on-prem, check out Posit Connect.

23:41 Visit talkpython.fm/connect-cloud.

23:45 See if it's a good fit.

23:46 That's talkpython.fm/connect-cloud.

23:49 The link is in your podcast player's show notes.

23:52 Thank you to Posit for supporting Talk Python To Me.

23:55 Let me give people an example of something I recently did.

23:58 And I want to kind of tie this back to like really emphasize the agentic side of things.

24:03 So I have, I just checked, I have 530 GitHub repositories on my hard drive.

24:10 I have three computers.

24:11 I have like this streaming course live recording computer that we're talking on now.

24:16 M2 Mac mini.

24:17 I have an M4 Mac mini, which I usually work on, but I also have my laptop.

24:21 And so every now and then, usually I'm on top of things, but every now and then I'll open up a project and I'll start making changes. I'm like, oh, I forgot to do git pull. Oh no, this might be, this might be bad, you know. Especially if it's like a binary file, like a PowerPoint that I'm going to use for a course, it's like there's no fixing it, right? You just, yeah, put them side by side and rename one and copy it over, but it's not great. And so I was sitting at my kitchen table on my laptop, about to take my daughter to school, and I opened up Cursor with Claude Sonnet in agentic mode and I described what I want. I said, what I would like is for you to create a script that I can run as a single command that will recursively find all the Git repositories in whatever folder I'm in, downward, do a git pull on every single one of them, and report out which ones have changed. I drove my daughter to school, which is five minutes, one way, five minutes back. I sit down. The thing was there. It had created little test directories.

25:25 It ran it. It said, it looks like it's totally good. And I literally just now have that as a

25:29 utility. I can just, when I sit down, I'm like, oh, it's been like a week since I worked on

25:33 security. Just get refreshed and just give it a moment. And then boom, you can see a report of all the stuff that's changed across all the parts. Even on something amazing like ChatGPT. You agentic. So maybe like, let's, that's a long winded way of like saying, tell us about the magic of like, what is this agent and tool using aspect for it? Cause I think when I said my rant, it's not working for people. I think a lot of times people are not doing agentic AI. They're asking an LLM to write functions or stuff, which is great, but it's not the same. Yeah. Well, I think, I would say

26:05 like kind of the biggest thing, and I mean, there's like, there's multiple attributes to it. Like, again, going back to some of the definitions, it's like you have an LLM and that LLM, if it's running ChatGPT on your web browser, it's not going to have access to all the stuff in your codebase.

26:19 Unless you have a public GitHub repository or something. But generally, when you're working in your local development environment, it's not going to have access to that stuff.

26:28 To be fair, there are some tools that will take enough context and upload it to ChatGPT. But again, what you're starting to do is you're starting to get really far away from a natural workflow and into one in which you have to bend to how the LLM is set up.

26:46 So to me, I think the simplest way that I would look at it as agentic is like, it's something that is running side by side with you.

26:53 It's almost like, I don't want to say copilot because copilot has overloaded terms with like kind of GitHub copilot, but it is essentially like, think about it almost as like pair programming with like more junior developers.

27:04 Other people have used that analogy.

27:06 It's not perfect, but it's kind of the one that I have that kind of sticks with me.

27:10 It's as close as we got, I think, honestly.

27:11 It's as close as we got, right?

27:13 You can tell it to go do stuff, and it's not just going to only read.

27:16 It'll look around.

27:18 It'll experiment.

27:19 It'll try something.

27:20 It'll see it's wrong.

27:21 It'll try something.

27:22 It's kind of independent.

27:24 And I think the easiest one for a lot of folks to have tried, because it was the one that I frankly got the most comfortable with at first, was in Cursor, which is essentially VS Code.

27:36 So, you know, cursor is like embedded within that.

27:38 And then you have different ways of interacting with like an agent mode where you're just asking to do stuff, right?

27:45 Like asking stuff about the code base or please to write this function, whatever it is.

27:50 So I do think that like that works for some folks.

27:53 For me, it was not kind of the light bulb moment.

27:55 It was kind of like where I started initially using it was I would, if I was like, oh, I need to like use this new API or I need to like kind of like develop a script or whatever it is.

28:05 It was kind of like my, because I don't normally work in VS Code, I'm like a Vim, Tmux, like for a long time, like that's kind of my natural environment.

28:12 And I was never going to like kind of adapt my own workflow.

28:15 And I think a lot of people are like that, right?

28:16 Like you're in JetBrains or you're in Xcode or whatever it is.

28:19 You don't, you, you, what there's a breakthrough for you is like to build it in a way to your work, your development workflow that just is, is natural.

28:28 And I think that's kind of, not the canonical definition, but to me is kind of most representative

28:34 of kind of like agentic programming: it's just completely a part of your workflow and you don't have to adapt. So, um, so again, like that Cursor mode is kind of like, okay, I'd use it for like one-off stuff. For me, the breakthrough was using Claude Code. And I'll actually, instead of talking about Claude Code, I'll say here was the bet that I made. My feeling was there was enough value from Claude Code, uh, based off of the videos that I've been watching, what I've been reading, and a lot of folks have been using it, that I was like, I'm a little worried about getting too reliant upon this tool, which is an API.

29:10 And I will give Anthropic a lot of credit.

29:12 They have done a lot of good work on these models, but I've also used Anthropic stuff in the past, which either had bad uptime or they were changing the model too frequently on the backside or things like that, that I was worried.

29:24 And I think that's a natural thing as a software developer, should I really use this API?

29:30 So I was like, my bet was, okay, even if I use this and it's amazing, if it goes away, I still need to be comfortable just like moving back to my workflow.

29:39 Or there could be open weighted models in the future.

29:42 Like I can just run Ollama and I could run either Claude Code or Open Code or some sort of CLI that would allow me to just do the same things, right?

29:52 I mean, may not exactly, but roughly the same things.

29:54 So that was kind of the bet that I made.

29:55 That was like the philosophical mindset shift for myself was I'm already very comfortable as a software developer.

30:01 Let me add this in a way that doesn't break my coding workflow.

30:04 I'm not adapting to these tools in a way that is unnatural.

30:09 And then I will use them in ways that I feel like are actually going to be productive as opposed to forcing them to be used in ways that almost like all the hype cycle and the news media is talking about, you will never write another line of code again.

30:22 I don't actually really believe that is true.

30:25 I don't know.

30:26 I feel like anybody who's saying that is not actually using these tools. And I actually don't think it's going that direction. So I don't know that that kind of sets the stage rather than like how I use the tools, like what was going through my mind as a software, as an experienced software developer of like, should I even make this shift? I don't know if that if you had to undergo that

30:45 as well. I did. And I guess I haven't been as introspective about it as you. But for me, The real shift for me was I'm already a good coder.

30:57 I feel very competent flying around.

30:59 Our tool chain, I know yours and mine is like kind of quite different, right?

31:03 You're very Tmux based.

31:04 I'm very much often in PyCharm jumping around there, but we all are very fluid and capable, right?

31:10 So you're feeling like this is like, I'm really productive and competent.

31:13 And the last thing I want to do is just like spend my days in a chat engine.

31:17 You know what I mean?

31:18 Right.

31:19 Like that is, that is not it.

31:20 And I guess one of the main things, I sat down with a friend of mine, we were out having beer and he had his laptop, a friend named Mark, and he had his laptop.

31:28 I said, well, let's just, man, I've been doing some really cool stuff with agentic AI.

31:31 Let me just, let's just do an example.

31:33 And over like a beer, we built the most amazing app that would literally take weeks or a week at least, you know, knowing what you're doing.

31:42 I got not, not, I'm going to learn these frameworks and then do it.

31:46 But even with, like that actually changes my perspective.

31:50 Did it use open source libraries and frameworks in order to, like, did the tool pull in a bunch of stuff and it actually stood up and built this application off of all the open source code, right?

32:01 Yes.

32:01 And it was ridiculous.

32:02 It was a FastAPI app using SQLAlchemy to talk to SQLite.

32:09 The web design was done in Bulma CSS.

32:12 It used SimpleMDE for live markdown editing.

32:18 And it just, it kept going.

32:21 And one of the things I think is really interesting, and I know you have some thoughts on this as well, is it's the more you work with like new fancy hip frameworks, the less well off you are, right?

32:33 Like I could have said Tailwind, but Tailwind's got all these build steps and all these little nuances.

32:37 Bulma is just include a CSS.

32:39 And so it didn't have, that was like a thing it didn't have to worry about and so on, you know what I mean?

32:43 Yeah.

32:44 So I find it's like, I find myself trending towards more simple code than more towards frameworks, which is the opposite.

32:52 I think.

32:52 Yeah, well, there's I'll even take it a step further, which is I very much appreciate.

32:56 I very I agree with you on the like less build steps.

33:00 And in fact, like the simpler is often better.

33:02 But I will say that I very much appreciate the opinionated tools and frameworks.

33:07 And that's why I've actually had a better experience using Claude Code with Django.

33:15 And a big piece is also, I've written thousands, if not tens of thousands of lines of code already in Plush Cap, which is typically what I'm building, if not some side scripts and stuff like that.

33:28 And I will have-

33:29 Let's do a really quick sidebar and let you introduce those projects.

33:33 Just because I know it's really nice to be able to reference back and say, when I was adding this feature or this is the, so give people something concrete, you know,

33:40 like tell us about full stack Python and Plush Cap.

33:43 - So this is full stack Python.

33:44 I wrote full stack Python.

33:46 It's like over 150,000 words, all about the Python ecosystem, ranging from like the Python language itself to web frameworks, to deployment options, and content delivery networks, APIs, all sorts.

33:59 So it was kind of like all the things around Python and it was broken down by conceptual ideas.

34:05 So like data visualization and implementations, which I feel like is something that is not particularly obvious to people that are learning to program is like you have conceptual ideas and those have specific implementations.

34:16 So a web framework is a conceptual idea across many different programming languages, but the specific implementations like a Ruby on Rails, like Rails, the framework, or Django, the framework, are the specific implementations within those programming ecosystems.

34:30 And so that's essentially how I built out this site over 10 years.

34:34 I will say that I only really work on one side project at a time.

34:39 And Fullstack Python, I felt like, kind of run its course over 10 years.

34:43 So I really haven't updated it in a few years.

34:45 I still think it's actually relevant for most of it, but some of the links are a little bit dated and things like that.

34:52 And it's not been brought into the conversation that we're having around coding agents and things like that.

35:00 It is still relevant, but I would say not as relevant as it was when I was updating it literally every day.

35:07 Right.

35:07 Back in 2016 or something like that.

35:10 Yeah.

35:10 Yeah.

35:11 And I also, I struggle a little bit because I would love to go back to working on full stack Python, but what I struggle with is you can ask LLMs about all of this stuff and it will give you very good answers.

35:22 And so a lot of what I was doing was just bringing like as an experienced software developer, like writing almost like essays on like, you can see here, like, why are web frameworks useful?

35:32 You can ask an LLM like why web frameworks are useful. And actually it'll probably give you a better answer than what I've written here, because it's going to incorporate tons of different sources. So that's where I've struggled a little bit with like, as you know, the chat models based on LLMs, especially as they've added search have gotten better. I'm not really sure how to add enough value on top of what an LLM can tell you to justify a substantial investment in writing.

36:00 And then also, the one challenge with Full Stack Python is it's a statically generated site. It is primarily Markdown built with Pelican, which is a great static site generator in Python, but it was a lot of writing and it wasn't as much coding as I wanted. The site is kind of the site, and then you add a ton of content to it. To me, I really wanted to get back to, especially now as a VP for the last several years of my career in executive level, I don't get a lot of time day in and day out to code. And so I really wanted something that I was coding every single day on nights and weekends and just doing in my spare time.

36:39 So that's kind of where I shifted to this project, PlushCap, which is at plushcap.com.

36:43 So this is like a landscape of developer-focused companies with self-service motions.

36:49 So the hypothesis behind this was like when I was at Twilio, we were tracking developer content and events and other competitors.

36:58 And it wasn't back then, but now YouTube is a really important source for teaching developers.

37:05 And so I want to just like to aggregate a lot of this data.

37:10 So that's what I've done.

37:11 It's essentially 500 developer-focused companies across all different stages from like pre-seed all the way to publicly traded companies, where they stand kind of in their competitive position.

37:21 And then like leaderboards that will show you like how many videos and views and subscribers do different companies have.

37:28 And so like there's just a lot of data analysis, a lot of data visualization, a lot that essentially just goes into, like, if you go to, if you scroll down on Airbyte and you go down to, on this page, if you scroll down to the website structure and click on the blog, like, sorry, if you click on, go back and just click on the blog or just go up a little bit, the blog posts.

37:52 Gotcha. I see.

37:53 Under content category, this lays out the number of blog posts that was published per month by the company. And so just being able to visualize like their content patterns and trends is like, been helpful for me as I talk to companies about their go-to-market motions with developers and developer marketing, things like that. So anyway, this is a Django project running on DigitalOcean, Cloudflare as a CDN, a ton of Python, a ton of database stuff on the back end. So for me, I just love digging into this and I use coding agents and they've really greatly accelerated what I can actually do with this.

38:34 Because what I'll do is I'm in meetings all day.

38:37 So in the start of the day, I'll just tee up like a bunch of stuff that I want to get done.

38:41 And then throughout the day, as I'm like getting out of meetings, I can just hit like, okay, okay, like good.

38:45 And I just run it all on my own computer.

38:48 And then I've got my work computer, I got my own computer where I'm like running all this stuff.

38:52 And like periodically when I'm taking a break, I'm gonna be like, okay, yep, that looks good.

38:55 Or no, I don't do that thing, right?

38:57 So to me, it is like having a development team, even just a small development team, because I'm not doing like huge stuff to actually implement things. So that's really where I use all these tools. And I have an established code base. I wrote the code base myself over three years by hand. So now I have all the design patterns.

39:15 I have an opinionated framework with Django. I've already chosen the vast majority of the open source libraries that I need. And it is mostly about the coding tools, pattern matching against what I've already done to create a new visualization or to create a new scraper because I'll scrape all the content from all these sites.

39:34 So if I add a new company in the database, I have to write some custom web scrapers in order to get all their data. And that stuff is annoying. I've already written hundreds of those. So I'd rather just have- You're done right now.

39:48 And then out the other side comes something that I can actually use.

39:52 And I don't have to do all the grunt work because I've done a lot of that myself.

39:55 Yeah. There's no more lessons to learn by making you- figure out their nav structure.

40:01 Yeah.

40:01 Or like, you know, I'm pulling like, okay, which CSS class corresponds to like the author name of the blog post so that I can get some structured data out of what is otherwise like unstructured data because it's every single company has a different blog format, right?

40:16 So.

40:17 Yeah, I think this, you touched on two really interesting things here.

40:20 First, well, three, I guess.

40:22 One, PlushCap is super cool.

40:23 Thanks.

40:24 Yeah.

40:24 Two, having a low stakes project that you can just kind of go mad on with agentic AI

40:32 and it's not going to end things or end badly.

40:36 And three, I think this is really interesting.

40:39 And I've noticed the same trends for me as well.

40:42 But having already created the project, having really carefully thought about design structures, like, okay, it's going to be in this folder structure and it's going to follow this design pattern over and over using these three libraries I like.

40:55 Not the most popular ones, necessarily, but the ones that I think I want to work with.

40:59 Putting that in place really limits the, oh, it just, why is it using NPM?

41:05 This is a Python project.

41:06 What is it up to?

41:06 You know, like it's way, way more focused and more likely to be aligned with what you were hoping it would do in the first place.

41:13 And you need to choose the tools that you can debug the best.

41:18 And so for me, like this is a Bootstrap front end.

41:21 There's a framework called Tabler, which is built kind of on top of Bootstrap and makes it even easier to build these admin-style interfaces, and it pulls in...

41:29 I think it's Chart.js.

41:32 Or Apex. It might be Apex.

41:33 I was using a few different charting frameworks.

41:35 So it might be Apex or something now.

41:37 But the whole point of choosing your tools and being opinionated about your tools, I think, actually helps a lot, because then you don't have to get up to speed on the framework or the library before you debug the problem that the LLM created. The LLMs are typically going to get you 95% of the way there, and you can keep prompting them, but it's often better to just go fix it yourself. There might be a one-line tweak or something like that that you need to make. Sometimes it feels like getting the last five to ten percent of polish is like finding a needle in a haystack, because it's changing too much. Whereas if you can just go in and modify it yourself, you're not going to introduce additional noise into the development process.

42:25 Yeah.

42:26 I have a question that seems out of left field, but it's not.

42:30 What about Git and source control?

42:32 Oh, yeah.

42:33 So, I mean, that's the thing that we haven't talked about, which is wildly important. When I'm letting something run, it's either in a separate Git branch, if I'm running a bunch of things, or... I've made some simplifying assumptions with my own code base.

42:50 I've split the code base into multiple parts.

42:54 So I have scripts where the scripts are like web scrapers.

42:59 And if I'm updating things through an API, like I use the YouTube API.

43:03 I use the PyPI API.

43:07 I use a bunch of different APIs to get data into the database.

43:10 So I can have an agent running on that thing.

43:14 And that's separate.

43:15 I actually only have those things interface with an API that's running in the main Django application, against the production application database.

43:23 So I've created a layer of abstraction where I can have an agent running on this thing over here.

43:28 Then I can also have it running on the main application code base.

43:32 And as long as we're not changing anything about the API, then I know that they can make their changes separately and that they're not going to interfere with each other.
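As a rough illustration of that boundary, with an invented endpoint path and auth scheme: the scraper side never imports the Django models; it only talks to the main app over its web API.

```python
# Sketch of the scraper-to-app boundary: scrapers never touch the production
# database directly; they push results through the Django app's web API.
# The endpoint path and auth handling here are hypothetical.
import os

import httpx

API_BASE = "https://example.com/api"
API_TOKEN = os.environ["INTERNAL_API_TOKEN"]

def submit_posts(company_slug: str, posts: list[dict]) -> None:
    """Send scraped blog posts to the main application."""
    response = httpx.post(
        f"{API_BASE}/companies/{company_slug}/posts/",
        json={"posts": posts},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()  # fail loudly so a supervising agent sees it
```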

43:41 And all of this, going back to your original question about Git, I'm just constantly just doing Git add, Git commit.

43:47 The one thing I've definitely noticed about some of these tools is that they will go off and create a bunch of different files.

43:53 And so you do have to get pretty careful with it. I used to just do, you know, commit all the things:

44:01 git add dot, git commit with a little message.

44:06 And I've learned like, oh, geez, like there's some debugging files that it actually didn't remove.

44:10 I got to like manually remove those.

44:12 Now, again, like I think a lot of these tools, you can configure them to like get rid of those problems.

44:18 I think that that's the thing that's really improving quickly now.

44:21 So if you ask me what's the thing that's really annoying now, that will likely be much easier a year from now: you're going to have a standard CLAUDE.md file that gives instructions, and it's just going to be standardized.

44:35 It's like how everybody has a .gitignore file.

44:39 Like you just kind of copy it from somewhere, right?

44:42 Yeah.

44:42 And then you modify it a little bit.

44:44 But like the vast majority of it's the same.

44:45 Like that's where we're going to go, right?

44:47 Right.

44:48 Right.

44:48 You're like, oh, we've all tended towards this.

44:50 Yeah.

44:51 And so that goes back to my, you're not prompting enough sort of things, right?

44:55 Like if, if you're opinionated about how your project is built, put that in there.

45:01 And a lot of times these agentic tools have a rules file. Cursor has .cursorrules, and stuff that you might put in there is: my preferred web framework is FastAPI.

45:11 When you install things or create a virtual environment, use uv, not Python's native tooling, right?

45:19 And you'll see it when it's working.

45:20 It'll say things like, I know that I'm supposed to use uv, so it'll run uv pip list when it's checking if something got installed right.

45:28 And it'll even like sort of remind itself from what you've said.

45:30 And it's those kinds of things that keep it on track, right?
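A rules file like this is mostly plain English. As a hypothetical example of the kind of preferences being described, whether it lives in .cursorrules or CLAUDE.md:

```
# Hypothetical rules file (.cursorrules / CLAUDE.md) -- adapt to your project
- My preferred web framework is FastAPI.
- Use uv for package installs and virtual environments; never call pip directly.
- Run ruff format and ruff check --fix after every change.
- Follow the existing folder structure; do not add new top-level packages.
- Keep diffs small; one logical change per commit.
```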

45:34 So I would say one of the things people should actually put more effort into than they might initially think is setting up these rules about your general preferences.

45:43 Yes. And I think it was Armin Ronacher who had a really great tip the other day, where essentially you can use, I believe it's a hook in Claude Code, and basically if the LLM tries to run a pip install, it will throw an error, and the model will know from that error that it should be using uv. And you could add enough of these hooks. So what I anticipate is that, over time, number one, people will develop some level of standardization for these. Right now, the configuration files are very fragmented across the different agentic tools, so you can't quickly swap from one to the other. There's a Gemini CLI configuration, and there's Claude Code, which has configuration both for project-specific settings and for all your development workspaces. My guess is a lot of this stuff will start to get standardized, just like MCP was standardized as an open standard. And then it will make it much easier for people to transpose their development preferences from one tool to the other.
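A sketch of that hook idea: Claude Code's hooks can run a command before a tool call, and a blocking exit code feeds the error text back to the model. The payload shape and exit-code convention here follow Claude Code's hooks documentation at the time of writing, so double-check them against the current docs; the script itself is hypothetical.

```python
# pip_guard.py -- a PreToolUse hook sketch: block pip, steer the agent to uv.
# Wire it up in .claude/settings.json with a matcher for the Bash tool.
import json
import sys

payload = json.load(sys.stdin)  # Claude Code sends the pending tool call as JSON
command = payload.get("tool_input", {}).get("command", "")

if "pip install" in command:
    # Exit code 2 blocks the call; stderr is shown to the model, which
    # learns from the message that it should be using uv instead.
    print("pip install is blocked here; use 'uv add <package>' instead.",
          file=sys.stderr)
    sys.exit(2)

sys.exit(0)  # allow everything else through
```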

46:44 So my guess is that's probably what's coming over the next 12 to 18 months. And if the big companies don't want that to happen, they'll be left behind by the open source tools that will make that much easier. I think the other thing is, I actually don't... I don't know, maybe I'm an oddball with this. I have a .vimrc for my configuration, but I don't have an insane number of extensions, or I guess plugins, or a huge .vimrc or anything like that. I actually try to keep things relatively standard, and then just focus on the patterns that are most important, or the configurations that are most important. And I'm still kind of like that with using Claude Code. I do have, you know, a CLAUDE.md file and stuff. But I also found that it's not 100% accurate. And so I think there's going to be a lot of development around making sure that the tools adhere to the practices that you really want. Because right now, I feel like if you want reliability in code formatting, you need to run the code formatter as a separate step as part of your build workflow. The agentic tools are just not... Maybe I'm not doing it right.

47:55 Mine's reliable now.

47:56 I've gotten Claude Sonnet to know that it's supposed to run ruff format and ruff check --fix whenever it finishes anything.

48:04 And so at the end it'll say, and now I'm supposed to do this to make sure it's tidy and in the style you like, according to your ruff.toml, right?

48:12 Yeah.

48:12 And that, that makes sense.

48:13 I mean, it's not... sometimes it doesn't.

48:16 Right.

48:17 But usually it does. And when it doesn't, you're like, what's wrong with you?

48:19 I know.

48:19 Yeah.

48:20 And also I feel like with everything that you add to the agent, the more and more steps you add, again, this will change over time, but I feel like it actually gets less reliable. And given the number of steps, I would rather just have it run a script that handles all those things before I'm ready to commit, rather than having it run all the time.
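A minimal sketch of that one-script idea, assuming a uv-managed project with ruff and pytest; swap in whatever your project actually runs:

```python
# check.py -- run the full tidy-up once, before committing, instead of
# wiring each formatter and linter into the agent's every step.
import subprocess
import sys

COMMANDS = [
    ["uv", "run", "ruff", "format", "."],
    ["uv", "run", "ruff", "check", "--fix", "."],
    ["uv", "run", "pytest", "-q"],
]

for cmd in COMMANDS:
    print("->", " ".join(cmd))
    if subprocess.run(cmd).returncode != 0:
        sys.exit(1)  # stop at the first failure so the problem is obvious

print("All checks passed; safe to commit.")
```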

48:39 Because I've also found that sometimes I want to add a tool for code formatting or something, and then it slows down the whole process too much for me. The main thing with these tools is I want them to be extremely fast. And that is probably the biggest piece of advice I can give to any of these companies, and it's why Claude Code over others.

48:57 Like it is insanely fast, even though the LLMs require, you know, processing time and stuff like it's responsive.

49:03 And so like my guess is like that's also where a lot of the open source tools will go as well.

49:07 They'll just be really fast and responsive, just like Vim and tmux are.

49:10 Yeah.

49:10 Leave it to a Vim user to say it's not fast enough.

49:13 Just kidding.

49:15 Yeah, exactly.

49:16 So I think maybe just like some quick shout outs to some of the tools, right?

49:21 So Claude Code obviously is a really interesting one.

49:24 You've been talking about that a lot.

49:25 I've mentioned Cursor a few times.

49:26 As much as I'm much more a fan of the PyCharm style, I find it to be very effective.

49:33 I've done some stuff with Junie, which is kind of a new project from JetBrains.

49:37 Like JetBrains has had an AI agent that was kind of like a super duper autocompleter.

49:42 And maybe we could talk about that difference as well.

49:44 But it's like a super duper autocompleter.

49:46 But then they have a separate, completely different install that is an agentic AI called Junie, and I think it's making good steps. It really integrates with PyCharm, if you're into that.

49:57 Yeah, well, and that's been the evolution, right? We had GitHub Copilot, which was essentially a very fancy autocomplete. It would complete your code for you, but it wasn't an agent in any meaningful capacity. That's what kind of kicked off some of this stuff, like, wow, it's really great autocomplete. And I think a lot of the tools followed that pattern. Even when you would use Cursor, say, a year ago, it was very much autocomplete, at least the standard pattern, right? Before they had the agent mode.

50:26 For sure. And that never really connected with me. I'm like, I don't need that. I can just, I can auto-complete it word by word. I don't really, because half the time it'll auto-complete like three or four lines. I'm like, yes, but not that line. And how do I get it to auto-complete

50:39 the next? I guess I got to accept it then go edit that part. You know, like it's just,

50:43 I never really vibed with that, but the agentic mode, that is next level stuff.

50:48 Also, if you're a PyCharm or VS Code user, Claude Code integrates with both of them, right?

50:55 Yes.

50:57 So maybe talk about that. I know you're using Claude Code, so maybe tell people what it's like and why you like that one.

51:03 Okay, so I started using Claude Code like six months ago, but I was using it off of API credits, which was before they came out with the plans.

51:11 Started evaluating it.

51:12 I was like, this is going to be way too expensive.

51:15 Like this is going to be- Yes, it gets very expensive very fast.

51:18 I was like, based off of my usage patterns, like just like monthly, it's probably going to cost me over $1,000 a month.

51:25 I was like, this is just not, it's not worth that, right?

51:28 So then they came out with the max plan, which was I think originally $200 a month.

51:32 Now they have $100 and $200 a month, which I thought was interesting, right?

51:35 But I was like, I talked to enough people that did use it.

51:37 And I was like, that's interesting, but I'm not that comfortable with this workflow yet.

51:41 They came out with the pro plan, which was $20 a month.

51:43 They said, okay, I'll try it for a month.

51:45 It was almost like a, to me, it was almost like a free trial.

51:48 Like I'm willing to invest $20 into a course or whatever it is like 200 is like a pretty high bar, but 20 bucks.

51:54 I was like, yeah, okay, fine.

51:56 Even if it, even if this sucks, I'll just, I'll try it.

51:59 So I maxed out the pro plan.

52:01 I was like, I'm gonna get every drop out of the pro plan that I possibly can for my 20 bucks, and then I'll make a decision as to whether I want to upgrade or not.

52:08 I was like, this is good enough.

52:11 And that was only with the Sonnet model.

52:13 It wasn't the higher Opus model.

52:14 So there's a lot of stuff that was a little bit more limiting about it.

52:17 We have to give a lot more instructions.

52:19 It wasn't as good as architecting things.

52:21 But I got enough out of it.

52:22 I was like, okay, I'd be willing to upgrade.

52:25 And so I'm actually currently now on the $200 a month plan.

52:28 I think it's worth it.

52:29 I will say that it also, for me personally, it forces me to be a little bit more addicted to using it.

52:37 I want to get my money's worth.

52:38 And so there's a tool called ccusage, which was originally just for the API, but now it'll just give you your usage patterns.

52:49 Even if you're on the, you know, subscription plan. I will say, my usage yesterday, if it had been via the API, would have been $105.

52:58 So, you know, over a course of a month, if I was actually consistent, I'd be spending over $3,000 a month off of API credits, which is just not, not sustainable.
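For reference, ccusage is a community CLI that reads Claude Code's local logs and estimates what your subscription usage would have cost via the API; the exact invocation may vary by version:

```
npx ccusage@latest   # summarize daily token usage and API-equivalent cost
```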

53:06 Right.

53:06 Yeah, there might be people out there listening who think, you know what, Matt is actually crazy because $200 a month to some random AI thing when I'm a developer is insane.

53:16 But if you had somebody who worked for you, who you could give detailed instruction to and have some reasonable expectation that they can get it done independently, and you said, well, that person you hired is $200 a month, that would be insane, right?

53:29 And this, you can really turn this thing loose and get a lot done.

53:34 But I think one of the mind shifts, at least I had to make was if I'm going to pay for one of these AIs at a level like that, you better make use of it.

53:42 And you better have a plan that like, this thing is, this is how I'm going to take advantage

53:46 of it.

53:46 But when you do all of a sudden, you're like, wow, this is actually a bargain.

53:49 Yeah.

53:50 Well, and let's, let's compare it to like another like AI tool that I use, which is Descript.

53:54 And I use Descript quite a bit.

53:55 Descript is for creating, you just record video.

53:58 And then you can edit off of the transcript and it'll like edit the video.

54:02 It's not perfect, but it's very, very good.

54:05 And I'm on like, I don't know, it's like 150 bucks a year or something like that.

54:09 I use it a lot for everyone.

54:11 Like my teams are remote distributed and I like to record videos just so they can hear from me like in five to 10 minute increments of like, here's what I'm thinking about or shout outs or whatever it is.

54:20 Right.

54:20 So I use it like quite a bit internally, but like if I don't use it for a week because I'm traveling, I don't feel bad about it.

54:27 And so like, but that's 150 bucks a year.

54:30 Like I get enough usage out of it at 150 bucks a year that if I don't use it for a week, it's not that big of a deal.

54:35 200 bucks a month is like a really high bar.

54:37 And so my bet there was just like with the $20 a month plan was, if I don't use this enough, I'm going to cancel it.

54:44 And the other bet is, I think eventually the open-weight models are going to get really interesting here.

54:51 And so I just want to be on the edge of seeing this ahead of time.

54:56 And so I look at it as a little bit of an investment in my own learning.

55:02 And so to me, there's just no replacing hands-on time with the tools, right?

55:08 And so that to me is really what this is about.

55:11 And to be fair, I've used other tools, other APIs, developer tools, where I paid for them.

55:18 Browserbase, actually, which is like a web scraping tool, is awesome.

55:21 I have paid 40 bucks a month here and there to use it.

55:24 And then when I don't use it enough, I will just downgrade my plan.

55:27 And you kind of have your short list of things that you're using as a developer at any given time, as opposed to thinking, I'm going to be paying for this indefinitely.

55:36 Yeah.

55:36 Yeah.

55:36 I definitely think it's important for people to try maybe not the $200 plan, but the $20 plan or something, if they're interested in these things. Because downloading a free 3-billion-parameter model that runs on your local machine and saying, well, I tried AI and it doesn't work, is not the same as trying this type of thing.

55:56 Right.

55:56 Right.

55:56 It's, it's, it's really, really different.

55:58 Yeah.

55:58 Well, and also, I think that's why my situation works. Okay,

56:03 let's maybe set up a little framing of who I think this is most valuable for.

56:07 If you were like an elite developer and you're working on things like embedded systems or things that absolutely cannot break.

56:13 I actually think there is less value for you here.

56:15 I think the read-only mode is valuable there.

56:18 Like, analyze my C code: where could there be, you know, memory issues, buffer overflow attacks, whatever it is, right? But I think if you were building side projects... I'm not trying to monetize PlushCap, really, but, you know, maybe someday I will. I just really love building PlushCap. I think if you have a side project, and for you, you're building courses, you have your website, there's a lot there that, if you just had more bandwidth, you could build so much more cool stuff.

56:49 Like, yeah, like when you have a backlog of ideas and you don't have time to build them, that's actually where I think these tools are most valuable because you can build a lot of those ideas and some of those ideas might be stupid and then you can just throw them away.

57:03 But the ones that you, if you can just clear your backlog and come up with new ideas, because this thing is just, you're able to ship so much faster.

57:10 Even if you have to refactor by hand, that's amazing.

57:13 Yeah.

57:13 I have three projects that are like sort of side project-ish, but are like related to Talk Python in various ways.

57:21 And yeah, it's just turn the stuff loose on it and see if it's a good idea.

57:24 Who knows if it'll see the light of day, but it goes from, can I dedicate three months to that to, well, let me try an afternoon.

57:32 Yeah.

57:33 See how it comes out, you know?

57:34 Let me show you like something that I built purely off of like Claude Code.

57:39 If you go to PlushCap, I have a leaderboard of YouTube channels, by company. I actually have almost 500 companies in there, but it cuts off at a thousand subscribers. So if you go to the YouTube leaderboard, there are 231 companies listed here, and all 231 have over a thousand subscribers. Now, the thing that's interesting: I was just showing this to someone, and they were like, oh, you know what would be really interesting? You have subscribers, you've got videos, you've got views. And if you click into the number of views, this is total views over time, a visualization of how many views OpenAI has on their YouTube channel over time. They were like, what would this look like as they produce more and more videos? Is the average number of views going up over time or down?

58:34 Like, are they becoming more efficient or less efficient?

58:37 So I was like, I wonder, like I already have subscribers, videos and views.

58:42 I have the visualization.

58:43 I literally have the pattern for what I would like this to be.

58:45 And yeah, now you can see they're actually becoming less efficient over time as they add more videos, and there are different ways you can slice this data.

58:54 So I've actually got like a bunch more ideas just from creating this view.

58:58 But this view was based off of the pattern that I had already created.

59:02 And so it was just basically like using the Django ORM to pull the data, you know, create a new web page.

59:08 And I could tell whether it was correct or not and then like tweak it.

59:11 So it had the pattern and it just needed to mimic that in a way that was a new view and visualization.
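A sketch of the kind of Django view being described: reuse the existing pattern, pull the data with the ORM, and hand the points to a chart template. The model and field names here are hypothetical, not PlushCap's actual schema.

```python
# views.py -- average views per video over time for one company's channel.
from django.shortcuts import get_object_or_404, render

from companies.models import Company, YouTubeSnapshot  # hypothetical models

def youtube_efficiency(request, slug):
    company = get_object_or_404(Company, slug=slug)
    snapshots = (
        YouTubeSnapshot.objects.filter(company=company)
        .order_by("recorded_at")
        .values("recorded_at", "total_views", "video_count")
    )
    # Average views per video at each snapshot; a falling line means the
    # channel gets less efficient as it publishes more videos.
    points = [
        {
            "date": s["recorded_at"].isoformat(),
            "avg_views": s["total_views"] / s["video_count"] if s["video_count"] else 0,
        }
        for s in snapshots
    ]
    return render(request, "companies/youtube_efficiency.html",
                  {"company": company, "points": points})
```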

59:19 And I feel like that's what this is.

59:20 Pull new data, make a new page, but it's pretty much the same.

59:23 Exactly.

59:24 Yes.

59:25 So this isn't like, you know, rocket science or anything like that.

59:28 But for me, like I wanted to build this view.

59:30 Actually, the interesting part about the story is, I talked to somebody in the morning, and I shipped this when I had a quick break. I sent it over to them and said, hey, I built this thing. And actually, no, it was my boss at work at DigitalOcean.

59:45 He was like, because we're looking at our YouTube channel: I wonder what the average views are over time. And a few hours later, when I had a break, I shipped it. And he's like, this is crazy. Because we use this data as a part of

59:55 running my org, like DevRel and stuff. So yeah, absolutely. Because whenever we have an idea,

01:00:00 we're like, what does the data look like? Or what, especially when you can compare it across

01:00:03 companies, that's really helpful. Yeah. It's the perfect type of thing. I absolutely love it.

01:00:08 So let's close this out. Let's take a few more minutes and give people some concrete tips. For example, when we talked about Git: after you let it finish a job, save it and check that in. Don't let it just go wild for a while, because if it goes off and does something bonkers and you're like, oh, I hate that, that's also hours of work that's gone. You know what I mean? Use source control as a sort of save point along the way.

01:00:35 You've got a bunch of tips and stuff. Give us tips. Yes, I know. Okay, the number one thing: I actually feel like I have better clarity now around technical debt and how important it is to get rid of it. Because if you're using an agent and it's looking to pattern match against what you have in your code base, and it finds the area that you know is basically technical debt, and it copies that, and you're not really on top of looking through the code, it will often copy design patterns that you really do not want in your code base.

01:01:08 And so I actually feel like it allows you to make the argument that cleaning up technical debt is more important when you use LLMs and agents than when you were doing things by hand, where you knew, as a developer: I'm just going to ignore that section of the code and I'm not going to use it.

01:01:25 So that's like a very specific tip, which is like, if you are going to pattern match against something you're doing in your code base, make sure it's not copying the area that you know is a problem.

01:01:35 And then just like shipping that.

01:01:37 I think that's, that's one really.

01:01:38 You don't want to supercharge your bad habits.

01:01:41 Yeah.

01:01:42 Yeah.

01:01:42 I would say on top of this, you absolutely do want to clean up technical debt.

01:01:47 You can, when it's done, say: great work, that's valid, but now please apply some DRY principles and some design patterns that you know we're already using, and just polish this up one more time, and you'll get pretty good results. On the other side, if you've got a project that has a ton of technical debt, you can kind of just have this conversation with the agentic AI and say: help me understand, where are the long functions? Where is there too much coupling? Let's start breaking that down. And when it's in that mindset, it's pretty good at addressing those

01:02:20 problems. Right. And that's the flip side of the technical debt: it should arguably be easier to clean up technical debt, because you can ask the agent to be introspective. How could this be improved? What are the design patterns? And particularly, at least in Claude Code, there's a plan mode. I use plan mode, it's Shift+Tab, all the time, because then it's just going to tell you what it's going to do, not go and do it, not make any changes. And so I will ask it: how can I optimize these database queries? How can I simplify this code without oversimplifying? That type of thing. And that's actually really helpful for just identifying what changes actually should be made. Then I can make the decision: is it going to go ahead and implement those, or should I just implement them myself because they're easy enough changes to make? Awesome. Next tip. Yeah. So I feel like you've already talked a little bit about this: the context is really important and the prompting is important. I will say that where I've failed in the past, and was not happy with the LLMs, was when I'd say something like, can you just write a bunch of unit tests for this thing? And of course, it's not going to have enough, even though I give it the code that I want it to be testing. But now what I've found is, after I have it write some code, I will say: okay, now write me a few happy paths. And I'll evaluate the happy paths, and then I'll say: write the unhappy paths. And I'll look at those and how it's failing. And then: what are the edge cases? So you just have to... a lot of people are like, oh, here's how you write the perfect prompt. I actually really hate doing that, because what if I wrote the perfect prompt and it still didn't give me what I wanted out the other side? I just tend to

01:04:00 You have to tell it it's going to go to jail, please. No, I just tend to be really incremental.

01:04:05 I'm really specific. Again, going back to that junior developer mentality, it's like: just write me three happy-path tests to test this function. Give me some inputs and the expected outputs. Okay, now I want you to write the things that would basically break it, and make sure that it doesn't break. What that does is... I have not found success, or it's been a lot harder, with: write me a bunch of tests, both happy path and non-happy path. Then, depending on which model you're using or what day it is, it may or may not be correct. Also think about yourself and your own sanity when you're reviewing the changes that it's making. You really want it to be consumable: okay, those are the happy-path tests, and mentally you can check them off. If you instruct it in the way that you want to consume the information it creates out the other side, that can actually be really helpful as well.
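The output of that incremental prompting might look something like this; parse_price is a hypothetical function, just to show the happy-path-first, unhappy-path-second shape:

```python
import pytest

from pricing import parse_price  # hypothetical function under test

# Round one: a few happy paths with inputs and expected outputs.
def test_parses_plain_dollars():
    assert parse_price("$19.99") == 19.99

def test_parses_thousands_separator():
    assert parse_price("$1,299.00") == 1299.0

# Round two: the unhappy paths that should fail loudly, not silently.
def test_rejects_empty_string():
    with pytest.raises(ValueError):
        parse_price("")

def test_rejects_non_numeric():
    with pytest.raises(ValueError):
        parse_price("free!")
```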

01:05:00 Yeah. I find it's terrible if you just say: this project needs tests, write tests. It ends up testing just the smallest, most arbitrary aspects of the code. And you're like, what is this?

01:05:09 Now I've got to maintain this junk that tests that one plus one is two. No, this is not what I want from you, right? You've got to be really specific and do bite-sized pieces. And it's a little bit of the plan mode, too: maybe plan out the types of tests we should have. Okay, let's do this first one and focus. Really guide it a little bit; treat it like a junior that's kind of new to this stuff. You wouldn't tell the junior: we need to refactor the architecture, I'll see you next week. Because what are you going to

01:05:35 get back? You don't know. Yeah. Well, maybe three quick, lightning-round tips.

01:05:40 Yeah. One of them, mostly, I just thought about because I've been doing this for years: if I'm working on different projects or different parts of the code base, I will just use different terminal backgrounds, different colors. This is my blue terminal. So blue is where I've got tmux and Vim, and I'm running a bunch of stuff. Exactly.

01:06:02 And then, yeah, I think my colleague Amit tweeted about this. And then black will be where I'm running the server, running my task queue, all the stuff that's running right now for local development. Then if I'm tailing a bunch of logs on the server, that'll be in another

01:06:19 different color. So just mentally- Does this happen automatically?

01:06:22 No, I actually just open up separate terminals. And then within those, I'll have different tmux

01:06:27 sessions running, where I can just be like... again, it has nothing to do with the computer.

01:06:32 It has everything to do with me, because a lot of times I'm doing a ton of context switching. So I've found this to be applicable as a pattern for the agents. If I have an agent that works primarily on one part of the code base, I will just have that be blue, and another part of the code base will be black.

01:06:49 Right.

01:06:49 So, again, this maybe goes right into the second tip, which is: a lot of people are still debating microservices versus monolith.

01:06:58 I actually think that the perfect design pattern right now for a lot of projects is much closer to a monolith, but you split it up based on the number of agents that you want to run.

01:07:10 So like, for me, I've got like my web scraping and like data collection stuff.

01:07:14 And like, I generally just want like one, maybe two agents.

01:07:18 So like, that is like a separate project.

01:07:20 And then it interfaces with everything else through APIs, like actual like web APIs.

01:07:25 And so that is actually how I've architected things, where I ask: how many developers, er, agents do I want working on the code base at once?

01:07:32 Like, what is my actual capacity to review the code?

01:07:36 As a single developer, there's only so much capacity that I have.

01:07:39 And so I will actually architect, or I've started to architect around that.

01:07:44 But that doesn't mean I have a thousand microservices.

01:07:46 It typically means that I split the architecture into like three or four components, maybe two beyond the main application.

01:07:55 And that helps me to just feel like I'm being productive without being overwhelmed.

01:07:59 Yeah, it's also good advice just for working in small teams anyway.

01:08:03 Yeah, yeah, exactly.

01:08:04 And it's just like a little bit more like a pragmatic.

01:08:05 Some of the teams are automated, right?

01:08:06 It's crazy.

01:08:07 Yeah, it's like a pragmatic, like I'm not doing crazy microservices.

01:08:10 I'm not also doing like monolith everything, right?

01:08:13 So I think that to me, I'll just kind of be a little bit more, more pragmatic about things.

01:08:18 Yeah.

01:08:18 I wonder if you could automate it, if you could agentic-AI-code some form of automatic color profile selection.

01:08:26 I mean, yeah, they do pretty well with bash scripts.

01:08:29 I mean, it's possible.

01:08:31 I've got one last tip, which kind of bit me.

01:08:34 Luckily, I'm running Sentry, shout out to the Sentry team, for PlushCap.

01:08:40 And I caught this bug. You know, a lot of times I'm finding bugs like, oh, it's a 500 error or whatever.

01:08:46 It's not the end of the world.

01:08:47 Like I go and fix it.

01:08:48 Lately, I've definitely written more bugs myself by hand.

01:08:53 Now that I have a mixed sort of toolkit.

01:08:56 It's just something to be mindful of: I'm so used to having bits and pieces written by an agent.

01:09:05 And then I review it, so that when I'm writing code by hand, it's almost like overconfidence. It's like, oh, I review a lot of code.

01:09:11 I actually almost feel like the agent should be reviewing the code that I'm writing by hand.

01:09:15 And so I'll just say like, that's one thing that I feel like is a big tip for developers and might bite a lot of people as they spend somewhat less time writing code, especially like low level kind of like rote detail kind of stuff.

01:09:29 And then they have to, then they are like, oh, I'll go do that real quick.

01:09:31 And then they suddenly introduce a bug, and they're like, oh, geez, I

01:09:35 Yeah.

01:09:36 Yeah.

01:09:36 haven't done this by hand, with the clickety-click, for a while. Yeah, exactly. I think that we need to embrace, and be mindful of, like you're saying, a certain level of student mind when we're doing this kind of stuff. I mean, you say you've been using Claude Code for like six months and these effects are already starting to show up. Imagine 10 years down the line, right? You know what I mean? It could be seriously an issue. Yeah. I did help one person do a bunch of agentic coding, and I won't name them because I'm not sure they want to be named, but they really were like, wow, this is incredible. And the thing that surprised me, they said, was that as it's going through maybe 30 or 40 things it does to process a request, it's explaining why it's doing it. It says, okay, now I need to make sure that this runs in this way, so let's do this Django manage.py command or something like that.

01:10:32 Right.

01:10:33 And they're like, I'm actually, if I'm paying attention, I'm actually learning new things about how this is supposed to be done.

01:10:38 And in the end it says, here's why I made these changes.

01:10:41 I think it's so easy to just go great next.

01:10:44 You know what I mean?

01:10:44 Yeah.

01:10:45 But if we just slow down a little bit, you can learn a lot. I've learned a ton.

01:10:50 Like I can just hit escape and I'm like, tell me why you're doing that.

01:10:53 I'm not really sure why, why you're doing that.

01:10:56 And it'll explain it.

01:11:00 And then it's the back and forth.

01:11:00 I mean, it is like having the conversation about code that you often can't have with another developer.

01:11:07 It's really expensive to take your whole team's time to talk through some change to the code base.

01:11:12 You can have all those conversations.

01:11:14 And I personally find as someone who is relatively introverted, it is not taxing on me to just have that conversation with the computer.

01:11:20 So I can still have all the conversations with the humans and not be drained by the conversation with the computer.

01:11:26 Yeah.

01:11:27 Yeah, that's awesome.

01:11:27 And think of the opportunity for people who either are sole developers or they live in places where they can't go to user group meetup type things.

01:11:35 Yeah.

01:11:36 Yeah.

01:11:36 Or they're like busy in meetings all day and they still want to be a productive developer and be hands on with this stuff.

01:11:42 And not, you know, I think a big part of it is like not getting caught up too much with the whole hype cycle.

01:11:47 I am firmly in the camp that while this is a shift, it is not that big of a shift compared to when they said, oh, software developers are going to go away because we have COBOL. Software developers are going to go away because we have code generation off UML diagrams. Software developers are going to go away because of low code, no code.

01:12:08 I get the sense, having used these tools, that a lot of companies will use them in the wrong way of thinking they can just replace developers.

01:12:16 And actually what they're doing is they're building up a massive amount of technical debt and they're going to need software developers to fix it.

01:12:22 Yeah.

01:12:22 So that, I think it's building further layers of abstraction and more code.

01:12:26 I think that you and I, and everyone else out there who was a very competent coder pre-AI, we're going to be a little bit like COBOL programmers, I think.

01:12:35 There's going to be places that are just like, look, we've just got to call some people that know how to write code and figure this out.

01:12:41 Like this stuff... the crazy analogy, okay, this is one that I read.

01:12:47 So, German U-boats from World War One have steel in them that predates, that hasn't been affected by, the nuclear bombs that were set off during and after World War Two.

01:13:04 So people will extract the steel from those wrecks, because it's not affected by the radiation.

01:13:10 I guess we're like the old U-boats.

01:13:12 They're going to extract us from retirement and be like, you still know how to program

01:13:17 in this esoteric language.

01:13:18 The LLMs can't do it.

01:13:19 I don't know.

01:13:20 Maybe that's what's going to happen.

01:13:21 Take the mothballs out.

01:13:22 That's right.

01:13:22 Yeah.

01:13:22 And we're going back into service.

01:13:25 Boy, it's an interesting time.

01:13:27 And there's a lot of opportunity as you pointed out.

01:13:30 So I want to leave people with a message of don't be scared, embrace this, see what it can do for you.

01:13:35 There's a lot of opportunity here, but Matt, you get the final word.

01:13:38 Yeah.

01:13:39 I mean, I agree.

01:13:39 I think it's, again, it's like the same philosophy we started with, which is like, I don't see this as that big of a shift compared to like a lot of the other stuff that's happened in industry.

01:13:49 Like just, I would just learn these things.

01:13:51 And frankly, like, then you are empowered to say like, I'm not going to use that because you've tried it.

01:13:57 And so like, then you have an informed opinion on it.

01:14:00 And I think that's really what matters as software developers, not to just dismiss something simply because it seems like a hype cycle, but actually just to try it.

01:14:07 And if it doesn't work and you're like, well, I have an informed opinion.

01:14:10 I will say, though, this is actually rapidly evolving.

01:14:13 And as we've talked about through our conversation, I would actually try this multiple times over several months because it does get better and it does change with the tools.

01:14:22 And so it's not just like I tried it once and I'm going to toss it away.

01:14:26 I would give it a try every few months and see if it works for, you know,

01:14:29 kind of clicks for you and kind of works for your use case.

01:14:31 Yeah, if we had this conversation pre-agentic coding agents.

01:14:34 Yeah.

01:14:35 Could be very, very different, but six months later.

01:14:38 It's just like when open source was like nascent.

01:14:42 Oh, I tried open source once.

01:14:43 If you haven't tried it in 15 years, like, I don't know what to say, right?

01:14:47 I think it's a similar thing, but it's just a little bit faster evolution.

01:14:50 Yeah.

01:14:51 Yeah, it's crazy times.

01:14:52 I mean, I'm honestly thinking maybe we should ship this episode this week, right after recording, not the two weeks out that it's scheduled, because who knows what's going to happen.

01:15:01 No, I'm just kidding.

01:15:02 But they are changing fast.

01:15:04 Well, thanks for coming back on the show.

01:15:06 Always great to catch up with you.

01:15:07 Thanks, Michael.

01:15:08 It's been great.

01:15:08 Yeah, bye.

01:15:10 This has been another episode of Talk Python To Me.

01:15:13 Thank you to our sponsors.

01:15:15 Be sure to check out what they're offering.

01:15:16 It really helps support the show.

01:15:18 This episode is sponsored by Posit Connect from the makers of Shiny.

01:15:22 Publish, share, and deploy all of your data projects that you're creating using Python.

01:15:26 Streamlit, Dash, Shiny, Bokeh, FastAPI, Flask, Quarto, Reports, Dashboards, and APIs.

01:15:33 Posit Connect supports all of them.

01:15:35 Try Posit Connect for free by going to talkpython.fm/posit, P-O-S-I-T.

01:15:41 Want to level up your Python?

01:15:43 We have one of the largest catalogs of Python video courses over at Talk Python.

01:15:47 Our content ranges from true beginners to deeply advanced topics like memory and async.

01:15:52 And best of all, there's not a subscription in sight.

01:15:54 Check it out for yourself at training.talkpython.fm.

01:15:58 Be sure to subscribe to the show, open your favorite podcast app, and search for Python.

01:16:02 We should be right at the top.

01:16:04 You can also find the iTunes feed at /itunes, the Google Play feed at /play, and the direct RSS feed at /rss on talkpython.fm.

01:16:13 We're live streaming most of our recordings these days.

01:16:16 If you want to be part of the show and have your comments featured on the air, be sure to subscribe to our YouTube channel at talkpython.fm/youtube.

01:16:24 This is your host, Michael Kennedy. Thanks so much for listening. I really appreciate it.

01:16:28 Now get out there and write some Python code.

01:16:42 *music*
