
#387: Build All the Things with Pants Build System Transcript

Recorded on Thursday, Oct 6, 2022.

00:00 If you have a large or growing Python code base and you struggle to run builds, tests, linting, and other quality checks regularly or quickly, you'll want to hear what Benjy Weinberger has to say. He's here to introduce Pants Build to us. Pants is a fast, scalable, user-friendly build system for code bases of all sizes. It's currently focused on Python, Go, Java, Scala, Kotlin, Shell, and Docker, with more languages to come, so it can even work on projects that have multiple languages at play. This is Talk Python to Me, episode 387, recorded October 6, 2022.

00:48 Welcome.

00:49 To Talk Python to Me, a weekly podcast on Python. This is your host, Michael Kennedy. Follow me on Twitter, where I'm @mkennedy, and keep up with the show and listen to past episodes at talkpython.fm, and follow the show on Twitter via @talkpython. We've started streaming most of our episodes live on YouTube. Subscribe to our YouTube channel over at talkpython.fm/youtube to get notified about upcoming shows and be part of that episode.

01:15 This episode is brought to you by The Local Maximum podcast over at localmaxradio.com, and by Microsoft for Startups Founders Hub. Get support for your startup at talkpython.fm/foundershub. Transcripts for this and all of our episodes are brought to you by AssemblyAI.

01:31 Do you need a great automatic speech-to-text API? Get human-level accuracy in just a few lines of code.

01:36 Visit talkpython.fm/AssemblyAI.

01:40 Benjy, welcome to Talk Python to Me.

01:42 Thank you. It's great to be here.

01:43 It's great to have you here. I'm quite excited to talk about Pants Build and bringing a little bit more structure and automation to the developer workflow using this tool that you all built.

01:55 Very happy to talk about it. That's something I've been very passionate about for a long time.

02:00 Yeah, you've been working on this quite a long time, as we will see. But before we get into all the details there, let's start with your story. How did you get into programming and Python?

02:07 I love this question, because I've been a software developer for over 25 years or so. I've been around for a while.

02:15 You and me both, almost the same duration.

02:17 So we've seen some stuff. But I first got into computers when I was about ten years old, and my uncle, who is a big gadget nut, bought a very, very early home computer. This was in the UK. It was the Sinclair ZX80, if anyone's familiar with that: a 1K of RAM, 8-bit machine.

02:37 I had just never seen anything like it, and I was instantly smitten by it and got really into it then, and at some point got my own home computer, and at some point realized,

02:47 oh, this is sort of a thing

02:48 I want to do for the rest of my life.

02:50 Now, granted, at that time I also thought I would play with Legos for the rest of my life. So it's not like I'm always right about that, but in this case I absolutely was.

02:58 It's pretty awesome that we get to do that, right? It's like, oh, this is such a neat little project, and I would just do this for fun. But if people pay me, I get to build even more ambitious things. And a lot of times those are just the thoughts and dreams of kids who don't know better. Right. Just wait till you get into the real world. But as programmers, that's not true. We get to do it all the time.

03:18 It's absolutely unbelievable that your childhood hobby can become your grown up profession. If you have the right hobby.

03:24 Yeah, exactly. It's important to pick good hobbies, kids. If you're out there listening and you haven't picked a hobby yet, pick a good hobby. You talked about the 1K of RAM and the retro thing. I just started working on this project last night, no, two days ago, using CircuitPython. And for that thing, for $17, I got an ESP32 Feather, a little microchip that has WiFi built in. It's got a temperature sensor, all these things, and all you need to make it go is to plug in USB-C. That thing runs at like 240 MHz, with four megs of storage, which is not much at all, but it fits in like two-thirds of your hand. And it's unimaginably powerful compared to the types of computers you're talking about, right?

04:12 Oh, that, as I said, had 1K of RAM. The Zilog Z80 was the processor, which I believe was clocked at about 4 MHz or something.

04:24 And at the time, when you're a child, a million sounds like a lot. So what do you mean, MHz? Like I can't count to a million?

04:30 Yeah. That's unbelievable. That's right. And here we are, well past the early Pentiums, for $17. Anyway, it's really interesting to think about the different types of computers that we have to work with and where we all start. The other thing I always find interesting is thinking back to the early 90s and late 80s. Those computers and their interfaces were so basic, and yet the possibility that at least I experienced when I worked with them seemed so great and so amazing that even stuff today doesn't come close, where you're like, I see where this is going. It's going to be incredible.

05:05 There's definitely a joy. When you use the word basic, I'm assuming lowercase, but the interface I mention was literally the language BASIC, uppercase. Or you could just write machine code directly, and those were your only two options, essentially. So you were either, you know, 10 PRINT your name, 20 GOTO 10, or you were messing around with registers. And once you learned that, it was such a joy; essentially, you're melding with the hardware in some way. And today, obviously, mostly for good, we are 19 layers removed from the hardware. And if you're going to be removed from the hardware, Python is a good way to do it. But yeah, definitely some joy has been lost and replaced with other joys.

05:50 It's a new kind of joy. Now I pip install something, and in three lines I have a cluster of servers at my command. It's a different kind of joy than working with registers. Alright, well, let's get into the main topic. Let's talk about Pants, this project that you all have created. Its role, as we stated briefly at the beginning, is really about helping orchestrate common tasks that we have to do to build and run and prepare software. And it's only getting more and more complex, I guess. As Python grows up, as you sort of put it as we were chatting before I hit record, as it's being used on larger projects and across larger teams, the expectations of what it means to have a piece of software and run it are changing and evolving, right? So maybe we could just start by talking about some of the pain points of large, scaling software, Python in particular.

06:42 Sure. An important piece of background here that I'm sure all the listeners will be familiar with is what has happened to Python in the last ten to 15 years. I mean, I started programming in Python about 20 years ago, on Python 2.2. It was sort of fancy Bash at that point. It was just the language, the little bit of glue you used around the edges of your real code, which was written in C or whatever. Fast forward to today, and if you look at the progress, particularly over the last ten years, Python is this absolutely critical language that has grown up incredibly and is now being used to build large systems. It's the language of choice for data science, it's the language of choice for DevOps. It is an absolutely crucial language that large, growing, scalable code bases are increasingly being built out of. Presumably everyone listening is a fan of Python, as I am, and there are good reasons why this is the case. But we're still a little behind the curve on the tooling that you need to grow a Python code base. The standalone Python tooling, and there are so many great tools in that toolkit, is pretty single use, and it tends to be designed around the assumption that you have a small, standalone binary, and that's the only thing your code base contains. But increasingly you have these large, growing code bases, sometimes referred to as mono repos, where you have a lot of Python code, possibly code in other languages as well. You're trying to share code across a bunch of different projects and a bunch of different binaries. You may be deploying out of a single code base: you want to deploy many microservices, you want to deploy AWS Lambdas, you want to deploy many different Docker images. You have this complexity that you need to manage. With other languages, there have been tools around to help you deal with this problem. For more contemporary languages, many of these solutions come standard with that language's toolchain: Go comes with a Swiss Army knife, and Rust comes with a Swiss Army knife. Python does not, for better or worse, come with the Swiss Army knife; there are many, many blades out there. And Pants, particularly in its focus on Python, is designed to help you grow and scale that Python code base, so that all of the steps you need to take to go from authoring some code to having it be validated, tested, checked, have passed all its quality control checks, and be ready to be deployed or used in production happen for you. Rather than you manually having to figure out, well, which tools do I need to invoke, in which order, how do I ensure that they are consistent, how do I do the least amount of work in the least amount of time that is necessary to assure those quality checks? That is all a very automatable problem, and that's essentially what Pants is. It can look at your code base, it can look at changes to your code base and say, oh, you want to run tests? Here's the actual work that needs to happen, and here are the tools that need to be invoked.

09:55 Sure. So maybe you want to run pytest. And as we'll learn, Pants has some great ways to speed up things like executing tests. Some of that's parallelism, some of that's going, you know what, we already did that work, nothing's changed, carry on.

10:09 Exactly. Pants is essentially the layer between either you as a developer working on your laptop, or your CI environment, and the underlying tools, of which there are so many. Pants, I think, supports well over 20 Python tools, and it's fairly easy to add more. So, for example, with tests being a very important example, you can say, hey Pants, test: run all the tests on everything that is affected by my current set of changes. The system can look at your code, look at the dependencies, perform all this analysis and say, well, that means I actually need to invoke pytest on these underlying tests. And if that means I need to first install pytest in a hermetic environment, I will do that. Pants also runs everything in these hermetic sandboxes, which means processes neither consume nor create side effects, which means all of this work can be cached at a very fine-grained level. You can cache the result of an individual test, with concurrency at the level of an individual test. So if you have eight cores on your machine, you can run eight tests at the same time, and you only need to run the ones whose inputs have changed, because everything else is cached. Potentially, yeah. So that's the test example. Then you could look at another really important quality control check, which is linting and formatting. There are, I don't know, eight or ten different linters and formatters for Python, and Pants can orchestrate them all in the right order. It understands the distinction between linters, which don't modify your source code, so they can all run concurrently, and formatters, which do modify your code, so they have to run sequentially. So it's this layer that allows you to not worry about which tools do I need, how do I install them, how do I isolate them, how do I cache their results, how do I reason about concurrency and what can and can't be run together. It just takes that away from you.
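For a concrete sense of that single layer, here is a sketch of the core Pants goals as they appear in the Pants 2.x docs; the `::` wildcard means "everything in the repo," and the directory path is illustrative:

```bash
./pants fmt ::       # run all configured formatters (they modify code, so they run sequentially)
./pants lint ::      # run all configured linters, concurrently, with results cached
./pants test ::      # run pytest in hermetic sandboxes, cached per test
./pants test src/python/myapp::   # or scope any goal to one directory tree
```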

12:03 One of the most powerful things for me personally, when I see how it brings all of these things together, is that a lot of times it's, well, yeah, I should probably run the linter on that, but it's fine. Maybe I should run the tests, but I didn't change that much. It was only a couple of lines of code.

12:24 There are these different steps you've got to keep in mind, and at each level: do I need to do this, do I need to remember to do it, or is it justified to disrupt what I'm thinking about? If it's just pants build or pants test and you don't have to worry about it, and the system just does it all for you, then, at least personally, I am more willing to adopt more software engineering practices and guards on my code if it doesn't feel like I have to do them. You know what I mean? If I don't actually have to remember, well, I was doing five steps and that was a lot, now I'm doing six, and I'm really sick of these steps. If it's the same number of steps and it's fast enough because of the caching and parallelism, then why not adopt it?

13:07 Exactly. And I think an underappreciated complication is when it's not an individual adopting it, but a team. So a team wants to adopt some best practices, but say you want to adopt a new linter or a new quality control check of any kind. That's more cognitive load on everyone. Now what do you do? Do you send an email to everyone on the team saying, well, now you have to run this as well? Instead, you can set up your existing build layer, Pants in this case, to apply that new linter, and it just happens and nobody has to change their workflow.

13:40 Where we want to get to, and in many cases we are very close to this, is that you run a single command as a developer, you run that same single command in CI, and the right thing just magically happens. And the ability to do the right thing magically depends on the ability to do dependency analysis, to build a fine-grained workflow, to apply concurrency and caching to it. And as I think we'll get to later, not just concurrency and caching on your machine, but remote execution in a cluster and shared remote caching, so that work is being shared.

14:18 It's not just past work that you have done, but past work that anyone individually or in CI has done.

14:24 Yeah, that's fantastic. And if you've got a large code base, that starts to pay off. I want to talk to you about mono repos, but before we get there, maybe you could just give a quick shout out to the language support. Obviously we're talking about Python tools for Python code bases, for Python developers and data scientists, but we might also live in multilingual, heterogeneous environments at work and on our projects, and we might have some Kotlin for a mobile app and our Python for APIs or something like that.

14:57 So yes, Pants is not a Python-only tool, but it is a Python-first, or Python-centric, tool. There is a long history to the project, but the current iteration of Pants, which we very unimaginatively called Pants v2 because we're not great at naming, launched almost exactly two years ago, two years ago at the end of the month, with support for just Python. Since then we've added support for Go, for Java, for Scala, for Kotlin, for Shell. The next thing we're looking at very closely, obviously, is JavaScript and TypeScript; we can't ignore those. But one of the things that makes Pants stand out, and the P in the name is no accident, is the recognition that Python is no longer this third cousin that you sort of put at the end of the list of languages to deal with, sort of an afterthought, where really the thing is designed for Java. Python was really part of the big driving use cases. But if you have many languages in your repo, or if you have no Python at all in your repo, Pants is still a useful tool for you, because you still get the benefit of all the analysis that it does on your behalf.

16:06 And people will see that Pants is written with the top layer in Python and the lower layer in Rust, and the extensibility layer is in Python. So there's a lot of Python-first there, as you say. But I did want to call out that it does work with these other languages. So if you're trying to adopt some kind of automation that involves multiple languages, this might work for you.
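As a sketch of what that multi-language setup looks like, languages and tools are switched on per backend in `pants.toml`; the backend names follow the Pants 2.x docs, and the version pin is illustrative:

```toml
# pants.toml: enable one backend per language or tool Pants should understand
[GLOBAL]
pants_version = "2.13.0"  # illustrative pin
backend_packages = [
  "pants.backend.python",  # the Python-first core discussed here
  "pants.backend.go",
  "pants.backend.shell",
  "pants.backend.docker",
]
```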

16:28 Yes, and we are always interested in people who have opinions about what the next languages we should support should be. Obviously, as I said, JavaScript and TypeScript are very high on the list. I suspect Rust is very high on the list, partly because we use it and partly because it is very up and coming, and for very good reasons.

16:47 This portion of Talk Python to me is brought to you by the Local Maximum podcast. It's an interesting and technical podcast that dives into trends in technology, stats and more. But rather than tell you about it, let's hear from Max and Aaron about their show.

17:00 We are now on with Talk Python to me. Let's say hi to all the Python fans.

17:05 Hi, Python fans.

17:06 I'm Max Sklar. I have actually done a lot with Python myself, so I am a fan of Talk Python. Do you know Python, Aaron?

17:14 I took a course years ago, but

17:15 I am a little rusty.

17:16 We are here today to talk about our podcast, The Local Maximum. We've been on a roll lately with a new episode every week, and I wanted to share with you what we've been up to. Here on The Local Maximum, we tackle subjects in software and technology, topics as diverse as the philosophy of probability to Elon Musk's next move. For Talk Python listeners, I want to highlight a couple of recent episodes of The Local Maximum. In episode 248, for example, I found out about an open source library that maps the world into hexagons and some pentagons. I had a discussion with an author about games and puzzles, and another on a novel approach to doing the job search.

17:52 Well, we discussed the ramifications of AI-generated art.

17:56 Have we reached peak creativity, or is this just another Local Maximum?

18:00 So check out The Local Maximum podcast available on your podcast app.

18:06 All developer tools kind of come of age when they can make themselves.

18:13 They're then fully independent, where a language or a tool builds itself with its own features. So if you can do that for Rust, then it can kind of be part of that group.

18:25 Yes, self-hosting is a major milestone in any sort of build-type project.

18:30 Indeed. All right, quick question from the audience. Mustafa out there says: how does Pants handle bulk publishing of packages, where I might have a set of preconditions to auto publish, at intervals, all packages that meet those conditions, or something along those lines?

18:43 Great question. So Pants, I should mention, can build many different types of deployables. I mentioned AWS Lambdas or Google Cloud Functions, or we have a format that's specifically of interest to Python users called PEX, which stands for Python Executable. It's basically a single file that contains your Python code and all of its transitive dependencies, so it is ready to run as long as there's a Python interpreter on the system you run it on, and it even knows how to find that interpreter.
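A minimal sketch of what declaring a PEX looks like in a BUILD file, with illustrative target names and entry point; note that the dependency list is inferred from imports rather than written by hand:

```python
# BUILD
python_sources(name="lib")

pex_binary(
    name="app",
    entry_point="myapp/main.py",  # Pants walks the import graph from here
)
```

Running `./pants package path/to:app` would then drop a self-contained `app.pex` into `dist/`.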

19:13 Interesting. PEX came from you guys?

19:15 Yes.

19:16 I had no idea. I mean, I've heard of PEX, but I didn't associate it with Pants. That's cool.

19:20 Yeah, I mean, other systems can also build PEXes, and PEX has a standalone command line tool that you can use to build PEXes. But Pants is the home base of PEX. I think the question was about building and publishing Python distributions, for example to PyPI, which Pants can obviously do. And I'm not 100% sure I'm answering the question appropriately, but I think one of the ways Pants can help you here is that it knows when code has changed. So if you're publishing a large number of packages from your repo, by tracking dependencies it can do the math to say, based on the changes since the last time this thing was published, it has now changed and needs to be republished. So it can give you a lot of that logic when it comes to auto publishing. At intervals, I guess I would say Pants can tell you whether a package meets the conditions, based on dependency analysis, etc., but there is no auto publishing per se. Pants is a tool you have to invoke, right? So you could cron around it or something like that.
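That "what changed since the last release" logic maps onto the `--changed-since` option; a sketch, where `v1.2.3` is an illustrative git tag and the flag names follow the Pants 2.x docs:

```bash
# Which targets transitively changed since the release tag, i.e. need republishing?
./pants --changed-since=v1.2.3 --changed-dependees=transitive list

# Rebuild just those artifacts
./pants --changed-since=v1.2.3 --changed-dependees=transitive package
```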

20:27 Yeah.

20:28 Cool.

20:28 You mentioned mono repos. Now, you also mentioned sharing code. If I've got, say, some SQLModel definitions that point at what my database looks like, well, my API code probably needs access to the right version of those, but so do my data scientists, for their library that uses SQLModel to get the data into their notebooks. And if those things get out of sync, as we know, SQLAlchemy will go bonkers and say, you're missing a column here, done, crash, right? So keeping that stuff in sync across these different projects can be challenging. Is that the idea behind these mono repositories?

21:10 Yes, that's one of the reasons why they are increasing in popularity. The problem is, as your code base grows... when your code base is small, there are no problems. As your code base grows, you're faced with a decision on how to manage that. One option is to keep breaking it up into multiple smaller repos, essentially each one with its own build and its own practices, and the way you consume code across them is through versioned publishing.

21:38 Maybe you make your SQLModel definitions a data package and you publish it to an internal PyPI, and everyone consumes it and they pin their versions.

21:48 Exactly. The problem with that... well, there are several problems with it, but a big one is that you're inviting the famous dependency hell problem, which is already bad enough with third party requirements, into your first party code. The problem is, when you make changes to a library, you have no way of knowing who is consuming it downstream of you, and therefore what changes they might need. Now, you might say, well, not my problem, because everything is supposed to be versioned, but that breaks as soon as anyone needs to upgrade anything, because now they have this horrific upgrade problem that is happening potentially weeks or months after the changes that are breaking them have happened. So you're kind of pushing the problem off.

22:29 Where a mono repo is helpful is that you get this visibility into all of the upstream dependencies. Essentially, if all the tests in the mono repo pass, you know that your changes have not broken your co-workers. You can use in-repo tools like git grep, or any kind of discovery tools, and the dependency analysis that tools like Pants offer within the repo, to find out the impact of your changes. And this is why mono repos are increasingly popular. It's not to say that because a bunch of other companies are doing it, you should be doing it, but it is instructive to note that Google and Facebook and Twitter and a lot of successful companies have gone in that direction, or in Google's case started out in that direction, right? It has to be said that with mono repos or without, you need appropriate tooling, so at some point you have to pick your poison. But the reason I am biased towards mono repos, having worked with companies that have had one unified code base and companies that have had a very fragmented code base, is that the structure of your code base tends to recapitulate the cohesion and structure of your organization itself. Everybody is collaborating on a single large repo, within reason. You don't necessarily need to have literally one repo for the entire company, but a small number of large repos with boundaries between them that make sense because they don't mutually depend on each other. That mutuality and that sharing of code creates more cohesion at the organizational level. And when you have a very fragmented code base, you tend to have a fragmented organization. Now your organization resembles a loose collection of warring tribes more than a single unified organization. So I am biased towards mono repos, and while you can use Pants very effectively even in multiple smaller repos, I do think it supports the mono repo architecture really well. And the last thing I would say about this, just to really clarify, because we get a lot of questions about it: mono repo is about your repo architecture, nothing to do with your deployment architecture. It is not the opposite of microservices. For example, if you have many microservices, you probably want them to be in a single repo, because they share, as you said, data models, they share code. And it is actually easier to deploy many microservices out of a single repo than constantly creating new teeny repos and having to go through the whole publish-and-consume dance every time you want to publish a microservice. So publishing many microservices out of a single monolithic repo is actually a common pattern, and a very effective one in my opinion.

25:08 Yeah, I guess if you have a monolith, where the code is architected into one giant thing, it necessarily means you're probably just going to have it in one repo. But if it's microservices, there may be this temptation to have, well, we've got ten microservices, so we've got 20 repos, because there's one for each service, and then the shared bits have to be broken out into their own so they can be reused.

25:30 Yes. Essentially, in a mono repo you get a much tighter development loop, because you are cutting out all of the publishing. All the intra-codebase consumption is happening at head, so you don't have this constant publishing and consuming.

25:45 Yes. I hadn't really thought about it that way, but a lot of the tools, the really good tools that we have, things like PyCharm and stuff, we can open them up and go to a function or a variable or a class and right click and say, show me all the uses of this. But if there are a bunch of different consumers of your library, you don't really know; anybody could be grabbing something and using it. In a mono repo, when it says no usages found, that means more.

26:09 Exactly, the consumption metadata on published artifacts goes the wrong way, right? The metadata that gets published with the wheel says, here's what this wheel consumes, but it has no idea who consumes it. And so if you want to figure that out, now you need a whole bunch of tooling. So why not just cut that out entirely?
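Inside the repo, that reversed question (who consumes this code?) becomes a one-liner; a sketch with an illustrative path:

```bash
./pants dependees src/python/mylib                # direct consumers of this library
./pants dependees --transitive src/python/mylib   # everything that could break
```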

26:28 Yeah, you do get things like refactor rename working across the whole company in interesting ways as well. Okay, cool. Maybe we should touch a little bit on the history of Pants.

26:41 I know Pants 1.0 has been around for a long time, and then there's this 2.0 version. Do you want to tell people a bit about the changes there? They may have experienced it a while ago.

26:52 Sure thing. So Pants, what we now refer to as Pants v1, was a project that started as an internal project at Twitter, and it was focused primarily on Scala: how can we speed up Scala builds and make them more organized and more tractable? It then got open sourced out of Twitter and was used at a handful of other Scala-using companies, notably Foursquare; Square used it as well. There were a few companies of that vintage of early-2010s Silicon Valley startups that were using Scala in a big way.

27:25 V1 is basically gone at this point. I think there's a handful of organizations still maybe using it. We're desperately trying to get them onto v2.

27:33 Pants v2 is the thing that we launched two years ago. It's a complete ground-up reimplementation. It really only shares a name and the project's home with the old one; the code is entirely new. As you alluded to earlier, one of the big differences in the implementation is that the execution engine in v2 is written in Rust and the APIs are async Python 3. That is an important difference, but the bigger difference is the design itself, which is very different in the v2 system. We learned a lot from our experience with v1, both in terms of the implementation and how to make important features like caching and concurrency just fall out of the design, rather than be this laborious thing you have to add at every corner. An equally important lesson was that, unlike many other systems, including Pants v1, which came out of a single company and were really tailored for that company's use cases, with v2 we wanted to build something that was for everyone, that any organization, large or small, could use and get value from. You shouldn't have to work at Microsoft or Google or Twitter to get this quality of build experience. Anyone should be able to. And that required looking at the use cases of a lot of organizations of different languages and different sizes. One thing we learned was nobody wants to write a ton of build metadata. If anyone's used a system like Pants v1 or Bazel or Buck or something similar: you start by potentially refactoring your code base to be what the system expects, and then you write thousands and thousands of lines of so-called BUILD files. We wanted to eliminate all of that. So the system is designed to accept your code base as it is, and it doesn't require huge amounts of build metadata. It requires small amounts that it can mostly generate. The important information, which is the dependencies, it actually infers at runtime by looking at import statements and various other tricks and heuristics for figuring out what your code's actual dependencies are. That saves a huge amount of time. It makes it a lot easier to use and a lot easier to adopt. So that's kind of why v2 came about. We wanted to build something that wasn't, here's something we built for Twitter, throw it over the wall and you can use it if you want to, but rather, here's a thing that was designed for you, designed for Python, designed to be easy to adopt, easy to use, easy to extend, with a robust API that is async Python 3, essentially. And that's where that project came from.

30:06 Yes, it's really interesting how it came from the big tech side of things, but the second take is like, well, how do we make this for all the projects, not just the large ones?

30:16 There's an interesting article, and it's quoted in various forms a lot: you're not Facebook, you're not Google, you're not LinkedIn, speaking to most people, right? I mean, there are people who actually are there, but most people who look at these architectures and how they're scaling may well think, yeah, but you just have 100 users, you don't need that much architecture and those crazy designs for what you're doing. So I can see how there would be a temptation to see this as an overly complicated system that comes along, but it looks to me like this is really easy to adopt.

30:50 It is significantly easier, and we are constantly working on automating the adoption.

30:56 One of the commands in Pants, and this is the only pants pun we've allowed ourselves in the system, is called tailor, because it, quote unquote, tailors your metadata. It does inspection of your code and generates a bunch of metadata, not including dependencies; those, as I mentioned, are inferred at runtime. This is kind of a thing you run periodically, because this is metadata that you may want to manually tweak. And so we're constantly working on making it easier and easier to adopt for real world cases. One obvious example: many repos have dependency tangles and circular dependencies, and Pants v2 can handle that where those other systems cannot, including Pants v1. And those other systems were not really designed for easy adoption, because they didn't need to be, because they were only designed to be adopted once, by a captive audience of all the developers at that company. We want to be adopted thousands of times by thousands of organizations, so we want it to be much, much easier.
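In practice that looks something like the following; the output is an illustration of the kind of BUILD file scaffolding `tailor` reports creating, not verbatim Pants output:

```bash
$ ./pants tailor
Created src/python/myapp/BUILD:
  - Add python_sources target myapp
Created src/python/myapp/tests/BUILD:
  - Add python_tests target tests
```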

31:57 This portion of Talk Python to Me is brought to you by Microsoft for Startups Founders Hub. Starting a business is hard. By some estimates, over 90% of startups will go out of business in just their first year. With that in mind, Microsoft for Startups set out to understand what startups need to be successful, and to create a digital platform to help them overcome those challenges, Microsoft for Startups Founders Hub was born. Founders Hub provides all founders at any stage with free resources to solve their startup challenges. The platform provides technology benefits, access to expert guidance and skilled resources, mentorship and networking connections, and much more. Unlike others in the industry, Microsoft for Startups Founders Hub doesn't require startups to be investor-backed or third-party validated to participate. Founders Hub is truly open to all.

32:48 So what do you get if you join them?

32:50 You speed up your development with free access to GitHub and Microsoft cloud computing resources, and the ability to unlock more credits over time. To help your startup innovate, Founders Hub is partnering with innovative companies like OpenAI, a global leader in AI research and development, to provide exclusive benefits and discounts. Through Microsoft for Startups Founders Hub, becoming a founder is no longer about who you know. You'll have access to their mentorship network, giving you a pool of hundreds of mentors across a range of disciplines and areas like idea validation, fundraising, management and coaching, sales and marketing, as well as specific technical stress points. You'll be able to book a one-on-one meeting with the mentors, many of whom are former founders themselves. Make your idea a reality today with the critical support you'll get from Founders Hub. To join the program, just visit talkpython.fm/foundershub, all one word. The link's in your show notes. Thank you to Microsoft for supporting the show.

33:48 You talked about some of the tools that you could use. Maybe we could go through that list of common tools really quickly, and you could just give us your thoughts on why you think each one is great and why you might want to adopt it, make it part of your flow. Because with Pants, you don't have to add more steps, as we said. So, some of the tools you've called out: mypy. I know you're a fan of Python 3 and type annotations. Tell people quickly about mypy.

34:12 So for those of you not familiar, mypy brings a level of rigor to your Python quality control that is fantastic. Essentially, you add type annotations to your Python 3 code and mypy performs static type checking, and it is absolutely tremendous. It is essentially a sort of compilation step for Python. It's not actually generating code, but it is performing type checks that find a wide variety of bugs and issues. I would never go back to non-type-checked Python.

34:44 Interesting. Yeah. So it's like, if I were to compile it, what would happen? We're not actually going to, but let's go through that and give you a report, sort of print out the warnings and errors that you would have seen in a compiled language, and then we'll carry on. A common way these things are referred to in Python is type hints, which kind of implies they have no effect. But with mypy it's a little bit closer, right?

35:06 Right, right. So they have no runtime effect, which is true. Well, they can have one in certain circumstances; Pants itself uses its own code's type annotations at runtime in an interesting way. But generally, running mypy is just an extremely effective quality control check on your code.

35:28 But getting set up with mypy can be complicated, and with Pants, you just say: Pants, check all my code. Or: Pants, check all my code that has changed since my last edit. And it will install mypy, and it will set it up, and it will run it.
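A sketch of that flow, assuming the mypy backend (`pants.backend.python.typecheck.mypy` in the Pants 2.x docs of this era) has been enabled in `pants.toml`:

```bash
./pants check ::                     # type-check the whole repo with mypy
./pants --changed-since=main check   # only what my edits could have affected
```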

35:45 Fantastic.

35:46 The first time you do that, you'll get many, many errors.

35:49 I'm sure. I'm sure you will. Another one you've given a shout-out to is protobuf, for protocol buffers. I haven't spoken much about those at all on the show. I mean, people know REST and JSON; they may have scars from SOAP and XML, depending on how long they've been doing this kind of stuff. But what's protobuf?

36:12 I just had XML PTSD for a second there.

36:14 Yeah, I'm sorry, I'll send a therapist your way for the show.

36:18 So protobuf is a really fabulous tool out of Google that generates code in many languages from a .proto file, which is a language-neutral interface definition language. And that works well with gRPC, which is Google's RPC framework, where it actually generates RPC code and stubs so that you can use protocol buffers

36:47 over the wire. The protocol is referred to as protocol buffers, and you can just use them as this binary interchange format that is

36:55 Very efficient of the wire exchanging binary data. If you say like, here's four bites, that's the account, and then here's the length of the string is the next four bytes, and then there's like doing that manually is super tricky and so protocol buffers is a formalization of that. And then this tool would maybe write the Python code that understands particular exchange.

37:18 Okay, so Pants knows how to do code generation in general. It supports many code generators, including Thrift, which is similar in spirit to protobuf, but protobuf is a very prominent one. And the idea is that it will generate this Python code out of this very succinct interface definition. It generates fairly elaborate Python code that can serialize and deserialize these messages and send and receive them via RPC interfaces over the wire, where the thing on the other side of that RPC interface might not be Python at all, but they're all talking the same IDL. And yeah, protos are a very efficient binary format; they use things like variable-length integer encoding, so the same message will be significantly more compact in protobuf than it will be in JSON. That said, in probably many cases, JSON is absolutely fine, right?
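A sketch of the protobuf setup: the codegen backend (`pants.backend.codegen.protobuf.python` per the docs) goes into `pants.toml`, and a BUILD file marks where the `.proto` sources live; the paths here are illustrative:

```python
# src/protos/BUILD
protobuf_sources(
    name="protos",
    grpc=True,  # also generate gRPC service stubs, not just message classes
)
```

Python code that imports the generated modules is then connected by the same dependency inference discussed earlier.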

38:10 Exactly. There's a ton of value to being able to point my web browser or Postman or something at it and see the answer, right? That goes a long way. But if you're exchanging lots of data, really low latency, as fast as you can, then JSON is probably not it, and certainly XML with namespaces and XSLT is definitely not the right thing. So this is a cool, more modern way to do it. Some other tools we've already talked about: pytest, people know what pytest is. Black, formatting, right? Auto-formatting your code, stop the indentation arguments.

38:47 Yes. Oh my God. When we adopted Black in the Pants repo, everybody, myself included, got upset for about ten minutes and then realized that far more important than which format is that there is a format, and that it is enforced automatically. I personally used to prefer two-space indents to four-space indents. The reason it's called Black, for those who don't know, is that famous line: you can have any color you want, as long as it's black. I think that was Henry Ford.

39:12 I think so.

39:12 The Model T. So basically it's a very opinionated formatter that just says, this is what Python code should look like. And you know what, I embrace our robot overlords in this case. Yes, it's absolutely wonderful. I can just run pants fmt and it just formats all the code.

39:28 There are no more arguments; the true formatting is whatever Black outputs. And Pants again makes it very easy to adopt Black. It also makes it easy, I should mention, for linters, for formatters, even for mypy; Pants has affordances in it to help you adopt them incrementally, which you kind of have to do if you have an existing code base. It takes time, and Black is very aggressive. But there are many other linters and formatters that Pants can run: docformatter and Flake8 and Pylint and Bandit and isort and so on. You may want to adopt them incrementally, and Pants has ways to help you do that. And certainly with mypy, you kind of have to adopt it in dependency order, because it relies on upstream type annotations. And so there are ways to do that, right.

40:14 Right, to get the most out of it, say we call these three functions; well, those three functions have to have type information and be valid, and I've got to start the foundation with those. Bandit's interesting. I don't know how many people run Bandit. Probably people who accept user input, or input from the Internet, probably should. Tell people about Bandit.

40:34 I'm not super familiar, I'm no expert on Bandit, but it does a spread of security checks, so it will automatically find common security issues in your code. And again, Pants makes it very easy to adopt it; you essentially just enable the Bandit plugin. Pants, I should mention, has this plugin architecture where you can opt in to whichever bits of functionality you want. So you enable the Bandit plugin and that's kind of it. The next time you run lint, it will run the Bandit checks and it will yell at you about all the security issues it's found in your code.
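Enabling that plugin is a one-line change to the backends list; a sketch using the backend name from the Pants 2.x docs:

```toml
# pants.toml
[GLOBAL]
backend_packages.add = ["pants.backend.python.lint.bandit"]
```

From then on, `./pants lint ::` includes Bandit's security checks alongside the other linters.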

41:04 That's cool. So you have these different categories: you have a test category, a lint category, and so on. And then in your configuration you can say, when I say lint, I mean these three things.

41:15 Correct. What you refer to as a category is, in Pants jargon, referred to as a goal. It's basically what you type on the command line. You type pants test, that means run tests; pants lint means run all the linters. But what all the linters means depends on your configuration.

41:30 Sure, that makes a lot of sense. Maybe one more. I know the list is kind of unbounded in a sense, but AWS Lambda, or serverless functions in general? There are probably other ones; you could probably do Azure Functions and other things as well.

41:46 The two we support at the moment are Google Cloud Functions and AWS Lambdas, yes. So Pants knows how to take your Python code and package it into a Lambda function that you can deploy to AWS, or a cloud function you can deploy to GCP.

42:00 The management of that kind of stuff is super picky, because you might have 20 functions coming out of all these different pieces of code, and did you forget to push that change to that particular function? That's really hard to keep straight.

42:11 Exactly. If you're using serverless, if you're using cloud functions, you probably have many of them, and you want a tool that can tell you which ones need to be redeployed based on changes. It's very simple with Pants to ask which cloud functions have been affected by any transitive change since this deployment tag, and it will just give you a list, and you can repackage and redeploy those without having to repackage and redeploy everything every single time.
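A sketch of what a Lambda target looks like in a BUILD file, with illustrative paths and a runtime/handler shape per the Pants 2.x docs; running `./pants package` on this target yields a zip ready to upload to AWS:

```python
# BUILD
python_awslambda(
    name="my_lambda",
    runtime="python3.9",
    handler="myapp/lambda_handler.py:handle",  # source file and function name
)
```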

42:39 Awesome. Yeah, that sounds incredibly helpful if you're using them. So the reason I wanted to go through that list: if adopting those tools and those features sounds interesting, the more you adopt, the more a tool like Pants can help lower the burden and just make that automatic, right?

42:56 Exactly. It generally just takes away the pain of, how do I adopt this tool, how do I run it, how do I configure it, how do I make sure that everyone on my team is on the same page about how to use these things? It just automates all of that away. It also supports things like running a Python REPL that contains all the dependencies of the bit of code you're interested in. Because it has this fine-grained dependency analysis, it can know, even if you have a big requirements.txt file, which subparts of those requirements and their transitive dependencies are relevant to any given binary. So now you're not managing many different sets of requirements for each of your, let's say, cloud functions or each of your Docker images. That's all happening automatically for you.

43:43 Nice. Let's talk about running these tasks, and about caching, performance, and so on. So one thing it has, right on the pantsbuild.org page, and you've hinted at this a few times, is that it speaks Git, and so it has this way of understanding changes.

44:03 Yes. Built into it is the ability to say things like, when you want to run tests: run all the tests; run all the tests in this directory; run this specific test; run all the tests that have this tag (there's a tagging mechanism where you can label things); or run all the tests that are affected by changes since some other git state. So as you're working on your laptop, you can say, all right, run the tests that could possibly be affected by my changes since the main branch. It will internally use Git tooling to figure out what that means. It'll say, okay, which files have you changed, and which things are downstream of those changes?
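Spelled out, those selection modes look roughly like this; paths, tags, and branch names are illustrative, and the flag names follow the Pants 2.x docs:

```bash
./pants test ::                                   # everything
./pants test src/python/myapp::                   # one directory tree
./pants test src/python/myapp/tests/test_api.py   # one specific test file
./pants --tag=integration test ::                 # everything labeled with a tag
./pants --changed-since=main --changed-dependees=transitive test   # affected by my branch
```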

44:50 When you say affected by, how is that determined? Does that mean it saw there's some Python code, here's my test, and my test is importing this, and these import those? Or does it do code coverage? What does it base its opinion on, of what's changed?

45:07 It bases its opinions on its view of your code's dependencies. Now, almost all of that view comes from analysis of import statements. Occasionally you may have to override it. For example, if you're using Django (we have good support for Django), Django notoriously does a huge amount of dynamic loading based on strings in settings.py, right?

45:28 Sometimes Pants can actually look at those strings and figure it out. It has a mode where you can tell it, if it sees strings in a file that look like module names, to assume that those are like imports. But sometimes that doesn't work, and so you can manually override the dependency inference and say, actually, here's a dependency that's important and you failed to infer it. Or the opposite: you can exclude a dependency that Pants mistakenly inferred, but that's extremely rare.

45:55 So it bases that on its automatic static analysis of your dependencies.
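Those manual overrides live in the BUILD file; a sketch with illustrative target addresses, where the `!` prefix excludes a mistakenly inferred dependency:

```python
# BUILD
python_sources(
    dependencies=[
        "src/python/plugins:registry",    # inference missed this (it is loaded dynamically)
        "!src/python/legacy:old_models",  # inferred by mistake; exclude it
    ],
)
```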

46:00 Cool. So one of the things that might be important when you're running these steps is, for example, the protobuf thing, right? Each step has an output, and maybe that output is consumed by something else, maybe by the tests. Maybe the tests load up the Python file that was generated to go talk to some binary blob and see if it understands it. It's really important that these run in order, right?

46:23 Yes. So from your dependencies, plus its understanding of which jobs need to consume which inputs and which jobs produce which outputs, it constructs this very fine-grained workflow graph. And that is exactly where the caching and the concurrency come from. Every node in this graph, and there could be thousands, knows exactly which inputs it needs, and the work is done in the right sequence. So if you have two pieces of work, neither of which depends on the other in the DAG sense, they are independent, they can run concurrently, and they will. Presumably, if you have multiple cores on your machine, they literally will run in parallel at the same time. But obviously, if a little work unit needs the output of some other work unit as its input, then it will wait for that. And how is all that strung together? All of the work is described via the API, which, as I mentioned, is async Python 3. You have these things called rules, which are async coroutines, and the Rust engine strings together executions of these rules based on data dependencies. The data dependencies use just regular Python 3 type annotations to describe the types of the inputs and outputs. So the engine can say, this rule needs to consume an output of this type; I have found a rule that produces an output of that type based on some other input that I already have, so I will string them together. And it will do that recursively until it ends up at the initial data, which obviously it has.
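To make that concrete, here is a toy illustration of the idea, and only an illustration: this is not the actual Pants plugin API. Each rule is an async coroutine whose type annotations declare what it consumes and produces, and a tiny engine strings rules together by matching those types:

```python
import asyncio
import inspect
from dataclasses import dataclass

@dataclass(frozen=True)
class Sources:
    files: tuple

@dataclass(frozen=True)
class LintReport:
    ok: bool

async def gather_sources() -> Sources:
    return Sources(files=("app.py", "util.py"))

async def lint(sources: Sources) -> LintReport:
    return LintReport(ok=all(f.endswith(".py") for f in sources.files))

RULES = [gather_sources, lint]

async def resolve(output_type):
    # Find a rule that produces output_type, then recursively satisfy its inputs.
    for a_rule in RULES:
        sig = inspect.signature(a_rule)
        if sig.return_annotation is output_type:
            args = [await resolve(p.annotation) for p in sig.parameters.values()]
            return await a_rule(*args)
    raise LookupError(f"no rule produces {output_type}")

print(asyncio.run(resolve(LintReport)))  # LintReport(ok=True)
```

Because every rule's inputs and outputs are fully declared, independent branches of that recursion can run concurrently and each node's result can be cached, which is exactly the property being described.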

48:06 Right. We'll start here, go down this path in that order. Some of them, though, can be parallel, right? Like Bandit and Pyflakes and mypy; those just look at the code and say, looks good or doesn't look good, and here's your warning message. So does it parallelize that by default, or just optionally?

48:25 No, it will happen by default.

48:28 I mean, you can control the amount of concurrency. Let's say you're running multiple things on a machine and you don't want it to consume all your cores; you can tune that down. But normally, by default, it will use as much concurrency as the graph allows. So every one of those small work units is a candidate for being executed concurrently with other ones. But it is also, and this is really important, a candidate for being cached. Many, many intermediate steps can be cached. And in a typical iterative run, when you're developing on your laptop and you make some changes and you run some tests, and you make some changes and you run the tests again, so much work that normally would be repeated will not be repeated by Pants, because the outputs will be pulled from cache. Yeah, that's fantastic, because every one of those nodes runs, as I said, in a sandbox with no side effects, and its inputs and outputs are statically defined, so it can be correctly cached every time. So you, as an author of a plugin, for example, don't have to think about caching or concurrency. You just write to the API, and caching and concurrency fall out of the design.

49:37 That's really neat. And I can imagine, on large code bases, things like Bandit and mypy, all those analysis tools, can take a while. And if you just changed one file, especially if you're in the mono repo business where it's not just your website but a ton of other stuff, you don't want to rerun all those things. You could really get a lot faster.

49:56 You can get a lot of speed increases that way. Yes.

49:59 One of the hesitations to adopting tools like this is when they keep rerunning from scratch over and over, and that makes them slow. The other is that while you may run it, your teammates may have less buy-in on some of these linting, formatting, and testing ideas. And if you just do that in CI, well, you check it in, it works; someone else's code gets merged with yours, you check it in again, and it breaks the build, and it can be super frustrating. Is there a way, like a pre-commit hook or some other mechanism, to encourage these to be run by everyone? You can certainly have a personnel problem.

50:40 That really depends on the organization. We tend to see adoption of Pants at the team level. Usually what happens is someone on the team who is just fed up with a not-great status quo drives adoption of it, and the other team members see it. There's obviously initial skepticism, because there always is whenever you try to introduce new tooling; I'm as guilty of this as anyone. But then there's that aha moment of, oh, things got more rigorous and faster; that is a trade-off I will gladly take. So very often, if CI runs Pants, that is in some sense the definition of the correct quality control checks, whether it's testing or linting or packaging, etc. Because you can't merge until you pass CI, there is a strong incentive for you to use the same thing CI is doing on your laptop, because if you get that to pass on your laptop, it is overwhelmingly more likely to pass in CI, which just gets you to merge faster.

51:40 Yeah, I guess it depends a little bit on how you work as a team in source control, right? If everyone can just commit to main, it's much harder to have that. But if you've got to go through kind of a git flow, where you work on your branch and then merge it into main when it's approved and CI passes, well, then all of a sudden, if CI is not passing, you're not merging, and then it trickles back until it gets fixed.

52:02 I think this is a big part of Python, quote unquote, growing up as a language. Again, it's not fancy shell scripts anymore. It is a workhorse language that people are building massive businesses and systems and data science capabilities out of, and that comes with a responsibility to be rigorous about quality control. Essentially, having really good CI and really good iterative development practices is something that is really important for these growing repos, and it's why Pants exists: to make that much, much easier and much, much faster than it would otherwise be. If you are sort of not running tests, not really running any checks, pushing directly to main, because historically Python repos were these tiny toy things where you could do that, you are asking for trouble sooner or later.

52:58 So if I'm asking, how do I get this tool that automates all of these engineering best practices, you're suggesting that maybe you start at the core and work your way out: the way that you work together as a team through source control. You formalize that a little bit, and then everything else becomes easy.

53:16 I think so. I mean, you need some way of saying what is correct. You need some way of saying, if this CI is green, that means you can merge this change; if this CI is green, it means you can deploy to production. You need some automated way of saying this code is good. Pants makes it very easy to build that ability.

53:39 And once you have that, you never want to go back. There is a hurdle you have to get over; it's less convenient than not doing any quality control, but you sort of have to...

53:49 Well, it's less convenient up front.

53:51 Less convenient up front, right. Not worry about it.

53:55 It is, but as soon as you spend the whole weekend trying to figure out why the thing doesn't work, and you're supposed to release, and it turns out it was somebody else's problem and they didn't test, then all of a sudden that little bit of work up front doesn't seem so big.

54:07 Nobody goes backwards, right? Nobody goes in the direction of fewer quality control checks. There is a point in the evolution of your repo where you start adopting them, and you just adopt more and more of them. You don't go back; it's a one-way ratchet, basically.

54:21 And with good reason, in my mind. All these conversations I've been having have been around teams, I guess, maybe seeded by the idea of it coming from places like Twitter and so on. What is the Pants story for open source repos? Like, if I were in charge of HTTPX (I'm not, but if I were), what does Pants offer me?

54:41 So we're starting to see open source repos adopt Pants now. It tends to be the larger ones, where things like, how can I speed up tests, become a question. One thing that we haven't really talked about is security, and protecting your own software supply chain, especially if you're an open source project, where you are typically part of other people's software supply chains. One of the features of Pants is that it has very strong support for lock files: universal lock files that are valid across platforms. Essentially, you generate a lock file that pins every single transitive dependency, including the SHA-256 of all of the wheel files, and Pants then knows how to very efficiently build virtual environments out of the subset of those that is actually necessary in any given situation. So if a test only needs some small subset, it will only use those, the advantage being that the test gets invalidated a lot less, because its results don't get invalidated when an unrelated upstream requirement changes.

55:51 So even for smaller repos or for open source repos, apart from all the other benefits, one benefit that I think is worth looking at is lock files and just locking down your supply chain. It means you don't have the left-pad issue and things like that. You are much more robust; your build is much less impacted by changes on PyPI, by changes in the world at large.

56:14 Yeah, on the homepage it says it has out-of-the-box support for multiple dependency resolvers in addition to these lock files, right? So is this like your own private PyPI server where you can limit what goes in there? What does that mean?

56:29 Well, I think what that was referring to is that you can have multiple of these lock files. So if you have a large code base, you might have different parts of it that genuinely need conflicting dependencies. But you can sort of say...

56:42 Okay.

56:42 Here are, like, two or three lock files that you're allowed to use. A piece of code has to pick one, or it can be compatible with multiple, but you have to pick one when you come to build a binary or something like that. So, for example, it's pretty common to have one lock file for your web application code and a different lock file for your data science code, because there are conflicts between them that can't be resolved.
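
A sketch of that split, with made-up resolve names and lock file paths; each target in a BUILD file then declares which resolve it belongs to:

```toml
# pants.toml -- hypothetical conflicting resolves
[python.resolves]
web = "3rdparty/python/web.lock"
data-science = "3rdparty/python/data_science.lock"
```

```python
# BUILD -- this target's code builds against the "web" lock file
python_sources(resolve="web")
```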

57:06 There would be no reason to install JupyterLab on your FastAPI server, right?

57:13 Well, that wouldn't happen anyway, because even if you had a single lock file that included, say, NumPy, Pants would know that nothing in your web app imports NumPy, so it wouldn't bring it in. This is more for when you actually have multiple lock files. Pants is very good about shaving the dependencies, both internal and external, down to just what you actually need.

57:35 But where multiple lock files come in is when you have conflicts, when your code base is large enough that you genuinely cannot have the entire code base be in lockstep on a single set of dependencies. We don't encourage that; it's not a great way to be. It's better if you can have a single consistent resolve across your entire code base, but it's not always possible. This is an example of where we designed for the world as it is, not the world as we would like it to be.

58:00 Yes, that makes a lot of sense. If one of the APIs is written in Django 1 and the person who built it left, and there's no reason for it to change, just don't touch it; leave that alone over there. But the other part needs to use newer libraries. Sure, that's a great example. Yeah, cool. Alright, Benjy, I think we might be getting short on time here, but let me close this out with one final question. You talked a lot about the caching, the parallelization, and the dependency understanding. So if I want to run these tests, I can just say: run since this last git tag, or whatever SHA it is you're going back to. What is your personal workflow, or what common workflows do you see, for managing that? Because at some point I'm like, okay, the stuff up to here is good; now it's been a few days, I want to move forward. I know the older stuff is good and we're not changing it. How do you manage that as the history evolves?

59:01 What's your workflow there?

59:03 I rely very heavily on the git comparison logic. I should mention, I don't code very much anymore, because I'm now the co-founder of Toolchain, which is a company that provides SaaS, support, and services around builds, Python and otherwise, and obviously that's where a lot of the Pants expertise comes from. We provide things like remote caching and remote execution as a service. So I don't code that much anymore, but occasionally, when I do, I rely very heavily on the git diff functionality. My command lines are basically... one thing Pants has is macros, where you can create these... sorry, macros is something else. What I was actually referring to is that you can create command-line aliases. So I can run, like, "pants green", literally just those words, and that expands to a slightly longer command line that says: run tests, linters, formatters, and type checking (MyPy, essentially) on all the files affected by my changes compared to main, compared to the main branch.
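
For reference, that kind of alias lives in pants.toml. This is a guess at what such an alias might expand to, not Benjy's actual config; `--changed-dependees` was the flag name in the Pants 2.x releases of this era:

```toml
# pants.toml -- hypothetical alias: "check everything my diff touches"
[cli.alias]
green = "--changed-since=main --changed-dependees=transitive fmt lint check test"
```

With that in place, `./pants green` formats, lints, type-checks (the `check` goal, MyPy under the hood), and tests everything affected by your changes relative to main.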

01:00:12 Yeah, okay, that's cool, because the stuff in main should have been verified by CI and should be all right if...

01:00:17 If it's in main, it's good. I want to see: how have I broken main? And what you can actually do is run this in a loop. Pants watches your file system for changes and automatically reruns that logic every time you save. So often, by the time I tab over to my terminal, those checks have already run, or are at least running.
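
The watch mode he's describing is, as far as I know, Pants's `--loop` flag; a minimal sketch, with an illustrative target spec:

```shell
# Keep Pants resident and re-run the goal every time a watched file changes.
# "myproject/tests/::" (all targets under that directory) is illustrative.
./pants --loop test myproject/tests/::
```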

01:00:39 Yeah, okay.

01:00:40 Yeah, that's my workflow.

01:00:42 This command alias is a cool idea as well.

01:00:45 I'm sure people will dig that. All right, well, congratulations on Toolchain. That sounds like a cool thing to be working on, and it clearly builds on a ton of work you all have been doing.

01:00:54 Thank you. Yeah, we basically feel, both on the open source side and on the company side, that you should not have to work at Google or Microsoft or Facebook to have a really fast, stable, powerful build experience. You should have that when you're a 20-person company, and when you're a 100-person company, and when you're a 2,000-person company. You should not have to wait until you're a 100,000-person company to get that.

01:01:14 Yes, there shouldn't have to be somebody whose job it is to set up CI. I mean, their whole job, not just something they do as part of their job. I guess that's the other side of the developer workflow I wanted to ask about. What does it look like? Let's suppose I have GitHub and I'm using GitHub Actions as my CI. How do I get Pants to work over there?

01:01:32 So that's an interesting area that we are looking at more and more closely and we will have some interesting announcements about that over the next few weeks.

01:01:40 And just a heads up: for people not watching the livestream, this episode will be out in probably three or four weeks, so it might actually be real as they hear these words. We'll see.

01:01:51 Maybe, fingers crossed. But it is very easy to set up GitHub Actions or CircleCI or Buildkite or whatever you're using to run Pants commands. And those Pants commands in turn handle a lot of the concurrency and caching concerns that you would normally have to really drill into your CI config to get. So it essentially makes it much, much easier to configure CI, because the complexity of "how do I get caching, how do I get concurrency, how do I speed things up?" is handled automatically by Pants. Instead of you manually writing tons of YAML, or whatever your CI provider's config format is, to get that concurrency, there's a lot of heavy lifting going on where the system itself analyzes your code and says: oh, here are opportunities for concurrency, here are opportunities for caching. Whereas today, with CI workflows of all kinds, either you do that very manually yourself or you don't get it.

01:02:56 Interesting. And maybe as part of the caching, you do just what you described for the developer workflow, right? You compare against main and run everything on that diff.

01:03:06 Exactly.

01:03:07 And CI is basically: you set up CI to call Pants, Pants does its magic, and the results come out. So you don't...

01:03:13 ...have to write tons of CI config, because so much of the reason you would have to do that is now handled by Pants itself.
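
As an illustrative sketch only, not an official recipe, a GitHub Actions job that leans on Pants for the heavy lifting might look roughly like this; the fetch-depth setting gives `--changed-since` the git history it needs:

```yaml
# .github/workflows/ci.yml -- hypothetical minimal Pants CI job
name: CI
on: [push, pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0  # full history so Pants can diff against origin/main
      - name: Lint, type-check, and test only what changed
        run: ./pants --changed-since=origin/main --changed-dependees=transitive lint check test
```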

01:03:21 Yeah, awesome. Alright, well, very cool project and definitely something to check out. Now, before you get out of here, the final two questions: if you're going to write some code, even if you do a little bit less of that these days, what editor are you using?

01:03:35 I use PyCharm, actually. Technically, I use IntelliJ with the PyCharm plugin, purely out of habit: I used to write JVM code, and I never lost the habit. But effectively, PyCharm.

01:03:47 Yeah. Right on. Cool. And then, a notable PyPI package?

01:03:51 I really like Click. We don't use it, for various reasons; we need a lot of control over the CLI. But I really like Click for just cobbling together cool tools with really good command-line interfaces.
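
For anyone who hasn't used it, here's the flavor of Click; a minimal, entirely hypothetical example of the kind of quick tool he means:

```python
# A tiny Click CLI -- names and options here are illustrative.
import click

@click.command()
@click.argument("name")
@click.option("--shout", is_flag=True, help="Uppercase the greeting.")
def hello(name: str, shout: bool) -> None:
    """Greet NAME from the command line."""
    greeting = f"Hello, {name}!"
    click.echo(greeting.upper() if shout else greeting)

if __name__ == "__main__":
    hello()
```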

01:04:02 Excellent. Let me hijack the end here just for a moment. Maybe I should have asked you this earlier: to install Pants, it's not pip install pants, is it?

01:04:11 Nope.

01:04:12 So if you go to our website, Pantsbuild.org, there are very simple steps walking through it, but essentially there's a wrapper script that does things like install Pants for you in a virtualenv and keep it up to date, so you don't have to worry about where that virtualenv is or which version of Pants is in it. It will look at the version in your Pants config file; there's a pants.toml file that contains a bunch of Pants config, and one entry is which version of Pants this repo is supposed to be using, and the script will make sure that's the version being used. So you don't pip install it. You run this script, and it does a bunch of magic on top of the vanilla virtualenv experience.
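
That version pin lives in pants.toml; a sketch, with an illustrative version number:

```toml
# pants.toml -- the wrapper script installs and runs exactly this version
[GLOBAL]
pants_version = "2.13.0"
```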

01:04:50 Fantastic. Okay. Yeah. Just pantsbuild.org/Installation, and off you go. All right. Thank you so much for being here. Final call to action. People want to get started with Pants. What do they do?

01:05:01 So, pantsbuild.org, and probably one of the best resources is our Slack channel. If you go to pantsbuild.org and click on the community link at the top, it will take you straight to how to come chat with us on Slack. Obviously you can try to get started without that, but we have a very friendly, helpful community that firmly believes there are no bad questions, only bad documentation. So Slack is a great place to sample the community. Come chat with us. Tell us about your needs, about how Pants can meet them or how it can't. It's open source, and we have a lot of contributors from all sorts of companies, organizations, and teams who started that way, got really enamored with what Pants can do, and got really involved, both in improving the developer experience at their organizations and in improving Pants itself. So really, the best call to action is: come say hi.

01:05:58 Congrats on a cool project, and thanks for coming and sharing it with us.

01:06:01 Thank you. It was my pleasure.

01:06:03 Bye.

01:06:04 This has been another episode of Talk Python to Me. Thank you to our sponsors; be sure to check out what they're offering.

01:06:11 It really helps support the show.

01:06:12 Listen to the Local Maximum podcast. Learn about topics as diverse as the philosophy of probability and Elon Musk's next move. Just search for Local Maximum in your favorite podcast player.

01:06:24 Starting a business is hard. Microsoft for Startups Founders Hub provides all founders, at any stage, with free resources and connections to solve startup challenges. Apply for free today at Talkpython.FM/foundershub.

01:06:43 Want to level up your Python? We have one of the largest catalogs of Python video courses over at Talk Python. Our content ranges from true beginners to deeply advanced topics like memory and async. And best of all, there's not a subscription in sight. Check it out for yourself at training.talkpython.FM. Be sure to subscribe to the show: open your favorite podcast app and search for Python; we should be right at the top. You can also find the iTunes feed at /itunes, the Google Play feed at /play, and the direct RSS feed at /rss on Talk Python FM.

01:07:12 We're live streaming most of our recordings these days. If you want to be part of the show and have your comments featured on the air, be sure to subscribe to our YouTube channel at talkpython.fm/YouTube. This is your host, Michael Kennedy. Thanks so much for listening.

01:07:25 I really appreciate it. Now get out there and write some Python code.
