
#425: Memray: The endgame Python memory profiler Transcript

Recorded on Tuesday, Jun 20, 2023.

00:00 Understanding how your Python application is using memory can be tough.

00:03 First, Python has its own layer of reused memory, arenas, pools, and blocks, to help it be more efficient.

00:10 And many important Python packages are built in native compiled languages like C and Rust,

00:15 oftentimes making that section of your memory usage opaque.

00:19 But with Memray, you can get way deeper insight into your memory usage.

00:23 We have Pablo Galindo Salgado and Matt Wisniewski back on the show to dive into Memray.

00:29 It's the sister project to PyStack, the one we recently covered.

00:32 This is Talk Python to Me, episode 425, recorded June 20th, 2023.

00:38 Welcome to Talk Python to Me, a weekly podcast on Python.

00:54 This is your host, Michael Kennedy.

00:56 Follow me on Mastodon, where I'm @mkennedy, and follow the podcast using @talkpython, both on fosstodon.org.

01:03 Be careful with impersonating accounts on other instances.

01:06 There are many.

01:07 Keep up with the show and listen to over seven years of past episodes at talkpython.fm.

01:13 We've started streaming most of our episodes live on YouTube.

01:16 Subscribe to our YouTube channel over at talkpython.fm/youtube to get notified about upcoming shows and be part of that episode.

01:24 This episode is brought to you by JetBrains, who encourage you to get work done with PyCharm.

01:30 Download your free trial of PyCharm Professional at talkpython.fm/done-with-pycharm.

01:38 And it's brought to you by InfluxDB.

01:39 InfluxDB is the database purpose built for handling time series data at a massive scale for real-time analytics.

01:46 Try them for free at talkpython.fm/InfluxDB.

01:51 Hey, all.

01:52 Before we dive into the interview, I want to take just a moment and tell you about our latest course over at Talk Python,

01:56 MongoDB with Async Python.

01:59 This course is a comprehensive and modernized approach to MongoDB for Python developers.

02:05 We use Beanie, Pydantic, Async and Await, as well as FastAPI to explore how you write apps for MongoDB and even test them with Locust for load testing.

02:14 And just today, yes, exactly today, the last of these frameworks were upgraded to use the newer, much faster Pydantic 2.0.

02:23 I think it's a great course that you'll enjoy.

02:25 So visit talkpython.fm/async-mongodb to learn more.

02:29 And if you have a recent full course bundle, this one's already available in your library of courses.

02:34 Thanks for supporting this podcast by taking and recommending our courses.

02:39 Hey, guys.

02:41 Hey, Pablo.

02:42 Matt.

02:43 Welcome back to Talk Python to me.

02:44 It hasn't been that long since you've been here last time, has it?

02:47 With the magic of editing, it may even be minutes since the person listening to us listened to the previous one.

02:53 Exactly.

02:53 We don't know.

02:54 We don't know when they're going to listen.

02:55 And they don't know when we recorded it necessarily.

02:57 It could be magic.

02:59 It's a little bit apart.

03:00 But we got together to talk about all these cool programs that give insight into how your app runs in Python.

03:07 So we talked about PyStack previously, about figuring out what your app is doing if it's locked up, or at any given moment, or if it crashes and you grab a core dump.

03:14 And maybe we thought about combining that with Memray and just talking through those.

03:19 But they're both such great projects that, in the end, we decided, nope, they each get their own attention.

03:25 They each get their own episode.

03:26 So we're back together to talk about Memray and memory profiling in Python.

03:32 An incredible, incredible profiler we're going to talk about in a minute.

03:37 Pablo, you were just talking about how you were releasing some of the new versions of Python, some of the point updates and some of the betas.

03:43 You want to give us just a quick update on that before we jump into talking about memory?

03:48 Yeah, absolutely.

03:49 I mean, just to clarify also, like the ones I released myself are 3.10 and 3.11, which are the best versions of Python you will ever find.

03:57 But the ones we are releasing right now is 3.12.

04:01 We got the beta 3 today.

04:03 You should absolutely test beta 3.

04:05 Maybe they are not as exciting as 3.11, but there is a bunch of interesting things there.

04:10 And, you know, there is the work of the Faster CPython team.

04:13 And we have a huge change in the parser, well, technically the tokenizer, because f-strings are getting even better.

04:21 And that has a huge amount of changes everywhere, even if you don't think a lot about that.

04:26 But having this tested is quite important, as you can think.

04:30 So far, we really, really want everyone to test this release.

04:33 Everyone that is listening to the live version of the podcast can go to python.org.

04:38 Download the latest pre-release, that is Python 3.12 beta 3.

04:43 And tell us what's broken.

04:45 Hopefully it's not my fault.

04:46 But yeah.

04:47 No, that's excellent.

04:48 Thanks for keeping those coming.

04:49 Do you know how many betas are planned?

04:52 We're on three now.

04:54 This is going to be a bit embarrassing because I should know being a release manager, I think there is two more.

04:57 It's a bit tricky though, because I think we released beta two a week after beta one, because we shift the schedule.

05:03 It's a bit difficult to know, but there is a PEP that we can certainly find.

05:07 You just search for Python releases schedule, Python 3.12.

05:11 They will tell you exactly how many betas there are.

05:13 I think there is two more betas.

05:14 Then we will have one release candidate, if I recall correctly, if we do things the way I did them.

05:20 And then the final version in October.

05:21 I'm looking forward to it.

05:23 And, you know, the release of 3.12, actually, it's going to have some relevance to our conversation.

05:28 Yes, indeed.

05:29 Yeah.

05:30 I assume people probably listened to the last episode, but maybe just, you know, a real quick introduction to yourself.

05:35 You know, Pablo, you go first, just so people know who you are.

05:37 Yeah, absolutely.

05:38 So I'm Pablo Galindo, I have many things in the Python community.

05:41 I have been practicing to save them very fast, so I don't take a lot of time.

05:44 So I'm a CPython core developer, Python release manager, steering council member, and I work at Bloomberg on the Python infrastructure team doing a lot of cool tools.

05:54 I think I don't, I'm not forgetting about anything, but yeah, I'm around.

05:58 I break things; I like to break tools, like with my new changes in CPython.

06:04 That's what I do.

06:04 Excellent.

06:05 Sorry. Matt?

06:07 And I am Matt Wisniewski.

06:09 I am Pablo's co-worker on the Python infrastructure team at Bloomberg.

06:12 I am the co-maintainer of Memray and PyStack.

06:16 I'm also a moderator on Python Discord, and that is the extent of my very short list of community involvement compared to Pablo's.

06:24 Excellent.

06:25 Well, yeah, you both are doing really cool stuff, as we're going to see.

06:28 Let's start this conversation off about profilers at a little bit higher level, and then we'll work our way into what is Memray and how does it work and where does it work, all those things.

06:38 So let's just talk about what comes with Python, right?

06:43 We have, interestingly, we have two options inside the standard library.

06:47 We have cprofile and profile.

06:49 Do I use cProfile to profile CPython and profile for other things, or what's going on here, guys?

06:55 That's already...

06:56 You use cprofile whenever you can, is the answer.
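For anyone following along at home, a minimal sketch of what using cProfile looks like, both from the command line and in-process; the workload function here is just a placeholder:

```python
# Command-line form: profile a whole script and write a stats file.
#   python -m cProfile -o out.prof my_script.py

import cProfile


def busy_loop(n):
    # Placeholder CPU-bound workload so the profiler has something to trace.
    return sum(i * i for i in range(n))


if __name__ == "__main__":
    # In-process form: profile one call and dump stats in the same format.
    cProfile.run("busy_loop(500_000)", "out.prof")
```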

06:59 Yes, indeed.

07:00 There is a lot of caveats here.

07:01 I already see, like, two podcast episodes just with this question, so let's keep this short.

07:07 The cprofile and profile are the profilers that come with the standard library.

07:11 I'm looking at the...

07:13 I mean, probably you are not looking at this in the podcast version, but here we are looking at the Python documentation and the profile, cprofile version.

07:21 And there is this lovely sentence that says cprofile and profile provide deterministic profiling of Python programs.

07:27 And I'm going to say, wow, that's an interesting take on them.

07:30 I wouldn't say deterministic, although I think the modern terminology is tracing.

07:35 And this is quite important because, like, it's not really deterministic in the sense that if you executed the profile 10 times, you're going to get the same results.

07:44 You probably are not because programs in general are not deterministic due to several things.

07:48 We can go into detail.

07:49 Why not?

07:50 Even what else is running on your computer, right?

07:52 Like...

07:52 Exactly.

07:52 What this is referring to is actually a very important thing for everyone to understand because everyone gets this wrong.

07:58 And, like, you know, there is so many discussions around this fact and comparing apples to oranges that is just very annoying.

08:04 So what this is referring to is a cprofile is what is called a tracing profiler.

08:08 The other kind of profiler that we will talk about is a sampling profiler, also called a statistical profiler.

08:15 So this one, cProfile, is a tracing profiler, and, assuming that it's a performance one, cProfile checks time.

08:23 So how much time do your functions take, or why your code is slow, in other words.

08:28 So this profiler basically checks every single Python call that happens, all of them.

08:32 So it sees all of the calls that are made.

08:34 And every time a function returns, it goes and sees them.

08:37 Unfortunately, this has a disadvantage that is very slow.

08:41 So running the profiler will make your code slower.

08:44 So your code takes one hour to run.

08:46 It's not surprising that running it under cProfile makes it two hours.

08:50 And then you will say, well, how can I profile anything?

08:53 Well, because it's going to report a percentage.

08:55 So hopefully it's the same percentage, right?

08:57 Not just that it makes it slow.

08:59 The other problem with it is that it makes it slow by different amounts, depending on the type of code that's running.

09:05 If what's executing is IO and you're waiting for a network service or something like that to respond, cprofile isn't making that any slower.

09:12 It takes the amount of time that it takes and it's able to accurately report when that call finishes.

09:16 But if what's running is CPU bound code, where you're doing a bunch of enters and exits into Python functions and executing a bunch of Python bytecode, the tracing profiler is tracing all of that and it's got overhead added to that.

09:28 So the fact that it isn't adding overhead to network calls or to disk IO or things like that, but is adding overhead to CPU bound stuff means that it can be tough to get a full picture of where your program is spending its time.

09:40 It's very good at telling you where it's spending its CPU, but not as good at telling you where it's spending its time.

09:46 Right, right.

09:46 Because it has this, it's one of these Heisenberg quantum mechanics sort of things.

09:51 By observing it, you make a change.

09:53 And it's really, Matt, that's a great point.

09:55 I think also you could throw into there specifically in the Python world that it's really common that we're working with computation involving C or Rust, right?

10:06 And so if I call a function where the algorithm is written in Python, every little step of that through the loops of that algorithm are being modified and slowed down by the profiler.

10:16 Whereas once it hits a C or a Rust layer, it just says, well, we're just going to wait till that comes back.

10:22 And so it doesn't interfere, right?

10:23 And so it, even across things like where you're using something like say Pandas or NumPy, potentially, it could misrepresent how much time you're spending there.

10:33 On the other hand, it was not going to interfere with the Rust or C, but it's also not going to report inside that.

10:38 So you are going to see a very high level view of what's going on.

10:43 So it's going to tell you algorithm running, but like you're not going to see what's going on, right?

10:48 Well, the advantage here is that it comes with the standard library, and it's a very simple profiler.

10:52 So you know what you're doing, which is maybe a lot to ask.

10:56 Because, you know... no, I mean it in the sense that it's not that you're not a professional.

11:00 It's that sometimes it's very hard to know when it's a good choice of a tool.

11:05 Because as Matt was saying, you know, if you have a lot of CPU bound code and you don't have a lot of IO, you're safe.

11:11 But like sometimes it's very difficult to know that that's true or like how much do you have.

11:15 So if you have a very simple situation, like a script maybe or a simple algorithm.

11:18 It may work and you don't need to reach for something more sophisticated, right?

11:21 Yeah.

11:21 Knowing what type of problem you're falling into and whether this is the right tool already requires you to know something about where your program is spending most of its time.

11:29 If you are using this tool to find out where your program is spending its time, you might not even be able to accurately judge if this is the right tool to use.

11:37 That's true.

11:37 But also it can give you some good information, right?

11:40 It's not completely, but it certainly, as long as you are aware of those limitations that you laid out, Matt, you could look at it and say, okay, I understand that these things that seem to be equal time, they might not be equal.

11:52 But it still gives you a sense of like within my Python code, I'm doing more of this.

11:56 Right.

11:56 Or here's how much time I'm waiting.

11:58 Also another thing to mention here, which is going to become relevant when we talk about Memray as well: an advantage of this is that it's in the standard library.

12:06 What this tool produces is a file with the information of the profile run.

12:10 And because it's in the standard library and it's so popular, there is a lot of tools that can consume the file and show you the information in different ways.

12:17 So you have a lot of ways to choose how you want to look at the information.

12:21 Some people, for instance, like to look at the information into kind of a graph that will tell you the percentage of the calls and things like that.

12:29 Some other people like to see it in a graphical view.

12:32 So there's this box with boxes inside that will tell you the percentage and things like that.

12:37 And some people like to see it in Terminal or in the GUI or in PyCharm or whatever it is.

12:43 So there is a lot of ways to consume it, which is very good because, you know, different people have different ways to consume the information.

12:49 And that is a fact.

12:50 Depends on who you are and how, whether you're looking at some visualizations may be better than others.

12:55 And there is a lot to choose here.

12:57 And that is an advantage compared to something that, you know, just offers you one and that's all.
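As a sketch of what consuming that capture file can look like with the standard library's own pstats module (graphical viewers, for example snakeviz, can read the same file):

```python
import pstats

# Load the stats file written by cProfile and look at it a few different ways.
stats = pstats.Stats("out.prof")

# Top functions by cumulative time (their own time plus their callees').
stats.sort_stats("cumulative").print_stats(10)

# Top functions by their own internal time, and who called them.
stats.sort_stats("tottime").print_stats(10)
stats.print_callers(5)
```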

13:00 Indeed.

13:00 So I mentioned that 3.12 might have some interesting things coming around this profiling story.

13:08 We have PEP 669, Low Impact Monitoring for CPython.

13:14 This is part of the Faster CPython initiative, I'm guessing, because Mark Shannon is the author of it.

13:19 It's kind of related.

13:20 I don't think it's immediately there.

13:22 I mean, it's related to the fact that it's trying to make profiling itself better.

13:26 I know he has to spend time from the Faster CPython project into implementing this.

13:29 I need to double check if this is in 3.12.

13:32 I think it is.

13:33 But it may be accepted for 3.12 but end up going to 3.13.

13:37 We should double check.

13:39 So I can't 100% say that it's in 3.12 because I don't know if he had the time to fully implement it.

13:45 Yeah, I don't know if it's in there, but from a PEP perspective, it says accepted and for Python 3.12.

13:49 For Python 3.12, yes.

13:50 I do believe it's in there.

13:51 I'm pretty sure that I've been talking with Ned Batchelder a bit about coverage, and I'm pretty sure he said he's been testing with this in the coverage, testing coverage against this with 3.12 betas.

14:02 So the idea here is to add additional events to the execution in Python, I'm guessing.

14:08 It says it's going to have the following events: PY_START, PY_RESUME, PY_THROW, PY_YIELD, PY_UNWIND, CALL.

14:15 How much either of you guys know about this?

14:17 Yeah, quite a bit.

14:18 I mean, I was involved in judging this, so I know quite a lot, since I was the one accepting it.

14:25 But the idea here is that just as a high-level view, because if we go into detail, we can again make two more podcast episodes, and maybe we should invite Mark Shannon in that case.

14:35 But the idea here is that the tools that the interpreter exposes for the profiler and debugging, because debugging is also involved here, they impose quite a lot of overhead over the program.

14:45 What this means is that running the program under a debugger or a profiler will make it slow.

14:51 We are talking about tracing profilers, yes, because the other kind of profilers, sampling profilers, they work differently, and they will not use these APIs.

15:00 They trade accuracy for a lower impact, yeah.

15:02 Yes.

15:03 I mean, just to be clear, because I don't think we are going to talk that much about them, but just to be clear what is the difference.

15:08 The difference here is that a sampling profiler, instead of tracing the program, that is, seeing the run and everything that the program does, just takes photos of the program at regular intervals.

15:18 So it's like, you know, imagine that you're working on a project, and then I enter your room every five minutes and tell you, what file are you working on?

15:26 And then you tell me, oh, it's a program.cpp, right?

15:29 And then I enter again, it's a program.cpp.

15:31 And then I enter again, it's like a other thing.cpp.

15:34 So if I enter 100 times and 99 of them you were in this particular file, then I can tell you that that file is quite important, right?

15:41 So that's the idea.

15:43 But maybe when I was not there, you were doing something completely different, and I miss it.

15:46 It just happens that every five minutes you were checking the file because you really like how it's written, but you were not doing anything there.

15:52 So, you know, like there is a lot of cases when that can misrepresent what actually happened.

15:57 An advantage here is that nobody is annoying you while I'm not entering the room, right?

16:03 So you can do actual work at actual speed, right?

16:06 And therefore, these profiles are faster, but as you say, they trade kind of like accuracy for speed.

16:12 But this PEP tries to make tracing profilers faster, not the other ones.

16:16 And the idea here is that the kind of APIs that CPython offers are quite slow because they are super generic, in the sense that what they give you is that every time a function call,

16:26 in the case of the profiler APIs, is made or returns, it will call you, but it will basically pre-compute a huge amount of, well, not a huge amount,

16:36 but quite a lot of information for you, so you can use it.

16:39 Most of the time you don't care about that information, but it's just there, and it was just pre-computed for you, so it's very annoying.

16:45 And in the case of the tracing, sys.settrace, so this is for debuggers and, for instance, coverage uses that as well, which is the same idea, but instead of every function call, it's every bytecode instruction.

16:56 So every time the bytecode execution loop executes the instruction, it calls you, or you can have different events, like every time it changes lines, something like that.

17:05 But the idea is that the overhead is even bigger.

17:07 And again, you may not care a lot about all these things.

17:11 So the idea here is that instead of, like, calling you every single time, you could maybe do something, you can tell the interpreter what things are you interested in.

17:19 So you say, well, look, I'm a profiler, and I am just interested on, you know, when a function starts and when a function ends, I don't care about the rest.

17:27 So please don't pre-compute line numbers, don't give me any of these other things, just call me.

17:31 Just don't do anything.

17:32 So the idea is that then you only pay for these particular cases, and the idea is that it's as fast as possible.

17:40 Because also the fact that this is event-based makes the implementation a bit easier in the sense that it doesn't need to slow down the normal execution loop by a lot.

17:47 Only if you register a lot of events will it be quite slow.

17:50 But as you can see here from the list of events, there is a bunch of things that you may not care about, like, for instance, raise exceptions or change lines and things like that.

17:58 But the idea here is that, you know, because it's event-based, then if you are not interested in many of these things, then you don't register the events for that.

18:05 So you are never called for them and you don't pay the cost, which in theory will make some cases faster.

18:10 Some others not.

18:11 Sure.

18:12 It depends on how many of these events the profiler subscribes to, right?
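To make that concrete, here is a minimal sketch of subscribing to just two events with the PEP 669 sys.monitoring API in Python 3.12; it is an illustration, not a production profiler, and the tool id and callback signatures should be checked against the documentation:

```python
import sys

mon = sys.monitoring
TOOL_ID = mon.PROFILER_ID  # one of the pre-defined tool id slots

mon.use_tool_id(TOOL_ID, "tiny-profiler")


def on_py_start(code, instruction_offset):
    # Fires when a Python function starts executing.
    print("enter:", code.co_qualname)


def on_py_return(code, instruction_offset, retval):
    # Fires when a Python function returns normally.
    print("exit: ", code.co_qualname)


mon.register_callback(TOOL_ID, mon.events.PY_START, on_py_start)
mon.register_callback(TOOL_ID, mon.events.PY_RETURN, on_py_return)

# Subscribe only to the two events we care about; nothing else is computed.
mon.set_events(TOOL_ID, mon.events.PY_START | mon.events.PY_RETURN)


def example():
    return 42


example()

# Turn monitoring off and release the tool id when done (0 means no events).
mon.set_events(TOOL_ID, 0)
mon.free_tool_id(TOOL_ID)
```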

18:15 This portion of Talk Python to Me is brought to you by JetBrains and PyCharm.

18:22 Are you a data scientist or a web developer looking to take your projects to the next level?

18:27 Well, I have the perfect tool for you, PyCharm.

18:30 PyCharm is a powerful integrated development environment that empowers developers and data scientists like us to write clean and efficient code with ease.

18:39 Whether you're analyzing complex data sets or building dynamic web applications, PyCharm has got you covered.

18:45 With its intuitive interface and robust features, you can boost your productivity and bring your ideas to life faster than ever before.

18:52 For data scientists, PyCharm offers seamless integration with popular libraries like NumPy, Pandas, and Matplotlib.

18:59 You can explore, visualize, and manipulate data effortlessly, unlocking valuable insights with just a few lines of code.

19:06 And for us web developers, PyCharm provides a rich set of tools to streamline your workflow.

19:10 From intelligent code completion to advanced debugging capabilities, PyCharm helps you write clean, scalable code that powers stunning web applications.

19:19 Plus, PyCharm's support for popular frameworks like Django, FastAPI, and React make it a breeze to build and deploy your web projects.

19:28 It's time to say goodbye to tedious configuration and hello to rapid development.

19:33 But wait, there's more.

19:35 With PyCharm, you get even more advanced features like remote development, database integration, and version control, ensuring your projects stay organized and secure.

19:43 So whether you're diving into data science or shaping the future of the web, PyCharm is your go-to tool.

19:48 Join me and try PyCharm today.

19:50 Just visit talkpython.fm/done-with-pycharm, links in your show notes, and experience the power of PyCharm firsthand for three months free.

20:01 PyCharm.

20:02 It's how I get work done.

20:04 For example, so one of the events is PyUnwind.

20:10 So, exit from a Python function during exception unwinding.

20:14 You probably don't really care about recording that and showing that to somebody in a report.

20:20 But the line event, like an instruction is about to be executed that has a different line number from the preceding instruction.

20:26 There we go.

20:27 All right.

20:27 Something like that.

20:28 This is an interesting one.

20:29 Sorry, Matt, do you want to mention something?

20:31 I think you do need to care about unwind.

20:33 Actually, you need to know what function is being executed.

20:36 And in order to keep track of what function is being executed at any given point in time, you have to know when a function has exited.

20:42 There's two different ways of knowing when the function has exited, either a return or an unwind, depending on whether it returned due to a return statement or due to falling off the end of the function or because an exception was thrown and not caught.

20:54 Okay.

20:54 Give us an example of one that you might not care about from a memory-style perspective.

21:00 Instruction is one that we wouldn't care about.

21:02 In fact, even line is one that we wouldn't care about.

21:04 Memray, and profilers in general for the most part, will care not about what particular instruction is being executed in a program.

21:12 They care about what function is being executed in a program because that's what's going to show up in all the reports they give you rather than line-oriented stuff.

21:19 So maybe coverage and Ned Batchelder might care about line.

21:23 Yeah, yeah, yeah.

21:24 But you guys would.

21:25 He very much cares about line.

21:26 Yeah, I can imagine.

21:27 And that's the slow one.

21:28 That's the slow one.

21:29 And it's important to understand why it's slow.

21:31 It's slow because the program doesn't really understand what a line of code is, right?

21:36 A line of code is a construct that only makes sense for you, the programmer.

21:41 The parser doesn't even care about the line because it sees code in a different way.

21:45 It's a stream of bytes.

21:46 And lines don't have semantic meaning for most of the program compilation and execution.

21:52 The fact that you want to do something when a line changes forces the interpreter to not only keep around that information, which mostly is somehow there, compressed, but also reconstruct it.

22:02 So basically every single time, I mean, it's made in a obviously better way.

22:06 But the idea is that every single time it executes the instruction, it needs to check, oh, did I change the line?

22:11 And then if the answer is yes, then it calls you.

22:14 That is basically the old way, sort of.

22:16 Because instead of doing that, it has kind of a way to know when that happens, so it's not constantly checking.

22:21 But this is very expensive because it needs to reconstruct that information.

22:24 That slowness is going to happen every single time you're asking for something that doesn't have kind of meaning in the execution of the program.

22:31 And an exception has it.

22:32 Like the interpreter needs to know when an exception is raised and what that means because it needs to do something special.

22:37 But the interpreter doesn't care about what a line is.

22:39 So that is very expensive.

22:41 Right.

22:41 You could go and write statement one, semicolon statement two, semicolon statement three, and that would generate a bunch of bytecodes, but it still would just be one line.

22:48 Sure.

22:49 Hey, Pablo, sidebar, it sounds like there's some clipping or some popping from your mic, so maybe just check the settings just a little bit.

22:56 Oh, absolutely.

22:57 Yeah, hopefully we can clean that up just a bit.

22:59 But it's not terrible either way.

23:00 All right.

23:01 So you think this is going to make a difference?

23:03 This seems like it's going to be a positive impact here?

23:06 One particular way that it'll make a difference is that for the coverage case that we just talked about, coverage needs to know when a line is hit or when a branch is hit, but it only needs to know that once.

23:15 And once it has found that out, it can stop tracking that.

23:18 So the advantage that this new API gives is the ability for coverage to uninstall itself from watching for line instructions or watching for function call instructions from a particular frame.

23:31 Once it knows that it's already seen everything that there is to see there, then it can speed up the program as it goes by just disabling what it's watching for as the program executes.

23:40 Okay.

23:40 That's an interesting idea.

23:41 It's like, it's decided it's observed that section enough in detail, and it can just kind of step back a little bit higher.

23:47 Yep.
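A rough sketch of that uninstall-itself trick, using the DISABLE return value from sys.monitoring in Python 3.12; this is illustrative only and not how coverage.py is actually implemented:

```python
import sys

mon = sys.monitoring
TOOL_ID = mon.COVERAGE_ID

mon.use_tool_id(TOOL_ID, "tiny-coverage")

seen_lines = set()


def on_line(code, line_number):
    # Record that this line ran once, then ask the interpreter to stop
    # reporting this exact location, so already-covered code runs at speed.
    seen_lines.add((code.co_filename, line_number))
    return mon.DISABLE


mon.register_callback(TOOL_ID, mon.events.LINE, on_line)
mon.set_events(TOOL_ID, mon.events.LINE)
```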

23:47 All right.

23:48 Okay.

23:48 Excellent.

23:49 So this is coming, I guess, in October.

23:52 Pablo will release it to the world.

23:54 Thanks, Pablo.

23:54 No, this time it's Thomas Wouters; the release manager of 3.12 is Thomas.

24:00 Oh, this is Thomas.

24:01 Oh, this is 3.12.

24:02 That's right.

24:02 That's right.

24:03 So you want to blame someone, don't blame me for this one.

24:06 Exactly.

24:08 Exactly.

24:08 All right.

24:09 So that brings us to your project, Memray, which is actually a little bit of a different focus than at least cProfile, right?

24:17 And many of the profilers, I'll go and say most of the profilers answer the question of where am I spending time, not where am I spending memory, right?

24:26 I would agree that that's true.

24:27 There are definitely other memory profilers.

24:29 We're not the only one, but the majority of profilers are looking at where time is spent.

24:32 And yet understanding memory in Python is super important.

24:36 I find Python to be interesting from the whole memory, understanding the memory allocation algorithms, and there's a GC, but it only does stuff some of the time.

24:47 Like, how does all this work, right?

24:49 And we as a community, maybe not Pablo as a core developer, but as a general rule, I don't find people spend a ton of time obsessing about memory like maybe they do in C++ where they're super concerned about memory leaks or some of the garbage collected languages where they're always obsessed with.

25:05 You know, is the GC running and how's it affecting real time or near real time stuff?

25:10 It's a bit of a black box, maybe how Python memory works.

25:15 Would you say for a lot of people out there?

25:17 Oh, yeah, absolutely.

25:18 Yeah, I think that's definitely true.

25:20 I think it is as well.

25:21 And even these days, with all the machine learning and data science and the higher the abstraction goes, the easier it is to just allocate three gigabytes without you knowing.

25:30 Like, you do something and then suddenly you have half of the RAM filled by something that you don't know what it is.

25:35 Yeah.

25:36 Because, you know, you are so high level that you didn't allocate any of these memories as the library.

25:40 Yeah. Profiling for where time is being spent is something that pretty much every developer wants to do at some point.

25:45 From the very first programs you're writing, you're thinking to yourself, well, I wish this was faster and how can I make this faster?

25:50 I think looking at where your program is spending memory is more of a special case that only comes up in either when you have a program that's using too much memory and you need to figure out how to pair it back.

26:01 Or if you are trying to optimize an entire suite of applications running on one set of boxes and you need to figure out how to make better use of a limited set of machine resources across applications.

26:14 So that comes up more at the enterprise level.

26:17 Yeah, sure.

26:18 We heard Instagram give a talk about, what did they title it?

26:21 Something like dismissing the GC or something like that where they talked about actually.

26:26 It's very funny because they made that talk and then they make a following up saying like the previous idea was actually bad.

26:33 So now we have a refined version of that.

26:34 I know.

26:35 I remember they did all that.

26:36 We have a refined version of the idea.

26:37 But yeah.

26:38 This was the one where they were disabling GC in their worker processes.

26:43 Yeah.

26:45 For, I think, their Django workers.

26:45 Yeah.

26:45 Yes.

26:46 They have a forecast check.

26:47 Quite interesting use case because it's quite common.

26:49 But I want to add to what Matt said: memory has this funny thing compared with time,

26:54 which is that when people think about the time my program is spending on something,

26:59 they really do know what they are talking about.

27:01 Right.

27:02 They know what they want.

27:02 Memory is funny because most of the time they actually don't.

27:06 And you will say, how is that possible?

27:07 Like the problem is that with memory is that you understand the problem.

27:10 Like I have this thing called memory on my computer and it's like a number,

27:15 like 12 gigabytes or six gigabytes or whatever it is.

27:18 And it's half full.

27:19 And I understand that concept, but the problem is why it is half full, or like,

27:23 what memory even is in my program, which is different from that value.

27:28 Now there's a huge disconnect.

27:30 Right.

27:30 And this is so, so interesting.

27:33 Like, I don't know if this is going to be super interesting to talk

27:36 about, but I want to just highlight this. Because imagine that I ask

27:41 you, what is allocating memory for you?

27:43 Like, what is that?

27:44 It's calling malloc.

27:45 It's creating a Python object.

27:46 Because this is, this is very interesting.

27:49 And in Python, because we are so high level, who knows?

27:52 Because when you create a Python object, well, it may or may not require memory.

27:57 But when you call malloc, it may or may not actually allocate memory.

28:00 Right.

28:01 And if you really go and say, okay, so, so just tell me when I really go to that, you

28:07 know, physical memory, and I really spend some of that physical memory in my program.

28:11 If you want just that, then you are not going to get information about your program because

28:16 you are above so many abstractions that if I just told you when that happens, you're going

28:21 to miss so much because you're going to find that the Python and the runtime C or C++ and

28:28 the OS really likes to batch this operation.

28:31 It's the same way as when, you know, you're going to read a big file.

28:35 When you call read, you're not going to read one byte at a time because that will be very

28:39 expensive.

28:39 The OS is going to kind of read a big chunk.

28:42 And every time you call read, it's going to give you from the chunk that it already prefetched.

28:47 Right.

28:47 And here it will happen the same.

28:49 It's going to basically, even if you ask for like a tiny amount, like let's say you want

28:53 just a few bytes, right?

28:55 It's going to grab a big chunk and then it's going to give you from that chunk until

28:59 it runs out.

29:00 So what's going to happen is that you may be very unlucky and you're going to ask for

29:03 a tiny, tiny object.

29:04 And if you only care when I really go to the physical memory, you're going to get like maybe

29:09 a 4k allocation for that very, very tiny object that you asked for.

29:13 And then you're going to, that doesn't make any sense because I just wanted space for this

29:16 tiny object.

29:17 And then you allocated four kilobytes of memory or even more.

29:20 It's super not obvious, isn't it?

29:22 Yeah.

29:22 On Linux, the smallest amount you could possibly allocate from the system is always a multiple

29:27 of four kilobytes.

29:28 Well, that's by default.

29:29 You can actually change that.

29:30 The page size.

29:32 The page size can be changed.

29:33 Yes.

29:33 Can it be lowered?

29:34 I don't think it can be lowered, but certainly it can be made higher.

29:37 And when you make it higher, there is this huge page optimization,

29:41 when it's super ridiculously big.

29:43 Actually, Windows, you can do the same, if I recall, because Windows has something called

29:46 huge pages.

29:47 There's something called huge pages.

29:49 And it's very funny because it affects some important stuff, like the speed of hard drives

29:53 and things like that.

29:54 This portion of Talk Python to Me is brought to you by Influx Data, the makers of InfluxDB.

30:02 InfluxDB is a database purpose built for handling time series data at a massive scale for real-time

30:10 analytics.

30:10 Developers can ingest, store, and analyze all types of time series data, metrics, events, and

30:16 traces in a single platform.

30:18 So, dear listener, let me ask you a question.

30:20 How would boundless cardinality and lightning-fast SQL queries impact the way that you develop

30:25 real-time applications?

30:26 InfluxDB processes large time series data sets and provides low-latency SQL queries, making

30:32 it the go-to choice for developers building real-time applications and seeking crucial insights.

30:38 For developer efficiency, InfluxDB helps you create IoT analytics and cloud applications using

30:44 timestamped data rapidly and at scale.

30:47 It's designed to ingest billions of data points in real-time with unlimited cardinality.

30:52 InfluxDB streamlines building once and deploying across various products and environments from

30:58 the edge, on-premise, and to the cloud.

31:00 Try it for free at talkpython.fm/influxDB.

31:05 The link is in your podcast player show notes.

31:08 Thanks to InfluxData for supporting the show.

31:13 Maybe one of you two can give us a quick rundown on the algorithm for all the listeners.

31:18 But the short version is if Python went to the operating system for every single byte of memory

31:25 that it needed.

31:26 So if I create the letter A, it goes, oh, well, I need, you know, what is that?

31:30 30, 40 bytes.

31:31 Turns out.

31:32 Hopefully less.

31:33 Hopefully less.

31:34 But yeah, it's not eight.

31:37 Yeah, it's not just the size, actually, of like you would have in C.

31:39 There's like the reference count and some other stuff.

31:42 Whatever.

31:43 Like it's, let's say, 30, 20 bytes.
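You can see that per-object overhead for yourself with the standard library; the exact numbers vary by CPython version and platform:

```python
import sys

# Even tiny Python objects carry per-object overhead (type pointer,
# reference count, and so on), so they are far bigger than the raw payload.
print(sys.getsizeof("A"))  # a one-character str: tens of bytes, not 1
print(sys.getsizeof(1))    # a small int: roughly 28 bytes on 64-bit CPython
print(sys.getsizeof([]))   # an empty list object, not counting its items
```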

31:45 It's not going to go to the operating system and go, I need 20 more.

31:48 20 more bytes.

31:49 20 more bytes.

31:50 It has a whole algorithm of getting certain like blocks of memory, kind of like 4K blocks

31:55 of page size.

31:56 And then internally say, well, here's where I can put stuff until I run out of room to store,

32:02 you know, new.

32:02 Right.

32:03 20 byte size pieces.

32:05 And then I'll go ask for more.

32:07 So you need something that understands Python to tell you what allocation looks like, not just

32:13 something that looks at how the process talks to the OS, right?

32:16 Yeah, I think that's definitely the case.

32:18 There's one pattern that you'll notice with large applications is that there tend to be

32:22 caches all the way down.

32:23 And you can think of this as the C library fetching, allocating memory from the system and then

32:28 caching it for later reuse once it's no longer in use.

32:32 And above that, you've got the Python allocator doing the same thing.

32:36 It's fetching memory from the system allocator and it's caching it itself for later reuse and

32:42 not freeing it back to the system immediately, necessarily.

32:46 Yeah.

32:46 The key here, which is a conversation that I have with some people that are surprised, like,

32:51 like, okay.

32:52 So when they ask like, what is this Python allocator business?

32:54 And when you explain it, they say, well, it's doing the same thing as malloc in the sense that

32:59 when you call malloc, it doesn't really go to the system every single time.

33:02 It does the same thing in a different way with a different algorithm.

33:06 I mean, that the Python allocator does.

33:08 So what's the point if they are doing the same thing?

33:10 The key here is that is the focus.

33:12 Like the algorithm that malloc follows is generic.

33:15 Like it doesn't know what you're going to do.

33:17 It's trying to be fast, as fast as possible.

33:20 But for the, because it doesn't know how you're going to use it, it's going to be, try to make

33:24 it as fast as possible for all possible cases.

33:26 But the Python allocator knows something which is very important, which is that most Python

33:31 objects are quite small.

33:33 And the object itself, not the memory that it holds to, right?

33:36 Because like the list object by itself is small.

33:39 It may contain a lot of other objects, but that's a big array, but the object itself is very small.

33:44 And the other thing is that there tend to be sort of leave.

33:46 This means that there is a huge amount of objects that are being created and destroyed very fast.

33:49 And that is a very specific pattern of uses.

33:52 And it turns out that you can customize the algorithm doing the same basic thing with Matt

33:57 mentioned, this caching of memory.

33:58 You can customize the algorithm to make that particular pattern faster.

34:02 And that's why we have a Python allocator in Python.

34:05 And we have also malloc.

34:06 Right.

34:07 So there's people can go check out the source code.

34:09 There's a thing called PyMalloc that has three data structures that are not just bytes, but

34:15 it has arenas, chunks of memory that PyMalloc directly requests.

34:20 It has pools, which contain fixed sizes of blocks of memory.

34:25 And then these blocks are basically the places where the variables are actually stored, right?

34:32 Like I needed 20 bytes, so that goes into a particular block.

34:35 Often the block is dedicated to a certain size of object, if possible, right?

34:41 And these tend to be quite small.

34:42 Because the other important thing is that this is only used if your object is smallish.

34:46 I think it's 512 bytes or something like that.

34:50 There's a limit.

34:51 It doesn't matter.

34:51 The important thing is that if the object is medium size or big, it goes directly to malloc.

34:57 So it doesn't even bother with any of these arenas or blocks.

35:01 So this is just for the small ones.

35:02 And I guess that's because it's already different from the normal allocation pattern that we see

35:07 for Python objects, that they tend to be small.

35:09 At the point where you're getting bigger ones, we might not have as good of information about

35:12 what's going on with that allocation.

35:14 And it might make sense to just let the system malloc handle it.
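For the curious, CPython ships a private, CPython-only helper that dumps the state of those arenas, pools, and blocks; it is an implementation detail rather than a stable API, so treat it as a peek under the hood:

```python
import sys

# CPython-only, underscore-prefixed helper: prints pymalloc's arena, pool,
# and block usage (including the small-object size classes) to stderr.
sys._debugmallocstats()
```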

35:18 Okay.

35:18 So there's that side.

35:19 We have reference counting, which does most of the stuff.

35:22 And then we have GCs that catches the cycle.

35:24 Not really worth going in, but primarily reference counting should be people's mental model,

35:28 I would imagine, right?

35:29 For the lifetime, you mean?

35:31 For the lifetime of objects, yeah.

35:32 Yeah.

35:32 Yeah.

35:33 Yeah.

35:33 That's why it was at least conceivable that Instagram could turn off the GC and

35:37 not instantly run out of memory, right?

35:39 Right.

35:39 Right.
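A small sketch of why that is plausible: with the cycle collector disabled, reference counting still frees acyclic objects immediately, and only reference cycles linger until collection runs again:

```python
import gc


class Node:
    def __init__(self):
        self.other = None


gc.disable()  # roughly the Instagram experiment: cycle collector off

# Plain objects are still freed promptly by reference counting alone.
a = Node()
del a  # refcount hits zero, memory is released right away

# A reference cycle, though, is only reclaimed by the cycle collector.
b, c = Node(), Node()
b.other, c.other = c, b
del b, c  # refcounts never reach zero; the cycle lingers

gc.enable()
print(gc.collect())  # collects the cycle; prints how many objects were found
```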

35:40 I mean, when they turn off, this is just the pedantic compiler engineer mindset turning on

35:45 here.

35:46 But technically, reference count is a GC model.

35:48 So technically, there is two GCs in Python, right?

35:51 But yeah.

35:52 But normally, when people say the GC...

35:54 What about the mark and sweep GC?

35:57 Right.

35:57 When people say the GC, they mean the cycle GC.

36:00 Yeah.

36:01 Right.

36:01 Yeah.

36:01 Cool.

36:02 Python doesn't actually have a mark and sweep GC.

36:04 The way the cycle collecting GC works is not mark and sweep.

36:08 It's actually implemented in terms of the reference counts.

36:10 It was something that surprised me a lot when I learned it.

36:13 Yeah.

36:13 There is an interesting page in the dev guide written by a crazy Spanish person that goes

36:19 into detail over how it is done.

36:20 Yeah.

36:21 I wonder who wrote that.

36:21 Okay.

36:22 We talked a bit about profilers.

36:23 We, I think, probably dove enough into the memory.

36:26 Again, that could be a whole podcast.

36:28 Just like, how does Python memory work?

36:29 But let's focus on not how does it work, but just measuring it for our apps.

36:34 And you touched on this earlier, you guys, when you talked about there's memory and there's

36:39 performance, but there's also a relationship between memory and performance, right?

36:43 Like, for example, you might have an algorithm that allocates a bunch of stuff that's thrown

36:47 away really quickly.

36:48 And allocation and deallocation has a cost, right?

36:51 You might have more things in memory that mean cache misses on the CPU, which might make

36:57 it run slower, right?

36:58 There's a lot of effects that kind of tie together with performance in memory.

37:02 So I think it's not just about memory.

37:04 It's what I'm trying to say, that you want to know what it's up to.

37:07 So tell us about Memray.

37:08 It's such a cool project.

37:10 So yeah, Memray is our memory profiler, and it has a lot of fairly interesting features.

37:16 It does.

37:17 One of them is that it supports a live mode where you can see where your application is

37:23 spending memory as it's running, as like a nice little automatically updating grid that

37:28 has that information in it that you can watch as the program runs.

37:31 It also has the ability to attach to an already running program and tell you some stuff about

37:35 it.

37:35 But sort of the main way of running it is just capturing a capture file as the program runs, in

37:41 the same way as cProfile would capture its capture file.

37:44 Check out the report.

37:45 Yeah.

37:46 Yeah.

37:46 Doing some reporting based on that capture file after the fact.
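As a rough sketch of that workflow; the command and API names below are as documented by Memray, but double-check them against your installed version:

```python
# Typical CLI usage:
#   memray run -o output.bin my_script.py     # capture a whole run to a file
#   memray run --live my_script.py            # the live, htop-style TUI mode
#   memray flamegraph output.bin              # build an HTML report afterwards

import memray


def build_big_list():
    # Placeholder workload that allocates a noticeable amount of memory.
    return [str(i) for i in range(1_000_000)]


# The Python API: track only this block and write allocations to a capture file.
with memray.Tracker("output.bin"):
    data = build_big_list()
```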

37:49 So just for people listening, because I know they can't see this, the live version is awesome.

37:54 If you've ever run glances or htop or something like that, where you can kind of see a TUI type

38:01 of semi-graphical live updating dashboard, it's that, but for memory.

38:07 This is really nice.

38:09 Yeah.

38:09 Yeah.

38:09 And the other really cool feature that it's got is the ability to see into C or Rust or C++ extension

38:16 modules.

38:16 So you can see what's happening under the hood inside of things that are being used from your Python code.

38:23 So if you're calling a library that's implemented partly in C, like NumPy, you can see how NumPy is doing its allocations under the hood.

38:30 Right.

38:30 Yeah.

38:30 Pablo, you were touching on this a little bit, like how the native layer is kind of a black box that you don't really see into. You don't see into it,

38:38 sorry, with cProfile, but also with some of the other memory profilers.

38:41 Right.

38:42 And this, this looks at it across the board.

38:44 C, C++, Rust.

38:45 Right.

38:46 So this is kind of important because as we discussed before, what is memory?

38:51 Not only is complicated, but also depends on what you want.

38:53 Like the thing is that, and this is quite a big, important part is that you really need to know what you're looking for.

38:59 So for instance, Memray kind of highlights two important parts, which is that it sees all possible allocations.

39:06 So not only the ones made by Python, because Python has a way to tell you when an object is going to be created, but it's not going to tell you if you are actually going to use memory for it or not.

39:17 Among other things because, for instance, Python even caches entire objects.

39:21 There is this concept of free lists.

39:23 So object creation doesn't really mean memory allocation.

39:26 It's also the case that when you are going to allocate memory, when you normally run Python, you may use PyMalloc, and PyMalloc caches that memory.

39:34 So you don't really, you may not go to the actual system.

39:37 So by default, Memray tracks all allocations made to the system allocator.

39:42 So malloc basically.

39:43 So every time you call malloc or mmap or one of these, we see it.

39:46 And apart from seeing it and recording it, we also can tell you who made the allocation, from C++ and Python.

39:54 On top of that, if you really want to know when you create objects, well, not objects, but like when, when Python says I need memory, we can also tell you that if you want.

40:02 So if you really want to know, well, I don't really care if PyMalloc caches and whatnot.

40:07 Every single time Python requires memory,

40:11 just tell me, even if you reuse it; maybe I just want to know, because that kind of shows you a bit of when you require object creation.

40:17 Again, not 100%, but mostly, mostly doing that.

40:22 And the idea here is that you can, you can really customize what you want to track and you don't pay for what you don't want.

40:28 So for instance, most of the time you don't want to know when Python requires memory, because most of the time it's not going to actually impact your memory usage.

40:37 Right.

40:37 Because as you mentioned, PyMalloc is going to use one of these arenas and you're not going to see an actual malloc.

40:43 But sometimes you want that, so Memray allows you to decide which one you want to track.

40:48 And by default, it's going to use the faster method, which is also the most similar to how your program normally executes.
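A hedged sketch of choosing what to track via the Tracker API; these keyword arguments are meant to mirror the memray run flags --native and --trace-python-allocators, and NumPy is only a stand-in for any native extension, so verify both against the Memray docs:

```python
import memray
import numpy as np  # placeholder native-extension workload; assumed installed

with memray.Tracker(
    "output.bin",
    native_traces=True,            # also record C/C++/Rust stack frames
    trace_python_allocators=True,  # record pymalloc-level requests too, not
                                   # only what reaches the system allocator
):
    _ = np.zeros((1_000, 1_000))
```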

40:55 And an interesting feature that, as of this time, only Memray has is that it can tell you who actually made the allocation.

41:03 So who call who, et cetera.

41:05 Right.

41:05 So it's going to tell you this Python function called this C function that in turn called this Python function.

41:10 And this one actually made a call to malloc or, or created a Python list or something like that.

41:15 I think that was really a fantastic feature that it's easy to kind of miss the significance of that.

41:20 But if you get a memory profiler, it just says, look, you allocated a thousand lists and they used a good chunk of your memory.

41:27 You're like, well, okay, well let's go through and find where lists are coming from.

41:31 Right.

41:32 Like, like converting that information back of how many of these types of objects and how many of those objects you allocated back to like, where can I look at my code and possibly make a change about that?

41:42 That can be really, really tricky.

41:44 And so the fact that you can see this function is allocating this much stuff is super helpful.

41:49 One of the important things here to highlight, which I think is interesting.

41:53 Maybe Matt can also cover it more in detail, but it's that most memory profilers are actually sampling profilers.

42:00 Reason is that the same way tracing profilers for function calls need to trace every single function call.

42:06 A tracing memory profiler needs to trace every single allocation.

42:11 But allocations happen much more often than function calls.

42:15 So if you make the calculation based on normal programs, and it can be anything that you want, just opening Python even, or any C or C++ program, you're going to see that you actually allocate a huge amount, so doing something per allocation is super expensive.

42:28 It's extremely expensive.

42:30 And most profilers, what they do is that they do sampling.

42:32 It's a different kind of sampling.

42:33 So it's not this photo kind of thing.

42:35 They use a different statistic based on bytes.

42:37 So they basically see these memories, a stream of bytes, and they decide to sample some of them.

42:42 So they are inaccurate, but normally they use statistics to tell you some information.

42:47 So Memray, on the other hand...

42:48 To give an example, instead of sampling every 10 milliseconds and seeing what the process is doing right now, it's sampling every 10 bytes.

42:55 So every time a multiple of 10 bytes is allocated from the system, it checks what was allocating that.

43:01 Although it'll use a bigger number than 10 in order to ask for this to actually be effective, since most allocations will get at least 10 bytes.

43:07 But something like that.

43:08 Yeah.

43:09 Right.

43:10 So Memray is tracing, which means that it sees every single allocation.

43:13 This is quite an interesting kind of decision here because like, you know, it's very, very hard to make a tracing profiler that is not extremely slow.

43:20 So, you know, Memray tries to be very fast, but obviously it's going to be a bit slower than sampling profilers.

43:25 But the advantage of this, what makes Memray quite unique, is that it captures every single allocation into the file, which has a huge amount of technical challenges.

43:34 For instance, these files can be ginormous.

43:36 Like we are talking gigabytes and gigabytes, and we put a ridiculous amount of effort into making them as small as possible.

43:42 So it has double compression and things like that.

43:44 So you're not using XML to store that?

43:46 No, I certainly not.

43:47 You know, if you look at our release notes from one version to the next, almost every version we're like, and the capture files are now 90% smaller.

43:56 Again, we've continued to find more and more ways to shrink.

43:59 Sure.

44:00 Right.

44:00 At the cost that now reasoning about what is in the file is just bananas, because, you know, we kind of do a first manual compression based on the information we know is there, but then we run LZ4 on that.

44:12 So, so it's like double compression already.

44:14 And there is even like a mode where we kind of pre-massage the data into only the parts that you care about.

44:20 So it's even smaller.

44:21 So it is a lot of effort.

44:23 But the advantage of having that much information is that now we can produce a huge amount of reports.

44:28 So for instance, not only can we show you the classic flame graph, this visualization of who called what, you know, instead of where you're spending your time, where did you allocate your memory, but we can do some cooler things.

44:40 So for instance, we can, you mentioned that there is this relationship between like running time and memory.

44:46 So one of the things that we can show you in the latest versions of Memray is that, for instance, imagine that you have like a Python list, or if you're in C++, a vector, right?

44:55 And then you have a huge amount of data you want to put into the vector and you start adding; in Python that will be append.

45:01 So you start calling append.

45:02 And then at some point the list has a pre-allocated size and you're going to fill it.

45:07 And then there is no more size, no more room for the data.

45:10 So it's going to say, well, I need more memory.

45:12 So you're going to require a bigger chunk of memory.

45:14 You're going to copy all the previous elements into the new chunk.

45:17 And then it's going to keep adding elements and it's going to happen again and again and again and again.

45:22 So if you want to introduce millions of elements into your list, because it doesn't know how many you need.

45:28 I mean, you could tell it, but in Python is a bit more tricky than in C++.

45:31 C++ has a call reserve when you can say, I'm going to need this many.

45:36 So just, just make one call to the allocator and then let me fill it.

45:39 But in Python, there is a way to do it, but not a lot.

45:42 So the idea here is that it's going to go through these cycles of getting bigger and bigger.

45:45 And obviously it's going to be slow, because every time you require memory, you pay time.

45:49 And Memray can detect this pattern because we have the information.

45:52 So Memray can tell you when you are doing this pattern of like creating a bigger chunk, copying, creating a bigger chunk, copying.

45:58 And it's going to tell you, hey, these areas of your code, you could pre-reserve a bigger chunk in Python.

46:05 There is idioms depending on what you're doing, but it's going to tell you, maybe you want to tell whatever you're creating to just allocate once.

46:11 So for instance, in Python, you can multiply a list of none by 10 million.

46:15 And it's going to create a list of 10 million nones.

46:17 And instead of calling append, you set the element using its index.

46:21 Oh, interesting.

46:22 Yeah, you kind of keep track yourself of where you are instead of just using len.

46:27 Exactly.
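
To make the trick concrete, here is a minimal sketch (scaled down from the ten-million figure in the conversation) contrasting the grow-with-append pattern against pre-sizing a list with None and filling it by index:

    # A minimal sketch of the pre-allocation trick: multiplying [None] by N
    # asks for the full backing array once, instead of letting append trigger
    # repeated grow-and-copy cycles. N is scaled down for a quick run.
    N = 1_000_000

    # Grow-as-you-go: the list is resized (and its contents copied) many times.
    grown = []
    for i in range(N):
        grown.append(i)

    # Pre-sized: one big allocation up front, then fill by index while
    # keeping track of the position yourself.
    presized = [None] * N
    for i in range(N):
        presized[i] = i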

46:28 But in C++, for instance, which Memray also sees as long as it's called from Python, it's going to tell you, well, you should use reserve.

46:35 So tell the vector how many elements you need.

46:38 Therefore, you're not going to go into this.

46:40 There's not a way to do that in Python lists though, is there?

46:43 To actually set like a capacity level when you allocate it?

46:46 With this trick.

46:47 With this trick.

46:48 Yeah, yeah.

46:49 Then you can't use len on it anymore, right?

46:50 There's not something in the initialization?

46:53 Yeah, okay.

46:54 I didn't think so either, but I could have missed it and it would be important.

46:57 No, no, no.

46:58 There are ways that I don't want to reveal, because the list works the same as a vector.

47:05 It's just that the reserve call is not exposed, but there are ways to trick the list into thinking that it needs a lot of memory.

47:12 But I don't want to reveal them, so people don't rely on them.

47:15 Those ways are implementation details that can change from one Python version to the next.

47:18 Yeah, exactly.

47:19 For instance, one example.

47:20 Let me give you one example.

47:21 Imagine that you have a tuple of 10 million elements and then you call list on the tuple.

47:26 So you want a list of those 10 million elements, and because Python knows that it's a tuple, it knows the size.

47:31 It knows how many elements it needs.

47:33 So it's going to just request the 10-million-element array.

47:35 And then it's going to just copy them in one go.

47:36 So it's not going to go through this.

47:38 I see.

47:39 You can pass some kind of iterable to a list to allocate it.

47:44 But if it is a specific type where Python knows about it and says, oh, I actually know how big that is, instead of doing the growing algorithm, it'll just initialize it.

47:53 Okay.

47:53 I think it's an implementation detail of CPython in the sense that this only works in CPython.

47:57 I don't really remember, but there is this magic method you can implement on your classes called length hint.

48:02 So this is underscore, underscore, length, underscore, hint, underscore, underscore, that is not the len, but it's a hint to Python.

48:10 And it's going to say, well, this is not the real len, but it's kind of an idea.

48:14 And this is useful for instance, for generators or iterators.

48:17 So, so you may not know how many elements there are because it's a generator, but you may know, like, at least this many.

48:23 So Python uses this information sometimes to pre-allocate, but I don't think this is like in the language.

48:28 I think this is just a CPython.
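
As a small illustration of that hook, the magic method is __length_hint__ (PEP 424), and operator.length_hint exposes it; whether CPython uses the hint to pre-size a particular container is, as noted above, an implementation detail. The CountDown class here is just a made-up example:

    # A sketch of __length_hint__ (PEP 424): not the real len, just a hint
    # that the interpreter may use when pre-sizing containers.
    import operator

    class CountDown:
        """An iterator that knows roughly how many items it will yield."""

        def __init__(self, n):
            self.n = n

        def __iter__(self):
            return self

        def __next__(self):
            if self.n <= 0:
                raise StopIteration
            self.n -= 1
            return self.n

        def __length_hint__(self):
            # A best guess, not a promise.
            return self.n

    items = CountDown(1_000_000)
    print(operator.length_hint(items))  # 1000000
    result = list(items)  # CPython can use the hint to pre-size the list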

48:30 Sure.

48:31 Okay.

48:32 Excellent.

48:33 So let's talk about maybe some of the different reporters you've got.

48:37 So you talked about the flame graph.

48:39 You've got a TQDM style report.

48:43 You can put it just out on, you know, nice colors and emoji out onto the terminal.

48:47 Like, give us some sense of like how we can look at this data.

48:50 Yeah.

48:51 That one is showing you kind of just aggregate statistics about the run.

48:53 So it tells you a histogram of how large your allocations tended to be.

48:58 It gives you some statistics about the locations that did the most allocating and the locations that did the largest number of allocations.

49:07 So the most by number of bytes and the most by count, as well as just what your total amount of memory allocated was.

49:13 It's interesting because this one looks across the entire runtime of the process.

49:18 A lot of our other reports won't. Like, the other major one that we need to talk about is the flame graph reporter.

49:24 That's probably the most useful way for people in general to look at what the memory usage of their program is.

49:31 But the flame graph.

49:32 So what a flame graph is, let's start there.

49:34 A flame graph shows you memory broken out by call tree.

49:39 So rather than showing any time dimension at all, the flame graph shows you this function called that function called that function called that function.

49:48 And at any given depth of the call tree, the width of one of the function nodes in the graph shows you what percentage of the memory usage of the process can be attributed to that call or one of the children below it.

50:04 That can be a really useful way, a really intuitive way of viewing how time or memory is being spent across a process.

50:11 But the downside to it is that it does not have a time dimension.

50:15 So with a memory flame graph like this, it's showing you a snapshot at a single moment in time of how the memory usage at that time existed.

50:26 There's two different points in time that you can select for our flame graph reports.

50:29 So you can either pick time right before tracking started or sorry, right before tracking stopped, which is sort of the point at which you would expect everything to have been freed.

50:37 And you can use that point to analyze whether anything was leaked.

50:41 Something was allocated and not deallocated.

50:43 And you want to pay attention to that.

50:44 The other place where you can ask it to focus in on is the point at which the process used the most memory.

50:52 So the point during tracking when the highest amount of memory was used, it'll by default focus on that point.

50:58 And it will tell you, at that point, how much memory can be attributed to each unique call stack.
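
For orientation, here is a rough sketch of how such a capture might be produced with Memray's Tracker context manager; the workload function and file name are placeholders, and the interactive flame graph HTML is then generated from the capture file with Memray's flame graph reporter:

    # A rough sketch: record every allocation made while the block runs
    # into a capture file that Memray's reporters can read later.
    # "workload" and "example.bin" are placeholders, not from the show.
    import memray

    def workload():
        data = [list(range(1_000)) for _ in range(1_000)]
        return sum(len(row) for row in data)

    with memray.Tracker("example.bin"):
        workload()

    # Afterwards, something like `memray flamegraph example.bin` turns the
    # capture into the interactive HTML flame graph being described here.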

51:03 Yeah, these flame graphs are great.

51:05 You have nice search.

51:06 You've got really good tooltips, obviously, because some of these little slices can be incredibly small.

51:12 But you can click on them.

51:13 If you click on one of them, it will zoom.

51:15 Oh yeah.

51:16 Okay.

51:17 And then it, yeah, if you click on one, then it'll like expand down and just focus on.

51:21 For instance, in the example that you're looking at, which people listening to the podcast are not going to see, there is one of these flame graphs, and one of the kind of paths in the flame graph...

51:31 One of the nodes in the tree is about imports.

51:34 So here I'm looking at a line that says from something import core.

51:37 So that's obviously memory that was allocated during importing.

51:41 So obviously you can't really get rid of that, unless you're the one implementing the library.

51:45 So you may not care about that one.

51:47 You may care about the rest.

51:48 So you could click on the other path, the one that you do care about.

51:52 You are going to see only the memory that was not allocated during imports.

51:56 Right.

51:57 Or you could be surprised.

51:58 You could go, wait, why is half my memory being used during an import?

52:01 And I only sometimes even use that library.

52:03 Maybe you could push that import down.

52:05 Well, so it's only conditionally imported or something.

52:07 Right.

52:08 Like here, as you can see, you go up in this example.

52:10 I think this example uses NumPy.

52:12 Yes.

52:13 So you hover over this line that says import numpy as np.

52:16 Yeah.

52:17 You may be surprised that importing NumPy is 63 megabytes.

52:21 Megabyte.

52:22 And 44,000 allocations as well.

52:25 Yeah.

52:26 Just by importing.

52:27 So here you go.

52:28 Surprise.

52:29 So yes, that's...

52:30 And if someone wants to be extremely surprised, just try to import TensorFlow and see what happens.

52:36 Okay.

52:37 I can tell you that it's not a nice surprise.

52:39 But here you can kind of focus on different parts if you want.

52:43 Also, we have these nice like check boxes in the top that automatically hide the imports.

52:48 So you don't care about the imports one.

52:51 It just hides them.

52:52 So you can just focus on the part that is just not imports, which is a very common pattern because, again, you may not be able to optimize NumPy yourself.

53:00 Right?

53:01 If you decide you have to use it, then you have to use it.

53:04 So it allows you to clean a bit because these ones can get quite complicated.

53:08 Mm-hmm.

53:09 So another thing that stands out here is I could see that it says the Python allocator is PyMalloc.

53:14 This is the one that we've been talking about with arenas, pools, and blocks, and pre-allocating, and all of those things.

53:20 That's not what's interesting.

53:21 What's interesting is you must be showing us this because there might be another one.

53:26 That's right.

53:27 Okay.

53:27 Well, not another one.

53:28 Python only ships with...

53:29 Well, Python does ship with two, kind of.

53:31 It's also got a debug one that you wouldn't normally use.

53:34 But the reason we're showing this to you is because it makes it very hard to find where memory leaks happen if you're using the PyMalloc allocator.

53:42 So if you're using PyMalloc as your allocator, you can wind up with memory that has been freed back to Python but not yet freed back to the system.

53:51 And we won't necessarily know what objects were responsible for that.

53:57 And if you're looking at memory leaks, we won't be able to tell you whether every object has been destroyed because we won't see that the memory has gone back to the system.

54:05 And that's what we're looking for at the leaks level.

54:07 Now, as Python...

54:08 Sorry.

54:09 As Pablo said earlier, there's an option of tracing the Python allocators as well.

54:12 So in memory leaks mode, you either want to trace the Python allocators as well so that we can see when Python objects are freed and we know not to report them as having been leaked as long as they were ever freed.

54:24 Or you can run with a different allocator, just malloc.

54:28 You can tell Python to disable the PyMalloc allocator entirely and just whenever it needs any memory to always just call the system malloc.

54:36 And in that case...

54:37 Oh, interesting.

54:38 Okay.

54:39 In that case, I'm not saying...

54:40 Yeah, there is an environment variable called PYTHONMALLOC.

54:43 So all uppercase, all together, PYTHONMALLOC.

54:45 And then you can set it to malloc, the word malloc, and that will deactivate pymalloc.

54:50 You can set it to pymalloc, which will do nothing, because by default you get that.

54:54 But you can also set it to pymalloc debug or something like that.

54:57 I don't recall exactly that one.

54:59 I think it's pymalloc_debug.

55:01 Right.

55:02 And that will set the debug version of pymalloc, which will tell you if you use it wrong or things like that.
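
As a small, hedged illustration, the variable has to be set before the interpreter starts, so one way to experiment is to launch a child interpreter with it set; the script name below is a placeholder:

    # A sketch: PYTHONMALLOC must be set before the interpreter starts, so
    # run a child interpreter with pymalloc disabled (or in debug mode).
    # "your_script.py" is a placeholder.
    import os
    import subprocess
    import sys

    env = dict(os.environ, PYTHONMALLOC="malloc")  # or "pymalloc_debug"
    subprocess.run([sys.executable, "your_script.py"], env=env, check=True)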

55:06 The important thing also, apart from what Matt said, is that using pymalloc can be slightly surprising sometimes.

55:12 But the important thing to highlight here is that this is what really happens.

55:16 So normally you want to run with this on because that is good to tell you what happened.

55:20 It's just that what happened may be a bit surprising.

55:23 Imagine, for instance, the case that we mentioned before, imagine that you allocate a big list.

55:28 Not a huge one, but quite a big one.

55:30 And then it turns out that that didn't allocate any memory because, you know, it was already there available in the arenas.

55:36 Right.

55:36 And then you allocated like the letter A.

55:39 Well, maybe not the letter A, but the letter ñ from the Spanish alphabet, right?

55:43 Yeah.

55:44 Which is especially not cached, because, like, where are you going to cache that?

55:48 If you allocate the letter ñ, then suddenly there is no more memory.

55:52 So pymalloc says, well, I don't have any more memory.

55:55 So let me allocate four kilobytes.

55:57 And then when you look at your flame graph, it's going to tell you your letter ñ took four kilobytes.

56:04 And you're going to say, what?

56:06 How is that possible?

56:07 And then you're going to go onto Reddit and rage about how...

56:10 Yeah, Python is stupid.

56:11 How bad it is.

56:12 Exactly.

56:13 And you are going to say, how is this even possible?

56:15 Well, the two important facts here are that, yes, it's possible, because it's not that the letter ñ itself needed four kilobytes.

56:23 But when you wanted it, this is what happened, which is what the flame graph is telling you.

56:29 You may say, oh, but that's not what I want to know.

56:31 I want to know how much the letter ñ took.

56:33 Then you need to deactivate pymalloc or trace the Python allocators, which you can.

56:37 It's just that normally the actual thing that you want, which is very intuitive if you think about it, is what happened when I requested this object.

56:45 Because that's what is going to happen when your program runs.

56:47 Because, like, imagine: normally you reach for one of these memory profilers not just to look at your program.

56:53 Like, oh, let me look at my beautiful program.

56:56 How is it using memory?

56:57 How is it using memory?

56:58 You reach for it because you have a problem.

56:59 The problem normally is that I don't have enough memory and my program is using too much.

57:03 Why is that?

57:04 And to answer that question, you normally want to know what happens when you run your program.

57:09 You don't want to know what happens if I deactivate this thing and yada, yada, right?

57:13 And you absolutely want to take into account that, okay, there is this thing that is caching memory.

57:17 Because, like, if you run it without pymalloc, it may report a higher peak, right?

57:22 Because it's going to look as if every single object that you request required memory, when it really didn't happen, right?

57:29 Because maybe actually it was cached before.

57:32 Or in other words, the actual peak that your program is going to reach may be in a different point as well.

57:38 Because if you deactivate this caching, then the actual peak is going to happen at a different point, right?

57:45 Or under different conditions.

57:46 So you really want it to report the 4K most of the time, except with leaks.

57:51 Because in leaks, it's a very specific case.

57:53 In leaks, you want to know, did I forget to deallocate an object?

57:57 And for that, you need to know really like, you know, the relationship between every single allocation and deallocation.

58:02 And you don't want caching.
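
A possible sketch of a leaks-oriented capture, assuming Memray's Tracker accepts the trace_python_allocators option and the flame graph reporter accepts a --leaks flag, as its documentation describes; the leaky function and file name are placeholders:

    # A sketch of a leaks-oriented capture: also trace the Python allocators,
    # so objects returned to pymalloc (but not to the system) are not
    # misreported as leaks. "leaky" and "leaks.bin" are placeholders.
    import memray

    _kept_alive = []

    def leaky():
        # Simulate a leak by keeping references around forever.
        _kept_alive.append(bytearray(1024 * 1024))

    with memray.Tracker("leaks.bin", trace_python_allocators=True):
        for _ in range(10):
            leaky()

    # The leaks view would then come from something like:
    #   memray flamegraph --leaks leaks.bin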

58:03 Right.

58:04 So it's got to be exact: everything always traced and the caching always removed.

58:08 We show a big red warning if you run with leaks and pymalloc, saying, like, this is very likely not what you want.

58:15 But who knows?

58:16 Maybe someone wants that, right?

58:18 Maybe.

58:19 You might still detect it, but you might not.

58:21 Well, yeah.

58:22 Like, I have used that in CPython itself, for instance, because we have used it successfully.

58:28 We have successfully used Memray in several cases in CPython to find memory leaks.

58:34 And to great success, because the fact that we can see C code is just fantastic for CPython, because it literally tells you where you forgot to put a Py_INCREF or Py_DECREF or something like that, which is fantastic.

58:46 We have found bugs that were there for almost 15 years.

58:52 It was so complicated to locate those bugs until we had something like this.

58:56 Nobody saw it.

58:57 Right, exactly.

58:58 But I have sometimes needed to know the leaks with pymalloc enabled, just to understand how pymalloc was holding onto memory, which for us is important, but maybe not for the user.

59:09 All right.

59:10 Two more things.

59:11 We don't have a lot of time left.

59:13 Let's talk about temporary allocations real quick.

59:16 I think that's an interesting aspect that can affect your memory usage, but also can affect, you know, just straight performance, both from caching and from spending time allocating things.

59:25 Maybe you don't have to.

59:27 Who wants to take this one?

59:28 Yeah, I think we talked about this for a while when Pablo was talking about how lists allocate memory.

59:33 One thing that Memray has that most memory profilers don't have is an exact record of what allocations happened when and in what order relative to other allocations.

59:45 And based on that, we can build a new reporting mode that most memory profilers could not do, where we can tell you if something was allocated and then immediately thrown away after being allocated and then something new is allocated and then immediately thrown away.

59:59 We can detect that sort of thrashing pattern where you keep allocating something and then throwing it away very quickly, which lets you figure out if there's places where you should be reserving a bigger list or pre-allocating a vector or something like that.

01:00:12 So that's based on just this rich temporal data that we're able to collect that most other memory profilers can't.

01:00:18 Yeah, that's excellent.

01:00:19 And you can customize what it means to be temporary.

01:00:22 So by default that is what Matt mentioned, this allocate, deallocate, allocate, deallocate, allocate, deallocate, but you can decide, for whatever reason, that any allocation that is followed by a bunch of things,

01:00:33 and then its deallocation,

01:00:34 where a bunch of things is two, three, four, five, six allocations,

01:00:37 is considered temporary, because you have, I don't know, some weird data structure that just happens to work like that.

01:00:44 So you can select that N, let's say.

01:00:46 Excellent.

01:00:47 Yeah.

01:00:48 And you've got some nice examples of that list.append story you were talking about.

01:00:52 Yeah.

01:00:53 And this absolutely matters because allocating memory is pretty slow.

01:00:57 So when you're doing this, it really transforms something that is quadratic, like O(n²), into something that is constant.

01:01:05 So you absolutely want that.

01:01:07 You do want that.

01:01:08 That's right.

01:01:08 Yeah.

01:01:09 When I was thinking of temporary variables, I was thinking of math and like, as you multiply some things, maybe you could change, change the orders or do other operations along those lines.

01:01:19 But yeah, the growing list is huge because it's not just, oh, there's, there's one object that was created.

01:01:26 You're making 16 and then you're making 32 and copying the 16 over.

01:01:30 Then you're making 64 and copying the 32 over.

01:01:33 It's, it's massive, right?
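
One way to watch that over-allocation in CPython is to check the list's reported size as it grows; the exact byte counts are an implementation detail and vary between versions:

    # Watch CPython over-allocate a list: sys.getsizeof reports the list
    # object's size in bytes, which jumps in steps as append outgrows the
    # space reserved by the previous resize.
    import sys

    lst = []
    last = sys.getsizeof(lst)
    for i in range(100):
        lst.append(i)
        size = sys.getsizeof(lst)
        if size != last:
            print(f"len={len(lst):3d} -> {size} bytes")
            last = size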

01:01:34 And then you're making a lot of decisions.

01:03:12 And so we have actually used this thing to speed up CPython.

01:03:14 Oh, that's amazing.

01:03:15 And like I said, this is a feature that we're able to do exactly because we're a tracing profiler.

01:03:19 And we do see every single allocation.

01:03:21 We built a new feature that was just released literally last week where we have a new type of flame graph we can generate.

01:03:28 That is a temporal flame graph that gives you sliders on it where you can adjust the range of time that you are interested in.

01:03:35 So instead of only being limited to looking at that high watermark point, or only being limited to looking at the point right before tracking stopped to see what's going on.

01:03:42 What was allocated and not deallocated.

01:03:44 You can tell the flame graph to focus in on this spot or on that spot to see what was happening at a particular point in time.

01:03:51 And that's again a pretty unique feature that requires tracing profiling in order to be able to do because you need to know allocations that existed at any given point in time from one moment to the next.

01:04:02 Yeah, that ability to actually assign an allocation to a place in time really unlocks a lot of cool things.

01:04:09 Right.

01:04:10 So it seems to me that this is really valuable for people with applications.

01:04:14 You got a web app or some CLI app.

01:04:16 That's great.

01:04:17 It also seems like it'd be really valuable for people creating packages that are really popular that other people use.

01:04:24 Right.

01:04:25 Right.

01:04:26 If I was Sebastian creating FastAPI, it might be worth running this a time or two on FastAPI.

01:04:30 I think they are actually using it on FastAPI.

01:04:31 Are they?

01:04:31 Okay.

01:04:32 No, it's Pydantic.

01:04:33 I think they are using it in Pydantic.

01:04:34 And I think our other big ones... I mean, there are a lot of users.

01:04:35 I'm trying to think of the big ones.

01:04:36 I think the other ones that are...

01:04:37 Yeah.

01:04:37 Oh, urllib3.

01:04:37 There was a feature where they came to us and they said they used Memray to track down where

01:04:50 memory was being spent in a new version of urllib3.

01:04:52 And they said that they would not have been able to release the new feature that they wanted

01:04:56 if they hadn't been able to get the memory under control and that we helped them do it very quickly.

01:05:00 That is awesome.

01:05:01 Yeah.

01:05:02 Like all the ORMs, I'm sure that they're doing a lot of like, read this cursor and put this

01:05:07 stuff in the list.

01:05:08 You know, there's probably a lot of low-hanging fruit actually.

01:05:11 And the reason this comes to mind for me is we can run it on our code and make it faster.

01:05:16 But if somebody who's got a popular library, like the ones you all mentioned, can find some

01:05:21 problem, like the multiplicative improvement across everybody's app, across all the different

01:05:27 programs and the libraries that use those, it's a huge, huge benefit, I would think.

01:05:32 We are also very lucky because we have a wonderful community, and we have been using GitHub Discussions.

01:05:40 A lot of people probably don't know that that is a thing, but we have, in the Memray repo,

01:05:45 a discussion for feedback.

01:05:48 And there are a lot of people, like library maintainers in the Python ecosystem,

01:05:53 that have used Memray successfully.

01:05:55 And they tell us about that.

01:05:56 And it's quite cool to see how many problems have been solved by Memray.

01:06:02 Some of them super challenging.

01:06:03 Yeah.

01:06:04 I've got to say, I didn't know that discussions existed until we enabled it on this repo.

01:06:07 So I'm learning things every day.

01:06:09 Here you are.

01:06:10 Absolutely.

01:06:11 Maybe just a quick question to wrap up the conversation here: Brnega Boer out there

01:06:16 asked, does Memray support Python 3.12 yet?

01:06:19 Not yet, is the short answer.

01:06:20 We're at the moment blocked on that by Cython 0.29 not supporting 3.12 yet.

01:06:27 We need to get that sorted before we can even build on 3.12 to start testing on 3.12.

01:06:32 Do you have to build on 3.12 to analyze 3.12 applications?

01:06:35 Yes.

01:06:36 Yes.

01:06:37 Okay.

01:06:38 Because this runs on the application itself.

01:06:40 So this is not something that exists outside.

01:06:42 This is something that runs within the application.

01:06:44 It's like inside.

01:06:45 Yeah.

01:06:46 Yeah.

01:06:46 So you need to run your app on 3.12 to run Memray on 3.12.

01:06:49 Yes.

01:06:50 That's the difference between this and PyStack, which we were speaking about last time.

01:06:53 PyStack can attach to a 3.12 process from a 3.11 process or something like that.

01:06:58 But Memray can't.

01:06:59 Okay.

01:07:00 Well, good to know.

01:07:01 All right, guys.

01:07:02 Thank you for coming back.

01:07:03 And for taking the extra time to tell people about this.

01:07:05 But mostly, you know, thanks to you all and thanks to Bloomberg for these two apps, Memray

01:07:10 and PyStack.

01:07:11 They're both the kind of thing that looks like it takes an insane amount of understanding

01:07:16 the internals of CPython and how code runs and how operating systems work.

01:07:21 And you've done it for all of us, so we could just run it and benefit, not have to worry about

01:07:25 it that much.

01:07:26 You have no idea.

01:07:27 I will add linkers because we didn't even have the time to go there.

01:07:31 But Memray uses quite a lot of dark linker magic to be able to activate itself in the middle of nowhere, even if you didn't prepare for that.

01:07:41 A lot of memory profilers require you to modify how you run your program.

01:07:45 Memray can magically activate itself, which allows it to attach to a running process.

01:07:50 But yeah, for another time maybe.

01:07:53 I wrote some of the craziest code of my life last week in support of Memray.

01:07:57 You have no idea how wild it can get.

01:07:59 It seems intense and even that's not enough.

01:08:02 Okay, awesome.

01:08:03 Again, thank you.

01:08:04 This is an awesome project.

01:08:06 People should certainly check it out and kind of want to encourage library package authors out there to say, you know, if you've got a popular package and you think it might benefit from this, just give it a quick run and see if there's some easy wins that would help everyone.

01:08:18 Absolutely.

01:08:19 Well, and I just want to add a thank you very much for inviting us again.

01:08:22 We are super thankful for being here and always very happy to talk with you.

01:08:26 Thanks, Pablo.

01:08:27 Seconded.

01:08:27 Yeah.

01:08:28 Thanks, Matt.

01:08:29 Bye, you guys.

01:08:30 Thanks everyone for listening.

01:08:31 Bye.

01:08:31 Thank you.


01:08:36 Thank you to our sponsors.

01:08:37 Be sure to check out what they're offering.

01:08:38 It really helps support the show.

01:08:40 The folks over at JetBrains encourage you to get work done with PyCharm.

01:08:45 PyCharm Professional understands complex projects across multiple languages and technologies, so you can stay productive while you're writing Python code and other code like HTML or SQL.

01:08:56 Download your free trial at talkpython.fm/donewithpycharm.

01:09:01 InfluxData encourages you to try InfluxDB.

01:09:05 InfluxDB is a database purpose-built for handling time series data at a massive scale for real-time analytics.

01:09:12 Try it for free at talkpython.fm/influxDB.

01:09:16 Want to level up your Python?

01:09:18 We have one of the largest catalogs of Python video courses over at Talk Python.

01:09:22 Our content ranges from true beginners to deeply advanced topics like memory and async.

01:09:27 And best of all, there's not a subscription in sight.

01:09:30 Check it out for yourself at training.talkpython.fm.

01:09:33 Be sure to subscribe to the show, open your favorite podcast app, and search for Python.

01:09:37 We should be right at the top.

01:09:39 You can also find the iTunes feed at /itunes, the Google Play feed at /play, and the direct RSS feed at /rss on talkpython.fm.

01:09:48 We're live streaming most of our recordings these days.

01:09:51 If you want to be part of the show and have your comments featured on the air, be sure to subscribe to our YouTube channel at talkpython.fm/youtube.

01:09:59 This is your host, Michael Kennedy.

01:10:01 Thanks so much for listening.

01:10:02 I really appreciate it.

01:10:03 Now get out there and write some Python code.

