#385: Higher level Python asyncio with AnyIO Transcript
00:00 Do you love Python's async and await, but feel that you could use more flexibility or
00:04 higher order constructs, like running a group of tasks and child tasks as a single operation,
00:09 or streaming data between tasks, combining tasks with multi-processing or threads,
00:15 or even async file support? You should check out AnyIO. On this episode, we have Alex Grönholm,
00:21 the creator of AnyIO, here to give us the whole story. This is Talk Python to Me,
00:25 episode 385, recorded September 29th, 2022.
00:29 Welcome to Talk Python to Me, a weekly podcast on Python. This is your host, Michael Kennedy.
00:47 Follow me on Twitter where I'm @mkennedy and keep up with the show and listen to past episodes
00:52 at talkpython.fm and follow the show on Twitter via @talkpython. We've started streaming most of our
00:59 episodes live on YouTube. Subscribe to our YouTube channel over at talkpython.fm/youtube to
01:04 get notified about upcoming shows and be part of that episode. This episode of Talk Python to Me is
01:10 brought to you by Compiler from Red Hat, an original podcast. Listen to an episode of their show as they
01:17 demystify the tech industry over at talkpython.fm/compiler. It's also brought to you by us over
01:24 at Talk Python training where we have over 240 hours of Python courses. Please visit talkpython.fm
01:30 and click on courses in the nav bar. Transcripts for this and all of our episodes are brought to you by
01:36 AssemblyAI. Do you need a great automatic speech-to-text API? Get human-level accuracy in just a few
01:41 lines of code. Visit talkpython.fm/assemblyai. Alex, welcome to Talk Python to Me.
01:47 Thank you.
01:48 Yeah, it's fantastic to have you here. You have so many cool open source projects out there. We're here to
01:53 talk about AnyIO, but actually several of them I've covered on Python Bytes, the other podcast that
02:00 I run. We've talked about sqlacodegen and typeguard. And I didn't associate those with you
02:06 specifically until I came back over to AnyIO. So yeah, a lot of cool projects you've got going on there.
02:10 Yeah, too many actually. I managed to hand over a couple of them to other people, namely
02:16 cbor2 and sphinx-autodoc-typehints, because I'm really stretched thin at the moment. I barely
02:23 have time for all of the projects that I'm maintaining.
02:26 I can imagine. That's, you know, how do you juggle all that? You know, it's, I'm sure you have a full
02:32 time job and you have all these different projects, right? How do you prioritize?
02:35 Yes. Basically, I get the equivalent of writer's block from time to time. So when that
02:41 happens, I either don't try to code at all, or I just switch to another project.
02:47 Yeah, true. If you're talking about type hints versus async programming, like if you're stuck on one,
02:52 you probably are not stuck on the other, right?
02:54 Yeah.
02:54 Yeah. Interesting. Well, we're going to have a lot of fun talking about all of them. I doubt there's going to be any writer's block or speaker's block here,
03:01 podcaster's block. It'll be good. We'll have a good time chatting about it and sharing with everyone.
03:05 Before we get to that, though, let's hear your story. How'd you get into programming in Python?
03:09 I got into programming at the age of eight. It was on an MSX compatible machine. I started with BASIC,
03:18 as so many others did. I did some simple text-based games at first, just playing around. At some point,
03:27 I got a Commodore 128 and I did some simple graphical demos with it. Then I got an Amiga 500.
03:38 Oh yeah, the Amigas were cool. They were special.
03:41 Mm-hmm. Yeah. I dabbled in AMOS BASIC, then other kinds of tools also. I don't really remember that much of it. Then at some point, I did something with Mac. That is, macOS Classic. There was this tool called HyperCard. It was a precursor to Flash, basically. So that's something I did some things with. Simple games and whatnot.
04:07 Yeah. Okay. Skipping forward a bit. I got into PC programming like C++, mostly C. Then I think it was in the latter half of 2005. No, actually it was much earlier in 1999. I started with Perl. Hated it. Then I think the next step was in 2005 when I got to learn PHP. Hated that too.
04:34 Kept searching.
04:35 Then finally, finally in 2007, I got to know Python. That was love at first sight, really. At that point, it was, I think, Python 2.5. And of course, I stuck with it.
04:49 I did some Java professionally for a while, but I never really, really got to love it. It had this corporate industrial feeling to it.
04:59 Sorry, I'm not dressed up enough to program my Java today. Let me go get my tie. I'll be right back.
05:04 Yeah. Python was really cool. When I started learning it, my first practical application I made in what, 30 minutes after starting to learn it. It's really staggeringly easy to learn.
05:18 It is.
05:19 That's one thing I love about it.
05:20 It really is. It's one of the few languages you can be really successful with, with a partial understanding of what's going on. Right. You don't even have to know what a class is or what modules are. You can just write a few functions in a file and you're good to go.
05:34 It's almost like it's English.
05:36 Yeah. Very cool. Raul out in the audience says, third time's the charm. The third language, you found the one you like there. Excellent. And how about now? What are you doing these days?
05:44 I've been working for several years on a project, a very complicated project where, okay, this is always the hard part, describing it. It's sort of a web application. I'm part of a bigger team. I'm the lead backend developer.
06:01 It collects IoT data and visualizes it and it provides all sorts of peripheral services to it. This is the first time I really had to spread my wings with databases.
06:14 Oh yeah. Okay. What technologies are you using there?
06:17 On the backend, we use TimescaleDB, which is a PostgreSQL extension.
06:22 Okay.
06:23 This is for storing that time series data. Then on the backend, we use my framework called Asphalt.
06:30 I don't know if you've encountered that one. I think it's really cool, but it's not in widespread use. It's not a web framework per se. It's more like a generic framework where you can compose applications from a mix of pre-made components and custom-made components.
06:48 What's it called?
06:49 Sorry?
06:49 What was it called?
06:50 Asphalt.
06:50 Asphalt. Like the road?
06:52 Yes.
06:52 Now I know how to spell it.
06:53 There we go.
06:54 Yeah.
06:55 No. I did a search and I just found a snake and a python snake.
06:59 Yeah, that's it. That's a good thing.
07:01 That python snake on some asphalt road. Yeah. You got to be careful here. Okay. Is it a little bit like Flask or what makes it special?
07:09 Well, Flask is a web framework. This is a generic framework. You can build any kind of applications with it. It doesn't have to be involved with web.
07:17 I see, any network thing, not necessarily HTTP, so it could be UDP or it could be just...
07:23 It doesn't even have to do any networking at all. You can build command line tools with it. You can just have this mix of components and the YAML configuration to give settings to them all. I think this really would require a whole different session.
07:39 It does sound like it would be a whole different session, but this is news to me and very interesting.
07:44 Yeah, I haven't really advertised it much. I'm working on version 5 at the moment, which does incorporate AnyIO support and brings the tech up to date with current standards.
07:57 Okay. Yeah, this looks very asynchronous based. It's an async I/O based micro framework for network oriented applications, it says.
08:05 Yeah.
08:06 And built upon uvloop, which is how all the good Python async things seem to be built these days.
08:12 Mm-hmm.
08:12 So it has a lot of sort of modern Python features. It's got asyncio, it's got uvloop, it's got type hints, those sorts of things. When you started in Python in 2007, none of those existed.
08:25 Yeah.
08:26 How do you see the recent changes to Python in the last five years or so?
08:29 I would say that Python has been developing at an incredible speed. I really love it. So much useful stuff coming out with every release.
08:36 I agree.
08:37 Yeah, basically from 3.5 to 3.8 or something. There were just so many amazing features that came out then. And now we're seeing these libraries built upon it, right?
08:47 Right.
08:48 Right.
08:48 All right. Well, let's transition over to our main topic, which is what I reached out to you for, not realizing the other two interesting projects that I already gave a shout-out to are also yours. We'll get to those if we've got time.
09:01 So with Python 3.4, we had asyncio introduced, the actual framework that supported that. And then when it really came into its own was Python 3.5, when the async and await keywords were added to the language.
09:16 And Python came out of the box with some support for great async programming. But then there are these other libraries that developed on top of that to make certain use cases easier or add new capabilities. And AnyIO falls into that realm, right?
09:33 Yeah. So before we talk about AnyIO, we should talk about Trio.
09:37 Yep.
09:38 Have you heard about Trio?
09:39 Yes, I have heard about Trio. I even had Nathaniel on the show, but it's been a little while.
09:45 That was back in 2018, I talked to Nathaniel. So Nathaniel Smith. So there's probably quite a few changes since then, actually.
09:53 Yeah, let's talk about Trio.
09:54 Yeah, actually, the last version of Trio was released just yesterday. The thing about AnyIO is that it's an effort to basically bring the Trio features to asyncio land.
10:07 So Trio is fundamentally incompatible with asyncio. There is a compatibility layer called trio-asyncio,
10:15 but it's far from perfect. So what AnyIO does, really, is allow developers to add these features from Trio to their asyncio applications and libraries, one by one, without making a commitment.
10:32 For example, at my work, I use AnyIO for just a handful of tasks. I think we should talk about the features.
10:40 Mm-hmm.
10:48 The other thing about AnyIO is that it also supports Trio as a backend, not just
10:56 asyncio. So it's kind of a meta-async framework.
11:00 I see. So it builds on top of these different frameworks, right?
11:04 Yeah.
11:04 Yeah. So it builds on top of these frameworks and their underlying primitives.
11:08 Yeah. So for people who are not familiar, Trio adds things like this concept of grouped tasks.
11:16 So normally in asyncio, you start one task, you start another. They're kind of unrelated,
11:22 even if they're conceptually solving parts of the same problem. And then with Trio, you can do things
11:27 like create what's called a nursery and then you can have them all run or you could potentially cancel
11:32 unstarted tasks. And there's other coordination type of operations as well, right? That's the kind of stuff
11:39 that Trio adds.
11:40 Yeah. So the point of AnyIO is, as I said, to bring these Trio features to asyncio.
11:47 Right. And because when you do Trio, it's an end-to-end async stack.
11:51 Yeah. Which means things have to be built for Trio, right? It's like if I have, let's say,
11:56 HTTPX. I don't know how easy it is to integrate those kinds of things that are expecting an async I/O
12:03 event loop over into Trio. Well, I already have my own event loop running. It's hard to coordinate the
12:09 tasks, right?
12:10 If you're talking about HTTPX, it had Trio and asyncio backends. Now it defaults to the
12:16 AnyIO backend. So it runs by default on both.
12:20 Okay.
12:20 As for AnyIO features, it provides Trio-like task groups on top of asyncio.
12:27 Uh-huh.
12:27 Here, I should mention that Python 3.11 has its own concept of a task group, but the mechanics
12:36 are quite a bit different. That requires a bit of explaining.
12:40 Yeah. How does it work here?
12:41 The thing is that the asyncio task groups, so not this one, but the standard library task groups,
12:49 which are in Python 3.11, they basically just start normal asyncio tasks, and you cancel
12:57 them using the task objects that come out of the create_task method.
13:03 So what sets AnyIO task groups apart from asyncio task groups is the way cancellation is done.
13:10 And since AnyIO was designed based on Trio, when you call start_soon, it doesn't return any task
13:20 objects that you can cancel. Instead, cancellation is done via so-called cancel scopes. So each task group
13:26 has its own cancel scope. If you cancel that, you basically cancel all the underlying tasks, but it goes even deeper.
13:36 Because this is a bit complicated, so bear with me. Cancellation is not done on a per-task basis,
13:43 but on a per-cancel scope basis. You can have cancel scopes nested so that if you start a task and it starts
13:51 a cancel scope, you can just cancel that scope and it cancels everything up to that point.
13:57 Okay. So like if I call a task, if I create a task and then somewhere inside for it to do its job,
14:04 it also creates a task. Those can be grouped into the same basic scope, right? So there's not these
14:10 like children tasks running around. Yeah. You don't even have to start another task. If you cancel a
14:16 cancel scope, then anything you basically wait on gets automatically canceled. Bam. This is called
14:23 level cancellation, in contrast to the edge cancellation mechanism employed by asyncio.
14:30 In edge cancellation, you just cancel the task once and a CancelledError gets raised in the task. So you can
14:42 ignore it, which is, by the way, a bad thing to do, but then the task won't be canceled again. Usually there
14:48 are exceptions to this, which are a topic of debate in the community, but the cancel scope basically,
14:56 they define boundaries for cancellation. So if you, say, cancel a task group's cancel scope, only the tasks
15:04 started from that task group are canceled. So when those tasks return back, so all the tasks are done,
15:11 then the code just goes forward from this async context manager. Basically when all the tasks are done,
15:18 then however they end, unless some raised exceptions, that's a different situation. If they were either
15:26 canceled or successful, then the code just goes forward to the all tasks finished part.
15:32 This is really neat. The other thing that's standing out here as I think about these task groups. So for
15:38 those of you listening, you can just create an async with block to create the task group. And then
15:42 in there you can just say task group dot start soon and give it a bunch of async methods to start
15:47 running. One of the things that's cool about this is it automatically waits for them all to be finished
15:52 at the end of that context manager, the with block, right?
15:55 The standard library task groups work the same way actually.
15:58 Okay. And those are in 3.11?
15:59 3.11? Yes. Yep. We'll see how the mechanism will work. There's a new mechanism for
16:06 uncancellation of tasks. It's not really battle-tested. It's something that was added fairly late
16:14 in the game to 3.11. So it's not yet clear if there are edge cases where it fails totally. This is also
16:23 a debated topic in the community. Sure. The other thing here that I wanted to ask you about is you don't
16:28 say, like, create_task, and you don't say start, you say start_soon. Why? What's this, like,
16:35 uncertainty about? Tell us about that. Okay. So start_soon actually does the same thing as create_task,
16:42 because creating a task doesn't start running it right away. It only starts running it,
16:48 perhaps, on the next iteration of the event loop. Right. Or maybe not. Maybe the event loop's all backed up.
16:53 Maybe, you know, it takes a while, right? Yeah. So it's basically the same as
16:59 loop.call_soon. So you schedule a callback. That's all that tasks are. They are callbacks with bells and
17:05 whistles. start_soon is modeled on Trio, but I should mention that there's also a method called
17:12 start, which works a bit differently. This showcases the start method. So this is very, very useful.
17:20 This feature is not present in the standard library task groups. So basically, it's very useful for starting
17:27 a background service where you need to know that the task has actually started before you move on.
17:33 So in the example that you have on the docs here is you create a task group. And the first thing is to
17:37 start a service that's listening on a port. The next thing is to talk to that service on the port.
17:42 Right. And if you just say, kick them both off, who knows if that thing is actually
17:46 going to be ready by the time you try to talk to it. Exactly. This is something I use in practice all
17:51 the time. And this is different than what you would get just with the asyncio create task or whatever, right?
17:56 Yeah. And even the new task group feature doesn't have this.
18:02 This portion of Talk Python is sponsored by the Compiler podcast from Red Hat. Just like you,
18:08 I'm a big fan of podcasts, and I'm happy to share a new one from a highly respected open source company,
18:13 Compiler, an original podcast from Red Hat. Do you want to stay on top of tech without dedicating tons
18:19 of time to it? Compiler presents perspectives, topics and insights from the tech industry,
18:24 free from jargon and judgment. They want to discover where technology is headed beyond the headlines and
18:28 create a place for new IT professionals to learn, grow and thrive. Compiler helps people break through
18:34 the barriers and challenges turning code into community at all levels of the enterprise.
18:38 One recent and interesting episode is The Great Stack Debate. I love, love, love talking to
18:44 people about how they architect their code, the tradeoffs and conventions they chose, and the costs,
18:49 challenges, and smiles that result. This Great Stack Debate episode is like that. Check it out and see if
18:55 software is more like an onion or more like lasagna, or maybe even more complicated than that. It's the first episode
19:02 in Compiler's series on software stacks. Learn more about Compiler at talkpython.fm/compiler. The link is in your
19:09 podcast player show notes. And yes, you could just go search for Compiler and subscribe to it, but follow that link and
19:16 click on your player's icon to add it. That way they know you came from us. Our thanks to the Compiler podcast for keeping
19:23 this podcast going strong.
19:27 So I guess you could in the standard library, you could start it and then you would have to just
19:32 wait for it to finish and then you would carry on. But it's like two steps, right?
19:37 The workaround would be to create a future, then pass that to the task, and then wait on that future. So it's a
19:45 bit cumbersome. And then you have to remember to use a try/except in case that task happens to fail. Otherwise, you end up waiting on the future forever.
19:55 Mm hmm. Another, I really like this idea. Now, the other thing that I don't see in your examples here,
20:02 where I'm creating a task group and starting these tasks and waiting for them to finish is
20:06 management of the event loop. If I was doing that with my code, I'd probably have to create a loop
20:12 and then, you know, call some functions on it. And here you just use AnyIO. Where's the event loop being managed?
20:20 I'm not sure. What do you mean? Well, like a lot of times when you're doing async stuff,
20:26 you have to go and actually create an async event loop and then use the loop directly.
20:31 And, you know, you're working with a loop for various things.
20:34 And, well, there is that run command at the bottom. Yeah. Yeah. Okay. So basically,
20:40 you just say anyio.run or asyncio.run. Okay. And even though I'm using the
20:45 AnyIO task groups, I can still mix and match this with, like, more standard asyncio event
20:52 loops. That's the premise. So you can just, like, ease into it. Nice.
20:57 So for example, if I have a FastAPI web app, and you know, FastAPI is in charge of
21:05 managing the event loop. And if I've got, like, an async API endpoint, I could still go and use an
21:10 AnyIO task group and get all the benefits in there. Yep. Okay. That's beautiful.
21:15 I should mention that FastAPI also depends on AnyIO. Oh really? Okay. Yes.
21:19 How interesting. Yeah. I've seen Sebastián Ramírez talking about some little functions that he
21:24 wrote, and he's like, I would love to see these just get back into AnyIO. I didn't realize that
21:28 FastAPI itself was using them. Yeah. Okay. So very useful. We've got these task groups. We've got the
21:36 concept of cancellation. Another one that's not exactly cancellation, but is sort of adjacent to cancellation, is
21:43 timeouts. Do you want to talk about how you do timeouts? Yeah. I meant to talk about that. As I recall,
21:50 in Python 3.11, there is a similar construct. I think it was asyncio.timeout or
21:57 something similar. I don't remember really, but what this move_on_after does is it creates a cancel scope
22:05 with a timeout. Basically, this is a very, very practical use of cancel scopes. What it does is it
22:12 starts a timer, and after one second, it cancels this scope. So anything under that gets canceled.
22:20 So in this case, just the sleep command gets canceled, and then the task just keeps going.
22:27 So the way you described it before, it sounds like if there was a bunch of tasks running,
22:30 if any of them try to await something, they're also going to get canceled. Is that right?
22:35 In this case, you mean? Yeah. No, only the part that is within the with block.
22:40 Right. Well, that's what I mean. But if you had done multiple tasks within like the move on after,
22:44 right? Like I say, I try to talk to the database to insert a record and I try to call an API and
22:49 the database times out. Within a single move on after block, you can only have one thing going on,
22:55 which is the await here. So even if you start multiple tasks from that task group, they are not
23:01 enclosed within that cancel scope. I realize that cancel scopes are a complex and difficult concept.
23:07 And I don't think I can adequately explain them, but I hope that this will at least
23:13 shed some light on that. Yeah. It's a really cool idea, because your code could,
23:20 it's async. So it's not as bad as if you were to like lock up waiting for an API call or something
23:27 that's going to time out on the network eventually after a really long time. But it's still a
23:32 cancellation. Still, you don't want it to clog up your code, right? You want to just say,
23:35 sorry, this isn't working. Yeah. One place where I often use a construct like this is finalization.
23:42 So when you are closing up things, you can use this move_on_after to set a timeout for closing
23:49 resources. Yeah, that makes sense. Because you want to be a good citizen in terms of your app and
23:54 release the resources as soon as possible, like a database connection or a file handle. But if it's
23:59 not working, you know, like, oh, I made a try at it. After a second, we're done.
24:04 Exactly. And also, I should mention that this is where AnyIO's biggest caveat lies. It is in
24:11 finalization. I often run into problems with the cancel scopes, because the thing with cancel scopes is
24:20 that when you run code within a cancel scope and that scope gets canceled, then anything awaiting on
24:28 anything within that cancel scope is always canceled time after time. So you cannot wait on anything as
24:35 long as you are within the cancel scope. And asyncio code is not expecting that. So it might have a finally
24:43 clause where it does await, say, connection.close. But that also gets canceled if you are within an
24:49 AnyIO cancel scope. And it's one of the biggest practical issues with AnyIO right now. And we are
24:56 trying to figure out the solution for that. Just something to keep in mind when you are writing
25:03 AnyIO stuff. Yeah, that is tricky, right? But help is on the way.
25:06 You say, well, I'm going to try to call this web service and I'm going to await it. And if it fails,
25:11 you know, probably internally, what you want to do is close the network connection as well.
25:15 Right. But if you try to await closing the network connection... Yeah. So what happens there? Does it
25:21 eventually just get cleaned up by the garbage collector or dereferenced?
25:25 Well, garbage collector doesn't work that well with async stuff because the destructors could be called
25:31 in any thread. So you can't rely on that. You can't do any async callbacks in the destructor. So it's a
25:39 better... it's a good idea not to try any of that and instead just raise a ResourceWarning. If
25:45 you're writing AnyIO code, you would have either this shielded cancel scope or, better yet,
25:52 a move_on_after with shield=True. What does that do? It at least temporarily protects
25:57 the task from cancellation. So let's say you have move_on_after, say, five, and with shield=True.
26:06 It means that even if the outer cancel scope is canceled, your actual task will keep running until
26:12 it exits this cancel scope or the timeout expires. So you have a five-second window to close any
26:19 resources that need closing. Got it. And so you just create a, you could do that say within your exception
26:24 handler or something, right? Yeah. Okay. Or actually, I think finally, a finally block might be the best place to do that. Sure.
26:32 But depending on the actual use case, of course. Yeah, of course. Okay. Very interesting. So all
26:37 this stuff about task groups and scheduling tasks and canceling them, that's very Trio-esque, but it's
26:43 also just a small part of AnyIO. There's a bunch of other features and capabilities here that are
26:51 probably worth going into. Some cool stuff about taking async I/O code and converting it to threads or
26:57 converting threads to async I/O and similarly for sub processes. But let's maybe just
27:02 talk real quick about the synchronization primitives. These are things like events, semaphores. Maybe
27:08 not everyone knows what events and semaphores are in this context. Give us a quick rundown of that.
27:13 Yeah. Well, these are pretty much the same as they are in asyncio. Many of them just use the asyncio
27:19 counterparts straight up. So events are a mechanism for telling another task that something significant
27:28 happened, and they need to react to it. It's often used to
27:32 coordinate the tasks. So one thing doesn't happen before something else has happened in another task.
27:38 Yeah, there might be two tasks running and one says, I'm going to wait until this file appears.
27:43 And the other one's going to eventually create the file, right? But you don't know the order.
27:47 Yeah. So one option is to just do polling. Like, well, I'm going to asyncio.sleep for
27:52 a little while and then see if the file's there. Try to access it, you know, and do that over and over. A much more
27:58 responsive way and deterministic way would be to say, I'm going to wait on an event to be set. And the thing
28:06 that creates the file will create the file and then set the event, which will kind of release that other task to carry on. Right?
28:11 Right. Moving on, semaphores are a mechanism for saying that you have a limited resource,
28:21 let's say a connection pool or something. And you want to specify that whenever some part of the code needs
28:28 access to this resource, it needs to acquire the semaphore. So you set a limit, and then each time
28:36 a task enters that semaphore, it decrements the counter. And when you hit the limit, then it
28:43 starts blocking until something else releases it. Right. So people might be familiar with thread locks
28:49 or async I/O's equivalent, where you say only one thing can access this at a time. So you don't end up
28:55 with deadlocks or race conditions and so on. But semaphores are kind of like that, but they allow
29:01 multiple things to happen. Say maybe your database only allows 10 connections, or you don't want to have
29:06 more than 10 connections. So you could have a semaphore that says it has a limit of 10 and you have to
29:12 acquire it to talk to the database. That doesn't mean it stops multiple things from happening at once. It
29:17 just doesn't let it become a thousand at once. Right? Exactly. I really like this idea. And I was
29:22 showing some people some web scraping work with asyncio, where it's like, oh, let's go create a whole bunch of
29:29 HTTPX requests or whatever type of requests, you know, something asynchronous talking to some
29:36 servers to download some code. And if it's a limited set, you know, no big deal. But if you have
29:41 thousands of URLs to go hit, well, then how do you manage not killing your network or overloading that?
29:47 And the semaphore actually would be perfect. So for people listening, the way that you do it is you
29:52 create a task group and then you just pass the semaphore to start_soon. That's really clean. And then
29:57 AnyIO takes care of just making sure it gets access and then runs and then gives it back. How's that work?
30:02 I'm not sure what sort of answer you are expecting, but as I recall, the current implementation is
30:09 actually using the underlying async library's events. Okay. So there are actually methods to
30:16 acquire and release the semaphores. It just implements an async context manager that acquires it at the
30:23 beginning and releases it at the end. There's an event involved for notifying any awaiting task that
30:31 it has a semaphore slot available. It's super clean. And the fact that you don't have to write that code,
30:37 you just say the semaphore is associated with this task through your task group. I really like it.
30:42 Actually, semaphores are not associated with a particular task. That's what capacity limiters are for.
30:47 Okay. So you can release a semaphore from another task, while capacity limiters are bound to the specific
30:55 task that you acquired them in. Okay. Yeah. And the semaphore example, was it being passed? Oh, yeah.
31:00 It's just being passed as an argument, isn't it, to the task. And it's up to the task to use it. I see. Okay.
31:06 So this other concept, the capacity limiter. Yeah. It sort of does that. Yeah. So this is from Trio. It's very similar to the semaphore. You can actually set the borrower, but in most circumstances, you want the current task to be the borrower. And limiters are actually used in other parts of AnyIO, as they are in Trio. For example, to limit the number of threads that you allocate,
31:34 or limit the number of sub processes that you spawn. Sure. You don't want too many sub processes, right? You've got a thousand jobs, and if you start each job in a sub process, you're going to have a bad time. Right. So as the documentation says, they are quite like semaphores, but they have additional safeguards. Such as? Well, they check that the borrower is the same. Okay. Yeah, that makes sense. So by default, they check that the same task is used for both acquiring and releasing.
32:04 Yeah, nice. Okay. Well, this I didn't know about capacitor, capacity limiters. That's fantastic. I love the idea. Okay. Let's jump over to you talked about the threads, and the sub processes. Let's talk about this thread capability that you have here. This is very nice. Any IO dot two thread. What is this?
32:23 Yeah, so this is also modeled based on trio. It's basically any IO's way of doing worker threads. So in asyncio, you have these thread pool executors that do the same as run sync. Async IO's API is somewhat problematic because you have basically two methods.
32:45 So you have, I forget the older one, the newer one is called to_thread. The first one was run_in_executor, or whatever it was. They both have their own issues. to_thread doesn't allow you to specify a thread pool, so it always uses the default thread pool. And there is no way to add that to the API because of how it was designed.
33:09 Then the older function does have this parameter at the front. But the problem is that it doesn't propagate context variables, unlike the newer to_thread function. So context variables, if you don't know about them, they are a fairly recent addition to Python.
33:26 Yeah, what are those?
33:27 They are basically thread locals. Are you familiar with thread locals?
33:31 Make a comment for people who don't know, like thread local variables, which also exist in Python, allow you to say, I'm going to have a...
33:39 variable, maybe it's even a global variable, and it's thread local, which means every thread that sees it gets its own copy of the variable and where it points to and what value it is.
33:51 And so that way you can initialize something at the start of a thread. If you have multiple threads, they can all kind of have their own copy so they don't have to share it.
33:59 But that falls down because asyncio event loops, when you await those, all that stuff is running on one thread, just the one that's running the loop. That's what you're talking about is that equivalent, but for asyncio, right?
34:12 Yeah, so context variables are a much more advanced concept. They are basically, yeah, as you said, thread locals for async tasks.
34:21 That sounds very tricky. I've thought about that. I have no idea how to implement that. So that's pretty cool. I guess Python would know.
34:26 Yeah, the thing is, when it starts running a task, when it switches between tasks, it runs that callback within that context that the task is tied to.
34:39 Right. So this has been somewhat of a problem because that older method in asyncio for running worker threads doesn't propagate these variables, but the newer function to_thread does.
35:02 But then with the older one, you can specify which thread pool you want to use.
35:07 Right. Okay, so it's similar to the built-in one, but it gives you more capability to determine where you might run it.
35:13 Yeah. When you call run sync, it allows you to specify a limiter, and it uses the default limiter, which has a capacity of 40 threads.
35:23 40? That seems like a pretty good default. Much more than that, and you end up with memory and context switching issues.
35:30 Yeah. It was arbitrarily set to 40 by Nathaniel in Trio, so I just followed suit.
35:37 Sure. Yeah, so basically, if you've got some function that is not async, but you want to be able to await it so that basically run it on a background thread, here you just say any IO dot to thread, and then dot run sync, and you give it the function to call.
35:54 And now you can await it, and it runs on a background thread in this thread pool, which is really nice.
35:59 Yeah, that's how it works.
36:00 So one thing that I'm noticing a lot in the API here for any IO is often, now for the synchronous functions, it's completely obvious why you wouldn't do it, but even in the sort of creating tasks ones, what I'm noticing is that even the async functions, you pass the function name and then the arguments to like start soon, as opposed to saying, call the function, passing the arguments, and getting a coroutine back.
36:29 Why does it work that way?
36:38 It seems like it would make it a little less easy to use, say, type hints and autocomplete and various niceties of calling functions in editors.
36:38 Yeah, there was a good reason for that.
36:40 I can't remember that offhand, but at the very least, it's consistent with the synchronous counterparts, like run_sync.
36:48 Yeah, cool.
36:49 All right, so we have this stuff about threads, and you have the to thread, also from thread, which is nice.
36:55 What's from_thread for?
36:56 Yeah, so when you are in a worker thread, and you occasionally need to call something in the event loop thread, then you need to use this from thread to run stuff on event loop thread.
37:08 Right, because the worker method is not async.
37:12 Otherwise, you would just await it, right?
37:13 It's a regular function.
37:15 But if in that regular function, you want to be able to await a thing, you can kind of reverse back.
37:20 I see.
37:20 That's an interesting bi-directional aspect.
37:22 All right, subprocesses.
37:24 We all know about the GIL, how Python doesn't necessarily love doing computational work across processes.
37:31 Tell us about the subprocess equivalent.
37:34 This is a relatively easy way to both run tasks in a subprocess and then opening arbitrary executables for running asynchronously.
37:46 AsyncIO has similar facilities for running async processes, but these async subprocess facilities are not really up to par with, say, multiprocessing, which has some additional nice things like async shared queues and other synchronization primitives.
38:05 But they are still pretty useful as they are.
38:09 Yeah.
38:09 So basically, you can just say anyio.to_process.run_sync, and you give it a function, and now you can await that function running in a subprocess via multiprocessing.
38:20 Yeah.
38:20 There are the usual caveats like because they don't share memory, then you have to serialize the arguments and that could be a problem in some cases.
38:30 Sure.
38:30 So basically, it pickles the arguments and the return values.
38:35 It sends them over.
38:35 Yeah.
38:36 It could even be that the arguments are picklable, but the return value is not, which obviously causes some confusion.
38:44 Yeah.
38:44 Yeah.
38:44 Absolutely.
38:45 People maybe hear that pickling is bad and that you can have all sorts of security challenges, like code injection and whatnot, from pickling.
38:54 So this, I would think, is not really subject to that because it's you calling the function directly.
39:00 Right.
39:01 It's like completely just inside.
39:03 There's no way to inject anything bad.
39:05 Yeah, exactly.
39:06 It's all the multiprocessing that's handling it anyway.
39:09 So it should be okay.
39:09 Yeah.
39:10 Yeah.
39:10 Cool.
39:10 Another one that is pretty exciting has to do with file support.
39:15 So we have open in Python.
39:18 And you would say with open something as F, right?
39:23 But that's, there's no async equivalent, right?
39:25 Yeah.
39:25 These file facilities are really just convenience wrappers around these file objects.
39:33 Yeah.
39:33 So if somebody wants to know, there is no actual async file IO happening because that's not really a thing on Linux.
39:40 And even on Windows, it has terrible problems.
39:44 Okay.
39:44 So what's happening with this any IO dot open file?
39:47 It opens a file in a thread, and then it starts an async context manager that on exit closes the file.
39:55 And it also closes the thread, I guess, right?
39:57 Well, it uses throwaway threads.
39:59 Okay.
39:59 Basically from the thread pool.
40:01 All right.
40:01 I see.
40:01 So it'll use a thread to open the file and get the file handle and then throw away the thread.
40:07 And then does it do something similar?
40:09 Well, return the thread to the pool.
40:10 Yeah.
40:10 Yeah.
40:11 Return to the pool, which is much better than creating it and throwing it away completely.
40:17 Also, that read call is done in a thread.
40:19 Okay.
40:20 So it just is sort of a fancy layer over top of thread pool.
40:23 But it's really, really nice.
40:25 You write basically exactly the same code that you would write with regular open and a context manager, but the async version.
40:33 Although I got to say that the opening part of creating a context manager here, you say async with, which people are probably used to.
40:40 And then async with await open file, which is, it's a bit of a mouthful.
40:45 The reason for that is because you can just do await open file and then go about your business and then manually close the file.
40:53 Got it.
40:53 I'm not sure if there's a more convenient way to do this.
40:57 I might be open to adding that to any IO, but for the moment, this is how you do it.
41:01 Yeah, you could probably wrap it in some kind of class or something that's synchronous, but then has an __aenter__.
41:06 But yeah, I'm not sure if it's totally worth it.
41:09 But yeah, that's quite the statement there.
41:11 And the other area that's interesting about this is if you want to loop over it line for line, instead of doing a regular for loop, you can do an async for line in file and then read it asynchronously line by line, right?
41:25 Yeah.
41:25 So the __anext__ method just gets the next line in a worker thread.
41:30 Yeah, this is fantastic.
41:31 So people are working with files.
41:33 They definitely can check this out.
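The open_file pattern from this discussion, including the "async with await" mouthful and async line iteration, sketched as an editorial example (the temp-file setup is just scaffolding):

```python
import os
import tempfile
import anyio

async def read_lines(path: str) -> list:
    lines = []
    # open_file must be awaited; the object it returns is itself
    # an async context manager, hence "async with await"
    async with await anyio.open_file(path) as f:
        async for line in f:  # each read happens in a worker thread
            lines.append(line.rstrip("\n"))
    return lines

# write a small file synchronously, then read it back asynchronously
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("first\nsecond\n")
lines = anyio.run(read_lines, path)
os.unlink(path)
print(lines)  # ['first', 'second']
```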
41:34 And also related to that is you have an asynchronous pathlib path.
41:40 Yeah, that's a fairly recent addition.
41:42 Yeah, it looks great.
41:43 So you have like an async iterator.
41:46 You have an async, is it a file, async read text, all built into the path, which, you know, is like what's built into a regular path, but not asynchronous.
41:53 Yeah, quite nice.
41:55 We're getting kind of short on time here.
41:57 What else would you like to highlight here?
41:58 That's really important.
41:59 I would like to highlight the streaming framework here because it's one of the unique things in any IO.
42:05 Okay.
42:05 Yeah, let's talk about it.
42:06 Trio has its channels, and then it has sockets and whatnot.
42:11 But any IO has something that Trio doesn't have.
42:14 And really, asyncio has some kind of streaming abstraction, but not quite on this level.
42:21 In any IO, we have a streaming abstract base class hierarchy.
42:25 We have object streams and byte streams.
42:29 The difference between these is that object streams can carry anything: integers, strings, any arbitrary objects.
42:39 And byte streams have only bytes and they are modeled according to TCP.
42:44 With byte streams, you can send a number of bytes or receive bytes, but they might be chunked differently.
42:51 These are abstract streams.
42:53 So you can, say, build a library that wraps another stream.
42:57 Say you build an SSH client that creates a tunnel and it exposes that as a stream.
43:04 So long as you are able to consume a stream, you don't have to care how the stream works internally or what other streams it wraps.
43:14 A good example of a stream wrapper is the TLS support.
43:19 That's right there.
43:20 So TLS support in AnyIO can wrap any existing stream that gives you bytes, whether it's an object stream or a byte stream.
43:32 So it does the handshake using the standard library.
43:35 Actually, the standard library contains this sans-I/O protocol implementation for TLS.
43:40 Okay.
43:41 The sans-I/O protocol, if you are not aware of it, is basically a state machine without any actual I/O calls.
43:48 It's a very neat protocol implementation that lets you add whatever kind of IO layer you want on top of that.
43:54 So this is what any IO uses to implement TLS.
43:57 So you have both a listener and a connect and a TLS wrapper.
44:02 So any kind of TLS, you can even do TLS on top of TLS if you like.
44:07 I'm not sure that's useful, but you can do it.
44:10 It's super encrypted.
44:12 Yeah, this is very flexible.
44:14 We have all sorts of streams that even these unreliable streams, which are modeled based on UDP.
44:21 Oh, really?
44:22 Okay.
44:22 So UDP is implemented by using these unreliable streams.
44:28 I don't think there are any more unreliable streams implementations than UDP.
44:32 But yeah, I'm really proud of this streaming class hierarchy.
44:37 And it's too bad that there are no cool projects to show off this system.
44:42 But maybe in the future we will have.
44:44 Another thing that I would like to highlight is a system of type attributes.
44:50 That has been really useful in practice.
44:52 Yeah, before we move on to the type attributes, let me just ask you real quickly.
44:55 Can I use these streams and these bidirectional streams where you create like a send stream and receive stream?
45:02 Can we use those across like multiprocessing or to coordinate threads?
45:08 I'm sure we could use it for threads, right?
45:09 Sadly, no.
45:11 Sadly, no.
45:12 This memory object stream is, as they call it in Trio, a channel.
45:16 This is one of the most useful pieces of AnyIO, really.
45:20 I use this every day at work.
45:22 So these are basically like cues on steroids.
45:25 Yeah, exactly.
45:26 That's what I was thinking.
45:27 Unlike asyncio queues, you can actually close these.
45:30 You can clone them.
45:32 So you can have multiple tasks waiting to receive something like workers.
45:36 And we have multiple senders.
45:39 I see.
45:39 Like a producer consumer where some things are put in, but there's like five workers who might grab a job and work on it.
45:45 You can have five consumers and five producers all talking to each other.
45:48 Yeah.
45:49 And then you can just iterate over them.
45:51 Also not possible with queues.
45:53 You can close the queues, sorry, streams, so that when you iterate on them, if all the other ends are closed, then the iteration just ends.
46:03 Oh, wow.
46:03 Okay.
46:04 So if the send stream goes away, then the receive stream is done.
46:08 It's at the end.
46:09 Yeah.
46:09 That's fantastic, actually.
46:11 I really like that.
46:12 Yeah.
46:12 I think this is also coming to...
46:14 Go ahead.
46:14 Sorry.
46:14 ...asyncIO.
46:15 Okay.
46:15 I think this is also coming to the standard library.
46:18 Nice.
46:19 All right.
46:19 At some point.
46:20 Last thing we probably have time for here that you wanted to highlight is typed attributes.
46:25 What's the story of typed attributes?
46:26 If you know about the asyncio extras,
46:30 I think they have them in both protocols and streams.
46:33 For example, with TLS, you want to get the certificate from the stream.
46:39 Like if you have negotiated a TLS stream, you want to get the client certificate, right?
46:44 So this is a type of way to do it.
46:47 You can add any arbitrary extra attributes to a stream.
46:50 Just declare it in the extra attributes method.
46:54 But the niftiest part here is that it can work across wrapped streams.
46:58 A very good example is that say you have a stream that is based on HTTP.
47:03 You have an HTTP server and you have access to a stream, let's say, WebSockets.
47:08 Then you want to get the client IP address.
47:12 Well, usually you may have a front-end web server like Nginx at the front.
47:18 Normally, what you would get when you ask for an IP address, you actually get the IP address of the server.
47:24 What you need to do is look at the headers.
47:27 This is something you can do transparently with these type attributes.
47:30 So basically, wrap a stream that understands HTTP, you can have that handle the request for the IP address, the remote IP address,
47:40 and have it look at the headers and look for a forwarded header and return that instead.
47:45 Nice.
47:45 Let me see if I got this right here.
47:47 So people are probably familiar with Pydantic.
47:49 And Pydantic allows you to create a class and it says what types are in the class and the names and so on.
47:56 And those serialize out a JSON message.
48:00 It sounds to me like what this is built for is when I'm talking binary messages over a stream like a TCP stream, I can create a similar class that says, well, I expect a message that is a string and then a float.
48:14 Read that out of the stream.
48:16 Is that right?
48:17 Yeah.
48:17 A good example here is if you go back to the streams part.
48:21 Text streams.
48:22 Okay.
48:23 So this is something that translates between bytes and strings on the fly.
48:28 So this is a perfect, trivial example of a stream wrapper.
48:32 Okay.
48:33 Yeah.
48:33 So you have a text receive stream that will do the byte decoding.
48:37 Yeah.
48:37 Yeah.
48:37 Very nice.
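The text stream wrapper from this exchange, sketched as an editorial example (a memory object stream carrying bytes stands in for a real network byte stream):

```python
import anyio
from anyio.streams.text import TextReceiveStream

async def main() -> str:
    # any stream that yields bytes can be wrapped; here it's a
    # memory object stream rather than a real socket
    send, receive = anyio.create_memory_object_stream(max_buffer_size=10)
    text_stream = TextReceiveStream(receive)  # decodes UTF-8 on the fly

    async with send:
        await send.send("héllo ".encode())
        await send.send("wörld".encode())

    # the consumer only sees decoded strings, not raw bytes
    chunks = [chunk async for chunk in text_stream]
    return "".join(chunks)

text = anyio.run(main)
print(text)  # héllo wörld
```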
48:38 And you don't have to care like what's downstream of that.
48:41 And even if you have three layers on top, you can just still ask for, say, client remote IP address.
48:49 If there's a network stream somewhere downstream, that's the stream that will give you your answer.
48:55 Cool.
48:55 Yeah.
48:55 The stream work here is really nice.
48:58 There's a lot of things that are nice.
48:59 The coordination, the task groups, the limiting with a capacity limiter.
49:08 A lot of cool building blocks on top of it.
49:11 And also the fact that this runs against or integrates with regular asyncio means you don't have to completely change your whole system in order to use it.
49:20 Right?
49:21 Yeah.
49:21 Very cool.
49:21 All right.
49:22 Well, I think we're about out of time to talk about any IO.
49:24 Do you want to take 30 seconds and just give the elevator pitch for SQL A code gen?
49:31 This is a really exciting project that you created here.
49:34 Yeah.
49:34 This is one of those side projects that are on the verge of a major release.
49:38 So what this does is it takes an existing database.
49:44 It connects to an existing database, reflects the schema from that, and then writes model code for you.
49:51 The next major version even supports data classes and other kinds of formats.
49:57 The SQLModel one is the one that's most exciting for me, because that'll give you Pydantic models via SQLModel, which is very exciting.
50:05 Yeah.
50:06 So nice.
50:06 What else?
50:07 Think if you're a consultant or you talked about Java earlier, right?
50:12 Imagine you've got like a Java code base and you want to do a proof of concept in Python and SQL model or SQLAlchemy.
50:18 And somebody says, well, why don't you try building a simple version that talks to our database?
50:24 And if that thing has like a hundred tables and complicated relationships, it's no fun to sit down.
50:31 Like a big portion of that project might be just modeling the database.
50:34 And here I can just say SQL A code gen, connect to Postgres.
50:38 Boom.
50:39 Out comes SQLAlchemy classes.
50:40 That's a huge jumpstart for getting started.
50:43 Or if you're a consultant jumping into a new project.
50:45 Yeah, exactly.
50:46 If you have a really large database, this will save you literally hours of time.
50:51 At least hours, yes.
50:53 And a lot of frustration, right?
50:56 Because with like SQLAlchemy, you've got to have the model match the database just right.
51:01 And this will do that for you.
51:03 Yeah.
51:03 Okay.
51:04 Super cool project.
51:05 Typeguard is another one you have.
51:06 Not super complicated, but an interesting capability to grab on.
51:10 Yeah.
51:11 Also one of those that are on the verge of a major release.
51:16 I sadly have not had enough time to finish the next major version.
51:21 And then there's of course, Python 3.11, which brings a whole bunch of new features that I have not been able to yet incorporate into Typeguard.
51:31 And sadly, I also have not started using it myself.
51:37 It's a sad story really.
51:40 Yeah.
51:40 Okay.
51:41 But the premise is that you have this pytest plugin.
51:45 You activate it during the test run.
51:47 And then in addition to static type checking your application, which you usually do with mypy, Pyright, or what have you.
51:57 You can also do a runtime type checking because the static tools don't always see the correct types.
52:05 You might not be in control of that, right?
52:06 You might write a library.
52:08 Your library might have types declared on what it's supposed to take.
52:11 The person consuming your library has no requirement to run mypy.
52:16 And they have no requirement to make sure what they typed matches what you said you expect.
52:21 And because they're hints, they're not compiled options in Python.
52:25 It might at runtime, you might get something you don't expect, even though you put a type there.
52:30 Yeah, exactly.
52:31 So this is how you get the runtime assurance that you have the right type there.
52:37 Nice.
52:37 So all you do with this typeguard library is you put an @typechecked decorator on a function?
52:42 The best way would be to use the import hook.
52:46 Okay.
52:46 So there's an import hook that will automatically add these decorators while doing the import.
52:52 So you don't have to alter your code at all.
52:55 There are some open issues with that import hook.
52:58 Like somebody reported that this import hook is installed too late.
53:02 The modules in question were already imported.
53:06 So that's something I have yet to fix or find a workaround for.
53:10 Sure.
53:10 That's the idea.
53:11 Right.
53:12 So you can either use the decorator and be somewhat guarded about how you're doing it and only apply
53:18 to certain parts, like say your public API.
53:20 Or you could just say install import hook and then everything that gets imported gets wrapped
53:25 in the type checked decorator.
53:26 And what that does is it looks at the type hints and the declared return value and will raise
53:32 an exception if say you say that your function takes an integer and it's passed a string.
53:36 That becomes a runtime error.
53:38 Right.
53:38 Or you can just issue a warning.
53:41 Sure.
53:41 The warning may be nice, but it's still, I think it's pretty cool.
53:44 You can opt into having Python type hints become enforced basically.
53:49 Yeah.
53:50 What I use this for is in Asphalt.
53:53 When I accept the configuration for a component, I use this decorator or rather an assert to
54:01 check that the types are correct.
54:03 So I don't get any mysterious type errors or value errors further down the
54:09 line.
54:09 Or even worse at runtime.
54:11 Sure.
54:11 Okay.
54:12 That's, that's cool.
54:13 There is one of the features of asphalt or highlights is runtime type checking for development
54:19 and testing to fail early when functions are called with incompatible arguments and can
54:24 be disabled for zero overhead.
54:26 So it sounds like, you know, you're maybe doing an import hook in development mode.
54:31 That's handling all this for you.
54:33 Is that right?
54:33 Well, actually in this current version, I'm using the assert.
54:37 So in case you didn't know, when you have asserts, they normally run when you invoke Python without any
54:42 switches.
54:43 But if you run Python with debug mode disabled, the -O switch, then asserts are not compiled into the bytecode.
54:49 So just by using this switch, you can disable these potentially expensive asserts.
54:54 Yeah.
54:54 Okay.
54:55 I didn't know that.
54:55 I'm familiar with that from C and C# and other compiled languages with their pragmas
55:00 and that type of thing.
55:02 But I didn't realize that about Python asserts.
55:04 Yeah.
55:04 There's this one thing, actually: if you have this code, if __debug__, and there's a
55:11 bunch of code under that block, that whole block gets omitted from the compiled code
55:16 if you run Python with debug mode disabled.
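The __debug__ behavior described here can be seen in a few lines (run normally, the block executes; run with `python -O`, the whole block and all asserts are stripped at compile time):

```python
checks_ran = False

if __debug__:
    # under "python -O" this entire block is omitted from the bytecode,
    # just like assert statements
    checks_ran = True
    assert 1 + 1 == 2, "potentially expensive runtime check"

print(checks_ran)  # True under normal runs, False under -O
```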
55:20 Okay.
55:20 Yeah.
55:21 Very cool.
55:21 All right, Alex.
55:22 Well, those are some cool additional projects.
55:24 You know, I feel like the SQL A code gen, we almost could spend a whole bunch of other
55:28 time on it.
55:29 Another one is the AP scheduler.
55:31 Again, could almost be its own show, but we're out of time for this one.
55:35 So thanks so much for being here.
55:36 Now, before you get out of here, I've got the two final questions to ask you.
55:39 If you're going to write some Python code, what editor do you use?
55:43 I've been looking at the different editors available.
55:46 And so far, PyCharm wins, hands down.
55:50 Right on.
55:50 I'm with you there.
55:51 So it has so many of these intelligent features and what have you that, for example, I use its
55:58 database features to browse through my database.
56:01 I use its refactoring features to change my code relatively safely.
56:06 And its Docker support gives me auto-completion.
56:10 The list goes on and on and on.
56:12 And most of these IDEs are not nearly as sophisticated.
56:17 I agree.
56:18 Excellent one.
56:18 Now, notable PyPI package.
56:21 I mean, we talked about a bunch.
56:22 You can recommend any of these we talked about.
56:24 You can say something else you found interesting.
56:27 Well, I think I already mentioned Trio, but this is a difficult question, really.
56:33 Maybe poetry.
56:34 Okay.
56:34 Yeah.
56:34 Poetry.
56:35 Yeah.
56:35 Poetry is something that I use for my application at work.
56:38 It's the closest thing in Python to, say, Yarn.
56:42 So I manage the dependencies and lock down the dependencies using poetry.
56:47 It's quite handy for that.
56:50 There are some issues with poetry, like when I just need to update one dependency,
56:56 update them all, and small issues like that.
56:59 But other than that, it's great.
57:01 Yeah.
57:02 It looks really great.
57:03 I know a lot of people are loving poetry.
57:04 It's a good recommendation there.
57:06 All right.
57:07 Final call to action.
57:07 People are interested in any I.O.
57:09 How do they get started?
57:11 Well, there's somewhat of a tutorial there.
57:15 I really don't have a really long tutorial on it.
57:18 I like Trio, so I'm heavily leaning on Trio's documentation here.
57:22 Because any I.O. has such a similar design to Trio, then a lot of Trio's manual can be
57:28 used to draw parallels to any I.O.
57:31 You can almost use Trio's documentation, the tutorial, to learn how any I.O. works.
57:37 Yeah, it's highly inspired, right?
57:39 Anything else, then you should just come to Gitter.
57:43 I think there's a link getting help at the bottom.
57:46 Okay.
57:46 Yeah.
57:46 So there's a Gitter link, and I'm usually available there.
57:50 Great.
57:50 Okay.
57:51 Yeah.
57:51 Very, very nice.
57:52 And I'm guessing you accept contributions?
57:54 Sure.
57:55 Yeah.
57:55 So, yeah.
57:56 Let's see.
57:57 Over here, we've got, what is that?
58:00 33 contributors?
58:00 So, yeah.
58:01 Excellent.
58:01 If people want to contribute to the project, and maybe that's code, or maybe even they
58:06 could put together a tutorial or something like that if they're interested.
58:09 Maybe.
58:10 Yeah.
58:10 Perhaps.
58:10 Okay.
58:11 Excellent.
58:11 Well, thank you for all the cool libraries, and take the time to come share them with us.
58:16 Thanks for having me.
58:16 Yeah.
58:17 You bet.
58:17 Bye.
58:18 Thanks, everyone, for listening.
58:19 Bye.
58:19 This has been another episode of Talk Python to Me.
58:23 Thank you to our sponsors.
58:25 Be sure to check out what they're offering.
58:26 It really helps support the show.
58:29 Listen to an episode of Compiler, an original podcast from Red Hat.
58:32 Compiler unravels industry topics, trends, and things you've always wanted to know about
58:37 tech through interviews with the people who know it best.
58:40 Subscribe today by following talkpython.fm/compiler.
58:44 Want to level up your Python?
58:45 We have one of the largest catalogs of Python video courses over at Talk Python.
58:50 Our content ranges from true beginners to deeply advanced topics like memory and async.
58:55 And best of all, there's not a subscription in sight.
58:57 Check it out for yourself.
58:58 Be sure to subscribe to the show.
59:02 Open your favorite podcast app and search for Python.
59:05 We should be right at the top.
59:06 You can also find the iTunes feed at /itunes, the Google Play feed at /play,
59:11 and the direct RSS feed at /rss on talkpython.fm.
59:16 We're live streaming most of our recordings these days.
59:19 If you want to be part of the show and have your comments featured on the air,
59:22 be sure to subscribe to our YouTube channel at talkpython.fm/youtube.
59:28 This is your host, Michael Kennedy.
59:29 Thanks so much for listening.
59:30 I really appreciate it.
59:31 Now get out there and write some Python code.
59:33 Thank you.