Learn Python with Talk Python's 270 hours of courses

#389: 18 awesome asyncio packages in Python Transcript

Recorded on Thursday, Nov 3, 2022.

00:00 If you're a fan of Python's async and await keywords and the powers they unlock,

00:03 then this episode is for you. We have Timo Furrer here to share a whole bunch of asyncio-

00:09 related Python packages. Timo runs the awesome asyncio list and he and I picked

00:14 out some of our favorites to share with you. This is Talk Python episode 389 recorded November 3rd,

00:21 2022.

00:22 Welcome to Talk Python To Me, a weekly podcast on Python. This is your host, Michael Kennedy.

00:41 Follow me on Mastodon where I'm @mkennedy and follow the podcast using @talkpython,

00:46 both on fosstodon.org. Be careful with impersonating accounts on other instances. There are many.

00:52 Keep up with the show and listen to over seven years of past episodes at talkpython.fm.

00:57 We've started streaming most of our episodes live on YouTube. Subscribe to our YouTube channel over

01:02 at talkpython.fm/youtube to get notified about upcoming shows and be part of that episode.

01:08 This episode is sponsored by Microsoft for Startups Founders Hub. Check them out at

01:14 talkpython.fm/foundershub to get early support for your startup. And it's brought to you by

01:20 Sentry. Don't let those errors go unnoticed. Use Sentry. Get started at talkpython.fm/sentry.

01:26 Transcripts for this episode are sponsored by Assembly AI, the API platform for state-of-the-art

01:33 AI models that automatically transcribe and understand audio data at a large scale. To learn more,

01:39 visit talkpython.fm/assemblyai. Hey, Timo.

01:43 Hello, Michael.

01:44 Hey, it's great to be here with you. I'm super excited to talk about Async Python.

01:48 Yeah, same. Good to be here. You know, we've spoken a little bit through GitHub. I think it's

01:52 odd, but also kind of awesome how many connections are made through places like that, right? Like,

01:57 we've never met in person, but we previously chatted about some Async things on GitHub.

02:02 Yeah, it's always nice to see the same people again whom you've met online,

02:07 in a call or something, and talk about something. It's pretty cool.

02:10 And that's why I really enjoy going to conferences because you're like, oh yeah,

02:13 you're the person I've been talking to for six months now.

02:15 Absolutely.

02:16 Yeah. Cool. Well, I'm super excited to talk about all the awesome Async Python things that you've

02:21 curated. Before that though, let's just get into your background. What's your story? How'd you get

02:26 into programming in Python?

02:27 Yeah. So I started programming, I think when I was around 10 years old, I, at the time I was

02:32 exploring an offline computer from my parents, which didn't have a lot of things on it,

02:38 but there were a few applications. And one of those was, I think it's called FrontPage,

02:43 like Microsoft FrontPage, and Publisher, which is like... Yes. It was like Microsoft Word for creating websites. It was insane.

02:49 Yeah, exactly. So I was like playing around with this and just, you know, exploring whatever was

02:54 there because I didn't have any internet. And yeah, that's how I got interested in like how things are

02:58 built in a computer. And at some point I got internet, I kind of browsed around to see like how

03:05 these websites really work. And yeah, that's how I got into PHP and HTML and did some website stuff.

03:11 Couldn't you open a website in FrontPage? Couldn't you like point it at a URL and say,

03:15 open this and it would pull down the HTML, like into the editor? I think I remember.

03:19 I don't really remember.

03:20 What a weird piece of software that was. Sorry, I don't need to do it. I'm just thinking back of like

03:24 the early web was a weird time.

03:26 I used Publisher more than FrontPage because FrontPage was more complex for me at the time. And I didn't

03:32 really have any documentation or tutorials. So I, yeah, it was more like fiddling around and

03:37 getting something to work and something to happen.

03:40 Cool. So you found your way to PHP.

03:41 Yeah. Then I went to PHP, did some very basic websites, like guest books and these kinds of things with PHP, but nothing, nothing really big.

03:50 And then after mandatory school years here in Switzerland, I started an apprenticeship

03:54 at Roche. It's a pharmaceutical company. So if you've done any PCR tests lately,

03:59 you probably have done that on an instrument of them. And there I was in a team where we did

04:04 hardware simulation testing for software, which is running on these instruments. And all these,

04:10 like the testing framework around the simulation was in Python. And that's basically how I

04:15 got started in Python. Yes. It's mostly like testing code and providing frameworks in Python

04:21 for testing.

04:22 That's cool. That's a neat way where you can connect your code to physical things,

04:27 you know, testing.

04:28 Yeah. It's super nice. Like lab equipment and stuff. The simulation at the time, it also had

04:32 some 3D visualization. So you could see motors moving around and kind of, you know, see pipelines

04:38 colliding and these kinds of things, which was pretty awesome for me at the time because it was my first

04:44 job basically. And yeah, it was super cool. We also built with Python,

04:49 like a distributed testing framework or testing system, kind of like you would have in Jenkins

04:53 these days or any other CI system.

04:57 Yeah. They probably don't have as easy integration to actual hardware.

05:00 Now, when I pushed to this branch, I wanted to fire up that robot. Oh, okay.

05:05 Exactly. And also like a huge problem is like error case testing. Like you can't just break a needle

05:11 in an instrument when you're running something, because either you get hurt or it just costs you a

05:17 thousand dollars just for breaking something for testing. So you need some kind of simulation to

05:21 actually do that.

05:22 Yeah. Very fun. How about now? What are you up to now?

05:25 Yeah. I started a new job at GitLab. I'm a senior backend engineer in the

05:30 Configure group. And what we're doing is we provide Kubernetes integration for GitLab: GitLab projects

05:36 and groups, these kinds of things, and also are responsible for the infrastructure as code features

05:42 like the Terraform state backend and the Terraform provider, which I've also been

05:47 maintaining for a year now, just as an outside contributor. Yeah. But this is mostly Go.

05:52 And maybe, you know, Ruby is a big player in GitLab. So the entire GitLab repo was mainly a Ruby on

05:59 Rails application.

06:00 Give us the elevator pitch on GitLab. Is it, I should use GitLab instead of GitHub or something else?

06:06 Or what, what's the value proposition for GitLab?

06:09 It's a good question. What you see here also on the screen is that it's called the

06:13 one DevOps platform. So it provides many more features, I think, than GitHub, in terms of going from planning

06:19 to production, then monitoring an application, getting all these insights, which you cannot really do on GitHub.

06:25 So you have all, yep. You basically have features for all the DevOps lifecycle stages.

06:31 And here, for example, you have a very powerful pipeline integration.

06:37 The open source world is happening on GitHub, but a lot of enterprises are using GitLab these days for

06:43 developing their applications.

06:44 Yeah. It's definitely an important piece of, piece of the puzzle out there. And how long have you been

06:49 at GitLab?

06:50 Just one month. Actually, today marks my one month being at GitLab.

06:53 Wow. So you're probably, yeah, you're probably just starting to get comfortable with like how stuff

06:57 works and how you deploy things and so on.

06:59 Exactly. It was a lot of onboarding, but I've been contributing before.

07:03 So not everything was new, and I'd been using GitLab as a user. So it wasn't that hard, I would say.

07:08 Cool. All right. Well, let's go ahead and jump into your project.

07:11 Sure.

07:12 It's one of these awesome lists, and it's about one of my favorite aspects of Python, which is

07:17 asyncio. Async and await, they let you write such neat software that really takes advantage of

07:23 latency. And when other parts of the system are busy, you can just keep on going without

07:28 rethinking how your code works all that much. So I'm a super fan of async and await and Python and all the

07:35 languages that use it, I suppose, because I just like the idea. But, you know, tell us about your project

07:39 and maybe first start with like, what the heck is an awesome list anyway?
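As an aside for readers, the asyncio payoff Michael describes, waiting on several slow things at once without restructuring your code, can be sketched with the standard library alone. The delays here are made up, purely to show the overlap:

```python
import asyncio
import time

async def fetch(name: str, delay: float) -> str:
    # Simulate waiting on I/O (a network call, a database query, ...).
    await asyncio.sleep(delay)
    return name

async def main() -> list[str]:
    start = time.perf_counter()
    # Both "requests" wait concurrently, so the total time is close to
    # the longest single delay, not the sum of both.
    results = await asyncio.gather(fetch("a", 0.2), fetch("b", 0.2))
    elapsed = time.perf_counter() - start
    print(f"{results} in {elapsed:.2f}s")
    return results

if __name__ == "__main__":
    asyncio.run(main())
```

The sequential code and the concurrent code look almost identical; the `await` points are where the event loop is free to go do other work.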

07:42 An awesome list is just a curated list of projects or whatever the awesome list is about. It could be

07:48 recipes or whatever. And it's just trying to collect awesome pieces of that thing. In this case,

07:54 asyncio, which are asyncio packages or projects, and you basically showcase them to the readers and

08:01 they could, you know, take inspiration for the next project stack or can just explore like what's out

08:06 there and what people feel is awesome. And yeah, I mean, I'm not an expert in all of these by far.

08:14 So I'm more like the librarian of the list and rely on people actually contributing their awesome

08:21 projects or ideas or whatever.

08:23 Yeah, just as a disclaimer up front, neither of us are, you know, maintainers of all these

08:29 projects or like we're not super experts. It's more of a survey of all the cool things. And I do think

08:34 that's one of the really cool powers of the awesome list. You know, I remember the first time I found

08:39 awesome Python. I was like, wow, look at all these things I didn't even know existed, right? It's,

08:44 it's not necessarily that you use the awesome list to make a decision about what you use,

08:49 but it, it's like a good starting point for research, depending on whatever area,

08:53 like you've broken your list into stuff about databases and about networking and about web

08:57 frameworks. And as we'll see, and you know, you go to that section you're interested in,

09:01 you're like, oh, here's my 10 research projects to figure out what I want to do.

09:04 I think it's also nice to just, you know, once in a while browse through it and see where

09:08 the ecosystem is at and what new things have been popping up. So.

09:12 So another thing I think maybe is worth touching on, I get the sense, although I'm not a hundred

09:17 percent sure, cause I'm asking you now: these things that get put there,

09:22 it's not an exhaustive list. It's more a set of things that the community thinks reach some threshold for

09:28 interestingness. So under the PRs, I see you got a new one an hour ago.

09:33 Yeah. So under the PRs you have a "please vote" label before these are accepted, because what

09:41 it means to accept a PR is really to add a line to a README. It's not like a super,

09:45 oh boy, how does this affect our overall performance? Like there's not a lot of considerations in that

09:50 regard, but the question is sort of, let's talk about whether it belongs on the list, right?

09:54 How do things make it on your list?

09:56 It's a very good question. And I've never been really strict about these rules, and maybe I should be,

10:01 I don't know, but I usually put this please vote label on pull requests just to see if people are

10:08 interested in this. Usually I also check things like the stars. When was the last contribution?

10:13 Like how many contributors are there? Because if we put something here on the list and

10:18 then people start using it and we burn out some maintainers of a library who just wanted to,

10:23 you know, publish something. Yeah. I don't know if that would be a good idea, right?

10:26 Right. And on the other end of the spectrum, you've got, you know, maybe there are people

10:30 who publish something just for the heck of it, but there's one person they've touched it two years

10:35 ago and you know, it's, it's not necessarily something you want to recommend if there's five

10:39 stars and no one using it. And is it really going to be good enough? Okay, cool. So I'm guessing it's

10:44 open for people to go and do more PRs and suggest more things if they listen to the show and they're

10:49 like, but you forgot about this awesome thing.

10:51 Yeah. If people are listening and have something awesome, please create a pull request. Always great to

10:58 have some additions. Yeah. Cool. All right. Well, let's go through it. So I think we'll just

11:02 take it section by section or topic by topic. And I know you pulled out a couple of things that are

11:08 interesting to you. I grabbed a couple as well and we'll just, just touch on them, you know,

11:11 kind of work on that awareness and cover the broad spectrum of what's available. So we'll take it,

11:16 I guess the top section that you have here, probably the most important section I would say

11:21 is web frameworks, right? There's some interesting ones. First of all, it's kind of notable. The

11:26 ones that are not there yet, maybe actually, maybe there's some PRs that should be making their way

11:30 there. The really traditional web frameworks that people think about are not there, right? We don't

11:36 see Flask directly, although through Quart it's there, which is kind of its full async implementation.

11:42 Django is not listed. Bottle, Pyramid, bunch of these older ones, but the really hot new ones are here,

11:49 right? Like we've got FastAPI and Sanic and some of the others as well.

11:52 I guess people are just probably more excited about those and, you know, they're kind of hyped

11:57 about those, and that's probably why they end up here and not, like, Pyramid or the older ones.

12:03 Yeah. Well, Pyramid doesn't have an async version, but you know, it's interesting that Django does.

12:08 And I think maybe somebody should do a PR for Django now that it actually properly supports all the way

12:14 to the database layer. But, you know, until recently it didn't. And what's notable,

12:17 I think about all of these frameworks that are here on the web one and pretty much for many of the

12:23 others as well, is not just that they have the capability to do async, but that they were kind of born to be async,

12:31 right?

12:31 Yeah, you're right. And I think also some of them, I mean, they're all, they're not all on the same

12:36 level, I would say. Like we have Starlette, which is maybe a very lightweight framework that others

12:43 kind of build on top of, like FastAPI. You know, Starlette may be more comparable to Quart

12:49 than Quart is to FastAPI, right? So you have these kinds of different layers where people can build

12:54 upon, and, you know, so there's some variety there.

12:56 Yeah. And you've got a couple of WebSocket style ones in here as well. They're maybe not full frameworks,

13:02 but they work in that regard. Yeah. So, you know, notable to me here, certainly, I mean,

13:07 FastAPI is definitely taking the world by storm. It only came out a couple of years ago and it's

13:13 already got 50,000 GitHub stars and that's close to what Flask and Django have. It's certainly a popular

13:19 one. Yeah. Have you got any chance to play with FastAPI?

13:22 I do. Yeah. Or I did. At my last job, we built some applications on top of FastAPI. And we also,

13:29 at Roche, we open-sourced one, which is kind of nice. I always liked the FastAPI experience

13:35 overall. Like, you know, the documentation is super nice. I think Sebastian did a great job in also

13:40 going the extra mile to explain more general concepts beyond FastAPI, like an introduction to asyncio

13:47 and these kinds of things, which the others do not have. They don't need to, but it's just that you

13:52 can see that they really care about the community and the users of FastAPI to make it very easy to

13:57 put something into production. It definitely stands out in that regard, for sure. So FastAPI is notable.

14:03 I also think another one worth giving a shout out to is Starlette. Now, Starlette is not as popular,

14:09 right? Having 7,000 GitHub stars. Not that this is like a popularity contest, but it gives you a sense of

14:16 like how many people are using it, right? And so Starlette is its own web framework, but it is also

14:22 something that can be used for the building blocks of other web frameworks, which I think is unusual

14:27 for, you know, Flask isn't like, well, take us apart and just use us. Don't actually use Flask or,

14:32 you know, but FastAPI itself actually is built on Starlette. So much of what people love about FastAPI,

14:38 they actually love about Starlette. It's just kind of like a pass-through.

14:41 Yeah, I think for a lot of things, FastAPI is just a pass-through to Starlette. And that's what I

14:47 meant before when a lot of people compare, or you see the blog posts like FastAPI versus

14:53 Flask or Quart and these kinds of things. But it's an unfair comparison, because Starlette and Quart,

14:59 I think, are meant to be extended into something, like going from the microframework to your

15:05 application afterwards. And I think you'd start using FastAPI if you

15:11 need all of these features. Then, you know, it depends on the use case, I guess.

15:15 Very cool to learn about that one. Also, Sanic. I hadn't really been tracking Sanic until recently.

15:21 I had been, and then kind of didn't pay too much attention. But this is a pretty popular framework,

15:25 16,000 stars. And yeah, it's kind of got its own philosophy. It's a little bit like Starlette,

15:31 actually, in its style.

15:32 I've never used it. I've seen it around, but I never really looked into it. So what is it that

15:37 it has a different style? What do you mean?

15:39 Well, so many of the web frameworks, that's a great question. So many of the web frameworks

15:43 these days are like, we're a brand new web framework. We look just like Flask, except,

15:48 you know, we're just like Flask, but we're API oriented and we come with auto documentation.

15:54 We're just like Flask, but we do this other slightly different thing. You know, they're,

15:58 they're all like, you create an app and then you say app.get or app.route on your,

16:04 and you kind of build up out of a blueprint or API router style of like separator, right?

16:09 This, there's so many of these new web frameworks that are highly inspired by Flask,

16:15 but they don't carry over the same runtime. They carry over kind of the shell API concepts,

16:21 right? And Sanic is not so much like that. So if you go and check out Sanic, they have like a good

16:27 getting started. Let's see if I can pull up an example. They have a huge button that says,

16:31 get started. Maybe I should click that. So if you look at the way that, that it works is you just

16:37 create functions. Here's, they're using this app.get. So I saw some, I believe that were,

16:41 you just say, here's a function, here's its URL, go and call it, right? Where it's a little more

16:47 assemble it back together. But yeah, anyway, it's, it's an interesting web framework as well,

16:53 a little bit different. And it's, I think it's really nice that there's all these

16:57 people attempting different perspectives on solving the same problem.
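The decorator-based routing pattern Michael is contrasting can be sketched framework-agnostically; this toy Router is invented for illustration and is not Sanic's or Flask's actual implementation:

```python
class Router:
    """Toy illustration of the Flask-style decorator routing pattern."""

    def __init__(self) -> None:
        self.routes: dict[tuple[str, str], object] = {}

    def get(self, path: str):
        def decorator(func):
            # Register the handler under (method, path), return it unchanged.
            self.routes[("GET", path)] = func
            return func
        return decorator

    def dispatch(self, method: str, path: str):
        # A real framework would parse the request and build a response here.
        return self.routes[(method, path)]()

app = Router()

@app.get("/hello")
def hello():
    return {"message": "hello"}

print(app.dispatch("GET", "/hello"))  # {'message': 'hello'}
```

The "assemble it back together" style Michael mentions inverts this: handlers are plain functions, and routes are registered in one place afterwards, e.g. something like `app.add_route(hello, "/hello")`.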

17:00 It's cool. And it probably depending on the use case, one suits you better than the other. I mean,

17:04 it's not that FastAPI, just because it has so many stars, is always the best choice,

17:08 right? Maybe Quart is better for your use case because you want something very minimalistic or

17:12 something you can extend in your own ways.

17:15 Yeah. And some of these like Sanic just added this ability to have background workers that are managed.

17:20 So you don't have to go all the way to like a Celery worker type of infrastructure, just the web

17:26 framework will manage it. And I believe Starlette also has that. Yeah. I guess one more thing to give a

17:30 shout out to is the stuff from the Encode folks. There's a lot of those appearing here. So they've got

17:37 Starlette, they've got HTTPX and Uvicorn, right? Once you get one of these frameworks, you've got to run it,

17:42 right? Yeah. And probably it's one of the most popular for production, I think.

17:47 At least that's what we've been using and we've been super happy. I mean, yeah. Works great,

17:51 in fact, if you use it with uvloop. Yeah. uvloop will make an appearance a little bit later

17:56 as well. But yeah, I've been using Uvicorn for production also. I'm loving it. Okay. And I guess

18:02 also one thing I'll put into the show notes here is I can't remember which framework had this that I

18:08 pulled it up. It might've been Sanic or Starlette, one of those two, but they created a filter across

18:14 the TechEmpower web framework benchmarks that just highlights the Python ones, right? Cause there's

18:19 how many, 290 frameworks in this. I don't really care what this obscure Rust, super lightweight thing

18:26 does because it's not a full web framework and I will never use it and so on. So it's kind of

18:31 interesting to compare just the like raw basic ones or whatever. But if people are,

18:36 doesn't necessarily matter too much, but if you kind of want to get a sense of what performance

18:40 looks like across all of these, you know, I'll put a link to the TechEmpower benchmarks.

18:45 I don't know. What do you think about these things? Does this influence you to see, oh,

18:48 Sanic is above FastAPI or do you not care?

18:51 It's nice to see those comparisons and kind of see how the theoretical optimizations in these

18:57 frameworks can translate to practice. But in the real world, I would say that it doesn't

19:02 really matter too much because it's probably your business logic, which is slowing you down.

19:06 Yes.

19:06 And these kinds of things or latency to your database or whatever, and not the framework itself. So

19:13 I would take those with a grain of salt.

19:16 Yeah. It's kind of like asking, well, if I have a tight loop and I increment a number,

19:21 how fast can I do that? Like, okay, well, sure. C++ is super fast for that, but that's not what

19:26 real software does. Real software interacts with all these things. And like that difference you

19:30 think is so huge is like a little marginal bit over the real work.

19:34 This portion of Talk Python to Me is brought to you by Microsoft for Startups Founders Hub.

19:41 Starting a business is hard. By some estimates, over 90% of startups will go out of business in

19:47 just their first year. With that in mind, Microsoft for Startups set out to understand what startups

19:52 need to be successful and to create a digital platform to help them overcome those challenges.

19:57 Microsoft for Startups Founders Hub was born. Founders Hub provides all founders at any stage

20:03 with free resources to solve their startup challenges. The platform provides technology benefits,

20:09 access to expert guidance and skilled resources, mentorship and networking connections, and much

20:15 more. Unlike others in the industry, Microsoft for Startups Founders Hub doesn't require startups to be

20:21 investor backed or third party validated to participate. Founders Hub is truly open to all.

20:27 So what do you get if you join them? You speed up your development with free access to GitHub and

20:32 Microsoft cloud computing resources and the ability to unlock more credits over time. To help your startup

20:38 innovate, Founders Hub is partnering with innovative companies like OpenAI, a global leader in AI research

20:44 and development to provide exclusive benefits and discounts. Through Microsoft for Startups Founders Hub,

20:50 becoming a founder is no longer about who you know. You'll have access to their mentorship network,

20:54 giving you a pool of hundreds of mentors across a range of disciplines and areas like idea validation,

21:00 fundraising, management and coaching, sales and marketing, as well as specific technical stress

21:06 points. You'll be able to book a one-on-one meeting with the mentors, many of whom are former founders

21:11 themselves. Make your idea a reality today with the critical support you'll get from Founders Hub.

21:16 To join the program, just visit talkpython.fm/foundershub, all one word,

21:21 The link is in your show notes. Thank you to Microsoft for supporting the show.

21:26 Yeah. And I would also say that most people don't actually need that speed. If you need it, you may also choose another language, you know, if this really is a thing for you. Then, yeah, I don't know if these micro-optimizations between Sanic and FastAPI really bring you much benefit.

21:43 I would certainly say pick the API that makes you happy.

21:46 Exactly.

21:46 The programming API and the framework that makes you happy and just go with that. Yeah. Good advice. All right, let's see. Are we on to our next section? We are message queues.

21:54 Yeah, I haven't been a big user of any of these. I used the AMQP one a while ago, so I don't really know where it's at these days.

22:02 Message queues are interesting. They're a way to add crazy scalability. If you've got a lot of stuff that takes a while, but you don't need the answer right away. They're pretty interesting, but I just haven't needed them much myself either.

22:14 I did not too long ago speak with Min Ragan-Kelley about ZeroMQ and Python, and apparently they're doing a lot of cool stuff for powering Jupyter with ZeroMQ.

22:27 It's way more interesting than I initially kind of in my mind gave it credit for. But yeah, it's a cool project.

22:34 So they're hosting Jupyter or what do they do?

22:37 They're using it for something like the client-server communication, that's what I think I remember, but it's been quite a long while since I talked to them about it. But yeah, we've got the AMQP one. That's the one you talked about, right?

22:51 Yes. It's the one you would use if you use RabbitMQ, for example. Right. You'd be using the AMQP protocol, and then that's where you can use this library in particular. Right. You have pyzmq. That's the ZeroMQ one I was talking about. And then some others, one for Apache Kafka, for example.

23:08 But again, anytime you're talking, these are usually separate processes, sometimes on separate machines. You're doing network comms. Like if you have the words "I'm talking over a network," then async and await. I mean, asyncio, like, what does the IO stand for? Right.

23:24 And then I think the point here is also that you have asyncio libraries for pretty much all message queues out there these days. I mean, every time I looked for a library, something was out there that you could use.
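Timo's point, that the IO in asyncio is exactly this kind of network traffic, can be demonstrated with the standard library alone: a toy echo server and client talking over a local socket in one event loop, no message broker involved:

```python
import asyncio

async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    # Await a message without blocking the event loop, then echo it back.
    data = await reader.readline()
    writer.write(b"echo: " + data)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main() -> bytes:
    # Bind to an ephemeral loopback port so the example is self-contained.
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"hello\n")
    await writer.drain()
    reply = await reader.readline()

    writer.close()
    await writer.wait_closed()
    server.close()
    await server.wait_closed()
    return reply

if __name__ == "__main__":
    print(asyncio.run(main()))
```

Libraries like the AMQP, ZeroMQ, and Kafka clients on the list are built on the same idea: every network round trip is an `await` point where the loop can service other work.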

23:36 All right. Let's move on to the next one, which is the database stuff. So I think this is another area where you spend, you're, especially in the web apps, you're spending a lot of time waiting. So thinking about your asynchronous database driver is super important, right?

23:53 Yeah, absolutely. And I think it wasn't too long ago that there wasn't really good support for asyncio and databases. It's great to see that a lot of projects are now supporting it. And also you mentioned Django, which has it all the way to the database layer. I'm not a Django user, so I don't really know, but I'd also guess that's a huge deal.

24:12 Yeah. I mean, that was the main blocker, I believe is the Django ORM didn't have an async interface. And I think it was 4.1, again, not doing a ton of Django myself either, but I think Django 4.1, which just came out.

24:23 It kind of completed the whole cycle and added that.

24:26 Very cool.

24:26 Yeah. So what database drivers stand out on this list for you?

24:30 Well, I think asyncpg is a very popular one if you use Postgres. I've been using it, and usually you don't really see much about them actually, because you may use SQLAlchemy on top of these drivers.

24:43 Right.

24:43 So as an end user, you may not have seen them, but you may have used them.

24:48 The only way you might see them is you put the async connection string into SQLAlchemy and it complains that it doesn't have this package. Like, well, I guess I got to install that. Here we go. Right.

24:56 Exactly.
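A sketch of what that looks like in practice: with SQLAlchemy, the driver is named inside the connection URL, so the sync and async setups differ mostly in that one string. Host, user, and database names here are placeholders:

```python
# Same database, different DBAPI driver selected in the URL:
SYNC_URL = "postgresql+psycopg2://user:secret@localhost/mydb"
ASYNC_URL = "postgresql+asyncpg://user:secret@localhost/mydb"

# With SQLAlchemy you'd pass these to create_engine(SYNC_URL) or
# create_async_engine(ASYNC_URL); the async one complains with a clear
# error if asyncpg isn't installed, which is how most people first
# meet these driver packages.
driver = ASYNC_URL.split("://")[0].split("+")[1]
print(driver)  # asyncpg
```

The `dialect+driver://` scheme is the mechanism behind "put the async connection string in and it complains that it doesn't have this package."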

24:57 Yeah. Yeah. So certainly the asyncpg one, I think is pretty interesting. This one is from the EdgeDB folks, right? From, is that MagicStack, I believe? Yeah, MagicStack, like Yuri and crew over there.

25:11 So the same people that do UV loop, right? He did a lot of the original asyncio work in Python itself, I believe.

25:18 Yeah. I think there's also a nice, interesting Talk Python to Me episode with Yuri, I think.

25:24 It was some time ago.

25:25 Yeah. I think I spoke to him about a year ago as well. And that was a great chat. Yeah. Let's see what else stands out to me here. So Motor, if you're doing MongoDB, then Motor is often the building block, much like asyncpg would be the building block for SQLAlchemy's async.

25:41 Motor is the building block for a lot of the Python MongoDB async libraries.

25:45 Yeah. I also noticed that for Redis.

25:47 Which one is that?

25:48 The redis-py one.

25:49 Yeah, exactly. So I noticed a week ago or something that aioredis was included into the official redis-py library. I don't know when that happened. It may have been a while ago, but I still think it's nice to not have another separate package, but the same package you have the sync API in.

26:06 And you can kind of use similar APIs so that you don't have to like rethink everything. If you want to switch to async, it just makes it easier to migrate if you want to also move back for some reason.
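The design Timo is praising, one package exposing a sync client and an async client with the same method names, can be sketched in pure Python; this toy in-memory store is invented for illustration and is not redis-py's implementation:

```python
import asyncio

class SyncClient:
    def __init__(self) -> None:
        self._store: dict[str, str] = {}

    def set(self, key: str, value: str) -> None:
        self._store[key] = value

    def get(self, key: str):
        return self._store.get(key)

class AsyncClient:
    """Same method names as SyncClient, just awaitable."""

    def __init__(self) -> None:
        self._store: dict[str, str] = {}

    async def set(self, key: str, value: str) -> None:
        self._store[key] = value

    async def get(self, key: str):
        return self._store.get(key)

# Migrating means adding await, not relearning the API:
sync_client = SyncClient()
sync_client.set("lang", "python")

async def demo():
    client = AsyncClient()
    await client.set("lang", "python")
    return await client.get("lang")

print(sync_client.get("lang"), asyncio.run(demo()))
```

Because the method names mirror each other, moving between the sync and async worlds (in either direction) is mostly a mechanical change, which is exactly the migration ease being described.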

26:18 I agree. Some of these have both APIs. Like, for example, SQLAlchemy: you can create an async setup (connection string, engine, and so on), or you can do a synchronous one. And depending on what you're doing, you might say this utility doesn't want to be async.

26:33 So we're just going to go and use the sync API, but your web app or API might want the async version. Let's see a couple more notables here. The Piccolo one, I think is pretty interesting because I really think the query syntax for this one is quite expressive.

26:49 Have you played with Piccolo? I have not, but I still admire its query style.

26:53 I recently checked it out though. And I also have the same impression that the query syntax is super nice, because compared to others like Prisma, which I also looked at lately, and while they have type safety with, like, type stubs, you know, here you actually have the Python symbols or variables you can use, which is, I think, a little bit nicer than having strings, even though those can be autocompleted.

27:16 Yeah. And when you do refactoring, it understands what's going on. So just for people listening, for example, to do a select statement, I would say await, because it's async. Band would be the class: you say Band.select, then Band.name, and then .where(Band.popularity > 100).

27:34 A lot of these ORMs and ODMs have janky syntax to push operations into it. So for example, in MongoEngine, you would say popularity__gt=100. So popularity greater than 100, but you're saying equals; you don't want equals, you want the greater-than symbol, right?

27:55 This is like exactly the same meaning in SQL as it is in Python, which I just really like that.

28:00 Yeah. And it's also super cool if you're, you know, entering a code base and you see these kind of things, because even if you don't know Piccolo, like he would understand what's going on.

28:09 Yeah, exactly.

28:10 Which I think is a good aspect of a nice API.

28:13 It is. So Brandon out in the audience has a question. He says, so we can use asyncpg in place of psycopg2? I haven't done enough with this, but what are your thoughts?

28:24 I'm not sure if the latter one really is async.

28:27 Yeah, I think the deal is the latter one, psycopg2, is not async.

28:32 Yeah.

28:32 And so that's what say SQLAlchemy would use if you had created a synchronous connection. But if you wanted to do the async version, then you would have to use the asyncpg foundation for it, basically. I think that's my understanding, but I do more MongoDB than Postgres.

28:46 I think so too, but I wouldn't really know because I've always or lately been using async only.

28:51 So yeah, exactly.

28:53 It's not the only thing you would have to change, right? You would also have to adapt your code and put the weights here and there and make your code async.

29:00 Like you can't just replace the query string and then expect it to work.
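To make that concrete in SQLAlchemy terms, here is a small sketch; the credentials are placeholders, and at the connection-string level the only difference between the two setups is the "+driver" suffix:

```python
# Placeholder credentials; only the "+driver" suffix differs between the two.
from sqlalchemy.engine.url import make_url

SYNC_URL = "postgresql+psycopg2://user:pass@localhost/appdb"
ASYNC_URL = "postgresql+asyncpg://user:pass@localhost/appdb"

# SQLAlchemy parses the driver out of the URL:
print(make_url(SYNC_URL).drivername)   # postgresql+psycopg2
print(make_url(ASYNC_URL).drivername)  # postgresql+asyncpg

# You'd hand SYNC_URL to create_engine() and ASYNC_URL to
# sqlalchemy.ext.asyncio.create_async_engine(), then await the async queries.
```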

29:04 So one other one here that I think is probably noteworthy to put in the database drivers.

29:08 And I'd like to hear your thoughts on this as well: aiosqlite?

29:13 Yeah.

29:14 On one hand, it's interesting because SQLite doesn't really do much concurrency. So you're like, well, that's silly.

29:22 Like, why would I ever want to use asyncio with it if it doesn't really do that?

29:24 But if you're writing a web API or website or something that uses SQLAlchemy and you want to on dev use SQLite just for like a simple test and you want to use Postgres in production, well, guess what?

29:37 Your async code will fail to run on SQLAlchemy unless you have aiosqlite as the foundation, like we just talked about.

29:44 So it kind of allows you to still test your code and run it, even though you wouldn't necessarily directly use it.

29:49 Yeah. I think we've always been using aiosqlite for testing purposes.

29:53 Super nice because you can use like in-memory databases and don't need to worry about the setup too much.

29:58 And it just works basically.

30:00 You need to be careful though with a few features.

30:03 Yeah.

30:03 Because it's not exactly a match, right?

30:07 Exactly.

30:07 So we had the case where, you know, it worked in testing and in CI and then broke in production, because, yeah, we weren't testing against Postgres.

30:15 All right. Let's move on to the networking section.

30:18 So what stands out? There's not that many of this one.

30:20 This is not overwhelming, like the database thing, right?

30:23 Yeah. One that I think probably a lot of people know is HTTPX, which is a super nice requests-like package for making HTTP calls.

30:31 It has a similar API, and the good thing, or what I really like, is that you can use it in sync and async and the API looks pretty much the same.

30:40 So if you want to switch from sync to async, I think it's a delight to use it.

30:46 You can just say HTTPX.get just like you can with requests, which is great.

30:50 And then if you want the async version, do they have an async example?

30:54 It's super easy and quick.

30:55 I think you just create like an async client and then call the same functions on it.

30:59 Yes.

30:59 That's cool.

31:00 Much like requests where you create like a client, you just say, or a client session, I guess it's called.

31:06 You say create an async client, then you await the client.get.

31:10 Yeah.

31:10 It's always interesting to me how libraries decide to add on async, and, I mean, let me rephrase that, an asynchronous and a

31:20 synchronous version, like both variants in the same library, right?

31:24 Mm-hmm.

31:24 Do you see people doing that successfully?

31:26 Like, do you see any patterns that you really like when they do that?

31:28 I think I actually like how HTTPX is implemented in those regards.

31:33 I didn't dig much into the code base, but you have the same protocols for the API so that you can reuse them easily.

31:41 And then you kind of interchanged it.

31:42 I think the transport layer in this case.

31:44 Yeah.

31:45 It's super nice.

31:45 But for my own libraries, it always bothered me that there is no really nice API to provide both in like the same function.

31:55 Like you couldn't reuse the function name in an async version or the method name, right?

31:59 You need to have another class.

32:02 But I guess that's just how it is.

32:04 You know, I think a lot of these probably grow up.

32:06 They come into existence to be one or the other.

32:09 And then they're like, all right, well, we kind of want to have both.

32:11 So how do we add it?

32:12 And if personally, if I was going to start from the very beginning, I would like to see what they're doing just for the synchronous version of HTTPX, where you just say HTTPX.get.

32:24 Instead of saying import HTTPX, you'd have to say from like HTTPX synchronous import HTTPX or from HTTPX async import HTTPX.

32:34 And then it's just exactly the same API, but you have to await everything versus not await.

32:39 I don't know.

32:39 I think there's like if you said we'll control it at the import level and then what you get is either all awaitable or it's all blocking.

32:47 It would be a nice pattern.

32:48 Yeah.

32:48 By the way, could you await this get?

32:51 Not really.

32:51 Or is there like a, you always need a client that you want to have async support?

32:56 I'm pretty sure for HTTPX, you have to create the client and then you have to create an async client.

33:00 Then you can await it.

33:01 I've also seen other APIs.

33:03 You have get, then you have like a get underscore async, but I kind of, I kind of don't like that since you could just do one import statement, you know, and fix it.

33:11 So I don't know.

33:12 It's really, as I'm going through all these, these examples that you've curated, I'm thinking like, okay, some of these have both APIs.

33:18 Like, how are they making that clear to people?

33:21 Right.

33:21 What else is noteworthy on that list?

33:23 I think maybe we could just super quick touch on it.

33:26 You've got an async SSH library.

33:28 Like literally that's its name.

33:29 And a DNS and a ping, right?

33:32 I haven't used them either.

33:33 But it's nice to have them here, at least, because, well, if you want to do a ping, it may not be obvious what to use.

33:40 So I think it's a good one, or at least people consider it a good one.

33:43 So I think it's, it's nice adding those to the list.

33:45 So there are a few like niche libraries on the list, which wouldn't make a huge, like they wouldn't get many votes probably if you do this because.

33:53 They're not broadly useful.

33:55 You're like, oh my goodness, I've been building a DNS system.

33:58 I'm so glad I found AIO DNS.

33:59 But actually some of the frameworks, like I believe HTTPX uses that under the covers.

34:06 Pretty sure.

34:07 Something I played with recently was like using AIO DNS under the covers to make its work a little more asynchronous.

34:12 Yeah, it will make sense.

34:13 Excellent.

34:14 All right.

34:14 Tell us about the testing story.

34:16 I mean, here of the ones we see on the screen, there's aiomock, there's asynctest, pytest-asyncio, aresponses and aioresponses.

34:25 And I've been only using pytest-asyncio, to be honest.

34:28 And it basically gives you a decorator to mark your pytest tests as async so that they run in an event loop.

34:38 There is also now a mode you can set in the configuration where you don't need that marker.

34:43 So if you have a test, you would write async def test_some_asyncio_code, and then your normal test code.

34:50 And you would decorate this function with a @pytest.mark.asyncio decorator.

34:56 But yeah, it's not, I think, not necessary.

34:58 These days you could configure it to have it in auto mode, where it basically just detects whenever a test is a coroutine

35:05 and schedules it on the event loop.
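A minimal sketch of such a test file; the function names are invented, and in auto mode the marker line can be dropped:

```python
# Run with: pytest  (with pytest-asyncio installed; marker mode shown).
import asyncio

import pytest

async def fetch_answer() -> int:
    await asyncio.sleep(0)  # stand-in for real async work
    return 42

@pytest.mark.asyncio
async def test_fetch_answer():
    assert await fetch_answer() == 42
```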

35:07 Yeah, and this is not about trying to say, well, I've got a bunch of async test functions, so let's try to run them all in parallel, like pytest-xdist or any of those types of things.

35:16 It's just, I have some function I want to call and test its result.

35:21 It's Async, so I have to await it.

35:22 In order to await it, the test function itself must be Async, and then how do I run it, right?

35:27 Like, now what?

35:28 I think the other way would be kind of cumbersome, to create an event loop every time and then schedule your coroutine in there, and pytest-asyncio just does that for you.

35:37 Sure, you could do it, but it would make the code not look normal.

35:41 It'd be like all these weird things you have to do to like Async to Syncify.

35:44 And we'll see some frameworks that might even do something in that regard for you.

35:48 But this is really nice.

35:49 It just seems like if you're testing code that is Async, clearly this is something like this is what you want.

35:55 Also, mocking.

35:56 I hadn't really thought about mocking Async methods, but I guess you need some.

36:00 I've done this.

36:01 I haven't used this aiomock.

36:03 So there is, I think even in unittest.mock, there's an AsyncMock class which you can use.

36:09 I'm not really sure why you would need this.

36:12 This last changed six years ago.

36:13 I wonder if the asynchronous mocking capabilities were not in the framework itself when this got created.

36:20 And then probably like, you know what, we should just be able to test our own stuff.

36:23 So let's fix that.

36:24 Yeah, I'm guessing.

36:25 Okay, pretty nice.

36:27 You mentioned UV loop before in one of the sections.

36:30 This is maybe the only section that has a single item in it, but it's a big one.

36:34 UV loop, right?

36:35 Yes.

36:35 For alternate loops.

36:36 Yeah, exactly.

36:37 So you could use, if you're on AsyncIO, you can just use the one which is in like CPython.

36:42 You could just use that.

36:42 But there are other implementations like this uvloop, which is based on libuv, which is an

36:49 event loop in C.

36:50 And it just is super fast compared to the built-in implementation.

36:54 I think it's for production use cases.

36:57 It's so nice because in order to use it, a lot of times if it's just literally installed in

37:03 the environment, things will use it.

37:04 Like I believe Uvicorn will use it if it finds it, and some other things.

37:08 You don't even have to say, please use it.

37:10 It's just like, oh, it's available.

37:11 Let's go.

37:11 And if for some reason you need to explicitly use it in your code, you just import uvloop.

37:16 You say asyncio.set_event_loop_policy and you pass over the uvloop policy class.

37:21 And now the rest of your program just uses that.

37:24 It's really nice.

37:25 And I don't really know how many other alternative implementations there actually are.

37:29 Yeah, I don't either.

37:29 But it's really nice.

37:30 And it basically bundles up, as you said, libuv.

37:33 They've got some nice performance graphs.

37:35 It says uvloop makes asyncio two to four times faster.

37:38 Who wouldn't want your asyncIO code to just go two to four times faster with no effort?

37:42 Yeah.

37:43 And it's super easy to install and use.

37:45 So there's no really downside to that.

37:48 Mario has a totally reasonable question.

37:51 Why wouldn't uvloop just be the standard asyncio implementation then?

37:54 What's the catch?

37:55 Well, I don't know what the catch is, but I could assume that it's easier to change outside of

38:01 the CPython development cycle.

38:04 And probably that could be one of the reasons.

38:06 I don't know.

38:06 No, I agree.

38:07 That's a good point.

38:08 Also, you know, do you want Python itself to take on libuv?

38:12 I'm sorry.

38:13 Yeah.

38:13 libuv as a C dependency.

38:15 And then third, when I played with UV loop originally, it's been a few years, it didn't

38:22 work at all on Windows.

38:23 Like its build on top of libuv, for whatever reason, just wouldn't install

38:27 on Windows.

38:28 And that obviously is a breaking change or a stopper.

38:31 So maybe even if it works on Windows, maybe there's like some obscure place where Python

38:36 runs, but LiveUV won't, you know, think of like some small device like a Raspberry Pi

38:42 or whatever.

38:43 I don't know.

38:43 Yeah.

38:44 And I think we're coming down to the discussion you had before about the benchmarking again,

38:48 that maybe not everyone actually wants this speed or needs this speed.

38:51 Yeah.

38:52 Maybe it's just for like a very optimized production use cases.

38:55 And you actually need this four-times-faster performance.

38:58 And for all the other use cases, maybe it's your code that is slow anyways.

39:01 You don't really care too much.

39:03 Yeah.

39:03 There was a discussion about a few years ago about why is requests not just built into

39:09 Python, right?

39:10 They're like, well, there's URL stuff in there, but it's way less obvious how to use it compared

39:16 to just request.get done and response.json when you get your response back and whatnot.

39:23 And they debated that at the core dev summit.

39:26 And they decided if we put requests into CPython, kind of like you were saying, is it will,

39:31 it'll actually slow the progress and the evolution of requests itself.

39:35 And they wanted to keep this nice library, its own thing that could go at its own pace.

39:42 This portion of Talk Python To Me is brought to you by Sentry.

39:45 How would you like to remove a little stress from your life?

39:48 Do you worry that users may be encountering errors, slowdowns, or crashes with your app right

39:54 now?

39:54 Would you even know it until they sent you that support email?

39:57 How much better would it be to have the error or performance details immediately sent to you,

40:02 including the call stack and values of local variables and the active user recorded in the

40:07 report?

40:08 With Sentry, this is not only possible, it's simple.

40:11 In fact, we use Sentry on all the Talk Python web properties.

40:15 We've actually fixed a bug triggered by a user and had the upgrade ready to roll out as we

40:20 got the support email.

40:21 That was a great email to write back.

40:23 Hey, we already saw your error and have already rolled out the fix.

40:27 Imagine their surprise.

40:28 Surprise and delight your users.

40:30 Create your Sentry account at talkpython.fm/sentry.

40:34 And if you sign up with the code talkpython, all one word, it's good for two free months

40:40 of Sentry's business plan, which will give you up to 20 times as many monthly events as

40:45 well as other features.

40:46 Create better software, delight your users, and support the podcast.

40:50 Visit talkpython.fm/sentry and use the coupon code talkpython.

40:58 Yeah, makes sense.

40:59 I think there, I also heard that for security, it's easier to patch the library

41:04 on PyPI and have people update than to ship a hotfix release or whatever of Python

41:11 to fix those security issues.

41:12 Absolutely.

41:13 And Brandon just says, I just learned that FastAPI and Starlette use uvloop by default.

41:18 Yeah, that's what I, that's one of the things I was thinking of.

41:20 If it's installed in the virtual environment and it has access to it, it'll just take it and

41:24 go.

41:24 No need to make any changes there.

41:27 All right, awesome.

41:27 So that's just one, the one thing in the alternate loop section, but quite, quite neat

41:32 indeed.

41:32 And then there's got to be a miscellaneous, right?

41:35 There's got to be a utils.

41:36 There's got to be a helpers.

41:37 There's got to be something that's just like, well, what the heck is this?

41:40 Tell us about the grab bag at the end here.

41:43 Yeah, there's a few here, which I find very interesting.

41:46 The first one here is aiochan, which adds a CSP-style concurrency feature.

41:52 So if you've done some Go programming, you came across channels. I

41:56 would say aiochan brings these kinds of patterns into asyncio.

42:01 So basically what you will have is you can create a channel and then you can have multiple

42:07 coroutines like a producer and a consumer listening and writing to this channel and you can have

42:12 it buffered or not.

42:13 And you can select on multiple channels and react to incoming data and these

42:19 kinds of things.

42:19 So it's just another way to communicate between your coroutines than what you would probably do with the built-in

42:26 mechanisms.

42:26 Sure.

42:26 And a lot of those patterns are incredibly hard to get just right with the event signaling and all those things.

42:33 Yes.

42:33 And so if you can just hook it in, then it's good to go.

42:36 You could probably make this work with queues and events.

42:39 Yes.

42:39 And then all these, but it's nice to have the abstraction.

42:42 Yeah.

42:43 Of these.

42:43 Exactly.

42:44 These primitives.
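To sketch the shape of that channel pattern without pulling in aiochan itself, here is roughly the producer/consumer flow using only the built-in asyncio.Queue; aiochan layers closing, select, and more on top of primitives like these:

```python
import asyncio

async def producer(ch: asyncio.Queue) -> None:
    for i in range(3):
        await ch.put(i)    # blocks when the buffer is full
    await ch.put(None)     # sentinel standing in for "channel closed"

async def consumer(ch: asyncio.Queue) -> list:
    received = []
    while (item := await ch.get()) is not None:
        received.append(item)
    return received

async def main() -> list:
    ch = asyncio.Queue(maxsize=1)  # a buffered "channel" of size 1
    _, items = await asyncio.gather(producer(ch), consumer(ch))
    return items

print(asyncio.run(main()))  # [0, 1, 2]
```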

42:45 Yeah.

42:45 It's sort of equivalent to saying, well, you've got a HTTP server built into Python itself.

42:50 Like, why do you need Flask?

42:51 No, no, no, no.

42:53 No, we don't want to do this.

42:55 Other notable ones here, like one that stood out to me, is aiocache, which is pretty straightforward.

43:01 It's like just a cache, right?

43:02 Add, get, set, even has a cool increment and so on.

43:06 But it's asyncio, and it talks to Redis and Memcached, supports MessagePack, a bunch of

43:12 different capabilities it has, right?

43:13 Yeah.

43:14 It looks super cool.

43:15 I haven't used it.

43:15 But yeah, it's nice that you can just switch out the backends and use something else.

43:19 Also, the API looks very straightforward with like just .set, .get.

43:24 It's pretty straightforward to, you know, await a cache.get or await a cache.set.

43:29 Yeah.

43:30 It seems like a real nice, real nice API.

43:32 The decorators also look interesting, so that you can cache coroutines, probably.

43:39 Yeah, exactly.

43:40 That's one I didn't really catch before, you're right.

43:42 Right.

43:43 So people are probably familiar with the func tools.

43:46 LRU cache.

43:47 LTS cache.

43:48 The LRU cache.

43:49 Yes.

43:49 I'm like, oh, there's a, it's not a T. LRU cache.

43:52 Thank you.

43:52 But this is that idea.

43:54 But instead of caching that in memory, you just say cache

43:57 equals Redis, which is like, wait a minute.

43:59 Okay.

44:00 That's cool.

44:01 That's really cool.

44:01 It's cool.

44:02 Just make sure the latency is lower than actually your execution time.

44:06 But yeah, it looks very nice.

44:08 That's a really good point.

44:10 Like if you call this a bunch of times and the Redis is far away, like it actually might

44:14 just be slower.

44:15 Yeah.

44:15 But the CPU will be nice and low.

44:17 So you'll be fine.

44:18 You'll be fine.

44:19 Yes.

44:19 This is a cool project.

44:20 Another one that I really like, I think adds some important capabilities is AIO files.

44:26 Tell us about that one.

44:27 Yeah.

44:28 It basically provides you file API support like you have with the normal open

44:33 and these kinds of functions, but async with asyncio.

44:36 But I'm not sure if I read it here, but I think on some platforms like Linux, it's hard

44:42 to actually implement this correctly with like epoll and stuff.

44:45 I don't know if you know more about this, but I heard it.

44:48 It's not really a big benefit actually to run it async.

44:53 Yeah.

44:53 I don't know either.

44:54 What it claims here is it doesn't try to, I don't think it tries to do fancy work with

44:58 truly hooking into asynchronous stuff in the file.

45:01 So it just says, it's just going to run it on a background thread basically.

45:04 Oh yeah.

45:05 So it probably creates like a worker thread.

45:07 And whenever you ask to read it, just in the back, it goes open.

45:10 And then when you say await read, it's just on that thread, it maybe sets an event and

45:16 then does a read or something.

45:16 I don't know, a little bit of like juggling background threads.

45:19 But yeah.

45:20 So it, in a sense, may actually make it a tiny bit slower.

45:24 But if you're doing an API and the API has got to read or write a big file, that could be

45:29 a problem.

45:29 The other one is, you know, we'll see this in a couple of things we're discussing in

45:33 this section.

45:34 The path could be something local, like slash

45:37 user slash whatever.

45:39 But it could be backslash backslash network server backslash network drive, right?

45:45 It could be very, very slow where all of a sudden, you know, unlocking that, that access

45:50 is a huge deal.

45:52 Yeah.

45:52 Yeah.

45:52 Does this actually support, say,

45:54 talking to network files?

45:55 Oh, yeah.

45:56 With a network drive here.

45:57 Okay.

45:57 Yeah, exactly.

45:58 Yeah.

45:58 Like what the thing you're talking to might actually be far away, you know?

46:02 Yeah, exactly.

46:03 It does say handling local disk files, but I bet if you mapped it in your OS, it wouldn't

46:08 know, you know?

46:09 Yeah, probably would work.

46:10 Yeah.

46:11 What I also like here, there is also aiopath.

46:14 Yes.

46:15 Which, that one actually looks maybe even cooler, right?

46:19 Yeah.

46:20 I think a lot of people are using pathlib these days, and aiopath basically gives you an async

46:27 path type, which you can just wrap around your strings or path objects.

46:31 And you get the same methods, but you can just await them basically.

46:35 So you can have your path.open, path.exists, and if you need to await them,

46:41 this is for you.

46:42 I don't know here how it's implemented, if it's also using background threads or if it

46:46 does it natively.

46:48 Does it actually hook into the true IO completion ports and all that kind of business?

46:52 Yeah.

46:52 Yeah.

46:53 So this is really cool.

46:54 So you could create, we all know about path from pathlib.

46:57 It's super neat.

46:58 And you can ask it questions like, does it exist?

47:01 Or, you know, create this directory?

47:03 Or is it a directory?

47:05 Or you can actually say, read bytes, write bytes, read text.

47:09 There's a lot of things that you would do with a context manager that become just one-liners

47:13 with pathlib.

47:14 And with this async pathlib, you make all those asynchronous, you're like, await path exists,

47:18 await write bytes, and so on.

47:20 And the cool thing is they really, I think, try to be a drop-in replacement for pathlib

47:24 in the asyncio world.

47:26 So if you've been having a code base or have been using pathlib in an async code base, it's

47:31 super easy to just switch to an async version of pathlib.

47:34 Oh, excellent.

47:35 It says the implementation here.

47:37 Let's see.

47:37 Does it tell us?

47:41 It inherits from PurePath, which is cool.

47:41 So you could use it as an argument to some of these, some of the pieces that will take path

47:46 objects directly.

47:47 It takes advantage of libaio for asyncio on Linux, which is probably where you care

47:53 most about performance because that's where your server is, right?

47:55 Yeah.

47:56 I don't know anything about libaio, but that's probably some sort of native thing going

48:00 on there.

48:01 Like a Linux-native asynchronous IO access library.

48:04 Yeah, that's okay.

48:05 So maybe this is not just a little bit better.

48:07 Yeah, maybe it's not just a little bit better than aiofiles because, well, you can work with

48:11 path objects, but it has like an OS level implementation as well.

48:15 That's pretty cool.

48:15 All right.

48:16 I'm using it.

48:18 It looks good.

48:18 Yeah.

48:19 That looks like, those are all the ones that jumped out at me.

48:21 Those two that you called out there, the cache and the files, and the path

48:26 story, I guess.

48:26 Anything else worth mentioning real quick?

48:28 I think so.

48:29 I mean, the misc one is just a miscellaneous package in the miscellaneous packages, I would

48:33 say.

48:33 So yeah, when I saw that, I thought this is probably the longer thing, maybe it's in there.

48:38 It's like the meta, meta miscellaneous, it's like the, or miscellaneous squared or something

48:44 like that.

48:45 That's right.

48:45 A bunch of random helper things in there.

48:48 That's cool.

48:48 Then you have some stuff.

48:49 Let me just go real quickly flip through.

48:51 Like there's some stuff on writing, like tutorials and articles, and then some video talks about

48:56 Async.io in there, right?

48:57 I think there are good ones.

49:02 I think a lot of them are actually from David Beazley.

49:02 I'm not sure.

49:03 Yeah.

49:03 David Beazley has done some cool stuff with kind of recreating asyncio live in the early

49:08 days.

49:09 Yeah.

49:09 And then Yuri, who we spoke about.

49:11 Yeah.

49:12 If you really want to know like how you can think about like a mental model of Async.io,

49:16 I think these are very good talks.

49:18 To better understand it.

49:20 Absolutely.

49:20 Cool.

49:21 I don't know about this guy though.

49:22 All right.

49:23 The last thing you closed it out with is alternative implementations to Async.io, not just like

49:29 a tool you can use within Async.io, but there's Curio and Trio are like probably the big two

49:34 there, right?

49:34 I think Curio, I think it's from David Beazley as well, but I don't think it's maintained

49:39 really, but I think it's been a nice experiment.

49:41 And at the time I looked at it, it was kind of minimal in the implementation.

49:45 So you could kind of digest and see how something would be done like Async.io.

49:49 Okay.

49:50 There's Trio, which I think is still out there.

49:52 And that's also why AnyIO exists, probably, because you can use AnyIO as a

49:58 front end for Trio or asyncio and add some more high-level features on top of asyncio.

50:04 Yeah.

50:04 I recently had Alex, the creator of AnyIO, on there, and just a real quick shout out

50:09 for some of the things I thought was cool over there is it could run on top of Async.io or

50:14 Trio, which is cool.

50:15 It also has some really interesting aspects, like converting regular

50:23 functions into awaitable things by running them on other threads.

50:27 And it can either do that on a thread or it can even do that on a sub process.

50:30 So you can like go and say await run process and then you get its value back, right?

50:36 Or you could do that with sort of multi-processing or even create a asynchronous for loop over

50:43 the output stream, like standard out of some process.

50:47 I could have used this a few times in the past.

50:49 Yes, I know.

50:50 Like this is really, really neat to be able to do that.

50:53 And the other thing is the synchronization primitives like events and semaphores.

50:56 There's, he says, I haven't tried it out really, but they're supposed to be a little bit less

51:01 likely to get deadlocks or race conditions because they're not reentrant, basically.

51:05 Okay.

51:06 Nice.

51:06 Yeah.

51:06 So there's a bunch of cool little helper-type things in AnyIO there.

51:10 But well, that's pretty much it.

51:12 I think for, for the list, that was a lot, but a lot of good stuff.

51:16 A lot of awesome stuff.

51:17 Wouldn't you say?

51:18 Yeah.

51:18 And I'm sure there's more awesome stuff out there.

51:21 Like the Esco live one we've covered.

51:23 You added one five minutes before the, before the talk.

51:26 Because I was going through, I'm like, oh yeah, this one,

51:30 like I was looking at motor.

51:31 I'm like, oh, well, the stuff built on motor.

51:32 There's some good ones there.

51:33 Let's throw those in and people can vote for them if they want.

51:35 But yeah, there's, I'm sure that people listening, if they maintain one of these libraries

51:40 or they're big fans and use one a lot that's not on the list, you know, go make a PR,

51:44 right?

51:44 Absolutely.

51:45 Yeah.

51:45 I also plan to add some more automation so that we can, for example, check for dead

51:50 links.

51:50 It would also be nice to kind of catch outdated libraries, like the

51:55 aiomock we've seen before.

51:56 Yes, exactly.

51:57 Like, you know, if it hasn't been touched in six years, it probably isn't needed anymore,

52:00 right?

52:01 Could fade.

52:01 Exactly.

52:02 Awesome.

52:02 I mean, this is a great resource and I, I sort of shouted out the popularity of some of

52:07 these projects to give us.

52:07 And it's like, your list has got 3.7 thousand stars.

52:10 Like that's pretty awesome.

52:11 That's a lot of people who got value from it.

52:13 It is.

52:14 And I can only recommend to, to look at these lists, you know, whatever list it is, there,

52:18 there are gems in there and it's kind of nice to discover them.

52:21 Yeah, absolutely.

52:21 Really quick out in the audience.

52:23 Mato's logic says Starlette, I'm sorry.

52:26 Starlite is an async framework built on top of Starlette and Pydantic, which is a good

52:33 candidate.

52:33 I didn't actually give a shout out to it, but I thought, oh no, it's not on there.

52:38 Okay.

52:38 Well, PRs are accepted and reviewed, Starlite.

52:42 There you go.

52:43 Cool.

52:43 All right.

52:44 Timo, this is really excellent project here and a ton of people are getting value from it.

52:49 So thanks for putting it together.

52:50 Yeah.

52:50 Thanks to all the people who, who suggested the awesome stuff.

52:53 So I'm, as I said, merely the maintainer of the list.

52:56 So keep them coming and we'll make it even better.

52:59 Excellent.

52:59 All right.

53:00 Before we get out of here, final two questions.

53:02 I feel like you could just randomly pick one from your list, but the notable PyPI

53:06 project you want to give a shout out to?

53:08 I think it's not even on there, but this, it will be tenacity.

53:11 Oh yeah.

53:11 Tenacity is good.

53:12 Yeah.

53:13 It's a library for retrying stuff.

53:15 I think it has async.

53:17 I'm pretty sure it has async.

53:19 Okay.

53:19 Let's see.

53:20 But it makes it super nice if you have like network.

53:22 Async retries.

53:24 There you go.

53:24 Yeah.

53:25 So it's, it's even the same decorator, which is cool from an API perspective.

53:28 I don't even think you have to import something else, like something differently.

53:31 Right.

53:31 Because you can actually inspect the function, which is being decorated and you can decide what

53:35 to run.

53:35 Pretty cool.
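
The trick described here — one decorator serving both sync and async functions by inspecting what it wraps — can be sketched in plain standard-library Python. This is only an illustration of the technique, not tenacity's actual implementation; the `retry` name and `attempts` parameter below are made up for the example:

```python
import asyncio
import functools
import inspect


def retry(attempts=3):
    """Toy retry decorator: the same decorator handles both sync and
    async callables because it inspects the function it is wrapping."""
    def decorate(fn):
        if inspect.iscoroutinefunction(fn):
            @functools.wraps(fn)
            async def async_wrapper(*args, **kwargs):
                for attempt in range(1, attempts + 1):
                    try:
                        return await fn(*args, **kwargs)
                    except Exception:
                        if attempt == attempts:
                            raise  # out of attempts, surface the error
            return async_wrapper

        @functools.wraps(fn)
        def sync_wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts:
                        raise
        return sync_wrapper
    return decorate


# A flaky coroutine that fails twice, then succeeds on the third try.
calls = {"n": 0}


@retry(attempts=3)
async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("flaky endpoint")
    return "ok"


result = asyncio.run(flaky())  # retried until it succeeded
```

Because the choice between the sync and async wrapper happens once, at decoration time, the caller's code looks identical in both worlds, which is exactly the API property being praised here.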

53:37 I suggest you use it if you want to retry something or have like unstable endpoints

53:41 or whatever.

53:42 Yeah.

53:42 And all the features you need to call it.

53:44 This is a great library.

53:45 You know, if you were consuming someone else's API and that thing is flaky, you know, what

53:51 are you supposed to do?

53:52 Right.

53:52 You've got to call it potentially, but you can't count on it always working.

53:56 I've run into that problem on a lot of my projects as well.

54:00 And I've either done something like tenacity where you just say, retry it with some kind of

54:04 exponential back off or I'll go through and cache it in my database and say, I'm going to

54:10 try it.

54:10 If it fails, I'm going to go get it from the database and go, it might be a little bit stale,

54:14 but at least this, you know, if it's something kind of stable, like a currency lookup, like,

54:19 okay, an hour ago, the dollar to Swiss francs, the lookup was this.

54:24 And it might not be perfect, but it's better than just going 500 server error.
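
The fallback pattern described here — serve a possibly-stale cached rate instead of an error when the upstream API fails — might look roughly like this sketch. All the names are illustrative, and a real version would use a database rather than an in-memory dict:

```python
import time

# currency pair -> (timestamp, rate); a stand-in for a real database
_cache = {}


def lookup_rate(pair, fetch):
    """Try the live API first; on failure fall back to the last cached
    value, which may be stale but beats returning a 500 error."""
    try:
        rate = fetch(pair)
    except Exception:
        cached = _cache.get(pair)
        if cached is None:
            raise  # nothing cached either -- nothing we can do
        _stored_at, rate = cached
        return rate  # possibly an hour old, but usable
    _cache[pair] = (time.time(), rate)
    return rate


def live_api(pair):
    """Pretend this is the real currency endpoint while it's healthy."""
    return 0.99


def broken_api(pair):
    """...and now it's flaky."""
    raise ConnectionError("upstream is down")


fresh = lookup_rate("USD/CHF", live_api)    # succeeds and caches 0.99
stale = lookup_rate("USD/CHF", broken_api)  # falls back to the cache
```

The stored timestamp isn't checked here, but it is kept so a real version could refuse values older than some maximum age.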

54:28 I don't know how many while loops I've written in my life to kind of check a timeout

54:34 and then retry and sleep and these kind of things.

54:36 And it's hard to get this right because, you know, you want to catch termination of the,

54:41 of the program and cancel these things.

54:43 So it's nice to have a library for all of that.
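
For reference, a hand-rolled version of the loop being described — bounded attempts, exponential backoff between sleeps, and care not to swallow cancellation — is roughly this sketch (the function and parameter names are made up):

```python
import asyncio


async def retry_with_backoff(coro_fn, attempts=5, base_delay=0.01):
    """Hand-rolled retry loop. The easy-to-miss part is cancellation:
    asyncio.CancelledError must propagate so the program can still
    shut down cleanly mid-retry."""
    delay = base_delay
    for attempt in range(1, attempts + 1):
        try:
            return await coro_fn()
        except asyncio.CancelledError:
            raise  # never swallow cancellation
        except Exception:
            if attempt == attempts:
                raise  # out of attempts, surface the real error
            await asyncio.sleep(delay)
            delay *= 2  # exponential backoff


# An unstable endpoint that times out twice before answering.
attempts_made = {"n": 0}


async def unstable_endpoint():
    attempts_made["n"] += 1
    if attempts_made["n"] < 3:
        raise TimeoutError("no answer")
    return 200


status = asyncio.run(retry_with_backoff(unstable_endpoint))
```

Note that on modern Python, `CancelledError` derives from `BaseException`, so a bare `except Exception` would not catch it anyway; the explicit clause just makes the intent visible, which is the subtlety being discussed.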

54:45 Yeah, cool.

54:45 And the fact that they support async is like perfectly blends in.

54:49 You should add it to the list as well.

54:50 Yeah.

54:51 Yeah.

54:51 Actually it would belong there, wouldn't it now?

54:53 All right.

54:53 And then the other question is if you're going to write some Python code, what editor are you

54:57 using these days?

54:58 These days, VS Code.

54:59 It's important though that it has Vim key bindings.

55:02 I've been a Vim user for quite some time.

55:04 So I certainly need that.

55:05 Yeah.

55:06 But VS Code is my go-to editor these days.

55:09 All right.

55:09 Final call to action.

55:10 People are interested in your awesome list.

55:13 What do you tell them?

55:14 Please contribute your awesome ideas.

55:16 Make a PR.

55:16 Also, if you have ideas around automation, please send them our way.

55:20 Create a pull request or, you know, open an issue. That would be nice.

55:24 As a maintainer, that's a very welcome thing, right?

55:26 It is.

55:27 Finding things you don't have to maintain yourself, I'm sure.

55:29 Absolutely.

55:30 Awesome.

55:31 All right.

55:32 Well, thanks so much for being here.

55:33 Thanks, everyone, for listening.

55:34 Thank you.

55:35 Bye.

55:35 Bye.

55:37 This has been another episode of Talk Python to Me.

55:40 Thank you to our sponsors.

55:42 Be sure to check out what they're offering.

55:43 It really helps support the show.

55:45 Starting a business is hard.

55:47 Microsoft for Startups, Founders Hub, provides all founders at any stage with free resources

55:53 and connections to solve startup challenges.

55:55 Apply for free today at talkpython.fm/foundershub.

56:00 Take some stress out of your life.

56:02 Get notified immediately about errors and performance issues in your web or mobile applications with

56:08 Sentry.

56:08 Just visit talkpython.fm/sentry and get started for free.

56:13 And be sure to use the promo code talkpython, all one word.

56:17 Want to level up your Python?

56:18 We have one of the largest catalogs of Python video courses over at Talk Python.

56:22 Our content ranges from true beginners to deeply advanced topics like memory and async.

56:27 And best of all, there's not a subscription in sight.

56:30 Check it out for yourself at training.talkpython.fm.

56:33 Be sure to subscribe to the show.

56:35 Open your favorite podcast app and search for Python.

56:38 We should be right at the top.

56:39 You can also find the iTunes feed at /itunes, the Google Play feed at /play,

56:44 and the direct RSS feed at /rss on talkpython.fm.

56:48 We're live streaming most of our recordings these days.

56:52 If you want to be part of the show and have your comments featured on the air,

56:55 be sure to subscribe to our YouTube channel at talkpython.fm/youtube.

57:00 This is your host, Michael Kennedy.

57:02 Thanks so much for listening.

57:03 I really appreciate it.

57:04 Now get out there and write some Python code.

57:06 I'll see you next time.
