#107: Python concurrency with Curio Transcript
00:00 You've heard me go on and on about how Python 3.5's async and await features change the game
00:06 for asynchronous programming in Python. But what exactly does that mean? How does that look in the
00:10 APIs? How does it work internally? Today, I'm here with David Beazley, who has been deeply exploring
00:17 the space with his project Curio. And that's what this episode of Talk Python to Me is all about.
00:22 It's episode 107, recorded April 14, 2017.
00:27 Welcome to Talk Python to Me, a weekly podcast on Python, the language, the libraries, the
00:56 ecosystem, and the personalities. This is your host, Michael Kennedy. Follow me on Twitter,
01:01 where I'm @mkennedy. Keep up with the show and listen to past episodes at talkpython.fm
01:06 and follow the show on Twitter via @talkpython. This episode is brought to you by Rollbar and
01:12 Hired. Thank them both for supporting the show. Check them out at @rollbar and @Hired_HQ
01:20 on Twitter and tell them thank you. David, welcome to Talk Python.
01:25 Hi, how are you doing? I'm doing great. It's great to have you back. It's been going on two years since
01:32 you were on one of my first episodes, episode number 12, talking about packaging and modules and
01:38 diving deep into understanding those. And I think we're going to be diving deep into another topic,
01:43 another area of Python today, but this time in concurrency.
01:47 Yeah, it should be fun. It's something I've talked about in the past.
01:50 Yeah, yeah. You've definitely been talking a lot about it lately and amazing presentations,
01:55 which we'll get to. I know many people know you. It's been two years since I last asked you this
02:01 question. So maybe just briefly, you could tell us how you got into Python programming,
02:06 that sort of thing.
02:07 All right. Well, not to get into too much detail, I guess I first found Python in 1996.
02:13 And I was doing some scientific computing, a lot of parallel computing kinds of things,
02:17 and just found it for doing, you know, like basically found it for scripting scientific
02:21 calculations. And it kind of grew from there. In the more modern era, you know, I'm known for
02:27 writing a couple of Python books. So that's where people sort of know me.
02:30 Yeah, absolutely. What books are the most famous ones you've written, most well known?
02:35 Yeah, the Python Essential Reference, that's been around for a while. And then they did the third
02:40 edition of the Python Cookbook with O'Reilly.
02:42 Okay, excellent. Yeah, those are both great. And these days, what are you doing in terms of
02:48 work and programming with Python and other things?
02:51 Most of my work is training, actually. So I do a lot of teaching of Python classes. That's what's
02:56 mainly paying the bills. And then that also funds, you know, sort of hacking on various open source
03:02 projects in the other time when I'm not doing that.
03:05 Oh, that's great. Yeah, I've, you know, done a lot of training, a lot of in-person training
03:09 previously. And I thought it was just, I think it's a really great career. I think it's a perfect
03:15 balance or a close, a great balance, let's say, where you get to teach things to people,
03:22 see their reactions, see how they take it, see, you know, how they kind of test your understanding of
03:29 it. That's part of your job. And then the other part is just to research and learn and stay on top of
03:33 whatever it is you're teaching. It's really nice, I think.
03:35 Yeah, I'm always trying stuff out with teaching. I mean, I find it, you know, it kind of informs, like,
03:40 talks that I give, you know, at conferences, and also informs books and things. You know, a lot of,
03:45 you know, a lot of what people would see at a conference is probably something that I've tested
03:48 out in the context of teaching, or I've tried to do it different ways and just kind of seen people's
03:54 reactions and confused looks, like, this isn't working, well, let's try this. Of course, the iteration is
04:01 so much faster, right? You could teach two or three classes in a month. How many conference talks do you
04:06 give in a month, right? Or how many books do you write in a month? Not nearly as many.
04:09 Right, right. I mean, I'm supposed to be working on a book update right now. And it's, it's going slowly.
04:15 But, you know, I'm thinking a lot about these topics, you know, like how to present material, how to think
04:19 about it.
04:20 Yeah, that's great. Yeah. So let's go ahead and start talking about our main topic here, which is
04:24 concurrency. And maybe we could start by just kind of talking about the concurrent options in Python
04:32 in general a little bit. And how do you feel that we're doing in 2017 with Python 3.5, 3.6,
04:39 compared to, say, five years ago?
04:41 Oh, okay. That's an interesting question. Well, I mean, the gist of the problem with concurrency is
04:46 it's doing more than one thing at a time. I mean, that's the basic problem. And it comes up a lot in
04:51 network programming, especially. So that's where a lot of people are very interested in it. Python has
04:58 certainly been involved with concurrency for a long time. I mean, threads have been part of Python
05:03 since 1992, I think. So, you know, that goes way back. And there's certainly been the option of,
05:10 you know, launching separate Python interpreters. I mean, you could have like multiple interpreters
05:15 might be a way of doing that. So these, these have been kind of classic approaches that have been
05:18 around for a while. Kind of, kind of a side, alongside that, you have a lot of people messing around
05:25 with things like callback functions, event loops, packages based on that. So things like the
05:30 twisted framework sort of emerges out of that. So a lot of this has been going on for quite some time.
05:36 I mean, maybe over just throughout, throughout Python's history. This question about where it
05:41 goes in Python 3. I mean, it's kind of a, I don't know, I'm trying to think like how to, how to
05:46 chew on that. It's a big question, huh? No, it is a big question because you've got the,
05:51 well, I mean, obviously the big development is the asyncio library. I mean, you get that added to
05:57 Python and that is tying together a lot of, a lot of ideas from different places. I mean, a lot of,
06:05 a lot of concepts about, you know, event loops and generator functions and coroutines and all these
06:12 things are kind of coming together in, in that library. There's a lot of excitement around that
06:18 library, but it's also a really difficult concept. Like that's a very difficult library to wrap your
06:24 brain around. Yeah. It takes a bunch of things that are individually pretty conceptually hard and then
06:30 puts them all together. Yeah. I actually realized that, you know, in hindsight, I never really quite
06:35 understood that library the first time I heard about it. I mean, I, I watched Guido give a keynote talk
06:41 about it at PyCon and I'm trying to think which one that was. It might've been 2012, maybe 2013. And my
06:50 takeaway from the, from the talk is that, Oh, this is going to be cool. We're going to do, we're going
06:55 to do things like coroutines for, for async. And we'll probably talk about that later, but I went and
07:02 rewatched that talk recently, maybe three months ago. Cause I was like, what did, what did Guido
07:07 actually say in that talk? And it was, it was not at all what I remembered. You know, you rewatch
07:13 the talk, and he's talking more about, you know, trying to have like a common event loop in Python to have
07:21 some kind of interoperability with some of these libraries, like, you know, twisted and tornado and,
07:26 you know, maybe G event or something. And it, you know, the, the focus on coroutines was more
07:31 incidental. So that's really interesting. Yeah. So if for example, you're using twisted, it has some
07:38 kind of event loop that's doing the callbacks for you as, as your things complete or whatever. And if
07:45 you're using something else that also has an event loop, those things might not know anything about each
07:49 other. Right. Right. Right. Right. Yeah. That, that can definitely be a challenge. So that takes us up to,
07:56 what, 3.3, 3.4? When was that? 3.4, right. Yeah. And then in 3.5,
08:02 we got async and await, which, you know, really takes these ideas and makes them more accessible
08:11 to people, I think. Right. I think, yeah. Trying to put like a better face on it. I think, you know,
08:16 it's like a putting like a different, I don't know, I don't know how to, almost like a different API on top of
08:22 that machinery, I think to present it in a more coherent way. It's certainly not a, that, that
08:29 approach is not a Python invention. I mean, I, I, I don't know that they directly cite it, but it had
08:34 been done in C# before. Yeah. And it, you know, what's interesting, like the, the history in C#
08:39 kind of follows the same way. They didn't come up with that initially either. They came up with just
08:44 this idea of like a task framework and it was all callback driven and whatnot. And then somebody said,
08:50 oh, look, this callback way of writing, this is super not nice, right? It, it works, but it's not
08:56 the same as writing serial code. If we put this async and await on it, it will be. And I, you know,
09:01 it has exactly the same benefit for Python is that it takes code that would have to look special and it
09:07 kind of makes it look serial, right? Yeah. It makes it like, I mean, that really, that is kind of
09:12 the whole focus of it, you know, writing code with callbacks. I mean, you see that like in every
09:18 talk about async, you know, there's usually a slide. It's like, oh, callback hell or something,
09:22 right? And everybody kind of moans. Yeah, exactly. I've seen, seen callback hell. And then, and then
09:28 there's different, different approaches for how to, you know, kind of untangle yourself from that.
09:34 And, you know, it's, you know, it's interesting, you know, where, where it tends to push everyone is
09:40 more into just like serial code. You know, people want something that looks a lot like maybe thread
09:45 programming or just kind of, you know, just kind of straight forward code. But then, you know,
09:50 there's this question, how do you get there? So all the fooling around with generators and,
09:55 you know, tasklets and green threads and all the, all these things that you see in these libraries
10:01 are all kind of focused on that general, you know, that general problem.
10:05 Yeah. So that's really, that's a really interesting thing to understand kind of from the ground up,
10:11 I think. And so one of the subjects I wanted to cover while we were talking today is some of the
10:18 ideas that you brought up in a talk that you gave at PyCon 2015 called Python Concurrency From the Ground
10:24 Up: LIVE! And before we get into that, I just want to say that was such a masterful talk.
10:30 You did a fine job on that talk. For those of you who haven't seen it,
10:35 David basically pulls up the editor and says, we're going to write a web server, or really
10:41 a TCP server, and we're going to explore all the ways that you might approach
10:47 concurrency with it and the ways we might invent something similar to the, the asyncio that's built
10:55 into Python and it was really well done. Okay. Thank you.
10:59 Yeah. Yeah. So people, I'll link to that in the show notes and people should definitely check it out,
11:04 but maybe we could just kind of talk through some of the ideas of like, if we start with a serial
11:10 server and a serial client, like how do we, what are the ways in which we can build up to that? Like
11:16 we can use threading, we could use coroutines. There's lots of things, right?
11:20 Right. Right. So obviously you start with serial code. It's fine as long as you don't have too many
11:24 people requesting from the server. Right. But as soon as you have some long request, everything is
11:28 blocked. Right. Right. The whole big picture of that talk, I'm going to try to distill it down from
11:34 like high, high level view here. It's all about scheduling, basically task scheduling. And if you
11:42 have normal serial code, you're executing statements, you're going line by line through the code. If you
11:49 hit an operation, like receive, like I want to receive data on a socket or something, that code is going to
11:54 block. If it's like, if there's nothing available, it's going to block and you're going to have to wait.
12:00 And that really is the gist of the whole problem, which is what happens that, like what happens when
12:07 you block? And if you do nothing, then your whole program just freezes. I mean, just everything stops
12:15 and then nothing can happen. Some ways around that, one approach to do it is to use threads in the
12:22 operating system. Essentially with threads, you're running multiple sort of serial tasks at once.
12:30 And if one of them decides to block, needs to receive, well then, you know, the others are still
12:36 allowed to run. So you're essentially allowing the operating system to deal with it. It would take
12:41 care of scheduling the, scheduling the threads and making sure that things work. The other approach,
12:48 and this is something that the talk gets into at the end, is to do it yourself. Don't have the
12:53 operating system do it. Take care of that blocking, blocking on your own. And one of the tricks that's
13:00 used for that is to use Python generator functions. And what's used there is actually just the
13:07 behavior of that yield statement. So if you haven't written a generator function before, I think most
13:15 people kind of know them in the context of iteration and the for loop. You can write this function where
13:20 you use the yield statement to emit values out of a for loop. And the thing that's really cool about
13:27 that yield statement is that it causes the function to just suspend itself right there at the yield. It's
13:33 like it emits a value and then it suspends. And that's exactly the kind of thing you need to do
13:39 this concurrency thing. You can say, well, if there's no data to receive, I can suspend myself.
13:46 And essentially, you can take over the role that like an operating system would normally do at that point.
13:51 Yeah. And I think that's a really, really interesting insight that you can say, I'm going to take this
13:57 thing that really doesn't generate anything and make it a generator anyway. And so you had some
14:04 interesting examples of like, we're going to basically simulate parallelism with generators.
14:09 And one of the reasons you might care about that is you can switch things over to threads,
14:13 but that actually slows things down quite a bit. And especially if there's computational stuff,
14:19 you might have to push that out to like do multi processing. And then it really slows down like 10
14:25 times what it might normally be. And so you had this great example that maybe we could talk about
14:29 just a little bit where you say like, let's just come up with a generator that can count down from
14:34 like, you give it a number like 10, it'll count down 10, nine, eight down to one. And then it's done.
14:39 And you generate multiple of these with different numbers and whatnot,
14:44 and put them all into like a task list of things that have to be run. And then you, one at a time,
14:51 sort of round-robin work through those generators. And I think that really highlights, here's how this event
14:58 loop can work, right? We can actually process these in a semi fair way across all these different
15:04 generators, right?
15:05 Right. But you can like cycle, you know, you can kind of cycle between them.
15:08 Yeah, which really doesn't make a lot of sense when you just have a little countdown thing,
15:13 right? But then you say, Okay, well, now let's apply the same idea to functions like a while true loop,
15:21 that is going: while true, come over here and wait to receive from a socket, then process the response,
15:29 and so on. You know, if you put yield statements throughout, right before all the blocking
15:36 places, you can kind of accomplish the same thing with the same technique, right?
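(For reference, the countdown-and-round-robin idea being described looks roughly like this. It's a minimal sketch in the spirit of the talk, not the actual code from it.)

    from collections import deque

    def countdown(n):
        # A "task": each yield hands control back to the scheduler.
        while n > 0:
            print("T-minus", n)
            yield
            n -= 1

    # A tiny round-robin scheduler: keep cycling through the generators,
    # advancing each one until it is exhausted.
    tasks = deque([countdown(10), countdown(5), countdown(20)])
    while tasks:
        task = tasks.popleft()
        try:
            next(task)              # run the task up to its next yield
        except StopIteration:
            pass                    # this task is finished; drop it
        else:
            tasks.append(task)      # not done yet; back into the rotation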
15:40 Yeah, yeah. Okay, the way I kind of describe it, and I don't know, this might be sort of silly,
15:45 but the whole approach in that talk, I sort of view it as analogous to maybe like the game of hockey or
15:50 something like you've got a, you've got a task, and it's out on the ice, you know, and it's doing its
15:55 thing. But if for some reason, it's got to read, it's got to receive and like, it can't proceed.
16:00 It gets thrown into the penalty box, right? One of the rules is you can't block, and anytime it violates
16:06 that like, yeah, you take a blocking penalty, man. So it's like, it's like, okay, you blocked,
16:11 you're going to the penalty box, and you're going to sit in the penalty box for as long as it takes
16:16 until like some kind of data comes in. And then once some data has arrived, and it's like, okay,
16:21 you get to go back out on the ice. So it's very much that model, you know, it's like tasks get to
16:26 run as long as there's things to do. But you know, once there's nothing to do, it's now you go, you go
16:31 sit in the penalty box. And yeah, I think that's a really interesting analogy. And it's definitely makes
16:36 a it's a good way to think about it. The challenge, though, and what it was, I think this is pretty
16:41 obvious, right? I'm going through and if I could pull out any task, and I could ask it, hey, do you
16:47 have work to do? Then I'm going to let you do it. Otherwise, you go back to waiting, or you go to the
16:52 penalty box. And when you decided you have work to do, you can come back out. It's interesting.
16:56 But then how do you know when it actually has work to do, right? Like on the socket, how do I know,
17:02 or if I'm doing something computational, how do I know that that task is ready to run?
17:05 Oh, yeah. Well, to get that, you need the help of the operating system. So there's usually there are
17:12 some system calls related to like polling of sockets. Like the select call is one, there's
17:17 things with like, you know, like the poll function, you know, there's there's like low
17:23 level kind of event APIs in the operating system where you can present it with like a whole collection
17:29 of sockets. And you can say, Okay, I have these 1000 sockets. Why don't you watch these? And then
17:35 if anything happens, tell me about it. Yeah. And that works really well for sockets. But what if I
17:40 gave it like a piece of just a function, a Python function that was computationally expensive?
17:45 Yeah, if it's computationally expensive, it's just going to run. It's actually a problem with these,
17:49 like this event loop thing. If you have something that runs like, I don't know, it's going to go
17:53 mine a Bitcoin or something. It's just going to run. And there's no way to, there's no way to get it
17:59 back until it finishes. So yeah, it's true. I mean, it's all really, the event loop really is
18:05 on the same thread, right? I guess there are some things you can do, like you can say, Well,
18:08 this part is computational. So we're going to kick that off in some multi processing way,
18:13 or something like that, that's possible. You had some kind of socket trick where you're even when
18:18 you weren't using sockets, you were using them to signal, like if you're going
18:21 to do like CPU work somewhere else, in some sense, you turn that back into an IO problem. I mean,
18:28 you might have some work that gets carried out somewhere else. And then
18:33 when it's done, it gets signaled on a socket saying, hey, you know, that thing was done.
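(The select-style polling described a moment earlier looks roughly like this: hand the operating system a set of sockets and let it tell you which ones are ready. A minimal sketch, with a made-up handler function, not code from the talk.)

    import select

    def poll_once(readers, handle_data):
        # 'readers' is a list of connected sockets; select() waits up to
        # one second and returns the ones that have data ready to read.
        ready, _, _ = select.select(readers, [], [], 1.0)
        for sock in ready:
            data = sock.recv(4096)
            if data:
                handle_data(sock, data)     # hypothetical callback
            else:
                readers.remove(sock)        # peer closed the connection
                sock.close()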
18:38 Yeah. So okay. Yeah, it was a very interesting technique. And in the end,
18:43 down in the internals, you kind of had to deal with some of the callbacks, but the way it got consumed,
18:48 right, it was pretty straightforward. So that was your talk. And like I said, people should
18:55 absolutely go watch it. It's really quite amazing. And then the other thing that I kind of see as
19:02 the frameworkification of this, these ideas, I'm not sure what the origin is, I'll ask you,
19:08 which, you know, which one came first, but is this project you have called Curio? Do you want to tell
19:13 people what Curio is? Yeah. Okay. So Curio is a library for doing concurrency in Python. I mean,
19:19 it exploits a lot of this async and await stuff. I'd say it ultimately sits in kind of the same spot as
19:26 asyncio, although it has a very different flavor to it.
19:30 One of the things that I think is really interesting with Curio is like, there's been,
19:34 as we talked at the beginning, there's been these ideas for a long time of doing some sort of
19:40 asynchrony through callbacks and things like that. Like with Twisted, we've got asyncio built in the
19:47 earlier versions of Python. But in Python 3.5 and onward, of course, we have async and await. And you
19:55 kind of took all of those ideas and said, let's refresh, let's rethink them. Like, how would this
20:00 look? How would this API look if we actually had this version of Python concurrency, not what we had
20:05 before, right? Like that's, that was how I was reading it when I was going through it.
20:09 Yeah, that's part of it. I mean, okay, so there's a little bit of a complicated, complicated background
20:14 on this. So let me back up. In a past life, going back a ways, I was a professor in computer science.
20:22 And the main course that I taught was operating systems. And in that course, this is a typical
20:28 course where you'd make students cry on some huge project thing. And I was, I was just a bloodbath,
20:35 bloodbath of a course. So we'd make people write an operating system kernel in C.
20:40 And, you know, that kernel had to do all of this stuff, had to do IO and had to do like multitasking
20:47 and task switching and all these things. And it turns out that all the stuff in that project
20:52 was exactly the same kind of thing that people have to do in these asyncio libraries.
20:57 And that's what they're doing. And it's like, they're doing IO and they're coordinating tasks and,
21:02 you know, switching stuff. So it's, it's, the problem is essentially the same.
21:06 It's just in a different environment instead of down at the low level of C and like interrupt
21:12 handlers and device drivers, you're up in Python and it's much higher, higher level, but it's,
21:17 it's a similar topic. And having done that, I mean, I've always kind of, kind of had a,
21:23 an interest in systems topics. So gave a, you know, a well-known PyCon presentation on the gill
21:30 that would have been, I don't know, maybe 2010 or something, something like that. And I also had
21:35 done some tutorials at PyCon about coroutines, sort of exploring, you know, this idea of using generators
21:42 and, and coroutines for concurrency. So it's been a, been a topic that personally been kind of
21:49 exploring for, for a long time. But one of the things that has kind of, I don't know,
21:55 bothered me over those years is that all of my presentations on that have been completely out of
22:01 line with what is actually going on in Python. I mean, like, if you look at that concurrency talk,
22:08 that is not at all how like asyncio approaches concurrency, like approaches that problem. If you look
22:16 at a lot of the, you know, a lot of the presentations and things that what the, what they talk about is
22:21 they're like, Oh, we have an event loop. And then we put callbacks on top of the event loop. And then
22:26 usually there's like a transition into, into a discussion of like futures or promises and like
22:32 a whole approach based on like futures and promises and tasks. And it starts getting like a lot of kind of
22:40 moving, moving gears. And frankly, I just have not been able to wrap my head around that stuff.
22:47 I look at that approach and it is completely different than anything that would have been
22:52 taught in an operating systems class. But I've never seen an operating system kernel built on top of
22:58 futures, for instance. I actually went and got like my old operating system books about, you know,
23:05 not too long ago. And I was like, God, did any of those books talk about futures or promises? And
23:09 they're nowhere in any of the, like the operating system texts. Do you, do you see that? So I've
23:15 been struggling with this kind of mismatch in a way where it's, where it's like, Hmm, there's this whole
23:20 approach with like futures. And then there's this thing that I did in the talk, which is a completely
23:26 different, different thing. And I've been kind of fascinated with that kind of mismatch,
23:32 you know, like, why is that? Or like, what is, what is, what is going on there? And in some sense,
23:38 the curio project is maybe a, maybe it's kind of like a green field project, just trying to do asyncio,
23:47 but more in the operating system kind of model where I'm thinking more in terms of like task scheduling and
23:53 the structure of how I would do it in like a kernel project, not in the framework of like the, you know, the futures and promises and callbacks and,
24:04 you know, all of that stuff. So that's a big part of that project is like just kind of a re-envisioning
24:10 of how this might work.
24:12 Right. Okay.
24:13 Yeah.
24:14 Do you feel like it provides a cleaner model by not making you think about all the callbacks and
24:20 futures and stuff like that?
24:22 I think it does. I mean, it is really wild at first glance because it pretty much kills off
24:28 everything that you're used to. Like, in fact, there are no callbacks in curio at all. It's like,
24:34 there's, so there's no callbacks, there's no futures. There's like almost none of that machinery that you see in,
24:40 in kind of the asyncio library. You know, one way I've described it, it's almost like I took the async and await feature that got added in 3.5 and used it as a starting point for some completely different approach.
24:54 Yeah, absolutely. And so it's really, it's really involving a lot of async and await and coroutines basically. Right.
25:03 So a lot of the starting points, a lot of things you want to do, you're like, provide this class with some kind of async method, right?
25:11 An async coroutine. And then, then it runs from there. Right. Right. Right.
25:15 Yeah. So there, you know, I don't know whether people have looked at it or not, but there are a lot of building blocks,
25:22 a lot of really nice parts to this library.
25:27 When I first thought about it and first checked it out, I kind of thought, okay, this will be,
25:30 maybe it's got like a core event loop and it's doing a few things differently, but there's a lot of building blocks here.
25:36 You can build some really interesting things with it.
25:38 Yeah. There's some, some really odd stuff going on. I don't know
25:42 how much we want to get into that. But one thing, actually getting back to the operating system model on that,
25:49 these async libraries, I mean, let's see if this makes sense. A lot, a lot of these async libraries are kind of like an all in proposition where like you're either coding in the asynchronous world or you're not.
26:03 And it tends to be kind of a separation between those two worlds. Like it, even,
26:07 even if you're working with callbacks, it's kind of like you have to program in kind of the callback style or you're,
26:13 you're kind of out of luck. And I think Curio kind of embraces that as well.
26:19 I mean, one of the, one thing from operating systems is there's usually like a really strict separation between what is the operating system kernel.
26:27 And then what is like a user space program, even at the level of like kind of like a protection kind of thing, like, you know,
26:35 like a user program isn't even allowed to like see the kernel in any meaningful way.
26:41 I mean, it's, it's, it's like very strict separation.
26:44 That is also something that's going on in the, in this Curio project.
26:49 I mean, there's kind of the world of async, you have all these async functions and await and all of that.
26:54 And then there's kind of the kernel and like those two worlds are like really separated from each other in this,
27:00 in the Curio project.
27:01 So that's all, that's another kind of unusual thing about it.
27:04 Yeah, that is really interesting.
27:05 And I, I, now I can see the, the operating system analogies and there is a kernel,
27:10 a thing you actually call the kernel in Curio, right?
27:13 Yeah.
27:14 Yeah.
27:28 This portion of Talk Python to me has been brought to you by Rollbar.
27:31 One of the frustrating things about being a developer is dealing with errors,
27:35 relying on users to report errors, digging through log files, trying to debug issues,
27:40 or a million alerts just flooding your inbox and ruining your day.
27:43 With Rollbar's full stack error monitoring, you'll get the context insights and control that you need to find and fix bugs faster.
27:50 It's easy to install.
27:52 You can start tracking production errors and deployments in eight minutes or even less.
27:57 Rollbar works with all the major languages and frameworks, including the Python ones,
28:01 such as Django, Flask, Pyramid, as well as Ruby, JavaScript, Node, iOS, and Android.
28:06 You can integrate Rollbar into your existing workflow, send error alerts to Slack or HipChat,
28:10 or even automatically create issues in Jira, Pivotal Tracker, and a whole bunch more.
28:15 Rollbar has put together a special offer for Talk Python to me listeners.
28:18 Visit rollbar.com slash Talk Python to me, sign up, and get the bootstrap plan free for 90 days.
28:24 That's 300,000 errors tracked all for free.
28:27 But hey, just between you and me, I really hope you don't encounter that many errors.
28:31 Loved by developers at awesome companies like Heroku, Twilio, Kayak, Instacart, Zendesk, Twitch, and more.
28:37 Give Rollbar a try today.
28:38 Go to rollbar.com slash Talk Python to me.
28:49 Basically, you use the constructs and you pass these async coroutines to it, and that's that.
28:54 Okay.
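(To make that concrete, here is a minimal sketch of what handing coroutines to Curio's kernel looks like. The names run, spawn and sleep reflect my reading of Curio's documentation around this time; check the project for the current API.)

    import curio

    async def countdown(n):
        while n > 0:
            print("T-minus", n)
            await curio.sleep(1)      # suspend here; the kernel runs other tasks
            n -= 1

    async def main():
        # spawn() hands a coroutine to the kernel as an independent task.
        task = await curio.spawn(countdown, 5)
        await task.join()             # wait for it to finish

    curio.run(main)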
28:55 So if you're doing, like you said on the all-in part, if you write some sort of coroutine,
29:01 and it comes down to a point where you're either doing something computational or blocking,
29:05 and you block there, you kind of take everyone out, right?
29:09 What do you mean take everyone out?
29:11 Well, you can clog up the event loop.
29:14 Oh, yeah.
29:15 Yeah.
29:15 Definitely.
29:16 Yeah.
29:16 I mean, you can throw a wrench into the whole thing.
29:18 Hey, we're just going to keep going and letting it.
29:21 You do your work, and when you're done, come back, and then we'll pick up where you left off.
29:26 Yeah.
29:27 So how do you deal with that?
29:29 Like, if I've got some async coroutine I want to run in Curio, and it's got something computational,
29:35 how do I make that work?
29:37 Sometimes you just have to do something that's going to take a while, right?
29:40 Yeah.
29:41 I mean, if it's computational, you have to punt it out either to a thread or another process.
29:44 Yeah.
29:45 Okay.
29:45 Like with multiprocessing or something?
29:47 Yeah.
29:47 It's kind of the standard technique for all these async things.
29:51 It's like you've got something computational, and it's going to block.
29:53 Yeah.
29:54 You've got to punt it out somewhere.
29:56 And then do you have a way, like a construct, to await a thread or await some kind of multiprocessing call?
30:02 Yeah.
30:03 There's a function in there.
30:03 You can ask to run something in a thread or you can run something in a process.
30:08 Okay.
30:08 It will take care of it and wait for the result to come back, but it won't block the sort of internal loop, basically.
30:16 Okay.
30:17 Yeah.
30:17 Very nice.
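(The helpers being referred to here are, as far as I recall, run_in_thread() and run_in_process(). A rough sketch of punting blocking or CPU-heavy work out of the kernel loop; treat the exact names as something to verify against Curio's docs.)

    import curio
    import zlib

    def crunch(data):
        # A blocking / CPU-bound function; it runs outside the kernel loop.
        return zlib.compress(data * 1000)

    async def main():
        # Awaiting the helper gives the result back without stalling
        # the other tasks the Curio kernel is scheduling.
        compressed = await curio.run_in_thread(crunch, b"example payload")
        # For genuinely CPU-heavy work, a separate process sidesteps the GIL:
        # compressed = await curio.run_in_process(crunch, b"example payload")
        print(len(compressed), "bytes")

    curio.run(main)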
30:17 So if we have some time, we could talk about some of the individual building blocks, but what can you build with
30:26 Curio, do you think?
30:27 I mean, it looks like it's a little bit below like a web framework, but it's close to a framework for building asynchronous programs on its own.
30:36 Like where do you think this fits in?
30:38 Okay.
30:38 Yeah.
30:38 It is definitely not a web framework.
30:41 In fact, I don't even think there's any HTTP support in it right now.
30:46 So I think it is more of a framework for concurrency, you know, actually setting up tasks, communicating between tasks, coordinating things.
30:54 It's the kind of thing that you might start building libraries on top of, you know, maybe libraries to, you know, interact with Redis or interact with databases or even to do HTTP.
31:07 But it's definitely like a lower level, a lower level thing.
31:10 So it's, you know, a lot of like coordinating tasks and things of that nature.
31:15 I see.
31:15 So if I wanted to create some framework that was backed with Redis, I could use Curio to make a really nice async and await framework and somehow do the network IO internally.
31:27 And people might not even know that it's Curio, right?
31:29 They might just know my framework and it talks to Redis and internal part of that could be Curio.
31:34 Yeah, maybe.
31:35 Yeah.
31:35 Yeah.
31:36 I mean, I sort of see it personally as something I might implement a lot of sort of microservice code with.
31:41 Yeah.
31:41 Like a lot of little like web services and stuff, things like that.
31:45 Done a little bit of playing around trying to implement a game server with it.
31:48 Sure.
31:49 Like a socket-based game server.
31:50 Yeah.
31:50 Socket-based game server.
31:52 I think there's a lot of uses with like testing, that kind of thing. I think there's a lot of people
31:57 who do network programming that's not necessarily web programming.
32:00 And so I think it kind of fits into that.
32:03 Yeah.
32:03 And it has really good support for TCP and UDP type stuff, right?
32:09 Right.
32:09 Right.
32:10 Okay.
32:10 If I wanted to take a framework that maybe I'm already using, let's say Django, Flask, Pyramid, something like that, that doesn't have any support for this idea of concurrency or async await,
32:24 could I somehow use Curio in my own code?
32:29 If I'm willing to sort of do some kind of callback mechanism or like notification mechanism in my web app, right, for like asynchronous stuff?
32:38 Or would those things just not make any sense together?
32:41 I think it would be tough.
32:42 If you've got code that was written originally for kind of the synchronous world, getting that into any async framework, I mean, even asyncio or anything like that can be, it can be kind of a tough proposition just because there's just the programming model can be very different.
32:59 And you have to instrument a lot of code with like these async and await calls.
33:04 Right.
33:04 It's unclear.
33:06 Yeah, yeah.
33:07 I'm thinking like if you have like WebSockets or some kind of polling JavaScript thing, like maybe those parts of it could somehow use Curio?
33:18 Maybe.
33:19 Yeah, maybe.
33:19 I don't know.
33:20 Maybe.
33:21 I mean, you actually kind of this opens up kind of a, I don't know, an avenue of discussion, which is, you know, actually where does this thing, you know, where does it fit in kind of the grand scheme of things?
33:33 And I think, you know, one thing with these async frameworks is just stepping back for a moment and thinking like, okay, where, like, what is the use case for these things?
33:43 Or like, what are they really good at?
33:46 And one of the things that they're really good at is handling a gigantic number of connections, like a high degree of concurrency.
33:54 Right.
33:54 Where you might have like, like, let's say I had 100,000 clients connected to some server and I've got to maintain, you know, some pool of, you know, 10,000, 100,000 socket connections.
34:06 That is where these async things tend to shine because you can't just spin up like 10,000 threads.
34:12 Yeah, you can't.
34:14 Just the memory required for the stack space would be problematic, right?
34:18 Right, right.
34:18 So they're really good at that.
34:21 But so you have like a high degree of concurrency.
34:24 But at the same, you know, even though you have like a high, like a lot of clients, it doesn't mean that those clients are all doing things at the exact same time either.
34:34 I mean, you might have like a server that has like, look, it has like 100,000 connections open, but maybe it's doing sort of push notifications or kind of low traffic stuff.
34:44 I mean, it's not like you're going to have like 100,000 connections kind of open, just completely hammering your machine all the time.
34:50 Right.
34:51 Something like Slack or something, right, where everybody's got it open, but the amount of traffic is actually quite low.
34:56 But you want it to be basically instant, right?
34:58 Right.
34:59 So the kind of stuff I'm thinking about is like, okay, so maybe let's say you have like 100,000 connections open.
35:04 Could you still use something like Flask or Django or something?
35:10 Like, could you still use that in some capacity?
35:14 Now, I mean, you can't, you're not going to be able to spin up like 100,000 threads running Django or whatever, but could you have some coordination between, you know, these like tasks and something like Curio or AsyncIO and coordinate that with maybe a smaller number of threads or processes or whatever that are running like a more traditional framework?
35:35 Yeah, exactly.
35:36 That was what I was thinking.
35:37 I'm not sure if it's possible, though.
35:38 I don't know.
35:39 I mean, so one thing in Curio that I think is actually one of the more interesting parts of the project is I'm trying to do a lot of coordination between async and traditional thread programming.
35:53 As an example of that, one thing that Curio has is it has this universal queue object.
36:00 This is probably one of the most insane things in the whole library, but a standard way to communicate between threads is just to use a queue.
36:09 Right, because shared shared data is problematic, right?
36:12 You've got a lock on it and all sorts of stuff.
36:14 Right.
36:14 So you have a queue and, you know, you like share between threads.
36:18 So there's this thing in Curio that lets, that basically allows queuing to work between async tasks in Curio and thread programs in a really like seamless way.
36:31 Like essentially the thread, the thread part of it just thinks it's working with a normal queue and everything works normal.
36:37 And like the Curio side works with like a, like an async queue and it thinks that everything is kind of normal.
36:43 And you get this like queuing going back and forth between kind of the two worlds.
36:47 And it's kind of, it's, it's sort of seamless in a really disturbing way.
36:51 It's a, it's like maybe you could have like, you know, a hundred thousand tasks kind of managing sockets, but then talking to some pool of threads through queuing and it, and it all kind of works.
37:02 And it's, this is sort of an area that is not, at least as far as I know, not being explored more traditionally in asyncio, for instance.
37:10 They have a queue there, but it's not compatible with threads.
37:13 Right.
37:14 So this is a really interesting idea.
37:15 This universal queue, it's kind of like a dual facade, right?
37:19 The, the different worlds can see it as part of theirs, right?
37:23 Yeah.
37:24 Yeah.
37:24 Somebody contributed a feature to Curio to allow it to submit work to asyncio.
37:30 Okay.
37:30 So like you could have Curio and you could have threads and you can have asyncio.
37:35 This queuing object in Curio actually works in all three of those worlds.
37:39 Done some tests on that.
37:41 So like you could have a queue where one end of the queue is like an asyncio task and the other end is a thread.
37:47 And then, or, you know, you could have like a thread and a Curio task putting things on a queue that's being read by asyncio and other, other things.
37:57 That's a really kind of wild, crazy thing to be playing with.
38:00 I mean, it's, yeah, that's really interesting.
38:02 I, you know, I didn't know about universal queue, but the library is full of these, these really amazing little data structures and, and functions and stuff.
38:11 So it's, it's quite neat.
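(A sketch of the universal queue idea: an ordinary thread calls a plain put(), while a Curio task awaits get() on the same object. UniversalQueue is the class name as I understand it from Curio's docs; verify it before relying on it.)

    import threading
    import curio

    def producer(queue):
        # Thread side: the queue behaves like a normal, synchronous queue.
        for i in range(5):
            queue.put(i)
        queue.put(None)               # sentinel: tell the consumer to stop

    async def consumer(queue):
        # Curio side: the same queue is awaited like an async queue.
        while True:
            item = await queue.get()
            if item is None:
                break
            print("got", item)

    async def main():
        queue = curio.UniversalQueue()
        threading.Thread(target=producer, args=(queue,)).start()
        await consumer(queue)

    curio.run(main)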
38:12 I mean, the other thing that's kind of wild and, well, kind of interesting in Curio too, is just the way that the task model works. It has a lot of support for things like cancellation of tasks.
38:24 Right.
38:24 And that turns out to be a really tricky problem.
38:27 It's like, okay, you set up a whole bunch of work and then you want to cancel it.
38:30 Can you do that?
38:31 And that's something that you can do in Curio.
38:34 And I, it's, it's very interesting because it's something that you can't do with threads traditionally.
38:39 Right.
38:39 If you just kill a thread, maybe it's holding onto some kernel level thing and you forced it to just, you know, leak it or whatever.
38:46 Right.
38:46 It's going to be bad.
38:47 You don't really even have a way to kill a thread.
38:50 I mean, there's no API for killing a thread, and then I think some people have done it going through ctypes, but that, that just makes my skin crawl.
38:59 It's like, killing threads by going through ctypes seems like a really good way to just not have your program work.
39:07 So.
39:08 Yeah, exactly.
39:08 Like if it's holding onto something important, like let's say, imagine it's holding onto the GIL and you kill it.
39:14 What happens then?
39:16 That might not be great.
39:18 Yeah.
39:19 Okay.
39:19 So this cancellation thing, you're right that that is not simple.
39:22 How does it work?
39:23 Is it like basically every time one of these async coroutines yields or if it's not in a running state, you can just say, okay, this is getting canceled?
39:31 Pretty much.
39:32 That's it.
39:33 Yeah.
39:33 Since every operation requires, you know, kind of the support of this kernel, if somebody wants to cancel something, it's either blocked in there already.
39:47 Or you can just wait for it.
39:48 Or you can just wait for it to finish.
39:49 yeah, you can, you can essentially just, just blow it away.
39:51 I mean, it's, you raise an exception at the yield statement saying, okay, you're done.
39:55 Yeah.
39:56 Okay.
39:56 That's what I was, that's always my next question is like, you can't just take it out of the running task list and throw it away because it might, might've been in the middle of something that needs to be unwound, like created a file and it needs to close the handle or something.
40:08 Right.
40:08 So you, you basically just raise like a task cancellation exception or something.
40:12 Uh huh.
40:13 Okay.
40:13 Yeah.
40:13 So it gets a cancel there, and then it, it can choose to clean up if it wants, but it's sort of a graceful, you know, graceful shutdown from that.
40:22 Yeah.
40:22 That's a nice, that's a really nice feature.
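(A sketch of what that graceful cancellation looks like in Curio. Cancelling a task raises an exception inside it at its next await point, and the task can catch it to clean up. The exception and method names here are my recollection of Curio's API, so double-check them.)

    import curio

    async def worker():
        try:
            while True:
                await curio.sleep(1)         # every await is a cancellation point
        except curio.CancelledError:
            print("cancelled, cleaning up")  # close files, sockets, etc.
            raise                            # re-raise so the kernel sees we stopped

    async def main():
        task = await curio.spawn(worker)
        await curio.sleep(3)
        await task.cancel()                  # raises CancelledError inside worker

    curio.run(main)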
40:23 So let me ask you about integrating with some other things.
40:27 Like I'm, I have databases on my mind right now for, for some reason.
40:31 So there's a bunch of nice ORMs or ODMs.
40:35 If you're doing SQL in Python, you know, SQLAlchemy, MongoEngine, Peewee, Pony, and so on.
40:42 The one ORM that I've seen that seems to integrate really nicely with async and await is Peewee ORM.
40:49 You can basically await on the queries that you're getting back from it, which is super cool.
40:55 Would Curio integrate pretty seamlessly with a framework like that?
40:59 I don't know.
41:00 Do you know how they, how they're doing that under the cover?
41:03 I looked at Peewee before we met here because you mentioned it.
41:07 I didn't see that feature off the top of my head, but.
41:10 To basically the extent that I know is I've seen reference to it where it basically supports async and await on, on the queries, right?
41:20 The things that it's going to talk to the database, but I don't know what's happening internally there.
41:24 Okay.
41:24 Yeah.
41:24 I don't know.
41:25 I would have to take a, I would have to take a look at it.
41:28 My initial guess is it probably would not work just because Curio is so out in left field right now.
41:35 It's, you know, like if they've written, if they've written that specifically to work on top of asyncio.
41:40 I see.
41:40 Yeah.
41:41 So it might not, right?
41:42 Hit or miss on that.
41:43 Yeah.
41:43 Yeah.
41:44 Yeah.
41:44 I mean, the API is you just have an async method and you just await either object.create or objects.query and so on.
41:51 But I don't know what's internal.
41:52 It's probably asyncio, I would guess.
41:55 Yeah.
41:56 This portion of Talk Python to Me is brought to you by Hired.
41:59 Hired is the platform for top Python developer jobs.
42:03 Create your profile and instantly get access to thousands of companies who will compete to work with you.
42:08 Take it from one of Hired's users who recently got a job and said, I had my first offer within four days and I ended up getting eight offers in total.
42:15 I've worked with recruiters in the past, but they were pretty hit and miss.
42:19 I tried LinkedIn, but I found Hired to be the best.
42:21 I really like knowing the salary up front and privacy was also a huge seller for me.
42:26 Well, that sounds pretty awesome, doesn't it?
42:28 But wait until you hear about the signing bonus.
42:30 Everyone who accepts a job from Hired gets a $300 signing bonus.
42:33 And as Talk Python listeners, it gets even sweeter.
42:36 Use the link talkpython.fm/Hired and Hired will double the signing bonus to $600.
42:41 Opportunity is knocking.
42:44 Visit talkpython.fm/Hired and answer the door.
42:48 I've been thinking about this.
42:49 I mean, the same problem.
42:51 I'm not familiar with the PeeWee ORM, but I am familiar with SQLAlchemy.
42:55 Yeah, that was my next question.
42:56 What about things like SQLAlchemy or Mongo Engine that have no concept of this at all?
43:00 Could we somehow shoehorn them into working with Curio or these types of things?
43:05 Yeah, maybe.
43:06 Keep in mind, it's highly experimental.
43:07 It's highly experimental.
43:08 And what I'm about to talk about, I mean, may not work.
43:11 But one thing that I've been playing with in Curio is this, there's a concept in there known as an async thread.
43:19 I mean, at first glance, it's like, oh, God, this is insane.
43:24 I thought that was wonderful.
43:25 That's really cool.
43:26 Oh, no.
43:26 Async threads are nuts.
43:28 Let me see if I can explain.
43:31 Okay, so in threads, with thread programming, you have threads, and then you have all these asynchronous primitives.
43:37 Or you have all these synchronization primitives.
43:39 You have locks and queues and semaphores and all this stuff.
43:43 So in thread programming, you have all this stuff that you normally use to write programs.
43:47 It turns out that almost all of that functionality is replicated in these async libraries.
43:53 I mean, like if you look at asyncio, it has like events and semaphores and locks and queues and stuff.
44:01 And Curio has events and locks and queues and all that stuff.
44:04 But the limitation of that, of the async libraries, is that those things don't work with threads.
44:11 Like if you read the docs, it has like this huge warning on it.
44:14 It's like this is not thread safe.
44:16 You know, if you use a thread with this, you're going to die.
44:19 You'll be sorry.
44:20 Yeah.
44:21 Yeah.
44:21 So you have all these things that you would normally use with threads in these async libraries, but you can't use them in thread code.
44:29 So I've got this idea where I wonder if you could like flip the whole programming model around where you could create like an actual real live thread.
44:41 But then have the thread like sitting behind the thread.
44:46 You could have like a little task that interacts with the event loop.
44:50 Like it interacts with the async world.
44:53 Right.
44:53 So instead of having a bunch of processes, or rather a bunch of threads, and the event loop running on one of them.
44:58 And you're like the event loop controls all the threads in a sense.
45:02 Right.
45:02 So what you have is you have like one event loop.
45:05 But then you have like a real thread.
45:07 Keep in mind this would be like a POSIX thread, a real life, like fully realized thread.
45:11 But like sitting like right next to that thread out of view, like out of sight would be like a little tiny, like asynchronous, like a little task on the event loop that is watching for the thread to make like certain kinds of requests.
45:27 And I was thinking, it's like, what if you took like in the thread, you took all these requests for all these synchronization primitives and you just kind of handed it over to this like little helper on the side.
45:37 And then you let it interact with the with the event loop.
45:40 And the thing that's really kind of wild about that.
45:43 So so Curio supports this concept.
45:45 It turns out you get all of these features with like tasks and stuff in Curio showing up in threads like you can cancel threads.
45:53 You can do all the synchronizations with threads and all these all these other things.
45:59 I've been I've been thinking about that in the context of some of this database, you know, this database stuff.
46:05 Like, let's say I did want to interact with something like SQLAlchemy.
46:08 Maybe I could have like a pool of threads or something that would take care of the SQLAlchemy side of it, but then kind of coordinate it with sort of tasks on the event loop through this kind of async thread mechanism.
46:22 That's very interesting, like some kind of adapter that kind of looks like SQLAlchemy, but really routes over to another thread where maybe it creates the session and it does all the filtering, order-by, query stuff and then brings it back over when it returns or something like that.
46:40 Yeah, maybe I'm not even sure it would be an adapter, but I'm kind of thinking of like this.
46:45 The model in my mind is that, OK, if you're using async, you know, let's say you did have a server and you've got like 10,000 connections sitting there.
46:52 It's extremely unlikely that I'm going to have 10,000 or that I would want to make 10,000 concurrent requests on the database.
47:00 I mean, most of these connections are probably sitting idle most of the time or doing other things.
47:05 So I'm thinking like, well, maybe I could have like, you know, maybe 100 threads or, you know, maybe that's too many.
47:13 But you could have like a pool of threads that are sort of responsible for doing the database side of it.
47:19 And then you could coordinate that with this, you know, 10,000 asynchronous tasks in some way.
47:25 So it's going to be kind of a hybrid model where like some of the work takes place in threads and other work takes place in these async tasks.
47:32 But it's kind of done in a more kind of seamless way.
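(One way to sketch that hybrid shape, using only pieces discussed earlier rather than Curio's experimental async-thread feature: lots of lightweight async tasks handle the connections, and the blocking database call is punted to a thread. The query function is a hypothetical placeholder, and while Curio does provide a tcp_server helper, check its current signature before copying this.)

    import time
    import curio

    def blocking_query(key):
        # Stand-in for a synchronous database call (e.g. a SQLAlchemy query).
        time.sleep(0.5)
        return {"key": key}

    async def handle_client(client, addr):
        # One of potentially thousands of lightweight async tasks.
        request = await client.recv(1024)
        # The blocking work runs in a thread; this task just awaits the result.
        row = await curio.run_in_thread(blocking_query, request.decode().strip())
        await client.sendall(repr(row).encode())

    curio.run(curio.tcp_server, "", 25000, handle_client)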
47:35 Well, it sounds really cool.
47:37 I'd love to see it and try it out, but I don't know if it'll work either.
47:41 Yeah, I don't know.
47:42 Yeah, it's just finding the time sometimes to explore that.
47:45 It's a challenge for sure.
47:47 Yeah.
47:48 Yeah, but if you could take these really popular, really nice frameworks like SQLAlchemy and somehow click them into this world without actually rewriting them from the ground up, that would be really cool.
48:02 Yeah, I agree.
48:02 I think that'd be I think it'd be fun.
48:04 Yeah.
48:04 It's something I want to try.
48:06 I mean, I've got this sort of web service project that I did a couple years ago.
48:11 And right now it's sitting on Python 2.7 with SQLAlchemy and a bunch of stuff.
48:21 And I look at that and it's like, man, I really want to rewrite this thing.
48:25 You know, in Python 3.6 with Curio and try this thing.
48:30 It's just, you know, there's only finite resources in a day.
48:33 So it's still on the to-do list to sort of get to that.
48:37 Sure.
48:37 But some sort of framework that bridges that divide, I think, would be generally applicable in a lot of places.
48:43 And a lot of people would be excited about it.
48:45 Very cool.
48:46 So you talked about the AsyncIO module a lot.
48:49 Curio is not built on it, right?
48:50 It's kind of something different?
48:52 It is a completely different universe.
48:55 Okay.
48:56 It doesn't use AsyncIO.
48:58 It's doing its own thing.
48:59 That was mostly because you said you want this to be like a greenfield.
49:02 Like, what would the ideal world of this space look like?
49:05 Not like, let's pick up the old model and see what we can do with it?
49:08 Yeah, partly that.
49:10 Yeah.
49:10 And partly, you know, I'm trying to, you know, actually, Curio is kind of a project where I'm trying to learn about this stuff myself.
49:16 You know, just trying to learn, like, you know, what is Async and Await all about in Python?
49:21 What can you do with it?
49:23 Or how can you abuse it?
49:24 You know, sort of, you know, what kind of horrible, insane things are possible with it?
49:29 In some sense, you know, Curio is a project exploring a lot of that.
49:33 You know, exploring a lot of ideas about APIs.
49:35 And really kind of even the programming environment itself.
49:39 So it's not built on AsyncIO.
49:42 And it's, I don't even think it's really meant to clone AsyncIO.
49:45 I mean, it's kind of its own thing right now.
49:47 Right, sure.
49:47 Okay.
49:48 So where are you going with this in the future?
49:51 What are your plans?
49:52 Well, part of the plan is just figuring out how to write about this in books.
49:57 I mentioned I'm supposed to be updating my Python book.
49:59 So a big part of it is I'm thinking about just how to approach Async and Await in the context of writing and teaching.
50:07 And so there's that element of it.
50:10 Actually, do you want a little rant on that, by the way?
50:12 Yeah, go for it.
50:12 I have convinced myself that the approach of teaching Async needs to be flipped in some way.
50:20 And let me describe what I mean by that.
50:22 If you see a typical tutorial on a lot of this Async programming, it ends up being this very kind of bottom-up approach.
50:30 Where it's like, okay, you have sockets, and then you have the event loop, and then you start building all this stuff on the event loop.
50:36 You have callbacks, and then it's like, oh, we have callbacks, and we have futures.
50:39 And then you start layering and layering and layering and layering.
50:43 And then at some point, you reach this like, oh, and we have Async and Await.
50:47 Yeah.
50:47 And it's like Async and Await is, yeah, finally, oh, it's awesome.
50:50 You have Async and Await.
50:52 The problem with this approach is that I just have never been able to teach it.
50:57 I mean, I've tried this in classes, kind of doing the bottom-up approach to this Async stuff.
51:03 And every single time, it seems like you get about halfway through it, and then you're just looking at a room with deer in the headlights.
51:12 Yeah, you've gone through this strainer, and you've stripped everybody's interest out by the time you get to the interesting part.
51:17 Oh, it's horrible.
51:19 So you're just looking at all this deer in the headlight look, and it's like, oh, oh, God.
51:25 You think about it, it's like, okay, wait a minute.
51:28 Okay, so let's say I had to describe file I.O. to somebody.
51:33 Like, you open a file on your computer.
51:36 Like, you open a file in Python, and you read from it.
51:40 Is my description of that going to start with, like, well, okay, you have CPU registers.
51:46 And what you do is you load the CPU registers with, like, a system call number, and then, like, a buffer address.
51:52 And then you execute a trap.
51:53 And then the trap goes into the operating system.
51:56 And it's going to do something.
51:57 I don't know.
51:58 It'll find, like, a file inode.
51:59 And it'll probably check the buffer cache.
52:01 And then, you know, it'll go do some stuff with the disk scheduler and bring some stuff in.
52:05 And then there's, like, copying.
52:07 And then, is that how I'm going to describe file I/O to somebody?
52:11 All right.
52:11 People are like, I just want to read JSON.
52:13 Exactly.
52:14 So this is my thinking on async, too, you know.
52:18 It's like, does anybody actually care how this stuff works?
52:25 Like, seriously, like, do you care that there's an event loop or a future or a task or whatever it is in there?
52:34 And I'm not sure that you do.
52:36 I'm almost wondering whether, like, the approach to teaching this async stuff is to do, like, this total, like, top-down thing.
52:44 You just, like, you basically say, hey, you have async functions and you have await.
52:48 And you just start using it.
52:49 Yeah.
52:50 You don't even say, like, don't even mention, like, generators or coroutines or the yield statement.
52:56 Yes, it's built on that.
52:58 But do you care?
53:00 Yeah, I think you're totally right.
53:02 I mean, you probably care in, like, six months once you've been using it a while.
53:06 You might want to look inside, right?
53:07 But when you don't even know what async and await is, you're right.
53:11 You absolutely don't care.
53:12 I mean, it seems like let's write a program, show that it's blocked, show that we unblock it with async and await.
53:18 Awesome, right?
53:19 That could be the way to get started.
53:20 Yeah.
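To make that concrete, here is roughly what that kind of top-down first example could look like. This is a minimal sketch modeled on the countdown example in Curio's own README; the friend and main task names are just illustrative.

```python
import curio

async def countdown(n):
    # Each await is a spot where other tasks get to run instead of blocking.
    while n > 0:
        print('T-minus', n)
        await curio.sleep(1)
        n -= 1

async def friend(name):
    print('Hi, my name is', name)
    await curio.sleep(5)
    print(name, 'is leaving')

async def main():
    # Spawn a second task so both coroutines run concurrently.
    friend_task = await curio.spawn(friend, 'Kenny')
    await countdown(3)
    await friend_task.join()

if __name__ == '__main__':
    curio.run(main)
```

The point of starting here is that a beginner can read it top to bottom: async marks a function that can be paused, await marks the places where it pauses, and nothing about event loops, futures, or generators has to come up yet.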
53:21 So I've been thinking about that a lot, you know, in the context of, like, the book writing.
53:25 It's like, hmm, how am I going to – how do I bring async and await into this book project?
53:30 It's like, am I going to do it – you know, am I going to go with the top-down approach, which is going to require kind of a leap of faith, where it's like, yeah, okay, you just do it.
53:40 It's like a file.
53:41 It's like you don't care.
53:42 Like, in my whole programming life, have I ever cared how, like, a system call works?
53:47 No.
53:48 The answer is no.
53:49 Like, I mean, other than teaching the operating system class, I have never once cared how a system call works.
53:56 And I kind of feel maybe the same way about async and await.
53:59 It's like if you approach it in the right way, you know.
54:03 Yeah, it doesn't have to be so daunting, right?
54:05 Yeah, it doesn't have to be so daunting.
54:06 And actually, in that context, I'm almost wondering whether something like the asyncio module is like – it's like the assembly code of all this.
54:15 Yeah, a little bit.
54:16 Yeah, yeah, yeah, a little bit.
54:17 Yeah.
54:17 Very overwhelming, right?
54:18 I get in there and it's like, oh, coroutines wrapped by futures and tasks and blah, blah, blah, blah.
54:24 And you're like, ah, like head is exploding.
54:27 It's like, eh.
54:28 It doesn't have to explode, right?
54:29 Yeah, maybe you don't even need to know that stuff.
54:32 Yeah, that's awesome.
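For a sense of the contrast being drawn here, this is a small illustrative sketch (not from the episode) of the two flavors in the standard asyncio module: the explicit loop, future, and callback style versus the plain async and await style. The fetch_with_callbacks and fetch_with_await names are made up for the example.

```python
import asyncio

# The "assembly-level" flavor: an explicit loop, a Future, and a callback.
def fetch_with_callbacks(loop):
    fut = loop.create_future()
    loop.call_later(1.0, fut.set_result, 'done')   # pretend some I/O finishes later
    fut.add_done_callback(lambda f: print('callback got:', f.result()))
    return fut

# The flavor you'd show a beginner first: it just reads top to bottom.
async def fetch_with_await():
    await asyncio.sleep(1.0)                        # pretend some I/O
    print('await got: done')

loop = asyncio.new_event_loop()
loop.run_until_complete(fetch_with_callbacks(loop))
loop.run_until_complete(fetch_with_await())
loop.close()
```

Both do the same thing; the second one is arguably the only part a newcomer needs on day one.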
54:33 So what book is this that these ideas are going to land in?
54:36 Ultimately, it will be the Python Essential Reference book.
54:39 Nice.
54:39 Awesome.
54:40 I'm trying to figure out how to put async and await into the first chapter, which is like a tutorial introduction.
54:46 Yeah.
54:46 I got this idea where it's like, yeah, I'm just going to drop it in the tutorial like right away and just see what I can get away with.
54:54 Yeah, that'd be cool.
54:55 Just set the tone like, no, this is kind of normal now.
54:57 We're doing this now.
54:58 Yeah, it's just this normal, yeah.
54:59 Like what else would you use?
55:02 Yeah, sure.
55:02 No, I don't know whether I can get away with that or not.
55:06 Well, if you want a vote of one for flipping the presentation style, for how you present it to people, I think that's the right way to do it.
55:15 So it's unanimous.
55:17 Yeah, okay.
55:18 In some sense, the Curio project is kind of experimenting with that too.
55:22 You know, it's, I don't know, focusing more on the async and await side of the equation as opposed to the low-level mechanics.
55:30 Yeah, it's a really cool project.
55:32 And I like where you're going with it.
55:34 It's definitely worth checking out to understand this whole async world better.
55:38 So, David, I think we're getting pretty much out of time.
55:41 Don't want to use up your whole morning.
55:43 So let me ask you a couple of questions real quick, as I always do.
55:46 Yeah.
55:47 I think I can guess this from your presentations.
55:50 But if you're going to write some Python code, what editor do you open up?
55:54 It's got to be Emacs.
55:55 Emacs, right on.
55:57 And favorite PyPI package in addition to curio, of course.
56:01 Is curio on PyPI or is it just installable from GitHub?
56:05 It's on there.
56:05 You should probably use the GitHub version if you're going to do anything interesting with it, though.
56:09 It's moving pretty fast.
56:11 It's moving along.
56:12 Yeah.
56:12 I don't always update it.
56:14 Think of it more like this.
56:14 Like, maybe there's a package you recently found that people don't know about, and you're like, this is really cool.
56:19 You guys should try this out, rather than just a popularity contest.
56:22 One of the goals of curio is not so much curio itself, but to basically change a lot of the thinking around async and await.
56:31 You know, it really is kind of an exploratory project where it's like, let's see what we can do with async and await that's maybe outside the context of asyncio.
56:40 And this Trio project is something that has been kind of inspired by Curio, if you will, and is taking things in a slightly different direction.
56:52 So I would recommend people look at that.
56:53 All right.
56:53 I mean, if you're interested in, like, concurrency and some of this async stuff, it will give you yet a third spin on the whole universe.
57:01 So that's also an experimental project, but maybe I would advise checking that out.
57:06 All right.
57:06 Yeah.
57:06 Very, very cool.
57:07 And speaking of packages, I think the peewee async thing that I was talking about might be a separate package.
57:13 I'm not sure if it's built in.
57:14 So just be aware of that.
57:16 All right.
57:17 All right.
57:17 Final call to action.
57:18 Are you looking for people to contribute to this project?
57:21 What can people do now that they, you know, know about Curio?
57:24 Oh, it's definitely something where I'm looking for contributors.
57:27 I think the place where a lot of contributions could be made on the package is more in supporting some of these other networking protocols.
57:35 So getting it to hook up with things like Postgres, MySQL, Redis, ZeroMQ, things like that.
57:43 There's a whole, you know, there's kind of a whole space of things that could be done there.
57:47 A really big project that would be interesting would be support for HTTP.
57:52 Yeah.
57:52 Like some sort of a WSGI integration.
57:56 Yeah.
57:56 And that might be a whole separate podcast because there is like a whole kind of interest in HTTP and HTTP/2 right now where, you know, people are implementing the protocols independently of the actual I/O layer.
58:14 This would be like Cory Benfield's work.
58:16 And I think Nathaniel Smith is also working on this with, like, HTTP/2, where, you know, he's implemented the protocol as its own library.
58:26 But then the protocol can be used from threads or used from async or used from Twisted or used from different places.
58:33 And that's actually a really, that's a really interesting avenue of work.
58:37 Yeah.
58:37 That's, yeah.
58:38 Thanks for recommending that.
58:39 That's cool.
58:39 So maybe people could use that to build on top of or something for the HTTP layer.
58:44 Right, right.
58:45 For their framework.
58:45 Yeah.
58:46 There's been some work with that in Curio already.
58:48 It's actually been shown that you can use those libraries from Curio,
58:54 but it has not been sort of packaged up into what I would call a nice framework.
59:00 Sure.
59:00 We're kind of operating at a lower level right now.
59:04 And, you know, turning it into more of a framework,
59:07 that's a whole different question really.
59:10 So.
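As an aside, here is a tiny, purely hypothetical sketch of what that sans-IO idea looks like in practice: a protocol object that only consumes bytes and emits parsed results, so the very same object can be driven from a blocking socket in a thread or from a Curio-style socket whose recv you await. The LineProtocol class and helper names are invented for illustration and are not part of Curio or the HTTP libraries mentioned.

```python
# A toy "sans-IO" protocol: it only eats bytes and hands back complete
# lines; it never touches a socket or an event loop itself.
class LineProtocol:
    def __init__(self):
        self._buffer = b''

    def receive_data(self, data):
        self._buffer += data

    def next_line(self):
        # Return one complete line, or None if more bytes are needed.
        line, sep, rest = self._buffer.partition(b'\n')
        if not sep:
            return None
        self._buffer = rest
        return line

# Driving it from an ordinary blocking socket (threads)...
def read_line_blocking(sock, proto):
    while True:
        line = proto.next_line()
        if line is not None:
            return line
        proto.receive_data(sock.recv(4096))

# ...and the very same protocol object driven from Curio, where the only
# difference is that the recv call is awaited.
async def read_line_curio(sock, proto):
    while True:
        line = proto.next_line()
        if line is not None:
            return line
        proto.receive_data(await sock.recv(4096))
```

The HTTP libraries being discussed follow the same shape, just with real HTTP events instead of lines, which is why they can sit under threads, asyncio, Twisted, or Curio alike.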
59:10 Yeah.
59:10 Well, it's definitely a really great start.
59:12 And if it turns into one of those frameworks, I would, I would love to play even more with it.
59:17 So very nice work on Curio.
59:19 David, thank you for coming on the show to share all this async stuff with us.
59:23 All right.
59:24 Thank you very much.
59:24 This has been another episode of Talk Python to Me.
59:31 Today's guest has been David Beazley.
59:31 And this episode has been sponsored by Rollbar and Hired.
59:33 Rollbar takes the pain out of errors.
59:36 They give you the context and insight you need to quickly locate and fix errors that might have gone unnoticed until your users complain, of course.
59:45 As Talk Python to Me listeners, track a ridiculous number of errors for free at rollbar.com/talkpythontome.
59:51 Hired wants to help you find your next big thing.
59:54 Visit talkpython.fm/Hired to get five or more offers with salary and equity presented right up front and a special listener signing bonus of $600.
01:00:04 Are you or your colleagues trying to learn Python?
01:00:06 Well, be sure to visit training.talkpython.fm.
01:00:09 We now have year-long course bundles and a couple of new classes released just this week.
01:00:15 Have a look around.
01:00:16 I'm sure you'll find a class you'll enjoy.
01:00:18 Be sure to subscribe to the show.
01:00:20 Open your favorite podcatcher and search for Python.
01:00:22 We should be right at the top.
01:00:23 You can also find the iTunes feed at /itunes, Google Play feed at /play, and direct RSS feed at /rss on talkpython.fm.
01:00:32 Our theme music is Developers, Developers, Developers by Cory Smith, who goes by Smixx.
01:00:37 Cory just recently started selling his tracks on iTunes, so I recommend you check it out at talkpython.fm/music.
01:00:45 You can browse his tracks he has for sale on iTunes and listen to the full-length version of the theme song.
01:00:49 This is your host, Michael Kennedy.
01:00:52 Thanks so much for listening.
01:00:53 I really appreciate it.
01:00:54 Smixx, let's get out of here.