Simplifying Python's Async with Trio
On this episode, you'll meet Nathaniel Smith, who wrote the Trio async framework that significantly simplifies complex coordinating operations using async and await.
Links from the show
Trio: github.com/python-trio/trio
Nathaniel's PyCon Talk: youtube.com
Notes on structured concurrency, or: Go statement considered harmful: vorpus.org
Timeouts and cancellation for humans: vorpus.org
Other Async Frameworks of Note
Unsync: asherman.io
Curio: github.com/dabeaz/curio
Episode transcripts: talkpython.fm
--- Stay in touch with us ---
Subscribe to Talk Python on YouTube: youtube.com
Talk Python on Bluesky: @talkpython.fm at bsky.app
Talk Python on Mastodon: talkpython
Michael on Bluesky: @mkennedy.codes at bsky.app
Michael on Mastodon: mkennedy
Episode Transcript
Collapse transcript
00:00 Ever since Python 3.5 was released, we've had a really powerful way to write I/O bound async code
00:06 using the async and await keywords. On this episode, you'll meet Nathaniel Smith, who wrote
00:11 the Trio async framework that significantly simplifies complex coordinating operations
00:16 using async and await. This is Talk Python to Me, episode 167, recorded June 21st, 2018.
00:23 Welcome to Talk Python to Me, a weekly podcast on Python, the language, the libraries, the
00:42 ecosystem and the personalities. This is your host, Michael Kennedy. Follow me on Twitter
00:47 where I'm @mkennedy. Keep up with the show and listen to past episodes at talkpython.fm
00:52 and follow the show on Twitter via @talkpython. This episode is sponsored by Linode and Rollbar.
00:58 Please check out what they're offering during their segments. It really helps support the
01:01 show. Nathaniel, welcome to Talk Python.
01:05 Hey, thanks for having me.
01:07 It's great to have you here. I've wanted to have you on for a long time. I, you know,
01:10 listeners of the show probably know that I'm a fan of async programming and parallel programming.
01:18 It's definitely gotten some feedback and some attention, but I think it's super important
01:23 for Python. And I think what you're doing is incredible. Some of the stuff you showed at
01:27 PyCon this year was incredible. So I'm just, I'm really excited to talk about it. Yeah, it's
01:33 going to be fun. Before we get to that though, let's get your story into programming and Python.
01:36 So let's see. So for getting into programming, I mean, I guess I was fortunate in my choice of
01:41 parents. I was sort of, you know, born into sort of a relatively affluent family. And my mother was
also a professional programmer. So she mostly stopped, you know, switched to sort of part-time consulting when the kids came along, so she could be a stay at home mom. So actually I first learned to program in my elementary school: they had this program teaching Logo to us, a project that my mom created. Wow. That's pretty awesome. Many moms volunteer at schools, but not many of them teach programming. Right. Yeah, you know, my mom is pretty
02:20 incredible. She also loves podcasts. So I mean, I guess, hey, hi, mom. Maybe she'll listen to this.
02:26 Yeah. I mean, she later went on to get a master's in education, and now teaches art and science and
02:34 programming. Yeah. But so yeah, so I got started early and was fortunate enough to have my own computer
starting around like age 10 and so on. So this was like in the days when it was DOS. Windows 3.1 was pretty exciting. Like 386, 486 type thing. Oh yeah. Yeah. Turbo button to
02:50 like make it go faster. Yeah. Yeah. We don't have turbo buttons anymore. No, it just always goes fast.
02:55 Isn't that weird, right? Yeah. Yeah. Isn't there a little button to make your laptop half the speed?
Why would anyone want that? So then, from there, I installed Linux when I was like 13 on my
computer, messing around. I had no idea what I was doing. It was Slackware 96, like, that was the name of it?
03:12 It was, it's like, you know, Windows 95 was the big thing then. So like, this was one better, right?
03:16 96.
03:17 Exactly. Yeah. Right. So yeah. So I've kind of been, you know, bumming around that kind of,
you know, programming open source world for some years now. It took me a little longer to get into Python. I don't remember exactly when, probably, you know, five years later. So I know the first time
03:34 people were talking about, you know, Python was this cool new thing. It's, you know, kind of like
Perl, and so forth. The first time I read the tutorial, I got to the point where it said that
03:43 Lambda was syntactically limited to a single expression and just like rage quit. I was like,
03:47 whatever, stupid language.
03:49 Where are the curly braces? Come on.
Yeah. Well, I mean, I guess also, I had, like, sort of, Logo, I learned Scheme and things like that. So, like, this was like obviously terrible. I later came back to it, got over that. And I, you know, understand now why that makes sense in terms
04:05 of the block structure and everything, but the statement expression distinction that Python has,
04:10 but, it took me a little while, but yeah. So I, yeah, I guess I got into Python around
like 2.2 or so. And it sort of gradually became, you know, just sort of my main language.
04:22 In the meantime, I was also like, you know, I was getting a PhD, stuff like that. So using Python
04:27 for both like hobby hacking stuff. And also like for my, in my work, you know, for data analysis,
04:34 I wrote some various libraries there. What's your PhD in?
04:37 Yeah. Cognitive science.
04:38 Oh, right. Okay. Very awesome.
04:39 Yeah. Yeah. So it was like studying how people understand language and brains understand language.
04:44 Yeah. So yeah. So along the way, and then, yeah, getting embedded and all these different sort of,
04:49 you know, entangled in all these different sort of open source projects, like NumPy and so on.
04:53 Yeah. Yeah. That's really great. So what do you do these days?
04:57 So I'm actually kind of in a bit of transition right now. So the last few years, I've been
05:01 working at UC Berkeley at the Berkeley Institute for Data Science, which is a sort of new
data science center that started up on campus. And my position has been sort of a unique one, really, where I've been grant funded to sort of figure out how to, you know, make sure that Python continues to work well, works better, for scientists. There's
05:25 a, actually there's a blog post we can like put a link to there. I wrote just sort of talking about
all this sort of stuff I've done there, but some highlights being, like, I got some grants for NumPy development, $1.3 million over a few years, which is basically the first time NumPy has been funded to have full-time people working on it. I made a colormap people like, did some work on packaging, like manylinux, I sort of led that effort so we can have wheels on Linux now. That sounds like a really fun job. Like you get to actually make a difference in
06:00 the open source space. Exactly. Yeah. I mean, it partly it's, it's sort of funny. Like this is what
06:04 I talk about in the blog post, but like, you know, open source is just sort of like desperately underfunded.
06:08 And like, it's amazing how well and effectively we can use the little bits of volunteer time and
06:14 things that people manage or, you know, people have spending, you know, a few hours a week.
06:18 But there's a lot of things that like larger projects that have huge impact that can't be
06:24 done in that mode where you actually need to like sit down for a little while and think about it and,
06:28 you know, understand the landscape, put some focused effort into something.
Yeah. It's a difference in, like, what you take as your goal. If you have a month of uninterrupted full time on some project, or a good part of a project, versus I'm going to squeeze this in Saturday morning before the kids get up, right? Those aren't the same types of things you can attack.
06:49 Exactly. Yeah. I mean, it's like a month of time is not that much in the grand scheme of things. One
06:53 person, one month is like compared to, you know, the amount of labor that like, you know, Google has
06:58 available. Yeah. But it's still, you know, enables all kinds of things that right now just
aren't happening. And so there's actually lots of low hanging fruit, because there's almost nobody who has that kind of time to worry about these sort of broad open source ecosystem kind of projects.
07:14 There's just, there's all kinds of... We saw how much of a difference was made with the Mozilla
grant, for the Python Package Index.
07:21 Or the PyPI. Yes.
07:22 Yeah. The PyPI, like that was like, hey, look, $170,000. Oh, actually this can happen. It's
07:27 been dragging on for years. Now it's, you know, it's real.
07:29 Yeah. Yeah. I mean, getting the team together to make that happen. Yeah. I had a little bit of
07:34 involvement on that. I'm on the PSF's packaging working group. I didn't, I wasn't heavily involved
07:39 in that. You know, I like made some introductions, gave some feedback on the grant and things.
07:43 So it was, yeah, it's just super exciting to see that up close. Because yeah, it was very successful.
07:48 And it wasn't...
07:49 Like it really got a lot out of how much investment was put into it.
07:53 Yeah. I mean, it was like, that had been dragging on for literally six years. The old PyPI was just
07:58 totally unworkable. But, you know.
It was like Kryptonite for people who wanted to contribute to open source. They looked at it and
08:08 like, oh, no, no, I'm not touching that. That's...
08:10 Yeah. I made one patch to the old PyPI. And it was just a trivial thing. It was just that we wanted it
to start to say that, oh yeah, manylinux is now a legal thing for a wheel to have in its name. So it was just like, there's a list of strings that are allowed, and I'm just adding one new string,
08:25 right? That was the most terrifying patch I've ever made. PR I've ever made. Because it's like,
08:30 if I have a typo, then PyPI dies, right? And there are no tests. There's nothing, right?
The consequences of failure are so high.
08:40 Yeah. Exactly.
Yeah. The new PyPI is way, way better. You should all go and contribute. Because now
08:47 it's all set up. You can develop locally and there's a real test suite.
Yeah. It's really nice. I had the people involved on the show a couple episodes back too. So I definitely got a chance to dig into that. Oh yeah. Right. That's right. Yeah. Yeah.
09:01 So let's talk a little bit about async. Yeah.
09:04 Yeah.
09:04 Because I think that that's one of the sort of revitalized areas of Python last,
09:09 since 3.4, 3.5, it really started coming along, right? So what, in 3.4, we got asyncio as a thing.
09:16 And in 3.5, it really, I feel like it really got more accessible with async and await.
09:22 Right. With the new syntax features to make it easier to use. Exactly. Yeah.
Yeah. Yeah, exactly. Like, the foundation was built previously, but it was,
09:31 it was still this sort of callback hell type of programming, right? Right.
09:37 Right. We should maybe take a little bit of a step back and just, you know, let people know about
the presentation you gave at PyCon, which introduces your project Trio, which is what we're going to
09:46 focus on. But in your talk, you sort of set the stage and said, look, there's these two
09:51 philosophies or types of async programming that you might consider. So maybe, maybe you could touch on
09:58 that a little. Well, so, I mean, I think the first thing to say is for those who aren't kind of already,
10:03 you know, up on the jargon, async is sort of a subset of concurrent programming. And so concurrency,
10:08 meaning writing programs that do two things at the same time, which is very handy. You know,
10:14 it's something in our real life we do all the time, you know, you know, I'm working on one thing and my
10:18 friends working on another thing at the same time. It's very natural. But writing a program that does
10:22 that is a little bit trickier, especially, you know, in Python is kind of generally a sequential
10:27 language, right? It makes it easy to do, I want to do this and then that and then the other thing.
10:31 It doesn't directly have built into like the syntax, whatever ways to say, I want to do these
10:36 two things at the same time. And then when they're both done, do these other things.
10:39 So, so generally, there's this general question of like, how do you write concurrent programs in Python?
10:43 And then I think what you were thinking of, there's kind of two philosophies of concurrency,
10:48 which is one is the kind of preemptive concurrency that threads give you, where just everything
10:56 just kind of runs all the time interleaved kind of in arbitrary ways.
11:00 To be clear, this is in general threads, not Python threads, because we have this whole
thing called the GIL.
From the programmer's, from the user's point of view, the GIL makes things slower, but it doesn't really change what it feels like to use compared to threads in general. True.
11:15 So in general in threads, you might have like two threads actually running at the same time,
like on two different CPUs. But in Python, we have the global interpreter lock, the GIL, which means that mostly only one thread can actually run at a time. But because the Python interpreter controls that, it can decide at any moment to switch which thread
11:33 is running. From your point of view, as writing code in Python, it might as well be running multiple
11:38 things at the same time, basically, right? Because since it could switch at any moment,
11:42 you kind of have to act like it's just constantly switching, right?
11:45 And the reason this is kind of a challenge is because if you have a data structure, and you have two
11:50 different threads or whatever, you know, you have two different concurrent pieces of code acting on
11:55 the data structure at the same time, if they're not like, if you're not careful, you can make a big
mess. So the classic example is things like, you know, your program is managing a bank. And so I'm going to withdraw money from this account and put it in this account. But if you don't do that atomically, so like, first one thread says, okay, does Alice have enough money? Yeah, great, okay, I'll take that money out and put it into Bob's account. Another thread says, oh, does Alice have enough money for this transfer to Carl? And then if they interleave just wrong, first both threads check and say, oh yeah, Alice has plenty of money. And then both threads take money out of the account. And now Alice has spent the same money twice, actually.
12:33 Yeah, great for Alice, but not so nice for the bank.
So yeah, someone's going to be in trouble writing that software.
12:40 Yeah, right. So yeah, if you're writing that software, you need to sort of manage this.
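The Alice scenario can be sketched in a few lines of Python. This is a hypothetical toy, not code from the show; the sleep just widens the race window so the check-then-act bug shows up reliably:

```python
import threading
import time

balance = {"alice": 100}

def transfer(amount):
    # Check-then-act without a lock: another thread can run between
    # the balance check and the withdrawal.
    if balance["alice"] >= amount:
        time.sleep(0.05)  # widen the race window so the bug is easy to hit
        balance["alice"] -= amount

# Two concurrent transfers, each for Alice's full balance.
threads = [threading.Thread(target=transfer, args=(100,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance["alice"])  # typically -100: both checks pass before either withdrawal
```

Both threads see a balance of 100, so both withdrawals go through, and Alice has spent the same money twice.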
12:44 Yeah. So one of the things that kind of helped me this like click in my mind was
12:48 thinking about how programs temporarily enter these invalid states.
12:54 Right.
12:55 What you're describing is basically that like at the beginning of that function,
12:59 the bank is in a valid state. At the end of the function, everything is valid. But somewhere in
13:04 between in order to accomplish that operations as a series of steps, it has to become invalid. And
13:09 long as nobody observes it in this invalid state, it's all good. But when they do like, like you're
13:16 describing halfway through, it's not okay anymore. It's kind of like transactions and databases,
13:21 but in time. And yeah, they're pretty similar in some ways, I guess.
Yeah. So yeah, exactly. So with threads, the solution to this is you have to sort of explicitly, like, make sure you notice all those places where you're passing through that invalid state and mark it somehow in your source code, like say, okay, I'm going to take a lock here.
13:40 That's going to make sure that anyone else who tries to use this has to wait for me to be done
13:44 and things get back to the valid state before they can look at it. Right?
Yeah, exactly. Okay. But that's really error prone.
13:51 And this is preemptive concurrency, right?
13:53 Yeah, exactly. It's still talking about kind of how threads work, right?
13:55 Yep. So you would have to like find all these places where you go through a temporarily
13:59 invalid state and mark them in your source code. And if you forget one, then you have this nasty bug
where Alice gets to spend the money twice or all kinds of weird things can happen.
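The lock-based marking being described can be sketched like this (a hypothetical toy example; the names are made up):

```python
import threading

balance = {"alice": 100}
lock = threading.Lock()

def transfer(amount):
    # The lock marks the critical section: the temporarily-invalid
    # state between the check and the withdrawal can't be observed
    # by any other thread.
    with lock:
        if balance["alice"] >= amount:
            balance["alice"] -= amount

threads = [threading.Thread(target=transfer, args=(100,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance["alice"])  # 0: only one of the two transfers succeeds
```

The catch, as discussed next, is that forgetting the lock at even one such place reintroduces the bug.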
14:10 And it usually has to do with timing. And so it's very, super hard to debug.
14:13 Yeah. And it's like, yeah, it's like super subtle. Like, yeah, like it only happens one in a thousand times randomly. And it happens,
it depends how much memory you have. And it only happens in production and not in your tests.
14:23 And just all kinds of, it's really, really, yeah, it's bad. Yeah.
Yeah. I've heard them described as Heisenbugs. And I just love that term.
Yes. Yes. Heisenbugs. Right. And it's just like, and so it means that when you're working with threads,
you just have to be like constantly vigilant, right? Like with every line of code, you think, okay, is this the one that's going to introduce this terrible bug?
14:40 So that's, that sucks, right? You don't want to live like that.
So, yeah, right. You just have to be like constantly paranoid. So yeah. So the idea with async concurrency is we kind of flip it around. Instead of saying that, like, okay,
14:56 we have to go through and find all the places in the source code where something dangerous is
15:01 happening and mark those. You say, you know what, let's be conservative. Let's assume something
15:05 dangerous could be happening anywhere. That's the default. So by default, only one thing is allowed to
15:09 happen at a time. And then we'll mark in the source code, you know, okay, here's a place where
I want to let other things happen. It's okay, I'm not in the middle of, you know, just doing some
15:19 delicate operation, you know, adjusting someone's bank account. This is a fine place for that to happen.
15:27 This portion of talk Python to me is brought to you by Linode. Are you looking for bulletproof
15:31 hosting that's fast, simple, and incredibly affordable? Look past that bookstore and check
15:35 out Linode at talkpython.fm/linode. That's L I N O D E. Plans start at just $5 a month for a
15:43 dedicated server with a gig of RAM. They have 10 data centers across the globe. So no matter where you are,
there's a data center near you. Whether you want to run your Python web app, host a private Git server
15:54 or file server, you'll get native SSDs on all the machines, a newly upgraded 200 gigabit network,
16:01 24 seven friendly support, even on holidays and a seven day money back guarantee. Do you need a little
16:06 help with your infrastructure? They even offer professional services to help you get started
16:11 with architecture, migrations, and more. Get a dedicated server for free for the next four months. Just visit
16:17 talkpython.fm/linode.
16:20 And you can still have bugs. You can still make mistakes and put that in the wrong place.
But now there's only like a finite number of places to check when something goes wrong, right?
16:31 It's just much easier to reason about that kind of program.
Right. And that could be done with just straight up asyncio. But with 3.5 and above,
16:39 like basically the await keyword is the marker in your code for that.
16:44 Exactly. Yeah.
16:45 Yeah. Yeah. That's pretty beautiful. That's pretty beautiful. So I'm guessing,
I'm guessing most people probably understand, like, how to use the async and await keywords, but maybe just describe, like, how to define a super, super simple async method with the keyword, just so people understand this.
Yeah. Like you said, I'm going to be focusing on Trio here, and Trio only uses kind of a simple subset. There's some extra complexity needed for, like, backwards compatibility with asyncio and that sort of thing. We can talk about that more later. But especially if you're just, you know, using Trio, async/await is very simple actually.
17:21 And so here's what you need to know is that there are two kinds of functions. There's the async ones,
17:26 which are these, the special functions that might let other code run during them. And so you kind of
17:31 need to be aware of what you call one of them that like, you know, the ground might shift under
17:35 feet, data structures might change while it's happening. And there's the regular functions,
17:40 like the synchronous functions where those always happen atomically. And so, and we're going to do is
17:45 we're going to make all the regular Python functions you already know, those are going to be the synchronous
17:49 ones. Because, you know, no one, you know, they're all written on the assumption that like, you know,
17:53 Python's a sequential language, it does things sequentially. And so they, no one like thought about
17:58 how to handle this async stuff when they were writing all these libraries that already exist.
18:03 So those are going to be all atomic. And then we need a way to mark these special async functions.
And we want to mark it at two places. So we want to mark it when we define the function, so that, you know, it's part of the API, is this an async function or not? And we want to mark it at
18:18 the call sites. When you call one of these, when you're like reading through your code, you see, ah,
18:22 this is a point where I'm calling an async function. This is a point where other code might run in the
18:26 background, I need to make sure I'm not in the middle of some, you know, dangerous operation.
18:29 Right?
18:30 Right. Okay. So you've got like, maybe a call to a web service or something.
Yeah, that might return like a future, which then you hook in a callback.
18:39 So there are no futures in Trio. That was one of the things I got rid of.
18:43 Yes. Yeah, that's beautiful. Yeah. But you got to, you have to wait on the result, basically.
18:48 Right?
18:49 So yeah. I mean, so yeah, so the way, yeah, the way you think about it for Trio is just that,
it's just like, there are these special functions, and the way you call them is you type await and then the function call. It's like a new kind of syntax for calling a function.
19:01 Right.
19:02 And that's sort of all you need to know. There is, yeah, like you said, like there's the
complexities around, Python also supports awaiting on objects and having future objects that functions return and so on and so forth. But that's kind of unnecessary clutter, is kind of Trio's perspective on it. So we just don't do any of that.
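As a minimal sketch of those two markers (using the stdlib's asyncio.run as the entry point so the snippet is self-contained; under Trio the entry point would be trio.run, but the async def and await markers work the same way):

```python
import asyncio

# 'async def' marks the definition: callers can see from the API
# that other tasks may run while this function is suspended.
async def double(x):
    # Each 'await' marks a call site where a task switch can happen.
    await asyncio.sleep(0)
    return x * 2

# Synchronous code drives the call with a runner function.
result = asyncio.run(double(21))
print(result)  # prints 42
```

Everything between the awaits runs atomically, which is the whole point: those are the only places you have to worry about other code running.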
19:17 Yeah. I feel like the AsyncIO layer is like, let's get something kind of working,
19:22 but it's not that delightful. And I feel like Trio really cleans it up. So there's a really great
19:28 example that you gave in your talk about happy eyeballs, which is like a way to try to use
19:33 basically DNS resolution to connect a socket to a server and some concurrency stuff around that,
19:39 which is, I don't want to talk about that yet. Maybe we'll have some time to talk about it later. But
19:43 basically there's a version in Twisted, which is how long was the one in Twisted? Hundreds?
19:49 And so, yeah, well, yeah, there's two different versions of Twisted I talked about in the talk.
19:54 One is the sort of the classic one that's in in master, which is like 120 lines long, I think,
20:00 roughly. I mean, not that it's not super meaningful to talk about lines of code like this, but like
20:04 just kind of give you a sense. Yeah. Yeah, exactly.
20:07 A better sense of complicated is to reason about that is it has inside this method. It has another
20:12 internal function. Inside that function, there's another function defined. And then inside that,
20:16 there's a fourth level of internal functions. So like, yeah, that's bad.
20:20 Because it's all these. Yeah. Yeah.
Almost like it's full of all these go-tos in some weird sense. And then, and you basically say,
20:30 now, if we apply the building blocks or the primitives of trio to it, oh, look, it's like 20 lines,
20:34 and it's totally straightforward, which is really great. So I think, you know, let's talk about trio.
Why don't you start by telling people what it is? Trio is a library for async concurrency in Python. It's like an alternative to libraries like asyncio or Twisted and so on, or to threads, sort of all of them. And I think there's kind of two pieces. So one
is there's sort of the ideas in Trio, which are like a research project or something, where I sort of did some analysis of, like, what is it about Twisted and asyncio and so on that makes it so difficult to use sometimes, or why are things like happy eyeballs so complicated? Where do some common errors sort of come from? And as I dug into it, I realized actually, there's, you know, a small number of things that kind of
21:21 seem to cause a lot of the problems, and even ended up digging into some old literature from like the
60s, about early programming language design. On UNIVAC, or what was that one on? That was like a really old computer. Oh, FLOW-MATIC, that language, which is, yeah, Grace Hopper's language, the precursor to COBOL, which is a really interesting language. That's going way back. Yeah.
21:42 Yeah. Yeah. And sort of talking about this transition, there was a lot of debates then about like,
21:47 how do you structure your language, not even getting into the concurrency part, just like,
21:51 how do you do a language at all that's like usable? And one of the big things that happened was
21:56 this switch from using go to as the main control structure to having things like if statements and
22:02 functions and for loops and so on. And I realized there's actually sort of an analogy there that's
22:08 surprisingly like precise between a lot of that, a lot of these async libraries are actually still
22:16 kind of in the go to stage, where sort of the basic primitives are in sort of a technical way,
22:21 kind of analogous to a go to, and they cause similar kinds of problems. And then if you look at,
22:25 okay, how did, you know, Dijkstra solve these problems back in the late sixties,
22:29 we can take those and apply them to concurrency. And that leads to something called,
I call it the nursery concept. Yeah. It's so interesting. And I had never
22:38 thought about the relationship between go to and many of these programming, these threaded programming
22:44 models, but you really hit it. Well, I think it's a super good analogy and it's really relevant.
22:50 Because so what Dijkstra said was, look, you should be able to treat these building blocks as black boxes.
22:56 Stuff goes in, stuff goes out. That's all. If you know kind of what it does, you don't need the details,
23:01 right? I mean, this is like a huge part of abstraction and programming, like functions,
classes, modules, et cetera. Right. Like, even just think of, like in Python, think of the print function, right? It actually does all kinds of complicated stuff, right? It's got to talk to the operating system interfaces to do IO, and it's got buffering and character set conversion and blah, blah, blah. But like, you don't have to care about that. You just type print hello world. And it, you know, it prints hello world.
23:30 You can just treat that as like this little atomic unit. And that's great.
That kind of. Yeah. It's such an important building block of programming, but these threads mean that stuff can still be going on all over the place. Right. And it's very similar to the go-tos, which, I thought that was a great analogy.
23:44 Yeah. I mean, specifically the issue is, so like, let's say the, the analog of print,
23:48 like in say twisted, you have this transport object. That's like your connection,
like a network connection, and you call its write method. And this is like, I want to send this data to the remote side now. And that's like a function. You call it, it returns. That's all fine. But what's sort of confusing is that when you call it, it returns, but it hasn't actually finished yet. What it's actually done is sort of scheduled that write to happen in the background. And that makes it hard to reason about the flow.
24:12 Cause it's like, Oh, the functions returned, but actually in a sense, it's like kind of still running.
24:17 Right. And now if I want to write and then do something else that happens after that,
24:21 that's hard to manage because I don't necessarily know when that's actually finished. I have to use some
24:26 other API to like check and ask, okay, has it actually written yet?
24:29 Yeah. And so yeah.
24:30 Did it succeed? Did it not succeed? Then how do you deal with that? And then how do you get that
24:34 back into your, yeah, your whole, it's yeah. It feels like almost like JavaScript.
Well, I mean, JavaScript, you know, is also in this general family. Like, yeah, it has this async concurrency model that's sort of endemic to it, used all over the place, and it's all callback based. And it's all, yeah, it has that same kind of go-to problem.
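The difference can be sketched with stdlib asyncio, which supports both styles (a toy example; write here is a made-up stand-in for something like Twisted's transport.write):

```python
import asyncio

log = []

async def write(data):
    # A made-up stand-in for a slow network send.
    await asyncio.sleep(0.01)
    log.append(data)

async def main():
    # Fire-and-forget, goto-style: schedule the write and keep going.
    task = asyncio.create_task(write("hello"))
    before = list(log)  # the write hasn't actually happened yet
    await task          # only after awaiting is it guaranteed finished
    after = list(log)
    return before, after

before, after = asyncio.run(main())
print(before, after)  # [] ['hello']
```

The scheduled write silently hasn't run when control comes back from create_task, which is exactly the "it returned but it's still kind of running" confusion described above; awaiting it restores the black-box guarantee.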
24:52 Yeah, exactly. So let's talk about the architecture and then maybe like we can see about how that kind
of brings us back into something that Dijkstra might be proud of.
Okay. Sure. Yeah. I guess I should also mention, this is a discussion that really benefits from, like, diagrams, which is not something podcasts are great for. There is also, I have a blog post called Notes on structured concurrency, or: Go statement considered
25:18 harmful, which goes into this and sort of this set of ideas and much more detailed pictures and all
25:23 that. But so yeah, so the basically the idea is that, I mean, so it's what I said, right? That like,
25:27 when you call a function, it should be something you can treat as a black box. It does some stuff,
it returns. That's kind of Dijkstra's point. I mean, that's the problem with go to: in the old go-to world, you could call a function and maybe it returns, maybe it jumps somewhere else, maybe some other function at some other point in your code suddenly returns instead, because go to just lets control go anywhere. You know, functions have a certain control flow they're supposed to have, right? Where you have your function and then you call something, so control jumps into that
25:56 function. It executes that code and then it comes back again to your function. That's kind of this
26:00 nice little structure. That's why it's, you could treat it as a black box because you know, oh, it'll
26:03 do some stuff, but then it'll come back. Right. Go to isn't like that. It's just, it's just a one way
26:08 you leap off somewhere else and then maybe you come back, you could jump back, it's something you do manually.
26:13 It's like a choose your own adventure book. You don't know when you're done, you don't know where to begin,
26:19 it's always a mess. It's always a surprise. Yeah. Yeah. I mean, it's choose your own
26:23 adventure, except then you're like, you know, actually I want to switch. It says
26:26 I can go to page five or 10, but I'm going to 17. Let's see what happens. Yeah. So that's go to,
26:32 and that breaks this function abstraction, right? So it means like you call a function and you know,
26:36 hopefully like if it's like a nice, well-written library and people aren't like messing with you,
26:41 it will come back normally. But there's no kind of guarantee of that in the language. Like if
26:46 someone decided to be clever, they could, it could do something else. It could jump somewhere else.
26:50 And then it becomes very hard to sort of reason about the overall structure of your program,
26:54 if that's true. And it also breaks sort of higher level structures that people use in their programs.
26:59 Like, so think about exceptions, right? A very handy thing about exceptions is like,
27:04 oh, something goes wrong. Then, you know, and I didn't think about how to deal with it locally,
27:09 then the exception will propagate out in Python. And so until either someone catches it and knows how to
27:14 deal with it, or the program crashes, you get a trace back and like, that's not great. But at least,
27:18 you know, you didn't just like go blindly on doing the wrong thing. At least, you know,
27:22 you get a trace back, you can try and figure out what's going, what happened, right?
27:25 But not with, not with threading. Not if you care.
27:28 Well, okay, well, we'll get there in a moment, right? But if you have go to,
27:32 then, well, when you raise an exception, it goes up the stack, right? It's like,
27:37 okay, does this function want to handle it? Okay. How about its caller? How about its caller's
27:40 caller? Right? Because you have this nice stack, you know, you know who the caller is,
27:45 you know who the caller's caller is. If you have go to, then control just bounces around sort of at
27:50 random. So who wants to know about this exception? Like I don't have a well-defined sense of a caller
27:54 even, right? It's just like these basic things we take for granted growing up and you know, this,
27:59 you know, our ancestors struggled with and sort of fixed for us. And now we take for granted,
28:03 right? But like these basic assumptions just aren't there in the language of go to.
28:06 Yeah. With blocks have similar issues, right? So like a with block is supposed to like,
28:10 you say with open file and then you know, okay, inside the block, the file's open,
28:14 outside the block, the file's closed. That's nice. It's great. It makes it easy to manage resources.
28:20 But if you have go to, you jump into the middle of a block, you jump out of the middle of
28:23 the block. Like, what happens then? Where does the file open and close? Like, how does that even,
28:27 I don't even, it doesn't work, right? Exactly. Yeah. So yeah. And then, if you look at the problems you're dealing with when using threads, or things that are a struggle in
28:38 asyncio or Twisted, these are just kind of the things that are problematic, right? So you call
28:43 a function, and is it, you know, still running maybe when it returns? It makes it hard to
28:46 control sequencing, hard to treat things as black boxes. Like, you don't know, you need to
28:51 go read the source code when you use some random asyncio based library to find out,
28:57 okay, I called this function, but did it actually do the thing then? Or did it schedule something in
29:01 the background? Like, you have to go check. Does the error go to the caller or does it go to the callback?
29:05 Right. Yeah. So if you spawn off this background thread or background task, as asyncio calls it,
29:10 and it crashes, there's an error, there's an unhandled exception in it. Well, you've lost,
29:17 you don't have a nice sort of call stack. You've spun up this new independent entity that's
29:21 executing, and it's now split apart from your original code. And so there's sort of, like, where does that go?
29:27 If there's an unhandled error, an exception, then what actually happens in threads and
29:32 asyncio and Twisted or whatever is, you know, maybe it prints something on standard error, like,
29:36 hey, I hope someone's looking, something went wrong. Then it throws away the exception and it carries on.
29:41 And you know, hopefully... It does make your software more reliable, because there's way fewer crashes when you don't actually... Well, for some value of reliable,
29:48 right? Yeah. Exactly. Nice. Okay. So what are the building blocks of Trio that like...
29:53 Yeah, exactly. Yeah. So the main thing is, in Trio, we have this thing we call a nursery.
29:59 And the idea, it's just sort of a silly joke, right? If you want to spawn a child task,
30:04 then it's not okay for it to just go off independently. You have to, like,
30:08 put it somewhere where it'll be taken care of. And a nursery is where children live.
30:13 Okay. So concretely, what it is, is if you want to spawn a child task in Trio, you have to first
30:19 have a nursery object. The way you get a nursery object is you write async with trio.open_nursery() as
30:24 nursery. So you have a with block that opens a nursery, and then you get this
30:29 nursery object, and the object is tied to that with block. And then once you have that,
30:34 there's a method on that nursery to spawn a task into it. And all the tasks you
30:38 spawn to the nursery will run concurrently. But then the trick is that that with block,
30:44 its lifetime in the parent is tied to the lifetime of the child tasks. So that you can't exit the
30:50 with block while there's still children running. If you hit the end of the with block, then the
30:53 parent just stops and waits for them. The stuff within the with block could be interwoven
30:58 and like in certain ways, async and concurrent. But taken as a whole, the with block is like a black box.
31:05 You start it, it runs, and then out comes the answer at the end, and everything's done,
31:10 or it's gotten into its canceled state or whatever happens to it, right?
31:13 Exactly. Exactly. Yeah. And so, right, that lets us do things like, so now,
31:18 if you call a function in Trio, you know, well, okay, it might internally open a nursery and have
31:23 some concurrency, but by the time it finishes, it's done. It hasn't left anything running. Or if
31:28 something crashes, if some child task, you know, has an exception that it doesn't catch,
31:34 then we say, okay, what do we do with this? Oh, well, wait, the parent is just sitting there waiting
31:37 for the child. So that's what we do: the exception hops into the parent and continues executing, so,
31:43 sorry, continues propagating out. So Trio sort of follows the normal Python rule of,
31:48 you know, you can catch an exception, but if you don't, then it will keep propagating until someone
31:52 does. Right. So that nursery is a really important building block here. And I think, you know, it's,
32:00 it's really cool to be able to start all these child tasks and they can even be sort of children
32:05 of the children, right? Like you could pass the nursery and the child task could spawn more child
32:09 tasks and whatever. And so it's all going to sort of be done at the end.
32:15 This portion of Talk Python to Me has been brought to you by Rollbar. One of the frustrating things
32:19 about being a developer is dealing with errors: relying on users to report errors, digging through
32:25 log files trying to debug issues, or getting millions of alerts just flooding your inbox and ruining your
32:30 day. With Rollbar's full stack error monitoring, you get the context, insight, and control you need to
32:35 find and fix bugs faster. Adding Rollbar to your Python app is as easy as pip install rollbar. You can
32:42 start tracking production errors and deployments in eight minutes or less. Are you considering
32:46 self-hosting tools for security or compliance reasons? Then you should really check out Rollbar's
32:51 compliance SaaS option: get advanced security features and meet compliance without the hassle of self-
32:57 hosting, including HIPAA, ISO 27001, Privacy Shield, and more. They'd love to give you a demo. Give Rollbar a
33:05 try today. Go to talkpython.fm/rollbar and check them out.
33:09 The other thing that's really important in this type of threading is to not block forever, to not wait
33:17 on something forever, right?
33:19 Oh yeah, so you're talking about timeouts and cancel scopes.
33:22 Timeouts and cancel scopes and all that stuff.
33:24 Yeah, yeah, yeah.
33:24 Yeah, so this is a really basic problem if you're doing any kind of network programming,
33:29 for example, or any kind of thing where your program's talking to other programs, other resources,
33:36 because sometimes those will just stop responding. So it's like, if you make an HTTP request,
33:42 then maybe it'll succeed. Maybe it'll fail, you'll get a 404 or something like that.
33:48 Or, but there's also a third possibility, which is that it just never finishes, right? Like the
33:54 network disappeared and it just sits there forever. And your program, yeah. And, you know,
34:00 I mean, if you're just writing a little script or whatever that you're running
34:04 at the command line, it's fine. You know, at some point you'll get bored and hit control-C,
34:06 it's fine. But for sort of more complicated systems, you need to be robust against this.
34:11 Yeah.
34:11 Or unattended systems.
34:13 Yeah.
34:13 Like, well, we have 10 workers, but we're only getting this super low throughput. Why? Well,
34:17 because eight of them are like just blocked forever.
34:19 Yeah. You need to somehow, you know, detect this, be able to get out and say, okay, actually,
34:24 let's stop doing that. Try something else, raise an error or something, but like, you know,
34:29 not just sit there forever. So that means, yeah, there's just like, and this is just a problem
34:34 that's just endemic, right? It's just every time you do any kind of networking or other kinds of IPC,
34:40 you need to have this kind of timeout support. So that's one thing that makes it tricky.
34:43 The other thing that makes timeouts tricky is that usually the place where you need to set
34:47 the timeout is like buried way, way down deep inside the code. It's like, at some point your
34:52 HTTP client, you know, is using some socket library and the socket library is like trying to send some
34:58 bytes or read some bytes from the network, like a very, very low level operation. That's probably buried
35:02 under like five levels of abstraction. But then where you want to set the timeout is like way out at the
35:07 top where you say like, I just want to say, if this HTTP request isn't done in 10 seconds,
35:11 then give up. So timeouts are this problem that's like, you need them everywhere and you need
35:16 coordination across different layers of your system. So it's this really complicated problem, and
35:21 you really kind of need a timeout framework that's used universally by everyone
35:28 and can coordinate across the whole system.
35:31 And it can be super tricky. Like, imagine you want to call two web services and do a database
35:37 transaction, and if one times out, you want them all to roll back. Yeah. Right. Which is like super,
35:43 like, how do you coordinate that? It's just, you know, yeah, almost impossible. Right. Unless,
35:49 yeah, unless you put it in Trio. So yeah. So right. Yeah. So the traditional way to do this is like,
35:53 you have a timeout framework sort of inside each library, like inside, you know, your HTTP client has a
35:58 timeout argument that you pass or whatever. But that doesn't help, because people just don't do that
36:03 reliably. And yeah, and it doesn't solve these coordination problems. So in Trio, we say, like,
36:07 no, this is something that we're just going to bake into the library, the I/O library itself.
36:12 So all libraries that work with Trio, when they do I/O in Trio, use Trio's timeout system.
36:18 And so there's this thing called a cancel scope. So again, there's a lot more details on this in a
36:23 blog post that I wrote called timeouts and cancellation for humans. We'll put the link up there.
36:28 In the podcast description, I guess. But basically, yeah, it's fairly simple to use. Basically,
36:34 the way it works is you can, anywhere in your code, say, you know, with a timeout of
36:38 10 seconds, and then you put code inside that with block. And if it takes more than 10 seconds,
36:45 then Trio raises a special exception called Cancelled that just sort of, you know, unwinds out of there.
36:51 And then that with block catches the exception. So, you know, it's a way of saying, okay,
36:55 whatever's happening with block, stop doing that, and then carry on at that point.
37:00 The other nice thing that's important about this, as compared to a lot of other systems for
37:04 working with timeouts and cancellation, is that we have the with block to set limits. You can say,
37:10 this is the operation that I want to cancel, which is really important. So like, asyncio doesn't work like this.
37:17 In asyncio, you can say, I want to cancel something, and that injects the special exception,
37:21 but then there's nothing that keeps track of it. You can't look at the exception and figure out, okay,
37:25 did I want to cancel this specific network operation, or this HTTP request, or this entire program?
37:34 Like, it doesn't keep track of that. In Trio, because we said it's a with block, we can say,
37:39 okay, I know this is the actual, this is the operation that I'm trying to cancel right now.
37:44 And these can be nested.
37:46 Yeah, it's awesome. And you, if you want to do like a timeout, you can create a with block that says,
37:50 basically, do this to success or fail after 10 seconds. What is the syntax for that? It's like,
37:57 async with, I forget.
37:58 This one is just a regular with.
38:00 Okay.
38:00 Because, so async with is just like a regular with, except that a regular with calls a method at
38:06 the beginning and the end of the block, and async with does an await call of a method at the beginning
38:10 of the block. And we talked about await being the special, you know, thing we use to call these special
38:14 functions. Yeah. And so it happens that for Trio, for nurseries, you have to use
38:20 an async with, because at the end of the block, the parent might have to wait for the
38:24 children, and you want to let them run while it's waiting. Right. So it has to be an await there,
38:30 so it has to use async with. For a timeout, you're just, you know, setting something at the beginning
38:34 and the end, but it doesn't have to actually stop and wait, or let other
38:37 things run there. It can be synchronous. That's just a little detail, but yeah. So it's, it's basically,
38:42 so, with blocks. The main one, sort of this basic one, is called move_on_after, to kind
38:46 of remind you that it's going to run the code, and then if the timeout happened, it doesn't
38:51 raise an exception. You can stop and check, okay, was it canceled or not? But, like,
38:55 it keeps executing after the block, so you can look at it. Which, again, has to do with,
39:00 like, like the simplest case for a timeout is just like, okay, if the timeout expires,
39:04 blow up everything, give up, raise an exception, right? But a lot of times you want to say, oh,
39:09 well, did this thing get canceled? If so, do some fallback, do something else. So the core thing in
39:16 Trio is, is not to raise an exception after that. It's like to provide some object you can look at to
39:20 see what happened and then figure out how you want to deal with it.
39:23 Yeah. The other thing that I thought you did well with these cancellation scopes is if there's an
39:30 exception in one of the tasks, and there's a bunch running in one of these with blocks, you can
39:35 catch that and use that to cancel the other ones and just say, no, no, things are going wrong. We're
39:40 out of here. All the tasks are done. We're out of here. Yeah. So yeah. So the nurseries actually
39:46 kind of need the cancel scopes, because one of the things the nurseries do is this exception
39:51 propagation thing, right? So if an exception happens in a child and it isn't caught, then it has to hop
39:55 into the parent. So we have this kind of, I guess the way I think about it is, normally you think of
39:59 like your call stack. It's like this little chain of like, you know, A calls B calls C calls D,
40:03 right? What nurseries do is they kind of make it that into like a tree. So like A calls B, B calls C and D
40:10 and E sort of simultaneously. And then maybe D calls several things at the same time, right? So you have
40:15 this kind of like this tree structure of your, for your call stack. And now some exception happens down
40:20 inside one of those branches. Well, what do you do with exceptions? You unwind the stack,
40:24 right? You go, you run any handlers inside that function, and then you go up to the parent,
40:28 you tear down that function's call, and you move in there and run any handlers there, and so on.
40:33 But then we have this problem: if you're going up, unwinding the stack, and you hit one of these
40:36 places where the stack now branches, how do you unwind past that point? Right?
40:41 You don't want to orphan the other children, like, just unwind the parent without
40:46 stopping them because that's our whole thing in Trio, right? We say we're not going to let
40:49 orphans run around unsupervised. That's a bad idea. Dijkstra doesn't like that. So what we do is,
40:58 but we have to unwind the stack, right? So what we do is we use the cancellation at that point:
41:02 we cancel all the other tasks that are inside that same nursery. So we sort of prune the tree,
41:08 unwind those tasks back up to this point, and then we can keep unwinding from there with the original
41:13 exception. Yeah, that's really clever. I really like the way you put it all together.
41:18 So there are also some other pieces, like some file system bits. I love the whole trick of saying,
41:26 like, await sleep for a little bit of time, to just go, I'm basically yielding this task to let it do
41:32 other work, but then just carry on, as opposed to, say, time.sleep and things like that.
41:37 So maybe tell us about the higher order building blocks.
41:39 Yeah. So I guess, yeah, so far we've sort of been talking about the core ideas in Trio, which
41:43 are, you know, not specific to Trio the project. In fact, I've been talking to a number
41:48 of designers of other languages who are like, oh, this is interesting, we're also struggling with
41:52 these problems. But then, yeah, but Trio is also a specific project you can download. It has
41:57 documentation and API and all that. And one of the things I try to do with that project is to make
42:02 it sort of really usable and accessible, kind of have this philosophy of, like, the, you know,
42:07 the buck stops here for developer experience. Like, if you're trying to
42:12 write a program using Trio and there's something awkward or difficult, or a problem
42:16 you have, then it's up to us to solve that. And so we have the project
42:22 itself, which has the core, like, networking and concurrency
42:27 stuff. We also have testing helpers, and we have documentation, like plugins for Sphinx, to make
42:34 it easier to document these things. And so there's something called pytest-trio. How does that work with
42:40 that? So the main thing that pytest-trio gives you is, when you write a test, you can say,
42:46 okay, I guess we need to give a little more information here. So in Trio, there's sort of a
42:52 stereotyped pattern that all Trio programs follow, where you have, like, an async def main or whatever you want
42:57 to call it. That's your top level async function. That's where you go into
43:02 Trio's magic async mode, if you use all the Trio stuff.
43:04 Right. So maybe you start by saying trio.run(main).
43:08 Yeah. Yeah. So the thing about async functions is you can only call them from other async functions.
43:12 So you have this problem: how do you call your first async function? And that's what trio.run is for.
43:17 So you have this little bit of ceremony at the very top of every Trio program. And this is also the case
43:22 for, like, asyncio and so on. But, like, you have to switch into Trio mode, and that's kind of
43:27 annoying to do on every single test in a test suite. So that's the most basic thing that pytest-trio
43:31 does for you is it makes it so you can write async def test, whatever, and it will take care of like
43:37 setting up trio and turning it on there. But it also has some other handy stuff. So one thing
43:42 is that it allows you to have async fixtures. And it does some sort of magic where it
43:47 switches into Trio mode, then sets up your fixtures, then calls your test,
43:50 and then tears down the fixtures. So it's all within this async context.
43:54 And it's also integrated with some of the testing helpers that are built into trio itself.
43:59 So in particular, the one that's sort of the most, you know, gee whiz, awesome,
44:03 is that trio has this ability to use a fake clock. So this is an issue when you're writing like
44:10 networking programs. Like often you want to like have a test about, okay, what does happen if this
44:15 HTTP request just hangs forever? Like, do my timeouts work correctly? Stuff like that, right? But it's
44:20 really annoying to run these tests because it's like, okay, and now I have to sit and wait for a
44:25 minute for the timeout to fire and make sure something happens. And that happens every time you run your
44:29 test. It just spends a minute sitting there doing nothing. And you're like, I really like, okay, but
44:34 like, this is really boring. I want my test suite to finish already. And I need like 100 of these
44:38 tests. And yeah.
44:40 So killing my build time.
44:42 Exactly. Yeah. Right. It's like really ruins your flow. So one thing I've done, actually,
44:48 what I did is when writing trio itself, I said, okay, I really want trio's own test suite to run
44:52 really quickly. I figured that'll force me to like come up with the tools that my users will need to
44:57 make their test suite run quickly. Yeah.
44:58 So if you just type pytest in Trio, you know, if you want to contribute to Trio, we'd love to have
45:03 you. We're very friendly and all that. If you type pytest, Trio's test suite runs in like five seconds.
45:07 It has like 99 point something percent test coverage, which is very
45:12 difficult to get, because Trio is this really complicated sort of networking library. It's all
45:16 this stuff that's usually hard to test. Part of that is that for all the timeout tests,
45:21 we have this magic clock. And so the way it works is you say, okay, Trio, I know it says,
45:26 like, sleep 30 seconds or whatever, but I don't want you to actually sleep 30 real
45:31 seconds. I want you to sleep 30 virtual seconds. And so there's a special thing you
45:36 pass to trio.run to say, every time you have timeouts, sleeping, anything inside this call,
45:41 I want you to use this virtual clock instead. And the way the virtual clock works is,
45:45 it starts out at time zero and it just stays there. And you can like advance it manually if you
45:50 want or things like that. But normally what you do is you just use the automatic behavior,
45:54 which is, it's just, it's, it's at time zero. And then it sort of watches what your program is
45:59 doing. Anytime that your program sort of like finishes running everything and just stops and
46:04 is waiting for stuff to happen, then it looks to see, okay, looks like the next thing that would
46:08 wake up, there's like a sleep 10 and sleep 20. Okay. So in 10 seconds, that's the next one that'll wake
46:13 up. I'm just going to jump the clock forward 10 seconds and then start running again. Right.
46:17 So anytime it knows it's going to be waiting for a certain amount, it's like, all right,
46:21 we'll just, we'll let the wait start and then we'll just go right past that.
46:24 So basically, yeah, you just write your test the way you normally would. You use timeouts
46:29 as usual, test your real code, with sleeps, whatever is easiest. And then what's annoying about that
46:34 normally is then your, your test takes like 30 seconds, a minute, whatever it is to run. Most of
46:39 which the time is just sitting there doing nothing, waiting for time to pass. So if you flip the switch to
46:44 use the special clock, then it does this exactly the same things, but it just skips over all those
46:48 times when it's sitting doing nothing. And so suddenly your test runs in like a few milliseconds.
46:52 Oh, that's awesome. Yeah. It's pretty awesome. And pytest-trio is hooked up to that,
46:57 so you can just turn this on, just flip a switch, on any test.
47:00 Oh, that's great. Yeah. So one of the things that makes me a little bit sad about Python's async
47:07 event loops and stuff is that the asyncio based apps and the Trio based apps, those are not exactly the same,
47:16 and they're not exactly compatible. It's not like you're using the same core underneath. And so you
47:22 keep running, like, the asyncio loop and the Trio loop. These are not the same. They've got to be
47:26 brought together with different APIs. Right. Yeah.
47:30 But you seem to have, you do have some interoperability. So like trio can work with
47:35 libraries that maybe assume asyncio is there or something, right?
47:38 Trio itself is just a totally different library than asyncio. I've looked at, you know,
47:43 could I build it on top of asyncio? And there's a number of reasons why that didn't
47:47 make sense. And yes, there is this big problem: because of technical things about
47:55 how these async concurrency systems work, there has to be one library that ultimately controls
48:00 all the actual networking, like asyncio or Trio or whatever, or Twisted or Tornado or something.
48:06 And that means, like, if you have, say, an HTTP client that's written to work on asyncio,
48:11 it necessarily won't work on Trio, because it's using a different set of networking primitives underneath,
48:16 or vice versa. And this is sort of a larger ecosystem problem, right? So there used to be,
48:24 there's Twisted and Tornado and gevent, and none of them could interoperate. You'd have to
48:28 pick which one you're using. And one of the reasons asyncio exists is to try
48:33 and solve that problem and become the standard one that Twisted and Tornado and everyone can use.
48:38 And now they can all work on top of asyncio, and all those libraries written for Twisted and Tornado,
48:43 you can mix and match however you like. And then here comes Trio and kind of ruins that
48:48 by being here's this new thing you should use. So to try and kind of mitigate that,
48:53 there is this library called trio-asyncio, which lets you use asyncio libraries on top of Trio.
49:00 The way it does this is, it creates like a virtual asyncio loop that internally uses Trio's primitives
49:06 under the covers. And it kind of lets you, you know, cordon them off in a little container,
49:11 sort of, all the weird stuff asyncio can do. You can do that stuff, but kind of in a little box
49:17 that won't leak out to pollute the rest of your program, your Trio program.
49:22 I think this is really encouraging, because that means if you've maybe already invested in asyncio and
49:27 you've already got some code written on it, you could still, yeah...
49:30 Use Trio without going, I'm rewriting it all for Trio, and is that worth it? Is that a good idea?
49:34 Yeah. Or again, it gives you sort of an incremental path. You can say like, well, okay, I can at least
49:40 get it running on trio first of all, and then I can start porting one piece at a time and eventually
49:44 end up all in trio. Hopefully.
49:46 Exactly.
49:47 Exactly.
49:47 Now, the reason you can't just magically make this all work is that
49:51 Trio and asyncio really have fundamentally different ideas about things. Now, obviously, I think Trio's
49:56 ideas are better; they're kind of the new thing that tries to fix all these problems. But
50:02 the differences aren't just in terms of the internal implementation. The differences are in
50:06 terms of just, like, the fundamental concepts that are exposed.
50:09 Right. Like the philosophy of it.
50:10 Yeah. Right. It totally changes how you write the library on top.
50:14 Right. So it's not something you can just sort of magically switch.
50:17 But there's a little bit of an incremental aspect to it. So we're almost out of time.
50:22 Right.
50:22 Just really quickly, what's the future of trio? Like where is it going? What you got planned?
50:27 And is it production ready?
50:29 So yeah. So I should be clear. Yeah. Right now, the trio library itself is very solid,
50:33 but there is not much of an ecosystem around it. So like there is not currently an HTTP client or
50:39 an HTTP server that you can just use out of the box and it's like mature and all that
50:43 for trio. There are some solutions for these kinds of issues. And I don't want to say too
50:47 much because, you know, this will change quickly. We have a chat channel. If you go to our
50:51 documentation or whatever, you can like find out what the latest news is about what you should use.
50:57 But it's not something that, you know, is ready today to run big websites or something like that.
51:01 Okay.
51:02 Just because the libraries aren't there yet. If you'd like to help write those libraries and
51:05 make it happen, I'd love to have you. We have a really solid contributing policy and things like
51:11 that. You can check it out. The other thing that's happening is asyncio. I spent a lot of
51:17 time on this. I am a core Python developer. I talked to Yury Selivanov, who is the main asyncio developer, and Guido
51:22 about all this stuff. And so Yury is quite keen on saying, oh, well, right, you know,
51:27 Trio's ideas are better, we should add them all into asyncio. This is quite complicated. There's a lot
51:32 of, I mean, we could probably do a whole other podcast about all the trade-offs there. And maybe
51:36 we should, I don't know. It's pretty interesting. It is interesting.
51:38 So that's something that's also happening: Yury is going to be trying to
51:42 add nurseries and cancel scopes and things to asyncio. I think there are going to be a lot of
51:47 limitations. A lot of the value in Trio is the things people can't do, and asyncio has already
51:52 got like six layers of abstraction built in there, or, I don't know, it's not actually six, it's like
51:57 four. And they're all doing the things that Trio says, these are things that
52:02 should never be done, it shouldn't be possible. That's not something you can fix just by adding
52:07 a new layer on top. But, you know, it's still better than nothing, right? Like, you know,
52:11 asyncio is going to continue to exist, so we do want to make it as good as possible,
52:15 improved by these ideas. Yeah, absolutely.
52:16 And ultimately, I mean, no one knows for sure whether the make-a-new-thing
52:21 plus a compatibility layer approach, like Trio plus the compatibility layer I mentioned, is going to be
52:25 the best thing, or whether making asyncio better is going to be the best thing. None of us know for
52:28 sure. So we are trying both versions, and we'll sort of see.
52:32 I'm super excited just to hear that that collaboration is happening. I think that's great. All right.
52:38 I think we're out of time for Trio. It's a super interesting project. I really love what you've
52:42 done there. I think it's brilliant. So people should definitely check it out.
52:46 Thanks a lot.
52:47 Yeah, you're welcome. So quick two final questions. If you're going to write some Python code,
52:50 what editor do you use?
52:51 I use Emacs. I've been using it for 20 years. I'm stuck.
52:55 It's great. I don't know whether it works for other people or not, just because,
53:00 yeah.
53:01 Yeah, sure. I definitely started on Emacs as well. And notable PyPI package?
53:06 Yeah. Well, trio, obviously.
53:07 And pytest-trio?
53:10 Yeah, pytest-trio, sphinxcontrib-trio. If you go to, you know,
53:15 github.com/python-trio, you can see all the different projects under the Trio organization. We're
53:21 sort of trying to build up that ecosystem, like I said.
53:24 Yeah, sounds cool. So final call to action: people are excited, they want to try Trio, maybe
53:28 they want to contribute to it. What do they do?
53:30 Yeah, so start with the documentation at trio.readthedocs.io. That will also give you
53:36 links to our chat, which is sort of a place to hang out, and it has our contributing docs. If you want
53:41 to get involved like that, we give out commit bits on your first pull request acceptance. So there are
53:47 lots of people. Yeah, we want, you know, this to be a project for everyone. I don't want it to just be my,
53:52 you know, personal little thing.
53:53 Yeah, that sounds great. Awesome.
53:55 Yeah.
53:56 All right. Nathaniel, thank you for sharing your project and creating it. It's quite great. And,
54:00 we may have to come back and dig into this a little bit more. This is fun.
54:02 Yeah. Thanks for having me. Yeah. Yeah. Talk to you later.
54:05 You too. Bye bye.
54:05 This has been another episode of Talk Python to Me. Our guest on this episode was Nathaniel Smith,
54:13 and it was brought to you by Linode and Rollbar. Linode is bulletproof hosting for whatever you're
54:18 building with Python. Get four months free at talkpython.fm/linode. That's L-I-N-O-D-E.
54:26 Rollbar takes the pain out of errors. They give you the context and insight you need to quickly locate and
54:32 fix errors that might have gone unnoticed until your users complain. Of course, as Talk Python to Me
54:38 listeners, track a ridiculous number of errors for free at rollbar.com/talkpythontome.
54:43 Want to level up your Python? If you're just getting started, try my Python Jumpstart by Building 10
54:49 Apps course or our brand new 100 Days of Code in Python. And if you're interested in more than one course,
54:54 be sure to check out the Everything Bundle. It's like a subscription that never expires.
54:58 Be sure to subscribe to the show. Open your favorite podcatcher and search for Python. We should be right at the top.
55:04 You can also find the iTunes feed at /itunes, Google Play feed at /play, and direct RSS feed at /rss on talkpython.fm.
55:13 This is your host, Michael Kennedy. Thanks so much for listening. I really appreciate it. Now get out there and write some Python code.
55:19 I'll see you soon.