
#167: Simplifying Python's Async with Trio Transcript

Recorded on Thursday, Jun 21, 2018.

00:00 Michael Kennedy: Ever since Python 3.5 was released, we've had a really powerful way to write I/O-bound async code using the async and await keywords. On this episode, you'll meet Nathaniel Smith who wrote the Trio Async Framework that significantly simplifies complex coordinating operations using async and await. This is Talk Python To Me, Episode 167, recorded June 21st 2018. Welcome to Talk Python To Me, a weekly podcast on Python, the language, the libraries, the ecosystem and the personalities. This is your host, Michael Kennedy. Follow me on Twitter where I'm @MKennedy. Keep up with the show and listen to past episodes at talkpython.fm and follow the show on Twitter via @talkpython. This episode is sponsored by Linode and Rollbar. Please check out what they are offering during their segments, it really helps support the show. Nathaniel, welcome to Talk Python.

01:06 Nathaniel Smith: Hey, thanks for having me.

01:07 Michael Kennedy: It's great to have you here. I've wanted to have you on for a long time. Listeners of the show, probably fans, know that I'm a fan of async programming and parallel programming. That's definitely gotten some feedback and some attention but I think it's super important for Python and I think what you're doing is incredible. Some of the stuff you showed at PyCon this year was incredible.

01:30 Nathaniel Smith: Thank you.

01:30 Michael Kennedy: So, I'm really excited to talk about it, yeah, it's going to be fun. Before we get to that though, let's get your story. How did you get into programming and Python?

01:37 Nathaniel Smith: So let's see, so for, yeah, getting into programming, I guess I was fortunate in my choice of parents. I was born into a relatively affluent family, and my mother was also a professional programmer, though she mostly stopped and switched to part-time consulting when the kids came along so she could be a stay-at-home mom. So actually, I first learned to program in my elementary school, through a project my mom created there to teach us Logo.

02:09 Michael Kennedy: Wow, that's pretty awesome. Many moms volunteer at schools but not many of them create learning programs.

02:17 Nathaniel Smith: Right, yeah, yeah, no, my mom is pretty incredible. She also loves podcasts, so I mean I guess, hey, hi, Mom. Maybe she'll listen to this. Yeah, she later went on to get a master's in education, and now teaches art and science and programming. But so yeah, I got started early and was fortunate enough to have my own computer starting at around age 10, so this is back in the days of DOS and Windows 3.1, it was pretty exciting.

02:46 Michael Kennedy: Like 386, 486 type things?

02:48 Nathaniel Smith: Yeah, oh, yeah, yeah, turbo button to make it go faster.

02:51 Michael Kennedy: Oh, yeah, turbo buttons. Yeah, we don't have turbo buttons anymore.

02:53 Nathaniel Smith: No, it just always goes fast, isn't that weird, right? There isn't a little button to make our laptop run at half speed. Why wouldn't everyone want that? And then from there, I installed Linux when I was 13 on my computer, messing around. I had no idea what I was doing. Slackware '96 was the name of it, 'cause Windows '95 was the big thing then, so this one was better, right, '96? Exactly, yeah, alright, so I've kind of been bumming around that kind of programming, open source world for twenty-something years now. It took me a little longer to get into Python. I don't remember exactly when, probably five years later or so. I remember the first time people were talking about Python as this cool new thing, kind of like Perl, and so forth. The first time I read the tutorial, I got to the point where it said that lambda was syntactically limited to a single expression, and I just rage quit, I was like whatever, what a stupid language.

03:50 Michael Kennedy: Where are the curly braces? Come on!

03:54 Nathaniel Smith: But I had started with Logo, then learned Scheme, things like that, so this was obviously terrible to me. I later came back to it, got over that, and I understand now why that makes sense in terms of the block structure and everything, this statement versus expression distinction that Python has. But it took me a little while. So I guess I got into Python around 2.2 or so, and it gradually became my main language. In the meantime, I was also getting a PhD and stuff like that, using Python both for hobby-hacking stuff and also for data analysis in my work. I wrote various libraries.

04:36 Michael Kennedy: What's your PhD in?

04:36 Nathaniel Smith: Cognitive Science.

04:38 Michael Kennedy: Oh, right, okay, very awesome.

04:40 Nathaniel Smith: Yeah, yeah, so studying how people understand language and how brains understand language, yeah. And along the way I got entangled in all these different open source projects like NumPy and so on.

04:53 Michael Kennedy: Yeah, yeah, that's really great. So, what do you do these days?

04:57 Nathaniel Smith: So, I'm actually kind of in a bit of a transition right now. For the last few years, I've been working at UC Berkeley, at the Berkeley Institute for Data Science, which is a new data science center that was started up on campus. My position has been this unique one, really, where it's been grant-funded to figure out how to make sure that Python continues to work well, and works better, for scientists. There's actually a blog post we can put a link to where I talk about all the stuff I've done there, but some highlights: I got some grants for NumPy development, $1.3 million over a few years.

05:39 Michael Kennedy: Wow!

05:39 Nathaniel Smith: Which basically means NumPy now has some people working on it full time. I made a colormap people like, did some work on packaging, led the manylinux effort so we could have wheels on Linux now.

05:55 Michael Kennedy: That sounds like a really fun job. You could actually make a difference in open source space.

06:00 Nathaniel Smith: Exactly, yeah, I mean... It's sort of funny, this is what I talk about in the blog post, but open source is just sort of desperately underfunded, and it's amazing how well and effectively we use the little bits of volunteer time that people manage to scrape together. Sure, people are spending a few hours a week, but there are a lot of larger projects with huge impact that can't be done in that mode, where you actually need to sit down for a while, think about it, understand the landscape, and put some focused effort into something.

06:32 Michael Kennedy: Yeah, it depends on what you take as your goal. If you have a month of uninterrupted full-time effort on some project, or a goal, part of a project, versus I'm going to squeeze this in Saturday morning before the kids get up, right?

06:45 Nathaniel Smith: Exactly.

06:47 Michael Kennedy: Those aren't the same types of things you can attack.

06:49 Nathaniel Smith: Exactly, yeah. I mean, a month of time is not that much in the grand scheme of things. One person for one month is nothing compared to the amount of labor that Google has available. But it still enables all kinds of things that right now just aren't happening. So, there's actually lots of low-hanging fruit. There's almost nobody who has that kind of time to worry about these sort of broad, open source ecosystem kind of projects. So there's all kinds...

07:15 Michael Kennedy: And we saw how much of a difference was made with the Mozilla Grant, the Python Packaging--

07:21 Nathaniel Smith: For PyPI, yes, yeah.

07:21 Michael Kennedy: Yeah, PyPI, that was like, hey, look, $170,000, oh, actually, this can happen. It's been dragging on for years, now it's real.

07:29 Nathaniel Smith: Yeah, yeah, I mean getting the team together to make that happen was, yeah. I had a little bit of involvement. I'm on the PSF Packaging Working Group.

07:37 Michael Kennedy: Nice.

07:38 Nathaniel Smith: I wasn't heavily involved in that. I made some introductions, gave some feedback on the grant and things.

07:43 Michael Kennedy: That's really great.

07:43 Nathaniel Smith: So it was yeah, just super exciting to see that up close 'cause you know, it was very successful, and it was a...

07:49 Michael Kennedy: Yeah, felt like it really got a lot out of how much investment was put into it.

07:53 Nathaniel Smith: Yeah, I mean it was, that had been dragging on for literally six years. The old PyPI was just totally unworkable. But you know.

08:02 Michael Kennedy: It was like kryptonite for people who wanted to contribute to open source. They looked at it and like, whoa no! No, not touching that.

08:11 Nathaniel Smith: I made one patch to the old PyPI and it was just a trivial thing. We wanted it to start saying, oh yeah, manylinux is now a legal thing for a wheel to have in its name. So there's a list of strings that are allowed, and I'm just adding one new string, right? But that was the most terrifying patch I've ever made, PR I've ever made, because if I have a typo, then PyPI dies, right? There are no tests, there's nothing.

08:36 Michael Kennedy: The is buried--

08:36 Nathaniel Smith: Py without a net, yeah.

08:41 Michael Kennedy: Exactly.

08:42 Nathaniel Smith: Yeah, the new PyPI is way, way better. You should all go and help contribute 'cause now it's like all set up. You can develop locally and there's a real test suite.

08:52 Michael Kennedy: Yeah, it's really nice. I had the people involved on the show a couple episodes back too. So, definitely got a chance to dig into that.

08:59 Nathaniel Smith: Oh, yeah, right, that's right, yeah.

09:01 Michael Kennedy: Yeah, so let's talk a little bit about async. 'Cause I think that that's one of the sort of revitalized areas of Python in the last...

09:09 Nathaniel Smith: Sure, yeah.

09:10 Michael Kennedy: Since 3.4, 3.5, it really started coming along. So in 3.4 we got asyncio as a thing, and then in 3.5, I feel like it really got more accessible with async and await.

09:22 Nathaniel Smith: Right, with the new syntax features to make it easier to use, exactly, yeah.

09:27 Michael Kennedy: Yeah, exactly, like the foundation was built previously, but it was still this sort of callback-hell type of programming, right?

09:35 Nathaniel Smith: Right, right.

09:37 Michael Kennedy: We should maybe take a little bit of a step back and just let people know about the presentation you gave at PyCon which introduces your project, Trio, which is what we're going to focus on. But in your talk, you sort of set the stage and said, look, there's these two philosophies or types of async programming that you might consider. So maybe you could touch on that a little.

09:59 Nathaniel Smith: Well so, I think the first thing to say, for those who aren't already up on the jargon, is that async is a subset of concurrent programming. Concurrency means writing programs that do two things at the same time, which is very handy. It's something we do in real life all the time: I'm working on one thing and my friend's working on another thing at the same time, it's very natural. But writing a program that does that is a little bit trickier. Especially in Python, which is generally a sequential language. It makes it easy to say, I want to do this, then that, and then the other thing. But it doesn't have anything built directly into the syntax to say, I want to do these two things at the same time, and then when they're both done, do these other things. So there's this general question of how do you write concurrent programs in Python? And then, like you were saying, there are kind of two philosophies of concurrency. One is the kind of preemptive concurrency that threads give you, where everything just kind of runs all the time, interleaved in arbitrary ways.

11:00 Michael Kennedy: And to be clear, this is in general threads, not Python threads, 'cause we have this whole thing called the GIL.

11:06 Nathaniel Smith: From the programmer's, the user's point of view, the GIL makes things slower, but it doesn't really change what it feels like to use compared to threads in general.

11:15 Michael Kennedy: True.

11:16 Nathaniel Smith: So in general with threads, you might have two threads actually running at the same time, like on different CPUs. Because in Python we have the Global Interpreter Lock, the GIL, mostly only one thread can actually run at a time. But because the Python interpreter controls that, it can decide at any moment to switch which thread is running. From your point of view writing code in Python, it might as well be running multiple things at the same time, basically, since it can switch at any moment. You kind of have to act like it's just constantly switching. And the reason this is a challenge is that if you have a data structure and you have two different threads, two different concurrent pieces of code, acting on that data structure at the same time, and you're not careful, you can make a big mess. The classic example is something like your program is managing a bank. So, I'm going to withdraw money from this account and put it in this other account. But if you don't do that atomically, then one thread says, okay, does Alice have enough money? Yeah, great, okay, I'll take that money out and put it into Bob's account. Another thread says, oh, does Alice have enough money for this transfer to Carl? And if the threads are interleaved, first both threads check and say, oh yeah, Alice has plenty of money, and then both threads take money out of the account, and now Alice has actually spent the same money twice. Which is great for Alice, but not so nice for the bank.
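Here is a minimal sketch of the race Nathaniel is describing, using plain Python threads; the account names, balances, and amounts are invented for illustration, and the bad interleaving only shows up occasionally, when a thread switch lands between the check and the update:

```python
import threading

accounts = {"alice": 100, "bob": 0, "carl": 0}

def transfer(src, dst, amount):
    # Check-then-act without any locking: another thread can run between
    # the check and the update, letting Alice spend the same money twice.
    if accounts[src] >= amount:
        # (a thread switch right here is where the bug hides; adding a tiny
        # sleep at this spot makes the race easy to reproduce)
        accounts[src] -= amount
        accounts[dst] += amount

t1 = threading.Thread(target=transfer, args=("alice", "bob", 100))
t2 = threading.Thread(target=transfer, args=("alice", "carl", 100))
t1.start(); t2.start()
t1.join(); t2.join()

# Usually fine, but occasionally: {'alice': -100, 'bob': 100, 'carl': 100}
print(accounts)
```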

12:37 Michael Kennedy: Yeah, someone's going to be in trouble writing that software.

12:40 Nathaniel Smith: Exactly, yeah, right? So yeah, if you're writing that software, you need to sort of manage those.

12:44 Michael Kennedy: Yeah, so one of the things that kind of helped make this click in my mind was thinking about how programs temporarily enter these invalid states.

12:54 Nathaniel Smith: Right, exactly.

12:56 Michael Kennedy: And what you're describing is basically that. At the beginning of that function, the bank is in a valid state. At the end of the function, everything is valid, but somewhere in between, in order to accomplish that operation as a series of steps, it has to become invalid, and as long as nobody observes it in this invalid state, it's all good. But when they do, like you're describing, halfway through, it's not okay anymore. It's kind of like transactions in databases...

13:20 Nathaniel Smith: Exactly, yeah.

13:20 Michael Kennedy: But in time, yeah, they're pretty similar in some ways, I guess.

13:25 Nathaniel Smith: Yeah, so yeah, exactly. So, the solution to this is you have to sort of explicitly make sure you notice all those places where you're passing through that invalid state and like mark it somehow in your source code. Let's say, okay, I'm going to take a lock here that's going to make sure that anyone else who tries to use this has to wait for me to be done and things get back to the valid state before they can look at it, right?

13:48 Michael Kennedy: Yup, exactly, okay.

13:49 Nathaniel Smith: But that's really error prone because--

13:51 Michael Kennedy: And this is preemptive concurrency, right?

13:53 Nathaniel Smith: Yeah, exactly, still talking about how threads work, right? So, you have to find all these places where you go through a temporarily invalid state and mark them in your source code and if you forget one, then you have this nasty bug where Alice gets to spend the money twice or all kinds of weird things can happen.
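As a sketch of the fix Nathaniel describes for the preemptive world, continuing the invented bank example above: a lock marks the stretch of code that passes through the temporarily invalid state, and forgetting it in even one such place reintroduces the bug.

```python
import threading

accounts = {"alice": 100, "bob": 0}
lock = threading.Lock()

def transfer(src, dst, amount):
    # The lock marks the section where the data structure is temporarily
    # invalid; any other thread calling transfer() has to wait here until
    # things are back in a valid state.
    with lock:
        if accounts[src] >= amount:
            accounts[src] -= amount
            accounts[dst] += amount
```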

14:10 Michael Kennedy: And it usually has to do with timing.

14:12 Nathaniel Smith: And so, it's very discouraging...

14:13 Michael Kennedy: Because you work hard to debug...

14:14 Nathaniel Smith: Yeah, it's like super subtle. Like yeah, it only happens one in a thousand times randomly and it depends on how much memory you have and this only happens in production and not on your tests and just all kinds of things. It's really those bugs, yeah.

14:26 Michael Kennedy: It's bad, yeah, I've heard them described as heisenbugs and I just love that term.

14:30 Nathaniel Smith: Yes, heisenbugs, right. And so it means that when you're working with threads, you just have to be constantly vigilant. Every line of code, what do we think? Okay, is this the one that's going to introduce this terrible bug? So that sucks, right? You don't want to have to live like that.

14:44 Michael Kennedy: It's like being paranoid.

14:46 Nathaniel Smith: Yeah, right, you just have to be constantly paranoid. So the idea for async concurrency is we kind of flip it around. Instead of saying, okay, we have to go through and find all the places in the source code where something dangerous is happening and mark those, it's like, you know what? Let's be conservative, let's assume something dangerous could be happening anywhere, that's the default. So by default, only one thing is allowed to happen at a time. And then we'll mark in the source code, okay, here's a place where I want to let other things happen. It's okay, I'm not in the middle of some delicate operation like adjusting someone's bank account, this is a fine place for that to happen.

15:27 Michael Kennedy: This portion of Talk Python To Me is brought to you by Linode. Are you looking for bullet proof hosting that's fast, simple, and incredibly affordable? Look past that bookstore and check out Linode at talkpython.fm/linode. That's L-I-N-O-D-E. Plans start as just $5 a month for a dedicated server with a gig of ram. They have 10 data centers across the globe so no matter where you are, there's a data center near you. Whether you want to run your Python web app, host a private Git server or file server, you get native SSDs on all the machines, a newly upgraded 200 gigabit network, 24/7 friendly support, even on holidays, and a seven-day money back guarantee. Do you need a little help with your infrastructure? They even offer professional services to help you get started with architecture, migrations, and more. Get a dedicated server for free for the next four months. Just visit talkpython.fm/linode.

16:22 Nathaniel Smith: And you could still have bugs, you can still make mistakes and put that marker in the wrong place. But now there's only a finite number of places to check when something goes wrong, which makes that kind of program much easier to reason about.

16:34 Michael Kennedy: Right, and that could be done with just straight up asyncio, but with 3.5 and above, like basically, the await keyword is the marker in your code for that.

16:44 Nathaniel Smith: Exactly, yeah.

16:45 Michael Kennedy: Yeah, that's pretty beautiful, that's pretty beautiful. So, I guess most people probably understand how to use async and await, the keywords. But maybe just describe how to define a super, super simple async method with the keyword. Just so that people understand this.

17:02 Nathaniel Smith: Yeah, like you said, I'm going to be focusing on Trio here, and Trio uses only a simple subset. There's some extra complexity needed for backwards compatibility with asyncio's sort of callback layer, and we can talk about that more later. But if you're just using Trio, async/await is actually very simple. So here's what you need to know: there are two kinds of functions. There are the async ones, which are the special functions that might let other code run under them. And so you kind of need to be aware, when you call one of them, that yes, the ground might shift under your feet, data structures might change while it's happening. And there are the regular functions, the synchronous functions, and those always happen atomically. What we're going to do is make all the regular Python functions you already know the synchronous ones. Because they're all written on the assumption that Python's a sequential language that does things sequentially, and no one thought about how to handle this async stuff when they were writing all these libraries that already exist. So those are all going to be atomic. And then we need a way to mark these special async functions, and we want to mark them in two places. We want to mark it when we define the function, so that it's part of the API: is this an async function or not? And we want to mark it at the call site, so when you call one of these and you're reading through your code, you see, aha, this is a point where I'm calling an async function, this is a point where other code might run in the background, I need to make sure I'm not in the middle of some dangerous operation, right?

18:30 Michael Kennedy: Right, okay, so you've got maybe a call to a web service or something. And that might return a future, which you then hook a callback onto...

18:39 Nathaniel Smith: So, there are no futures in Trio. That was one of the things I got rid of.

18:43 Michael Kennedy: Yeah, it's beautiful, yeah. You have to await on the result basically, right?

18:49 Nathaniel Smith: So yeah, the way that you think about it for Trio is just that there are these special functions, and the way you call one is you type await and then the function call. It's like a new kind of syntax for calling a function. And that's sort of all you need to know. Like you said, there are complexities around Python awaiting on arbitrary objects and having future objects that functions return and so on and so forth. But Trio's perspective is that it's unnecessary clutter, so we just don't do any of that.
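A minimal sketch of those two kinds of functions in Trio; the function names here are invented, but trio.sleep and trio.run are the real entry points:

```python
import trio

async def fetch_greeting():
    # An async function: the await below is a point where other tasks may run.
    await trio.sleep(1)            # 'await' marks the call site
    return "hello"

def add(a, b):
    # A regular synchronous function: it always runs atomically with respect
    # to other Trio tasks, since it contains no await.
    return a + b

async def main():
    greeting = await fetch_greeting()   # await + the function call
    print(greeting, add(1, 2))

trio.run(main)   # how you call your first async function
```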

19:18 Michael Kennedy: Yeah, I feel like the asyncio layer is like, let's get something kind of working, but it's not that delightful, and I feel like Trio really cleans it up. So there's a really great example that you gave in your talk about Happy Eyeballs, which is basically a way to use DNS resolution to connect a socket to a server, and the concurrency is tougher on that, and I don't want to talk about that yet. Maybe we'll have some time to talk about it later. But basically, there's a version in Twisted, and how long was the one in Twisted? Hundreds of lines at one point?

19:51 Nathaniel Smith: So, yeah, there are two different versions in Twisted that I talk about in the talk. One is sort of the classic one that's in master, which is 120 lines or so, I think, roughly. I mean, it's not super meaningful to talk about lines of code like this. But just to kind of give you a sense, yeah.

20:06 Michael Kennedy: Yeah, exactly.

20:07 Nathaniel Smith: A better sense of how complicated it is to reason about is that inside this method, it has another internal function. Inside that function, there's another function defined, and then inside that there's a fourth level of internal functions.

20:20 Michael Kennedy: Yeah, that's bad.

20:21 Nathaniel Smith: 'Cause it's all been...

20:24 Michael Kennedy: Almost like it's full of all these gotos in some weird sense, and you basically say, now if we apply the building blocks, the primitives of Trio, to it, oh look, it's like 20 lines and it's totally straightforward, which is really great. So I think, you know, let's talk about Trio. Why don't you just start by telling people what it is.

20:42 Nathaniel Smith: Trio is a library for async concurrency in Python. It's an alternative to libraries like asyncio or Twisted and so on, or threads, sort of all of them. I think of it as having kind of two pieces. One is the ideas behind Trio, which are like a research project, where I did some analysis of what it is about Twisted and asyncio and so on that makes them so difficult to use sometimes. Why are things like Happy Eyeballs so complicated? Where are these common errors coming from? And as I dug into it, I realized there's actually a small number of things that seem to cause a lot of the problems, and I even ended up digging into some old literature from the '60s about early programming language design.

21:29 Michael Kennedy: On UNIVAC, or what was that one on? That was like a really old computer.

21:33 Nathaniel Smith: Oh, FLOW-MATIC.

21:33 Michael Kennedy: FLOW-MATIC, yeah.

21:36 Nathaniel Smith: That language, which was, yeah, Grace Hopper's language, the precursor to COBOL. Which is really interesting...

21:41 Michael Kennedy: That's going way back.

21:41 Nathaniel Smith: Yeah, yeah. And sort of talking about this transition. There were a lot of debates back then about how you use structure in your language. Not even getting to the concurrency part, just how do you do a language at all that's usable? And one of the big things that happened was this switch from using goto as the main control structure to having things like if statements, and functions, and for loops and so on. And I realized there's actually an analogy there that's surprisingly precise: a lot of these async libraries are actually still kind of in the goto stage, where the basic primitives are, in a technical way, kind of analogous to a goto. They cause similar kinds of problems, and if you look at how Dijkstra solved these problems back in the late '60s, we can take those solutions and apply them to concurrency. And that leads to something I call the nursery concept.

22:36 Michael Kennedy: Yeah, it's so interesting, yeah, and I'd never thought about the relationship between goto and many of these programming, these threaded programming models. Well, I think it's a super good analogy and it's really relevant. So what Dijkstra said was, look, you should be able to treat these building blocks as black boxes, stuff goes in, stuff goes out. If you know kind of what it does, you don't need the details, right? I mean, this is like a huge sort of abstraction in programming, like functions, classes, its modules, et cetera, right? Inputs and outputs.

23:10 Nathaniel Smith: Just think of, in Python, the print function, right? It actually does all kinds of complicated stuff. It's talking to the operating system interfaces to do I/O, it's doing buffering and character set conversion and blah blah blah. But you don't have to care about that. You just type print hello world and it prints hello world.

23:29 Michael Kennedy: Exactly.

23:30 Nathaniel Smith: You can just treat that as like this little atomic unit. And that's great.

23:34 Michael Kennedy: Yeah, it's such an important part of a building block of programming, but these threads mean that stuff can still be going, you're just like all over the place, right? And it's very similar to the goto's which I thought that was a great analogy.

23:45 Nathaniel Smith: Yeah, I mean, specifically, the issue is, so like, let's take the analog of print in, say, Twisted. You have this transport object that's like your connection, like a network connection, and you call its write method. It says I want to send this data to the remote site. Now, that's like a function, you call it, it returns, so that's all fine. But what's sort of confusing is that when you call it, it returns, it hasn't actually finished yet. What it's actually done is sort of scheduled that write to happen in the background. And that makes it hard to reason about the flow 'cause it's like, oh, the function's returned, but actually in a sense, it's kind of still running. If I want to write and then do something else that happens after that, that's hard to manage 'cause I don't necessarily know when that's actually finished. I have to use some other API to check and ask, okay, has it actually written yet?

24:29 Michael Kennedy: Did it succeed, did it not succeed? Then how do you deal with that and then how do you get that back into your whole...

24:35 Nathaniel Smith: Right, yeah.

24:36 Michael Kennedy: It feels like almost like JavaScript.

24:37 Nathaniel Smith: Well, I mean, JavaScript is also in this general family: it has an async concurrency model that's sort of endemic to it, used all over the place, and it's all callback based, so yeah, it has that same kind of goto problem.
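To make the "it returned, but is it finished?" pattern concrete with something runnable, here's a small sketch using asyncio's fire-and-forget tasks to stand in for the Twisted-style scheduling Nathaniel describes; the function names and delays are invented:

```python
import asyncio

async def send(data):
    await asyncio.sleep(0.1)   # stand-in for a slow network write
    print("actually sent", data)

async def main():
    # Fire-and-forget, in the spirit of transport.write(): ensure_future
    # returns immediately, so right after this line the send may still be
    # running somewhere in the background.
    asyncio.ensure_future(send(b"hello"))
    print("write scheduled, but has it finished? who knows")

    # Awaited call: control only comes back once send() has completed,
    # so the sequencing is easy to reason about.
    await send(b"world")
    print("the second send is definitely done now")

    # Give the orphaned background task time to finish before the loop closes.
    await asyncio.sleep(0.2)

asyncio.get_event_loop().run_until_complete(main())
```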

24:52 Michael Kennedy: Yeah, exactly. So, let's talk about the architecture and then maybe like we can see about how that kind of brings us back into something that Dijkstra might be proud of.

25:01 Nathaniel Smith: Yeah, okay, sure, yeah. I guess I should also mention, this is a discussion that really benefits from diagrams, which is not something podcasts are well set up for. I have a blog post called Notes on Structured Concurrency, or: Go Statement Considered Harmful, which goes into this set of ideas in much more detail. So basically the idea is what I said: when you call a function, it should be something you can treat as a black box. You call it, it does some stuff, and it returns. That's kind of Dijkstra's point, that's the problem with goto: in the old goto world, you could call a function and maybe it returns, maybe it jumps somewhere else. Maybe some other function somewhere else in your code suddenly returns instead. Functions have this control flow they're supposed to have, where you have your function, you call something, so control jumps into that function, it executes that code, and then it comes back again to your function. That's this nice little structure. That's why it can be treated as a black box, 'cause you know, oh, it'll do some stuff, but then it'll come back. goto isn't like that, it's just one way: you leap off somewhere else, and then maybe you come back, but jumping back is something you have to do manually.

26:14 Michael Kennedy: Oh man, it's like a choose your own adventure book. You don't know when you're done, you don't know where to begin, you're just like, it's always a mess, it's always a surprise.

26:20 Nathaniel Smith: Yeah, yeah. I mean, it's choose your own adventure except then you're like, you know, actually, it says I can go to page five or 10 but I'm going to 17. Let's see what happens. And that breaks the function abstraction. So it means you call a function, and hopefully, if it's a nice well-written library and people aren't messing with you, it will come back normally. But there's no guarantee of that in the language. If someone decided to be clever, it could do something else, jump somewhere else. And then it becomes very hard to reason about the overall structure of your program, if that's true. And it also breaks the higher-level structures people build in their programs. So, think about exceptions. A handy thing about exceptions is, if something goes wrong and I didn't think about how to deal with it locally, then the exception will propagate out until either someone catches it or the program crashes and you get a traceback. That's not great, but at least you know, you didn't just go blindly on doing the wrong thing. At least you get a traceback, you can try to figure out what happened.

27:25 Michael Kennedy: Not with threading, not if it crashes in a thread.

27:29 Nathaniel Smith: We'll get there in a moment. If you have goto... When you raise an exception, it goes up the stack, right? It's like, okay, does this function want to handle it? Okay, how 'bout its caller? How 'bout its caller's caller? 'Cause you have this nice stack. You know who the caller is, you know who the caller's caller is. If you have goto, then control just bounces around sort of at random, so who wants to know about this exception? You don't even have a well-defined sense of caller, right? It's just these basic things that you take for granted growing up, you know, that our ancestors struggled with and sort of fixed for us and now we take for granted. These basic assumptions just aren't there in a language with goto. with blocks have similar issues. A with block is supposed to, you say with open file, and then you know, okay, inside the block the file's open, outside the block the file's closed. That's nice, it's great, it makes it easy to manage resources. But if you have goto, you can jump into the middle of a block, you can jump out of the middle of a block, and what happens? Where does the file get opened and closed? How does that even, I mean... It just doesn't work.

28:30 Michael Kennedy: Exactly.

28:30 Nathaniel Smith: Yeah. So yeah, and then if you look at the things that are a struggle when dealing with threads, or in asyncio or in Twisted, they're exactly signs of these problems. You call a function, and is it maybe still running when it returns? That makes it hard to control sequencing, hard to treat things as black boxes. You don't know; you need to go read the source code when you use some random asyncio-based library to find out, okay, I called this function, but did it actually do the thing right then? Or has it scheduled something in the background? You have to go...

29:02 Michael Kennedy: Yeah, does the error go to the caller, or does it go to the callback?

29:06 Nathaniel Smith: Right, yeah, so if you spawn off this background thread, or background task as asyncio calls it, and it crashes and there's an error, an unhandled exception in it. Well, you've lost, you don't have a nice call stack. You've spun up this new independent entity that's executing, and it's now split apart from your original code. So where does that go? If there's an unhandled error, an exception, then what actually happens in threads and in asyncio, in Twisted, whatever, is it maybe prints something, like, hey, I hope someone's looking, something went wrong. And then it throws away the exception and carries on.

29:41 Michael Kennedy: It does make your software more reliable 'cause there's way fewer crashes when you don't actually...

29:46 Nathaniel Smith: Well, for some value of reliable, right, yeah.

29:49 Michael Kennedy: Exactly, nice. Okay, so what are the building blocks of Trio that address some of the problems?

29:54 Nathaniel Smith: So, the main thing in Trio is we have this thing we call a nursery. It's just sort of a silly joke: if you want to spawn a child task, it's not okay for it to just go off independently. You have to put it somewhere where it'll be taken care of, and a nursery is where children live, okay. So concretely, what it is, is if you want to spawn a child task in Trio, you have to first have a nursery object. The way you get a nursery object is you write async with trio.open_nursery() as nursery. You have a with block that opens a nursery and then you get this nursery object, and that object is tied to that with block. And then once you have it, there's a method on that nursery to spawn a task into it. And all the tasks you spawn in the nursery will run concurrently, but the trick is that the with block's lifetime, and the parent's, is tied to the lifetime of the child tasks. So you can't exit the with block while there are still children running. If you hit the end of the with block, the parent just stops and waits for them.

30:55 Michael Kennedy: The stuff within the with block could be interwoven. In certain ways, it's concurrent. But taken as a whole, the with block is like a black box. You start it, it runs, and then out comes the answer at the end and everything's done, or it's gotten into its cancelled state, or whatever happens to it, right?

31:13 Nathaniel Smith: Exactly, exactly, yeah. So that lets us do things like... So right, now if you call a function in Trio, you know, well okay, it might internally open a nursery and have concurrency, but by the time it finishes, it's done, it hasn't left anything running. If something crashes, if a child task has an exception that it doesn't catch, then we say, okay, what do we do with this? Oh, well wait, the parent is just sitting there waiting for the child. So what we do is the exception hops into the parent and continues propagating out from there. So Trio follows the normal Python rule: you can catch an exception, but if you don't, then it will keep propagating 'til someone does.
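A minimal sketch of what that looks like; in current Trio the spawning method is called start_soon, and the task names and sleep times here are invented:

```python
import trio

async def child(name, seconds):
    print(name, "started")
    await trio.sleep(seconds)
    print(name, "finished")

async def parent():
    # The with block can't be exited while children are still running: the
    # parent pauses at the end of the block until both tasks are done, so
    # from the outside parent() is still a black box.
    async with trio.open_nursery() as nursery:
        nursery.start_soon(child, "task 1", 1)
        nursery.start_soon(child, "task 2", 2)
    print("both children are done")

trio.run(parent)
```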

31:53 Michael Kennedy: Right, so that nursery is a really important block, building block here. And I think you know, it's really cool to be able to start all these child tasks, and it can even be sort of children of the children, right? You can pass the nursery and the child task could spawn more child tasks or whatever. And so, they're all going to sort of be done at the end. This portion of Talk Python To Me has been brought to you by Rollbar. One of the frustrating things about being a developer is dealing with errors. Relying on users to report errors, digging through log files, trying to debug issues, or getting millions of alerts just flooding your inbox and ruining your day. With Rollbar's full stack error monitoring, you get the context insight and control you need to find and fix bugs faster. Adding Rollbar to your Python app is as easy as pip install rollbar. You can start tracking production errors and deployments in eight minutes or less. Are you considering self hosting tools for security or compliance reasons? Then you should really check out Rollbar's compliance SaaS option. Get advanced security features and meet compliance without the hassle of self hosting, including HIPAA, ISO 27001, Privacy Shield, and more. They'd love to give you a demo. Give Rollbar a try today. Go to talkpython.fm/rollbar and check 'em out. The other thing that's really important in this type of threading is to not block forever, to not wait on something forever, right?

33:19 Nathaniel Smith: Yeah, you're talking about timeouts and cancel scopes.

33:22 Michael Kennedy: Timeouts and cancel scopes and all that stuff, yeah, yeah, yeah, yeah.

33:25 Nathaniel Smith: Yeah, so this is yeah, this is a really basic problem if you're doing anything kind of network programming for example, or any kind of, like when your program's talking to other programs, other resources, because sometimes, those will just stop responding. So it's like if you make an HTTP request, then maybe you'll succeed. Maybe it'll fail and you'll get like a 404, something like that. Or there's also a third possibility which is that it just never finishes. The network disappeared and it just sits there... And your program...

33:56 Michael Kennedy: It's going to be bad.

33:59 Nathaniel Smith: Yeah, I mean, if you're writing a little script or whatever, at some point you'll get bored and hit Control-C and it's fine. But for more complicated systems, you need to be robust against this.

34:12 Michael Kennedy: Or unattended system? Or like, oh, we have 10 workers but we're only getting this super low throughput, why? Well, 'cause eight of them are like just blocked forever.

34:19 Nathaniel Smith: Yeah, you need to somehow detect this, be able to get out and say, okay, actually, let's stop doing that. Try something else, raise an error, something, but not just sit there forever. And this is just a problem that's endemic, right? Every time you do any kind of networking or other kinds of IPC, you need to have this kind of timeout support. So that's one thing that makes it tricky. The other thing that makes timeouts tricky is that usually, the place where you need to enforce the timeout is buried way, way down deep inside the code. At some point, your HTTP client is using some socket library, and the socket library is trying to send some bytes or read some bytes from the network. A very, very low-level operation, probably buried under five levels of abstraction. But where you want to set the timeout is way out at the top, where you say, if this HTTP request isn't done in 10 seconds, then give up. So timeouts are this problem where you need them everywhere and you need coordination across different layers of your system. It's this really complicated problem that cuts across everything, so you really need a timeout framework that's used universally by everyone and can coordinate across the system.

35:31 Michael Kennedy: And it can be super tricky. Imagine you want to call two web services and do a database transaction, and if one times out you want them all to like roll back, right?

35:40 Nathaniel Smith: Yeah, sure.

35:44 Michael Kennedy: Which is like super... How do you coordinate that? Almost impossible, right? Unless, unless you put in Trio.

35:51 Nathaniel Smith: So yeah, so right, the traditional way to do this is you have a timeout framework inside each library. Your HTTP client has a timeout argument that you pass, or whatever. But people just don't do that reliably, and it doesn't solve these coordination problems. So in Trio, we say no, this is something we're just going to bake into the I/O library itself. All the libraries that work with Trio, if you have an HTTP client with Trio, it uses Trio's timeout system. And that's a system called cancel scopes. Again, there are a lot more details on this in a blog post that I wrote called Timeouts and Cancellation for Humans, and I'll put the link in the podcast description, I guess. And basically, it's fairly simple to use. The way it works is you can, anywhere in your code, say, with a timeout of 10 seconds, and then you put code inside that with block, and if it takes more than 10 seconds, then Trio raises a special exception, called Cancelled, that just sort of unwinds out of there. Then that with block catches the exception. It's a way of saying, okay, whatever's happening inside this with block, stop doing that and then carry on from this point. The other nice thing that's important about this, compared to a lot of other systems for working with timeouts and cancellation, is that the with block delimits things. You can say, this is the operation that I want to cancel, which is really important. asyncio doesn't work like this. In asyncio, you can say, I want to cancel something, and that injects the special exception. But then there's nothing that keeps track of it; you can't look at the exception and figure out, okay, did I want to cancel this specific network operation, or this HTTP request, or this entire program? We do keep track of that in Trio, because we set up this with block, so we can say, okay, I know this is the operation that I'm trying to cancel right now. And these can be nested.

37:46 Michael Kennedy: Yeah, it's awesome, and if you want to do a timeout, you can create a with block that says basically, do this to success or fail after 10 seconds. What is the syntax for that? It's like async with, I forget.

37:59 Nathaniel Smith: This one is just a regular with. So async with is just like a regular with, except that a regular with calls a method at the beginning and end of the block, and async with does an await call of a method at the beginning and end of the block. And we talked about await being the special way we call the special functions. For nurseries, you have to use an async with, because at the end of the block, the parent might have to wait for the children, and you want to let them run while it's waiting.

38:28 Michael Kennedy: Right.

38:28 Nathaniel Smith: So it has to be an await there. For a timeout, it just sets something up at the beginning and the end, but it doesn't have to actually stop and let other things run there, so it can be synchronous. That's just a little detail, but yeah, it's basically a with block. The main one, the basic one, is called move_on_after, to kind of remind you that it's going to run the code, and then if the timeout happened, it doesn't raise an exception. You can stop and see, okay, was it cancelled or not? But it keeps executing after the block, so you can look at what happened. Which again has to do with... The simplest case for a timeout is just, if the timeout expires, blow up everything.

39:05 Michael Kennedy: Exactly.

39:05 Nathaniel Smith: Give up. Raise an exception, right? But a lot of times, you want to say, oh, did this thing get cancelled? If so, do some fallback, do something else. So the core thing in Trio is not to raise an exception. Instead, it provides some object you can look at to see what happened, and then figure out how you want to deal with it.
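A short sketch of the two timeout helpers described here, move_on_after and fail_after; the sleep calls stand in for slow network operations:

```python
import trio

async def main():
    # move_on_after: if the body takes longer than 2 seconds, Trio raises
    # Cancelled inside it, the with block absorbs it, and we carry on here.
    with trio.move_on_after(2) as cancel_scope:
        await trio.sleep(10)          # stand-in for a slow HTTP request
    if cancel_scope.cancelled_caught:
        print("timed out, falling back to something else")

    # fail_after is the "blow everything up" variant: it raises
    # trio.TooSlowError instead of letting you inspect what happened.
    try:
        with trio.fail_after(2):
            await trio.sleep(10)
    except trio.TooSlowError:
        print("gave up")

trio.run(main)
```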

39:24 Michael Kennedy: Yeah, the other thing that I thought you did well with these cancellation scopes is if there's an exception on one of the tasks and there's a bunch running on one of these with blocks, you can catch that and use that to cancel the other ones and just say, no, no, things are going wrong, we're out of here, all threads are done, we're out of here.

39:44 Nathaniel Smith: Yeah, so yeah, the nurseries actually kind of need the cancel scopes, because one of the things the nurseries do is the exception handling: if an exception happens in a child and isn't caught, then it has to hop into the parent. So the way I think about it is, normally you think of your call stack as this little chain. A calls B, calls C, calls D, right? What nurseries do is they kind of make that into a tree. So, A calls B, B calls C and D and E simultaneously, and then maybe D calls several things at the same time. So you have this tree structure for your call stack. And now some exception happens down inside one of those branches. Well, what do you do with exceptions? You unwind the stack, right? You run any handlers inside that function, and then you go up to the parent, you kill that function's call and you move into the parent, run the handlers there, and so on. But then we have this problem: if you're going up unwinding the stack and you hit one of these places where the stack branches, how do you unwind past that point? You don't want to orphan the other children. You can't just unwind the parent without stopping them, because our whole thing in Trio, right, is we said we're not going to let orphans run around unsupervised, that's a bad idea. Dijkstra doesn't like that. So what we do is, well, we have to unwind the stack, so we use the cancellation machinery at that point: we go and cancel all the other tasks that are inside that same nursery. We sort of prune the tree, unwind those tasks back up to this point, and then we can keep unwinding from there with the original exception.
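A small sketch of that behavior, with one invented child that crashes and a sibling that gets cancelled so the exception can keep unwinding:

```python
import trio

async def crasher():
    await trio.sleep(1)
    raise ValueError("this child failed")

async def sibling():
    try:
        await trio.sleep(60)   # would otherwise run for a minute
    except trio.Cancelled:
        print("sibling cancelled because its sibling crashed")
        raise                  # Cancelled must always be re-raised

async def main():
    # When crasher raises, the nursery cancels sibling, waits for it to
    # unwind, and then re-raises the ValueError out of the with block.
    async with trio.open_nursery() as nursery:
        nursery.start_soon(crasher)
        nursery.start_soon(sibling)

trio.run(main)   # the ValueError propagates all the way out of trio.run
```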

41:14 Michael Kennedy: Yeah, that's really clever, I really like the way you put it all together. So there are also some other pieces, like some file system bits, and I love the whole trick of saying await sleep for a little bit of time. Just to go, I'm basically yielding to let other work happen, and then carry on, as opposed to, say, a time.sleep, and things like that. So maybe tell us about the higher-order building blocks.

41:39 Nathaniel Smith: Yeah, so, I guess so far we've been talking about the core ideas in Trio, which are not specific to the project. In fact, I've been talking to a number of other language designers who are just like, oh, this is interesting... But Trio is also a project you can download. It has documentation and an API and all that, and one of the things I've tried to do with the project is to make it really usable and accessible, to have this philosophy of, the buck stops here for developer experience. If you're trying to write a program using Trio and there's something awkward or difficult, a problem you have, then it's up to us to solve that. So we have the project itself, the core networking and concurrency stuff, but we also have testing helpers, and we have documentation tooling, like a plugin for Sphinx to make it easier to document these things.

42:35 Michael Kennedy: So, there's something called pytest-trio.

42:39 Nathaniel Smith: Right.

42:39 Michael Kennedy: How's that work?

42:40 Nathaniel Smith: So to explain the main thing that pytest-trio gives you, we need a little more background here. There's a stereotyped pattern that all Trio programs follow, where you have an async def main, or whatever you want to call it, which is your top-level async function. That's where you go into Trio's magic async mode and use all the Trio stuff.

43:05 Michael Kennedy: Right, so maybe you start by saying trio.run.

43:08 Nathaniel Smith: Yeah, yeah, so the thing about async functions is you can only call them from other async functions. So you have this problem of how do you call your first async function? And that's what trio.run is. So you have this little bit of ceremony at the very top of every Trio program, and this is also the case for asyncio and so on. You have to switch into Trio mode. And that's kind of annoying to do in a test suite on every single test. So that's the most basic thing pytest-trio does for you: it makes it so you can write async def test whatever, and it will take care of setting up Trio and turning it on there. But it also has some other handy stuff. One thing is that it allows you to have async fixtures, and it does some sort of magic where it switches into Trio mode, then sets up your fixtures, then calls your test, and then tears down the fixtures, so it's all within this async context. And it's also integrated with some of the testing helpers that are built into Trio itself. In particular, the one that's the most gee whiz, awesome is that Trio has this ability to use a fake clock. This is an issue when you're writing networking programs. Often, you want to have a test about, okay, what happens if this HTTP request just hangs forever? Do my timeouts work correctly? Stuff like that, right?
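Before getting to the clock, here's a minimal sketch of that pattern: trio.run at the top of a program, and an async test that pytest-trio runs for you, assuming pytest-trio's marker-based mode; the test body itself is invented:

```python
import pytest
import trio

# The stereotyped pattern: one top-level async function, entered via trio.run.
async def main():
    print("running inside Trio")

if __name__ == "__main__":
    trio.run(main)

# With pytest-trio installed, a test can itself be an async function and the
# plugin takes care of the trio.run ceremony for each one.
@pytest.mark.trio
async def test_sleep_is_a_checkpoint():
    assert await trio.sleep(0) is None
```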

44:18 Michael Kennedy: Right.

44:20 Nathaniel Smith: But it's really annoying to run these tests, because it's like, okay, now I have to sit and wait for a minute for the timeout to fire and make sure something happens. And what happens? Every time you run your tests, they just spend a minute sitting there doing nothing. And you're like, okay... This is really boring and I want my test suite to finish already. And I need like 100 of these tests, so yeah.

44:40 Michael Kennedy: It's killing my build time.

44:42 Nathaniel Smith: Exactly, yeah, right, it really ruins your flow. So one thing I did when writing Trio itself was say, okay, I really want Trio's own test suite to run really quickly. I figured that would force me to come up with the tools that my users will need to make their test suites run quickly. If you just run pytest on Trio, and if you want to contribute to Trio, we'd love to have you, we're very friendly and all that, it runs in like five seconds. It has 99 point something percent test coverage, which is very difficult to get to, because Trio is this really complicated networking library. It's all the stuff that's usually hard to test. Part of that is that for all the timing tests, we have this magic clock. The way it works is you say, okay, Trio, I know the code says sleep 30 seconds or whatever, but I don't want you to actually sleep 30 real seconds, I want you to sleep 30 virtual seconds. It's a special thing you pass to trio.run, to say, every time you have a timeout, sleeping, anything inside this call, I want you to use this virtual clock instead. And the way the virtual clock works is it starts at time zero, and it just stays there, and you can advance it manually if you want, but normally you use the automatic behavior, which is: it's at time zero, and it watches what your program's doing. Any time your program finishes running everything and just stops and is waiting for stuff to happen, it looks to see, okay, what's the next thing that would wake up? There's a sleep 10 and a sleep 20. Okay, so the one in 10 seconds is the next one that'll wake up, I'm just going to jump the clock forward 10 seconds and then start running again.

46:15 Michael Kennedy: I see. So anytime it knows it's going to be waiting for a certain amount, it's like, alright, we'll let the wait start and we'll just go right past that.

46:24 Nathaniel Smith: So basically, yeah, you just write your test the way you normally would: use timeouts normally, test your real code, put in sleeps, whatever's easiest. And then, what's annoying about that normally is that your test takes 30 seconds, a minute, whatever it is to run, and most of that time it's just sitting there doing nothing, waiting for time to pass. So if you flip the switch to use the special clock, it does exactly the same things, but it just skips over all those times when it's sitting and doing nothing. And suddenly, it runs in a few milliseconds.

46:52 Michael Kennedy: That's awesome.

46:52 Nathaniel Smith: Yeah, it's pretty awesome. And then pytest-trio is hooked up to that so you can just turn this on, just like flip a switch on any test.
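A sketch of that, assuming Trio's MockClock and pytest-trio's autojump_clock fixture behave as described in their documentation; the sleep durations are invented:

```python
import pytest
import trio
import trio.testing

async def takes_forever():
    await trio.sleep(30)        # 30 *virtual* seconds
    return "done"

if __name__ == "__main__":
    # Outside of pytest: pass a MockClock with autojump_threshold=0 to trio.run.
    # Whenever every task is idle, the clock jumps straight to the next wakeup,
    # so this finishes almost instantly instead of in 30 real seconds.
    clock = trio.testing.MockClock(autojump_threshold=0)
    print(trio.run(takes_forever, clock=clock))

# With pytest-trio, the same clock is exposed as the autojump_clock fixture,
# so this timeout test finishes in milliseconds instead of ten seconds.
@pytest.mark.trio
async def test_timeout_fires(autojump_clock):
    with trio.move_on_after(10) as scope:
        await trio.sleep(30)
    assert scope.cancelled_caught
```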

47:00 Michael Kennedy: Oh, that's great. So, one of the things that makes me a little bit sad about Python's async loops and stuff is that asyncio-based apps and Trio-based apps are not exactly the same and they're not exactly compatible. It's not like there's one core that you're both using so it all just keeps running. The asyncio loop and the Trio loop are not the same; they have to be brought together with different APIs, right? But you do have some interoperability. So, Trio can work with libraries that maybe assume asyncio is there or something, right?

47:38 Nathaniel Smith: Trio itself is just like a totally different library than asyncio. I've looked at you know, could I build it on top of asyncio? And there's a number of reasons why that didn't make sense. And yes, and there is this big problem because it's just because of little things about how these async concurrency systems work. There has to be one library ultimately that controls all the actual networking like asyncio, or Trio, or whatever, or Twisted, or Tornado, or something. And that means, so like if you have a, like I say, an HTTP client that's written to work on asyncio, it won't necessarily work on Trio 'cause it's using a different set of networking primitives underneath, or vice versa.

48:17 Michael Kennedy: Right.

48:18 Nathaniel Smith: And this is sort of a larger ecosystem problem. There used to be Twisted and Tornado and gevent, and none of them could interoperate. You'd have to pick which one you're using. And one of the reasons asyncio exists is to try and solve that problem and become the standard one that Twisted and Tornado and everyone can use. Now they can all work on top of asyncio, and all those libraries written for Twisted and Tornado, you can mix and match however you like. Then here comes Trio and kind of ruins that by saying, here is the new thing you should use. So, to try and mitigate that, there is this library called trio-asyncio which lets you use asyncio libraries on top of Trio. The way it does this is it creates a virtual asyncio loop that internally uses Trio's primitives under the covers, and it lets you kind of quarantine the asyncio parts off in a little container.

49:11 Michael Kennedy: I see.

49:11 Nathaniel Smith: Sort of all the weird stuff asyncio can do, you can do that stuff, but kind of in a little box that won't leak out to pollute the rest of your program, your Trio program.

49:22 Michael Kennedy: I think this is really encouraging, because that means if you've maybe already invested in asyncio and you've already got some code written on it, you could still use Trio, without going, I'm rewriting it all in Trio, and is that worth it, is that a good idea?

49:35 Nathaniel Smith: Yeah and it gives you sort of an incremental path. You can say like, well, okay, I can at least get it running on Trio first of all, and then I can start porting one piece at a time and eventually end up all in Trio, hopefully.

49:46 Michael Kennedy: Exactly.

49:47 Nathaniel Smith: Now, the reason you can't just magically make this all work is that Trio and asyncio really have fundamentally different ideas about how things should work. Now obviously, I think Trio's ideas are better; it's a new thing that's trying to fix all these problems. But the differences aren't just in the internal implementation. The differences are in the fundamental concepts that are exposed to...

50:09 Michael Kennedy: Right, the philosophy of it all.

50:11 Nathaniel Smith: Yeah, right, it totally changes how you write the library on top. So, it's not something you can just sort of magically switch.

50:18 Michael Kennedy: But there is a little bit of an incremental aspect to it. So, we're almost out of time, just really quickly, what's the future of Trio, where's it going, what do you got planned, and is it production ready?

50:29 Nathaniel Smith: So yeah, I should be clear: right now the Trio library itself is very solid, but there is not much of an ecosystem around it. There is not currently an HTTP client or an HTTP server that you can just use out of the box and that's mature and all that. There are some solutions for these kinds of issues, and I won't say too much because this will change quickly. We have a chat channel; if you go to our documentation, you can find out the latest news about what you should use. But it's not something that's ready today to run big websites or anything like that, just because the libraries aren't there yet. If you'd like to help write those libraries and make that happen, I'd love to have you. We have a really solid contributing policy and things like that, you can check it out. The other thing that's happening is on the asyncio side. I'm also a core Python developer, and I talk to Yury Selivanov, this amazing core developer, and to Guido about all this stuff. And Yury is quite keen, saying, oh, great, Trio's ideas are better, you should add them all into asyncio. This is quite complicated. There are a lot of trade-offs there; we could probably do a whole other podcast about them, and maybe we should, I don't know, it's pretty interesting.

51:37 Michael Kennedy: It is interesting, yeah.

51:39 Nathaniel Smith: So, that's something that's also happening: Yury is going to be trying to add nurseries and cancel scopes and things into asyncio. I think there are going to be a lot of limitations, since a lot of the value in Trio is the things it doesn't let people do, and asyncio has already got like six layers of abstraction built in there, or I don't know, it's not actually six, it's like four or something.
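For anyone who wants a reminder of what those two concepts look like on the Trio side, here is a minimal sketch; the worker task and the timings are invented for illustration:

    import trio

    async def worker(name):
        print(f"{name} starting")
        await trio.sleep(10)
        print(f"{name} finished")  # never prints here: cancelled after 2 seconds

    async def main():
        # Cancel scope: everything inside is cancelled after 2 seconds.
        with trio.move_on_after(2):
            # Nursery: child tasks cannot outlive this block.
            async with trio.open_nursery() as nursery:
                nursery.start_soon(worker, "a")
                nursery.start_soon(worker, "b")
        print("past the nursery: both children are finished or cancelled")

    trio.run(main)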

51:57 Michael Kennedy: Yeah, yeah, yeah.

51:59 Nathaniel Smith: And they're all doing things that Trio says should never be done, shouldn't even be possible. That's not something you can fix just by adding a new layer on top, but you know, it's still better than nothing, right? asyncio is going to continue to exist, so we do want to make it as good as...

52:14 Michael Kennedy: Apply these ideas, yeah, absolutely.

52:15 Nathaniel Smith: And ultimately, I mean, no one knows for sure whether making a new thing plus a compatibility layer, like Trio plus the trio-asyncio thing I mentioned, is going to be the best approach, or whether making asyncio better is going to be the best approach. None of us know for sure. So, we're trying both versions and we'll sort of see...

52:32 Michael Kennedy: I'm super excited just to hear that that collaboration is happening, I think that's great. Alright, I think that we're out of time for Trio. It's a super interesting project and I really love what you've done there. I think it's brilliant, so people should definitely check it out.

52:46 Nathaniel Smith: Thanks a lot.

52:46 Michael Kennedy: Yeah, you're welcome. So, quick two final questions, if you're going to write some Python code, what editor do you use?

52:52 Nathaniel Smith: I use Emacs, I've been using it for 20 years, I'm stuck.

52:55 Michael Kennedy: Awesome.

52:55 Nathaniel Smith: It's great! I don't know if it works for other people or not just 'cause, yeah.

53:01 Michael Kennedy: Yeah, sure, I definitely, I started on Emacs as well. And notable PyPI package?

53:06 Nathaniel Smith: Yeah, well, Trio, obviously.

53:08 Michael Kennedy: Obviously, and pytest-trio?

53:10 Nathaniel Smith: Yeah, pytest-trio, sphinxcontrib-trio. You can go to github.com/python-trio to see all the different projects under the Trio organization. We're trying to build up that ecosystem, like I said.

53:23 Michael Kennedy: Yeah, sounds cool. Yeah, so final call to action, people are excited, they want to try Trio, maybe they want to contribute to it, what do they do?

53:30 Nathaniel Smith: So, start with the documentation, trio.readthedocs.io. That will also give you links to our chat, which is sort of a place to hang out, and to our contributing docs if you want to get involved like that. We give out commit rights on your first pull request acceptance.

53:47 Michael Kennedy: Awesome.

53:47 Nathaniel Smith: So, there are lots of people involved, yeah. We want, you know, this to be a project for everyone. I don't want it to just be my personal little thing.

53:53 Michael Kennedy: Yeah, that sounds great, awesome. Alright, Nathaniel, thank you for sharing your project and creating it, it's quite great and we may have to come back and dig into this a little bit more, this was fun.

54:03 Nathaniel Smith: Thanks for having me, yeah.

54:04 Michael Kennedy: Yeah, talk to you later.

54:04 Nathaniel Smith: You too, bye bye.

54:07 Michael Kennedy: This has been another episode of Talk Python To Me. Our guest on this episode was Nathaniel Smith and it was brought to you by Linode and Rollbar. Linode is bulletproof hosting for whatever you're building with Python. Get four months free at talkpython.fm/linode. That's L-I-N-O-D-E. Rollbar takes the pain out of errors. They give you the context and insight you need to quickly locate and fix errors that might have gone unnoticed until your users complain, of course. And as Talk Python To Me listeners, track a ridiculous number of errors for free at rollbar.com/talkpythontome. Want to level up your Python? If you're just getting started, try my Python Jumpstart by Building 10 Apps or our brand new 100 Days of Code in Python. And if you're interested in more than one course, be sure to check out the Everything Bundle. It's like a subscription that never expires. Be sure to subscribe to the show. Open your favorite podcatcher and search for Python, we should be right at the top. You can also find the iTunes feed at /itunes, Google Play feed at /play, and direct RSS feed at /rss on talkpython.fm. This is your host, Michael Kennedy. Thanks so much for listening, I really appreciate it. Now get out there and write some Python code.
