


Transcript for Episode #304:
asyncio all the things with Omnilib

Recorded on Tuesday, Feb 16, 2021.

00:00 The relatively recent introduction of async and await as keywords in Python has spawned a whole area of high-performance, highly scalable frameworks and supporting libraries. One such library with great async building blocks is Omnilib. On this episode, you'll meet John Reese. John is the creator of Omnilib, which includes packages such as aioitertools, aiomultiprocess, and aiosqlite. Join us as we async all the things. This is Talk Python To Me, Episode 304, recorded February 16, 2021.

00:45 Welcome to Talk Python, a weekly podcast on Python: the language, the libraries, the ecosystem, and the personalities. This is your host, Michael Kennedy. Follow me on Twitter, where I'm @mkennedy, keep up with the show and listen to past episodes at talkpython.fm, and follow the show on Twitter via @talkpython. This episode is brought to you by Linode and Talk Python Training. Please check out the offers during their segments. It really helps support the show.

01:10 John, welcome to Talk Python To Me. Howdy, it's good to be here. Yeah, it's great to have you here as well. It's going to be a lot of fun to talk to you about async stuff. I think we both share a lot of admiration and love for asyncio and all the things. I definitely do. It's one of those cases where what it enables is so different, and you have to think about everything so differently when you're using asyncio, that it's a nice challenge, but it also has potentially really high payoff if it's done well. Yeah, it has huge payoff. And I think it's been a little bit of a mixed bag in terms of the reception. I know there have been a couple of folks who've written articles like, well, I tried it, it wasn't that great. But I've also had examples where I'm doing something like web scraping. I actually got a message from somebody who listened, maybe to Python Bytes, my other podcast. But anyway, I got a message from a listener after we covered some cool asyncio things. They were doing web scraping; they had to download a bunch of stuff, and it literally took all day or something. It was really crazy. And then they said, well, now I'm using async, and now my computer runs out of memory and crashes, it's getting it so fast. That's a large difference right there, right? Yeah, there's certainly a category of things where it's amazing. Yeah, I think the case we've seen it most useful for is definitely doing those sorts of concurrent web requests. Internally, it's also extraordinarily useful in monitoring situations, where you want to be able to talk to a whole bunch of servers as fast as possible, and maybe the amount of stuff that comes back is not as important as being able to just talk to them repeatedly. Yeah, but you're right.
There's definitely a lot of cases where people are not necessarily using it correctly, or they're hoping to add a little bit of async into an existing thing, and that doesn't always work as well as just building something that's async from the start. Yeah. And there are more frameworks these days that are welcoming of async from the start, I guess. Yes. And we're going to talk about that. But before we get too far down the main topic, let's just start with a little bit of background on you. How did you get into programming in Python? Sure. So my first interaction with a computer was when I was maybe five or six years old. My parents had a TI-99/4A, which is like the knockoff Commodore attached to the television. And I think back to that, like, how could you have legible text on a CRT TV? It was pretty bad. It's bad, right? My biggest memory of it is really just, every time we would try to play a game and the cartridge or tape or whatever wouldn't work correctly, it would just dump you at a BASIC prompt, expecting you to start typing some programming in. And nobody in my family had a manual or knew anything about programming at the time. I think maybe we figured out that you could print something to the screen, but nothing beyond that. Right. And it wasn't until we ended up getting a DOS computer a few years later that I really started to actually do some quote-unquote real programming, where we were writing batch scripts to do menus, deciding what program to run, or things like AUTOEXEC on a floppy disk in order to boot into a game. I was just thinking of all the AUTOEXEC.BAT stuff that we had to do. Like, oh, you want to play Doom, but you don't have enough high memory. Yeah, so you've got to rearrange where the drivers are. What a weird way to just play games.
I've got to rework my drivers. Make sure you don't load your mouse driver when you're booting into this one game that doesn't need the mouse, because otherwise you run out of memory. Yeah, it was kind of crazy. And my biggest memory of programming there was QBasic, which came with this gorilla game where you just throw bananas at another gorilla from some sort of city skyline. Like a King Kong knockoff, Donkey Kong knockoff type thing. Yeah, exactly. I would struggle to figure out how that was actually doing anything. I'd try to poke at it and figure it out.

05:00 and it didn't really do that much. But it was actually my first opportunity for quote-unquote open source projects, because there was a video game that I really, really liked called NASCAR Racing. And one of the things that I learned, on the burgeoning part of the internet for me at least, was that people would host mods for the game on GeoCities or whatever. These would change the models for the cars or the wheels, or add tracks or textures, or whatever. And I actually wrote a batch script that would let you, at the time you wanted to play the game, pick which of the mods you had enabled, because you couldn't have them all enabled. It was basically just a batch script that would copy a bunch of files around from one place to another, and when you were done with the menus or whatever, it would launch the game. And I remember posting that on GeoCities and having the silly little JavaScript counter or whatever tick up to a couple hundred page views of people downloading just the script to switch mods in and out. So that was the first real taste of open source programming or open source projects that I had. But that actually led into the way that I really learned programming, which was that I wanted to have my own website that was more dynamic than what GeoCities had. So I ended up basically picking up Perl, and eventually PHP, to write webpages that I hosted on my own machine at home with IIS. How did you get to it? What did you use, DynDNS or something like that? Yes, exactly, DynDNS. It was a janky setup, but it at least worked, and I could impress my friends.
And it wasn't until I got to college and was working on my first internship, where the main project I was working on was essentially improving an open source bug tracker written in PHP, to make it do the things that my company wanted, like adding a plugin system and things like that. In the process of that, I eventually became a maintainer of the project. And they had a bunch of Python scripts for managing releases, doing things like creating the release tarballs and running linter-type things over the codebase. And that was my very first taste of Python. And I hated it, because I couldn't get past the concept of, you're forcing me to do whitespace? How barbaric is this? But it actually didn't take long before I realized that it actually makes the code more readable. You can literally pick up anybody else's Python script, and it looks almost exactly like how you would have done it yourself. Yeah. And you've got a lot of the PEP 8 rules and tools that automatically reformat stuff into that, so it's very likely. You've got Black and PyCharm's reformat and whatnot. Right. But this was all before that. I think this was when Python 2.6 was the latest. This was quite a while ago, right? Before the big diversion. Yeah, yeah, exactly. I had no idea what Python 3 was until 3.2 or 3.3 came out, because I was just sequestered in this world of writing scripts for whatever version of Python was on my Linux box at the time. Right. You know, I suspect in the early days, the editors were probably not as friendly or accommodating. Now, if you work with PyCharm or VS Code or something, you just write code and it automatically does the formatting and the juggling and whatnot. Once you get used to it, you don't really think that much about it. It just magically happens as you work on code.
I want to say at the time I was just doing something stupid, like Notepad++ or one of the other really generic text editors, like Notepad but with Consolas fonts. Or it was Eclipse. It might have been Eclipse, maybe with PyDev. I don't think I ever used a Python-specific editor. Yeah, I think I've tried PyCharm exactly once. And I do just enough stuff that's not Python that I don't want to deal with an IDE or editor that's not generalized. Right, sure. Makes sense. Speaking of stuff you work on, what do you do day to day? I'm a production engineer at Facebook, on our internal Python Foundation team. Most of what I do there is building infrastructure or developer tools, primarily enabling engineers, data scientists, and AI/ML researchers to do what they do in Python every day. Some of that is building out the system that allows us to integrate open source third-party packages into the Facebook repository. Some of that is literally developing new open source tools for developers to use. A while back, I built a tool called Bowler that is basically a refactoring tool for Python. It's based off of lib2to3; it's open source Python, and it essentially gives you a way to make safe code modifications rather than using regular expressions, which are terrible. Yeah, for sure. Based on the AST or something like that? Yeah, exactly. Okay. The benefit of LibCST is that it takes

10:00 in the concrete syntax tree, so it keeps track of all the whitespace, comments, and everything else, so that if you modify the tree, like in lib2to3, it will then allow you to write that back out exactly the way the file came in. Whereas the AST module would have thrown all that metadata away. Right, formatting and spaces, whatever, it doesn't care. Yeah. And one of the newer projects I've worked on is called usort, that's "micro-sort." Essentially, it's a replacement that we're using internally for isort, because isort has some potentially destructive behaviors in its default configuration. Our goal was essentially to get import sorting done in a way that does not require adding comment directives all over the place. Right. The obvious example of that would be, you import some module, and then you need to call a function out of it. Maybe that function will modify the import semantics, or add a special import hook, or turn off network access; those are the two main use cases we see. And then you go and import more stuff after that. With isort, it would try to move all those imports above the function call that blocks network access. Oh, interesting. I see. Yeah, you want that to happen first, and then it can go crazy, right. And you can't just put a skip directive on that function call, because that just means isort won't try to sort that one line, but it'll sort everything else around it. And so what we ended up seeing was a lot of developers doing things like isort:skip_file, which just turns off import sorting altogether. One of the principles of usort is "first, do no harm." It's trying its best to make sure these common use cases are treated normally and correctly from the start. In most cases, it's a much safer version of isort. It's not complete.
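To make that hazard concrete, here's a hypothetical script (the names are invented for illustration, not taken from usort or isort) where hoisting the later imports above the function call would change behavior:

```python
# Hypothetical example of the pattern described above. An import sorter
# that blindly groups all imports together would move the two imports at
# the bottom above the disable_network() call, defeating its purpose.

import socket

def disable_network() -> None:
    # Sketch: block outbound connections for the rest of the run by
    # replacing socket.socket.connect with a function that always raises.
    def guard(*args, **kwargs):
        raise RuntimeError("network access disabled")
    socket.socket.connect = guard

disable_network()

# These imports must stay *below* the call above; if a sorter hoists
# them, any import-time network access they perform is no longer blocked.
import json
import urllib.request
```

This is only a sketch of the ordering problem; real network-blocking fixtures (in test frameworks, for example) are more thorough, but the import-ordering constraint is the same.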
It's not a 100% replacement, but it's the thing we've been using internally, and it's one of the cases where I'm proud of the way that we're helping to build better tools for the ecosystem. Yeah, I never really thought about that problem. One thing that does drive me crazy is sometimes I'll need to change the Python path so that future imports behave the same regardless of your working directory, if you don't have a package or something like that. Right, something simple. That's super common in the AI and ML type of workflows. Yeah, and I get all these warnings, like, you should not have code before an import. Well, but this one is about making the import work. If I don't put this, it's going to crash for some people if they run it weirdly. Right. So yeah, interesting. Very, very cool project. Nice. All right, so let's dive into async, huh? Sure. Yeah, so maybe a little bit of history. It's hard to talk about asynchronous programming in Python without touching on the GIL, the global interpreter lock. It's often spoken of as a bad word, but it's not necessarily bad. It has a purpose; it's just that its purpose is somewhat counter to making asynchronous code run really quickly and in parallel. I mean, it's one of those things where, if you imagine what Python would be without the global interpreter lock, you end up having to do a lot more work to make sure that, let's say if you had multithreaded stuff going on, threads aren't clobbering some shared data. Look at the synchronization and everything else you have to do in Java or C++. Yeah, we don't generally need that in Python, because the GIL prevents a lot of that bad behavior. And the efforts to remove the GIL have been ongoing for the past eight to ten years.
In every single case, once you remove that GIL and add a whole bunch of other locks, the whole system is actually slower. Right, so this is one of those things where it does cause problems, but it also enables Python to be a lot faster than it would be otherwise. And probably simpler. Yeah. So the global interpreter lock, when I first heard about it, I thought of it as a threading thing, and it sort of is. But primarily it says, let's create a system so that we don't have to take locks as we increment and decrement the reference count on variables. So basically, all the memory management can happen without the overhead of taking a lock, releasing a lock, all that kind of weirdness. Yeah. So we've got a bunch of early attempts, and then threading and multiprocessing have been around for a while; there's even Tornado. But then around, I guess it was Python 3.4, we got asyncio, which is a little bit of a different flavor than the computational threading or the computational multiprocessing. Async is actually an interesting kind of throwback to the way that computing happened in the '80s and early '90s, with Windows 3.1 or classic Mac OS, where essentially you run your program or your process, and you actually have to cooperatively give up control of the CPU in order for another program to work. So there'd be a lot of cases where, if you had a badly behaving program, you'd end up not being able to do multitasking in these old operating systems, because it was all cooperative. In the case of asyncio, it's essentially taking that mechanism, where you don't need to do a lot of context switching in threads or in processes, and you're essentially letting a bunch of

15:00 functions cooperatively coexist. Essentially, when your function gets to a point where it's doing a network request and it's waiting on that request, your function will nicely hand control back over to the asyncio framework, at which point the framework and event loop can go find the next task to work on that's not blocked on something. Yeah. And it very often doesn't involve threads at all, or just the one main thread, right? Yeah, it's like a way to get the benefits of threading without threads; it's a way to allow stuff to happen while you're otherwise waiting. Yeah, in the best case, you only ever have the one thread. Now in reality, it doesn't quite work like that, because a lot of our modern computing infrastructure is not built in an async way. If you look at file access, there's basically no real way to do that asynchronously without threads. But in the best case, like network requests and so forth, if you have the appropriate hooks from the operating system, that can all be completely in one thread. And that means you have a lot less overhead from the actual runtime and process from the operating system, because you're not having to constantly throw a whole bunch of memory onto a stack, then pull memory from another stack, and try to figure out where you were when something interrupted you in the middle of 50 different operations.
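The cooperative hand-off described here can be sketched with nothing but the standard library. Each task gives control back to the event loop at every `await`, so three simulated requests wait concurrently on a single thread:

```python
# Minimal sketch of cooperative multitasking with asyncio (stdlib only).
# asyncio.sleep stands in for waiting on real I/O, like a network request.

import asyncio

async def fetch(name: str, delay: float) -> str:
    # At this await, the coroutine yields to the event loop, which can
    # then run whichever other task is ready.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list:
    # All three "requests" wait concurrently on one thread; gather
    # returns results in argument order, not completion order.
    return await asyncio.gather(
        fetch("a", 0.03), fetch("b", 0.01), fetch("c", 0.02)
    )

results = asyncio.run(main())
print(results)  # ['a done', 'b done', 'c done']
```

The total wall time is roughly the longest single delay, not the sum, which is the whole point of scaling while you wait.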

16:16 This portion of Talk Python To Me is sponsored by Linode. Simplify your infrastructure and cut your cloud bills in half with Linode's Linux virtual machines. Develop, deploy, and scale your modern applications faster and easier. Whether you're developing a personal project or managing large workloads, you deserve simple, affordable, and accessible cloud computing solutions. As listeners of Talk Python To Me, you'll get a $100 free credit. You can find all the details at talkpython.fm/linode. Linode has data centers around the world with the same simple and consistent pricing, regardless of location. Just choose the data center that's nearest to your users. You also receive 24/7/365 human support with no tiers or handoffs, regardless of your plan size. You can choose shared and dedicated compute instances, or you can use your $100 in credit on S3-compatible object storage, managed Kubernetes clusters, and more. If it runs on Linux, it runs on Linode. Visit talkpython.fm/linode, or click the link in your show notes, then click that Create Free Account button to get started.

17:21 Right, if it starts swapping out the memory it's touching, it might swap out what's in the L1, L2, L3 caches, and that has a huge performance impact. And it's just constantly cycling back and forth, out of your control a lot of the time, right? Yeah. In a lot of our testing internally, when I was working on things that would talk to lots and lots of servers, we would hit a point somewhere between 64 and 128 threads where we would actually start to see less performance overall, because the process just spends all of its time trying to context switch between all of these threads. Right, you're interrupting these threads at an arbitrary point in time, because the runtime is trying to make sure that all the threads are serviced equally. But in reality, half of these threads don't need to be given the context right now. So by doing those sorts of interrupts and context switches when the runtime wants to, rather than when the functions or requests want to, you end up with a lot of suboptimal behavior. Yeah, interesting. And also things like locks and mutexes don't work the same way, because they're about which thread has access, and all the code is on one thread. So to me, the real Zen of asyncio, at least for many really solid use cases, kind of like we touched on, is that it's all about scaling when you're waiting. Yeah. While I'm waiting on something else, it's completely free to just go do more: if I'm calling microservices or external APIs, if I'm downloading something, or uploading a file, or talking to a database, or even maybe accessing a file with something like aiofiles. Yeah. There's a cool list called Awesome asyncio, by timofurrer, that's pretty cool. Have you seen it? I have looked at it in the past. I end up spending so much time looking at and building things that I haven't actually gotten a lot of opportunity to use a bunch of these.
Most of the time, I'm actually not working high enough on the stack to make use of them. Right, right. A lot of these are more frameworks. You do have some other neat things in there as well, like AsyncSSH; I hadn't heard of that one. But anyway, I'll put that in the show notes. It's got, I don't know, 50 or 60 libraries and packages for solving different problems with asyncio, which is pretty cool. Yeah, whenever I talk about asyncio, one of the things I love to give a shout-out to is this thing called unsync. Have you heard of unsync? I had not heard about it until I looked at the show notes, but it sounds a lot like some of the things that I've seen people implement in a lot of different cases. It's filling a very common use case, like I was saying earlier, where people want to mix asyncio into an existing synchronous application. You do have to be very careful about how you do that, and especially vice versa. A lot of the stumbling blocks we've seen tend to be cases where you have

20:00 synchronous code that calls some async code that then wants to call some synchronous code on another thread so that it's not blocked by it. And you end up getting this in-and-out sort of thing where you have nested layers of asyncio. I'm not sure how much this may or may not solve that. I think this actually helps some with that as well. Basically, the idea is there are two main things it does that I think are really neat. One, it's a unifying layer across multiprocessing, threading, and asyncio. You put a decorator onto a function: if the function is an async function, it runs it on asyncio; if it's a regular function, it runs it on a thread; and if you say it's a regular function but it's computational, it'll run it on multiprocessing. But it gives you basically an async-and-await API for all of them, and it figures out how to run the loop. Anyway, it's pretty cool. Not what we're here to talk about, but it's definitely worth checking out while we're on the subject. Ultimately, it gives you just a future that you can then either await or ask for the result from, right? Yeah, exactly. And the result: instead of saying, I'm going to wait until it's finished before you can get the result, you just go, give me the result, and if it needs to, it'll just block. So it's a nice way to sort of cap the asyncio. You know, one of the challenges of asyncio is, well, five levels down the call stack this thing wants to be async, so the next thing is async, so the next thing is async, and all of a sudden everything's async, right? And so with something like this, I mean, you could do it yourself as well: you can just go create an event loop, run it, and say, at this level we're not going to be async; above it, we're coordinating stuff below using asyncio, and here's where it stops.
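The "cap the asyncio at one level" idea can be hand-rolled with just the standard library. This is only a sketch of the pattern being described, not unsync's API; unsync packages a similar idea behind a decorator:

```python
# Sketch: a synchronous function whose internals use asyncio for
# concurrency. The event loop lives and dies entirely inside fetch_all,
# so callers never have to become async themselves.

import asyncio

async def fetch(name: str) -> str:
    await asyncio.sleep(0.01)  # stands in for real I/O
    return f"{name}: ok"

def fetch_all(names: list) -> list:
    """Synchronous API: this is where the async call stack stops."""
    async def runner() -> list:
        return await asyncio.gather(*(fetch(n) for n in names))
    return asyncio.run(runner())

# From the caller's point of view, this is plain synchronous code.
print(fetch_all(["a", "b"]))  # ['a: ok', 'b: ok']
```

One caveat with this hand-rolled version: `asyncio.run` can't be called from code that is already inside a running event loop, which is exactly the nesting problem discussed above.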
Yeah, it sounds like a nicer version of what I see dozens of when you have lots and lots of engineers who aren't actually working on the same codebase together, but they're all in the same repository, and we end up seeing cases where everybody has solved the same use case. I do think this would be useful, and I'm actually planning on sharing it with more people. Yeah, check it out. It's, in total, I think 126 lines of Python in one file, but it's really cool, this unifying API. All right, I guess that probably brings us to Omnilib. I want to talk about that for a little bit. This is what I thought would be fun to have you on the show to really focus on: asyncio, but then also you've created this thing, the Omnilib Project, which solves four different problems with asyncio, and obviously you can combine them together. I would expect the origins of this really are, aiosqlite was the first thing that I wrote that was an async framework, and then I built a couple more. And at one point I realized these projects were actually getting really popular and people were using them, but they were just one of the hundred things on my GitHub profile and graveyard. So I really felt like they needed their own separate place, for, these are the projects that I'm actually proud of. I thought that was a good opportunity to make a dedicated project or organization for them, and essentially say that everything under it is guaranteed to be developed under a very inclusive code of conduct that I personally believe in, and at the same time make it more welcoming and supportive of other contributors, especially newcomers or otherwise marginalized developers in the ecosystem, and try to be as friendly as possible with it. This is something that I tried to do beforehand.
And it just never really got formalized on any of my projects, other than, here's a code of conduct file in the repository. Yeah. But this is really one of the first times where I wanted to put all these together and make sure that, whether or not enough people make it a community, it's welcoming from the outset. Right, that's really cool. And you created your own special GitHub organization that you put it all under and stuff like that. So it's kind of the things that have graduated from your personal projects. Yeah, and the threshold I try to follow is, if this is worth making a Sphinx documentation site for, then it's worth putting on Omnilib. They're not all asyncio projects; that just happens to be where a lot of my interest and utility stands, so that's what most of them are, or at least the most popular ones. But there are other projects I have on the back burner that will probably end up there, maybe not as useful as libraries or whatever. But either way, like I said earlier, these are the ones that I'm at least proud of. Nice, cool. So you talked about being there to support people who are getting into open source and having that code of conduct. Other than that, is there a mission behind this? Like, I want to make this category of tools, or solve these types of problems? Or is it just, these are the things that graduated? It's something I've tried to think about. I'm not 100% certain. I would like it to have maybe more of a mission, but at the same time, especially from things I've had to deal with, I don't want this to be a dumping ground of stuff either. Like in the opening statement, I want it to be a group of high-quality projects that are, you know, following the

20:00 code of conduct. So from that perspective, at the moment, my personal interests are always in building things where I find gaps in availability from other libraries. That's probably the closest to a mission of what belongs here: things that haven't been made yet. Yeah. But either way, I just want to have that dedication to the statement of, I want these to be high quality, I want them to be tested, I want them to have continuous integration, and be well documented, and so forth. Yeah, super cool. All right, so there are four main projects here on the homepage. I mean, you do have the attribution one, but that's like a helper tool. Exactly. Let's talk about the things that maybe they're the aio extension of. In Python, we have itertools, which is tools for easily creating generators and such out of collections and whatnot. So you have aioitertools, which is awesome. And then we have multiprocessing, which is a way around the GIL: here's a function and some data, go run that in a subprocess, and then give me the answer. And because it's a subprocess, it has its own separate GIL. So you have aiomultiprocess, which is cool. And then one of the most widely used databases is SQLite, already built into Python, which is super cool. And so you have aiosqlite. And then sort of extending that, that's like a raw SQL library that's asyncio, then you have aql, which is more ORM-like. I'm not sure it's 100% an ORM; you can categorize that for us. But it's like an ORM? Yeah, I've definitely used, in scare quotes, "ORM-light," because I want it to be able to essentially be a combination of well-typed table definitions that you can then use to generate queries against the database.
As of right now, it's more like a DSL that lets you write a backend-agnostic SQL statement. Right, okay. Yeah, DSL, domain-specific language, for those who aren't entirely sure. Yeah. So really, it's essentially just stringing together a whole bunch of method calls on a table object in order to get a SQL query out of it. The end goal is to have that actually be a full end-to-end thing, where you've defined your tables, you get objects back, and then you can call something on the objects to get them to update themselves back into the database. But I've been very hesitant to pick an API for how to actually get all that done, because trying to do that in an async fashion is really difficult to do right. And separately, trying to do asyncio and have everything well typed, it's like two competing problems that have to be solved. Yeah, I just recently started playing with SQLAlchemy's 1.4 beta, with the 2.0-style API, where they're doing the async stuff, and it's quite different from traditional SQLAlchemy. So yeah, you can see the challenges there. And it's also a case where having something to generate the queries, to me, is more important than having the thing that will actually go run the query. Especially for a lot of internal use cases, we really just want something that will generate the query; we already have a system that will talk to the database once you give it a query and parameters. It's the piece of defining what your table hierarchy or structure is, and then being able to run stuff to get the actual SQL query out of it, but have that work for both SQLite and MySQL, or Postgres, or whatever other backend you're using. Having it use the same code and generate the correct query based off of which database you're talking to is the important part. Yeah, cool. Well, there's probably a right order to dive into these.
But since we're already talking about aql a lot, maybe give us an example of what you can do with it. It's hard to talk about code on air, but just give us a sense of what kind of code you write and what kind of things it does for us. This is heavily built around the idea of using data classes. In this case, it specifically uses attrs, simply because that's what I was more familiar with at the time I started building this. Essentially, you create a class with all of your columns specified on that class, with the name and the type. Native types, not SQL types: like id: int and name: str, not sa.Column(sa.String) and so on, right? Yeah, exactly. I want this to look as close to a normal data class definition as possible. You essentially decorate that, and you get a special object back that you use methods on. In this case, the example is you're creating a contact, so you list the integer ID, the name, and the email. The primary key doesn't really matter in this case; whether the ID ends up getting auto-incremented, again, doesn't really matter. What we're really worried about is generating the actual queries, and you're assuming somebody created the table. It's already got a

20:00 primary key for ID, it's auto-incrementing or something like that. Yeah, we just want to talk to the thing. Yeah.
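To make the shape of this concrete, here's a hedged, stdlib-only sketch of the style being described — this is not aql's actual API; the `table` decorator, `Column`, and `Query` names are invented for illustration. It shows a native-typed class declaration, comparison methods that build SQL fragments instead of booleans, and a fluent select/where that renders to a query string plus parameters, with the placeholder chosen per backend (sqlite3 really does advertise the `qmark` style under the DB-API):

```python
import sqlite3
from dataclasses import dataclass, fields

class Column:
    """Operations on a column build SQL fragments instead of booleans."""
    def __init__(self, name):
        self.name = name

    def __eq__(self, value):                  # Contact.email == "x"
        return (f"{self.name} = {{p}}", [value])

    def like(self, pattern):                  # Contact.email.like("%@x.com")
        return (f"{self.name} LIKE {{p}}", [pattern])

def table(cls):
    """Hypothetical decorator: a plain dataclass whose class attributes
    double as query-building Column objects."""
    cls = dataclass(cls)
    for f in fields(cls):
        setattr(cls, f.name, Column(f.name))
    return cls

@table
class Contact:
    id: int
    name: str
    email: str

class Query:
    def __init__(self, table_name):
        self.table_name, self.wheres = table_name, []

    def where(self, fragment):
        self.wheres.append(fragment)          # fluent chaining
        return self

    def render(self, paramstyle="qmark"):
        # The placeholder text depends on the driver's DB-API paramstyle.
        marks = {"qmark": "?", "format": "%s"}
        sql, params = f"SELECT * FROM {self.table_name}", []
        if self.wheres:
            parts = []
            for text, values in self.wheres:
                parts.append(text.format(p=marks[paramstyle]))
                params.extend(values)
            sql += " WHERE " + " AND ".join(parts)
        return sql, params

q = Query("contact").where(Contact.email.like("%@example.com"))
print(q.render(sqlite3.paramstyle))   # ('SELECT * FROM contact WHERE email LIKE ?', ['%@example.com'])
print(q.render("format"))             # ('SELECT * FROM contact WHERE email LIKE %s', ['%@example.com'])
```

The key trick is the same one discussed below: comparisons on column objects return query fragments rather than evaluating to True/False, so ordinary Python expressions accumulate into a backend-agnostic query.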

20:00 Talk Python to Me is partially supported by our training courses. You want to learn Python, but you can't bear to subscribe to yet another service? At Talk Python Training, we hate subscriptions too. That's where our course bundle gives you full access to the entire library of courses for one fair price. That's right, with the course bundle, you save 70% off the full price of our courses, and you own them all forever. That includes courses published at the time of the purchase, as well as courses released within about a year of the bundle. So stop subscribing and start learning at talkpython.fm/everything.

20:00 And so essentially, you take this contact class that you've created, and you can call a select method on it. Then you can add a where method to decide which contacts you want to select; there are other methods for changing the order or limits, or furthermore, if you wanted to do joins or other sorts of things. It kind of expects that you know what general SQL syntax looks like, because you string together a bunch of stuff in the same order that you would with a SQL query. But the difference is that, in this case, when you're doing the where clause, rather than having to write an arbitrary string that says the column name, LIKE, and then some string literal, you're saying Contact.email.like() and then passing the thing that you want to check against. The other alternative is, if you want to look for a specific one, you could say Contact.email == and then the value you're looking for. And so you're kind of using — or abusing — Python's expression syntax to essentially build up your query. It's definitely a domain-specific language in this case, but you essentially have this fluent API: once you string all this together, you have this query object, which you can then pass to the appropriate engine to get an actual finalized SQL query and the parameters that would get passed if you were doing a prepared query. But the goal was, in the future, you would also potentially be able to manage your connection with aql, and basically be able to tell it to run this query on that connection. And regardless, you'd be able to do this the same with SQLite, or MySQL, or whatever, and the library is the part that handles deciding what specific part of the incompatible SQL dialects that they all use will actually be available, right? Yeah, like, for example, MySQL uses question mark for the parameters. Yeah, SQL Server uses,
I think, @parameter_name — they all have their own little style that's not the same, right? Yeah. And some of that is kind of moot, because most of the engine libraries that we use commonly in Python, like MySQL or SQLite or whatever, are already kind of unified — there's a specific PEP that defines what the database interface is going to look like, the DB-API 2. Yes. So some of that work has already been done by the PEPs and by the actual database engines. But there are a lot of cases where it's a little bit more subtle, like the semantics, especially around using a LIKE expression: MySQL does case-insensitive matching by default, but SQLite doesn't. aql tries to kind of unify those where possible, but there are also cases, especially when you're getting into joins or GROUP BYs, things like that, where the actual specific syntax being used will start to vary between the different backends. And that's where we've had more issues. Especially, the whole point of SQLite for a lot of people is as a drop-in replacement for MySQL when you're running your unit tests. So you want your code to be able to do the same thing regardless of what database engine it's connected to, and this is one way to do that. Okay, that's cool. Yeah, with SQLite, you can say the database lives in :memory:. Yeah, exactly. And then you can just spin it up for your unit tests, and then it just goes away. Nice. So maybe that brings us to the next one, aiosqlite. Sure. This one is all about asyncio — you can see from the example here. You want to tell us about that? Yeah, this was again born out of a need for using SQLite, especially in testing frameworks and so forth, to replace MySQL. And essentially, what I was doing was taking the normal SQLite API from Python and essentially saying, how would this look in an asyncio world?
Like, if we were reimplementing SQLite from the ground up in an asyncio world, how could we do that? And essentially, in this case, we're heavily using async context managers and awaitables in order to actually run

20:00 the database connection to SQLite on a separate thread, and provide as much of an async interface to that as possible. So when you connect to a SQLite database, it spawns a background thread that actually uses the standard sqlite3 library to connect to your database. And then it has methods on that thread object that allow you to actually make calls into that database, and those are essentially proxied through futures. So if you want to execute a query, when you await that query execution, it will basically queue the function call on the other thread, and basically tell it, here's the future to set when the result is ready. So once the SQLite execution, or cursor or whatever, has actually completed doing what it's supposed to do on that background thread, it then goes back to the original thread's event loop and says, set this future to finished. And that allows the task that was originally awaiting to actually come back and do something with the result. Yeah, it sounds a little tricky, but also super helpful. And people might be thinking, why didn't we just talk about the GIL, and how threading doesn't really add much? But when you're talking over the network, or you're talking to other things, a lot of times the GIL can be released while you're waiting on the internal SQLite or something like that, right? Yeah. So the sqlite3 library on its own will release the GIL when it's calling into the underlying SQLite C library — that's where it's waiting. So that's good. Yeah. The other side of this is that it's one thread. I'm not really aware of anybody who's opening hundreds of simultaneous connections to a SQLite database the way that people expect to do with, say, HTTP or things like that. So while it is potentially less efficient if you wanted to do a whole bunch of parallel SQLite connections, the problem really is that SQLite itself is not thread safe.
So it has to have a dedicated thread for each connection; otherwise, you risk corruption of the backing database. Which sounds not good, right? Yeah, basically you end up with two threads clobbering each other. Or, more specifically, what SQLite says is, if you try to talk to a connection from a different thread, the Python module will complain — unless you've specifically told it, no, please don't complain, I know it's unsafe, at which point SQLite will be really upset if you try to do a write or modification to that database. So there are layers of protections against that, but it is one of the underlying limitations that we have to deal with in this case. So if you wanted to have simultaneous connections to the same database, you really have to spin up multiple threads in order to make that happen safely. You could always do some kind of thread pool type thing, like, we're only going to allow eight connections at a time, and you're just going to block until one of those becomes free and finished, or whatever, right? It's definitely a tricky thing. So the expected use case with aiosqlite is that you'll share the database connection between multiple workers. In the piece of your application that starts up, it would make the connection to the database and store that somewhere, and then essentially pass that around. And aiosqlite is basically expecting to use a queue system to say, whoever gets the query in first is the one that gets to run it first, and whoever asks for a query second is the second one to get it. So you're still doing it all on one thread, and it's slightly less performant that way, but it's at least safe, right? And still asynchronous, at least. Yeah, that's good. Very nice.
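The dedicated-thread-plus-futures mechanism described here can be sketched with just the standard library — this is not aiosqlite's actual implementation or API, just a minimal stdlib illustration of the pattern: one thread owns the connection, callers queue work as (function, future) pairs, and results come back to the event loop via `call_soon_threadsafe`:

```python
import asyncio
import queue
import sqlite3
import threading

class ConnectionThread(threading.Thread):
    """One thread owns the SQLite connection; callers submit work items
    and await futures that the thread resolves when the work finishes."""

    def __init__(self, path):
        super().__init__(daemon=True)
        self._path = path
        self._work = queue.Queue()

    def run(self):
        db = sqlite3.connect(self._path)   # created and used on this thread only
        while True:
            item = self._work.get()
            if item is None:
                break
            fn, future, loop = item
            try:
                result = fn(db)
            except Exception as exc:
                loop.call_soon_threadsafe(future.set_exception, exc)
            else:
                # Hop back to the event loop's thread to resolve the future.
                loop.call_soon_threadsafe(future.set_result, result)
        db.close()

    def submit(self, fn):
        loop = asyncio.get_running_loop()
        future = loop.create_future()
        self._work.put((fn, future, loop))
        return future                      # awaitable from the asyncio side

    def stop(self):
        self._work.put(None)

async def main():
    conn = ConnectionThread(":memory:")
    conn.start()
    await conn.submit(lambda db: db.execute("CREATE TABLE t (x)"))
    await conn.submit(lambda db: db.execute("INSERT INTO t VALUES (1), (2)"))
    row = await conn.submit(lambda db: db.execute("SELECT sum(x) FROM t").fetchone())
    conn.stop()
    return row[0]

print(asyncio.run(main()))   # 3
```

While one query is in flight on the worker thread, the event loop is free to run other coroutines — which is the whole point of the design.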
And one of the things, looking at your example here, which I'll link in the show notes, of course: Python has a lot of interesting constructs around async and await. A lot of languages — think C# or JavaScript or whatever — it's basically async functions and awaiting function calls, and that's it. But we've got async with, async for — a lot of interesting extensions to working with async and other constructs. Yeah, it actually makes it really nice in some ways. And essentially, these are just syntactic wrappers around a whole bunch of magic methods on objects, right — like await the enter, do your thing, then await the exit, right? The nice part is that, for some amount of extra work in the library — setting up all those magic methods everywhere and deciding the right way to use them — the benefit at the end is that you have this very simple syntax for asynchronously iterating over the results of a cursor. In that case, you don't have to care that after, say, 64 elements of iteration, you've exhausted the local cache, and now SQLite has to go back and fetch the next batch of 64 items. That's transparent to your application. And that's where the coroutine that's iterating over that cursor would then hand back its control of the event loop, and the next coroutine in waiting is essentially able to then wake up and go do its own thing too. Oh, how cool — I didn't even really think of it that way. That's neat. Maybe the next one to
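The async-for-over-a-cursor behavior described above comes down to two magic methods, `__aiter__` and `__anext__`. Here's a small stdlib sketch (a made-up `FakeCursor`, not aiosqlite's real cursor) where batched fetching is hidden from the caller, and the `await` inside `__anext__` is exactly where control yields back to the event loop:

```python
import asyncio

class FakeCursor:
    """Sketch of the protocol behind `async for`: __aiter__/__anext__,
    with batched fetching hidden from the caller, like a DB cursor."""

    def __init__(self, rows, batch_size=2):
        self._rows = rows
        self._batch_size = batch_size
        self._buffer = []

    def __aiter__(self):
        return self

    async def __anext__(self):
        if not self._buffer:
            # Pretend to go back to the database for the next batch; the
            # await here is where this coroutine hands the event loop back.
            await asyncio.sleep(0)
            self._buffer = self._rows[:self._batch_size]
            del self._rows[:self._batch_size]
            if not self._buffer:
                raise StopAsyncIteration
        return self._buffer.pop(0)

async def main():
    out = []
    async for row in FakeCursor([1, 2, 3, 4, 5]):
        out.append(row)
    return out

print(asyncio.run(main()))   # [1, 2, 3, 4, 5]
```

Async context managers work the same way with `__aenter__` and `__aexit__` behind `async with`.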

20:00 touch on would be aiomultiprocess. Sure. It just crossed 1,000 stars today, or recently? Oh yeah, it did, very recently. That's awesome. That's my real pride and joy here, getting all those stars. There's this interesting dichotomy set up between threading and multiprocessing in Python. So with multithreading, you're able to interleave execution, but with the GIL, it means that only one thread can actually be modifying Python objects or running Python code at any given time. So you're essentially limited to one core of your CPU. And these days, that's a big limitation, right? Right, exactly. I see servers on a regular basis that have 64 to 100 cores, so only using one of them is basically a non-starter, and you get a lot of people with pitchforks saying, why aren't we using Rust? And so essentially, the alternative to this is multiprocessing, where you're spinning up individual processes, and each has its own GIL. This does allow you, for CPU-intensive things, to basically use all of the available cores on your system. So if you're crunching a whole bunch of numbers with NumPy or something like that, you could use multiprocessing to saturate all of your cores, no problem. In this case, essentially what happens is it spawns a child process, or forks the child process on Linux, and then it uses the pickle module in order to send data back and forth between the two. And this is great, and it's really transparent, so it's super easy to read and write code for multiprocessing and make use of that. But the issue becomes, if you have a whole bunch of really small things, you start to have a big overhead with pickling the data back and forth, right? And the coordination back and forth is really challenging, right? Yeah.
So if you're pickling a whole bunch of smaller objects, you actually end up with a whole bunch of overhead from the pickle module, where you're serializing and deserializing and creating a bunch of objects, and synchronizing them across those processes. But the real problem is when you start to want to do things like network requests that are I/O bound. In an individual process with multithreading, you could probably do 60 to 100 simultaneous network requests, right? And you guys maybe have more than 60 servers, for sure, right? But if you're trying to do this with multiprocessing instead, where you have a process pool and you give it a whole bunch of stuff to work on, each process is only going to work on one request at a time. So you might spin up a process, and it waits for a couple seconds while it's doing that network request, and then it sends it back, and you haven't really gained anything. So if you actually really want to saturate all your cores, now you need a whole bunch more processes. And that then has the problem of a lot of memory overhead, because even if you're using copy-on-write semantics with forking, the problem is that Python goes and touches all the refcounts on everything and immediately removes any benefit of copy-on-write in forked processes. Right, which would otherwise do the shared memory thing, right? So if I create one of these things, like 95% of the memory might just be one copy, but if you start touching refcounts and all sorts of stuff — Instagram went so far as to disable the garbage collector to prevent that kind of stuff. It's insane. Yeah, so it turns out that if you fork a process, as soon as you get into that new process, Python touches like 60 to 70% of the objects in the pool of memory, which basically means that it now has to actually copy all of the memory for all of those objects.
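CPython eventually grew a tool aimed at part of this copy-on-write problem: `gc.freeze()`, added in Python 3.7 and motivated by exactly the Instagram-style pre-forking workload mentioned here. It moves every currently tracked object into a permanent generation that the cyclic garbage collector never scans, so the collector stops dirtying those pages after a fork (it doesn't stop ordinary refcount updates, which are a separate issue). A minimal sketch:

```python
import gc

# Allocate some objects that the cyclic collector would normally examine.
data = [[i] for i in range(1000)]

gc.freeze()                      # move all tracked objects to a permanent
frozen = gc.get_freeze_count()   # generation that the GC will never scan
print(frozen > 0)                # True

# In a pre-forking server you would call os.fork() here; the frozen pages
# stay copy-on-write shared because the collector no longer walks them.

gc.unfreeze()                    # undo, for the sake of this demo
print(gc.get_freeze_count())     # 0
```

The intended pattern is: warm up the parent process, call `gc.freeze()` once, then fork the workers.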
And so you don't actually get to share that much memory between the child and the parent process in the first place. So if you try to spin up, say, 1,000 processes in order to saturate 64 cores, you are wasting a lot, a lot of memory. So that's where I built this piece, aiomultiprocess, where essentially what it's doing is spinning up a process pool, and it only spins up one process per core. And then on each child process, it also spins up an asyncio event loop. And rather than giving a normal synchronous function as the thing that you're mapping to a whole bunch of data points, you give it a coroutine. And in this case, what aiomultiprocess is capable of doing is essentially keeping track of how many in-flight coroutines each child process is executing. So essentially, if you wanted to have 32 in-flight coroutines per process, and you had 32 processes, then of course you have whatever 32 times 32 is — I can't do that in my head, because I'm terrible at math. Essentially, you get the product of those two numbers, and that's the number of actual concurrent things that you can do with aiomultiprocess. So the idea is, instead of creating a whole bunch of one-off "run this thing with these inputs over there", you say, well, let's create a chunk — let's go 32 here, 32 there — and run them, but do that in an async way, so you're scaling the wait times. Yeah, exactly, right? Because you're probably doing network stuff in this world. Yeah. And the benefit of this is essentially you're combining

20:00 the benefits of asyncio with the benefits of multiprocessing. So — for math that's easier for me to figure out — in reality, what we've seen is that you can generally do somewhere around 256 concurrent network requests with asyncio on a single process before you really start to overload the event loop. Have you looked at some of the other event loop implementations, like uvloop or any of those alternate event loops? So uvloop can make things faster, but the things that it makes faster are parts of the process like parsing network request headers. The real problem, at the end of the day, is that the way the asyncio framework and event loops work is that each task you give them basically gets added to a round-robin queue of all the things the loop has to work on. So at the end of the day, if you want to run 1,000 concurrent tasks, that's 1,000 things that it has to go through in order before it gets back to any one task, right? It's going around asking, are you done? Are you done? Yeah, something like that, basically. And if you're doing anything with the result of that network request before you actually return the real result from your coroutine, then you're almost certainly going to be starving the event loop — or starving other coroutines on the same event loop — of processing power. And so what we've seen, actually, is you end up with cases where you technically time out the request, because it's taken too long for Python or asyncio to get back to the network request before it hits like a TCP interrupt or something like that. That's interesting. Yeah. So this way, you could say, well, throw 10 processes at it and make that shorter. If you're willing to run 256 network requests per process, and you have 10 processes, or 10 cores, then suddenly you can run 2,500 network requests simultaneously from asyncio and Python.
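The architecture being described — one process per core, each running its own event loop over a chunk of the work — can be sketched with the standard library alone. This is not aiomultiprocess's actual API (which gives you a `Pool` with `map` and the in-flight bookkeeping directly); it's a stdlib illustration of the layering, using the fork start method so nothing needs to be pickled:

```python
import asyncio
import multiprocessing

async def fake_request(host):
    # Stand-in for a network call; real work would await socket I/O here.
    await asyncio.sleep(0)
    return len(host)

async def handle_chunk(hosts):
    # Within one process, asyncio provides the concurrency...
    return await asyncio.gather(*(fake_request(h) for h in hosts))

def worker(hosts, out):
    # ...and each worker process runs its own event loop (and its own GIL).
    out.put(asyncio.run(handle_chunk(hosts)))

def run_pool(hosts, n_procs=2):
    ctx = multiprocessing.get_context("fork")   # fork: child inherits the target
    out = ctx.Queue()
    chunks = [hosts[i::n_procs] for i in range(n_procs)]
    procs = [ctx.Process(target=worker, args=(c, out)) for c in chunks]
    for p in procs:
        p.start()
    # Drain results before joining, then join the workers.
    results = [r for _ in procs for r in out.get()]
    for p in procs:
        p.join()
    return sorted(results)

if __name__ == "__main__":
    hosts = ["a.example", "bb.example", "ccc.example", "dddd.example"]
    print(run_pool(hosts))   # [9, 10, 11, 12]
```

Total concurrency is the per-process asyncio limit times the number of processes, which is exactly the multiplication described above.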
At that point, you're probably saturating your network connection — unless you're talking to mostly local hosts. At Facebook, when you're talking about a monitoring system, that's actually what you're doing: you're almost certainly talking to things that have super low latency and super high bandwidth. And so this was essentially the answer to that: run asyncio event loops on a whole bunch of child processes, and then do a bunch of really smart things to balance the load of the tasks that you're trying to run across all of those different processes, in order to try and make them execute as quickly as possible. And then also, whenever possible, try to reduce the number of times that you're serializing things back and forth. So one of the other common things that having more processes enables you to do is actually do some of the work to process, filter, and aggregate that data in those child processes, rather than pickling all the data back to the parent process and then dealing with it and aggregating it there. Right, because you've already got that scale-out for CPU cores. Yes. So it kind of gives you a local version of MapReduce, where essentially you're mapping work across all these child processes, and then inside each batch or whatever, you're aggregating that data into the result that you then send back up to the parent process, which can then process and aggregate that data further. Yeah, super cool. And you gave a talk on this at PyCon in Cleveland, one of the last actual in-person PyCons. Yeah, the first one I'd ever attended, and the first one that I've ever given a talk at. Yeah, that was a good one, that one in Cleveland. Yeah, the room was absolutely massive and terrifying, and I don't know how I managed to do it at all. Yeah, you just kind of block it out. Block it out — but now it's all good. Cool. Yeah. So I'll link to that as well; people can check that out.
And it really focuses on this aiomultiprocess part, right? Yeah. Nice. All right. Last of the big asyncio things at Omnilib is aioitertools. Yeah. So you kind of hinted at this before — itertools is mostly a bunch of helpers that let you process lists of things, or iterables, in nicer ways. And aioitertools is basically taking the built-in functions — iterating, getting the next thing from an iterable, or mapping, or chaining between multiple iterables, or whatever — and essentially bringing that into an async-first world. So all of the functions in aioitertools will accept both normal standard iterators, or lists or whatever, as well as async iterables or generators. And essentially, it up-converts everything to an async iterable, and then gives you async iterable interfaces to work on these. So I know how to create a generator with yield — I have a function, it does a thing, and then it goes through some process and says yield an item, like, here's one of the things in the list. That's already really good because it does lazy loading, but it doesn't scale the waiting time, right? Yeah. So for the async generator, what's the difference there? In this case, you just declare the function with async def, and then have

20:00 a yield statement, and it creates an async generator, which is just an async iterable object. Similar to how, when you call a coroutine, you get an object, but it doesn't actually run until you await it — with an async generator, calling it creates the generator object, but it doesn't even start running until you actually use async for, or some other async iteration, to iterate over it. If you're using the async iterator, you still get the lazy loading of everything, like with a normal generator, but you also have the potential for your thing to be interrupted. The common use case here, or the expected use case, would be if you're doing something like talking to a whole bunch of network hosts, and you want to return the results as they come in as an async iterable — then you could use something like aioitertools to do things like batch up those results, or run another coroutine across every result as it comes in, things like that. The other added benefit in here is that there's also a concurrency-limited version of gather. So as I said earlier, when you have a whole bunch of tasks, you're actually making the event loop do a whole bunch more work. One of the common things I've seen is that people will spawn 5,000 tasks, and they'll all have some semaphore that limits how many of them can execute at once, but you still have 5,000 tasks that the event loop is trying to service. And so you're giving it a whole bunch of overhead: every time it wants to switch between things, it's got to potentially go through up to 5,000 of them before it gets to one that it can actually service. So the concurrency-limited version of gather that aioitertools has lets you specify some limit, like, only run 64 things at a time.
And so it will try to fetch the first 64 things of all of the coroutines that you give it, and it will start to yield those values as they come in. But essentially, it's making sure that the event loop will never see more than 64 active tasks at a time, at least from that specific use of it. Yeah, and the rest are just hanging out in memory — they don't really get thrown into the running task list. So one of the challenges, or criticisms almost, I've seen around asyncio is that it doesn't allow for any back pressure, right? Like, if I'm talking to a database, it used to be that the web front end had some kind of performance limit and could only go so hard against the database. But if you just await it, all the traffic just piles in until it potentially can't take it anymore. And it sounds like this has some mechanisms to address that. Yeah, generally speaking, that's at least the general intent of it — to be able to use this concurrency limit to try and prevent overloading either the event loop or your network or whatever. So even if you have 5,000 items, by setting the limit to 64, you know that you're only going to be doing that many at a time. And then the result of that concurrency-limited gather is its own async iterable, so you could also combine that with things like chain, or other things, in order to mix that in with the rest of the aioitertools functional lifestyle, if you will. Yeah, yeah, super cool. I can imagine that these might find some way to work together — you might have some aioitertools thing that you then feed off to aiomultiprocess or something like that. Do you put these together? Yeah, exactly. These are definitely a whole bunch of tools that I've put together in various different use cases. Yeah, very neat. All right.
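The concurrency-limited gather idea can be sketched with plain asyncio. This is not aioitertools' implementation (its version also preserves input order, which this toy deliberately ignores for simplicity) — just a stdlib illustration of the key property: at most `limit` tasks ever exist, so the event loop never round-robins over thousands of pending tasks:

```python
import asyncio

async def bounded_gather(coros, limit):
    """Never materialize more than `limit` tasks at once; top up the
    working set from a lazy source of coroutines as tasks finish.
    Note: results come back in completion order, not input order."""
    coros = iter(coros)
    results = []
    pending = set()
    while True:
        # Top up to the limit from the (lazy) iterator of coroutines.
        while len(pending) < limit:
            try:
                pending.add(asyncio.ensure_future(next(coros)))
            except StopIteration:
                break
        if not pending:
            break
        done, pending = await asyncio.wait(
            pending, return_when=asyncio.FIRST_COMPLETED
        )
        results.extend(t.result() for t in done)
    return results

async def main():
    async def work(i):
        await asyncio.sleep(0)
        return i * i
    # 100 coroutines supplied lazily, but at most 8 tasks alive at a time.
    return await bounded_gather((work(i) for i in range(100)), limit=8)

print(sorted(asyncio.run(main())) == sorted(i * i for i in range(100)))   # True
```

Because the source is consumed lazily, unstarted coroutines are never even created — they really are "just hanging out" until a slot opens, which is where the back-pressure effect comes from.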
Well, we're getting quite near the end of the show. I think we've talked a lot about these very, very cool libraries. So before we get out of here, though — we touched on this at the beginning, but I'll ask you this as one of the two main questions at the end of the show. If you're going to write some Python code, what editor do you use? The snarky answer is anything with a Vim emulation mode. That was the thing that I learned in college, and I specifically avoided answering that earlier when we were talking about it. But that's what I learned when I was writing a whole bunch of PHP code, and that's what I used for four years. And then eventually I found Sublime Text, and I really liked that, but it kind of seemed to go dead in the water. Atom came out, but Atom was slow. And so these days I'm using VS Code, primarily because it has excellent Python integration, but also because Facebook built a lot of things that we used to have on top of Atom, called Nuclide, which had especially a lot of remote editing tools. We've rebuilt a lot of those on top of VS Code, because VS Code is faster and nicer, and has better ongoing support from the community and so forth. Nice. Yeah, VS Code seems like the natural successor to Atom. Yeah. And like I said before, I had tried PyCharm at one point, but it's one of those cases where I touch just enough stuff that's not Python that I really want my tools to work and function the same way regardless. And so VS Code has the better sort of broader language support

20:00 for work, where some days I just have to write a bash script, and I want it to be able to do nice things for bash. Or I use it as a Markdown editor, and it has a Markdown preview, things like that. Yeah. All right, cool. Sounds good. And then, notable PyPI package? I mean, I guess we spent a lot of time on four of them, right? Yeah. We've also talked about µsort. Yeah. So the joke answer is, I have a package called aioseinfeld that's built on top of aiosqlite. And essentially, you give it a database of Seinfeld scripts, and you can search for things by actor or by keyword of what they're saying, and it will essentially give you back some elements of dialogue from a script that contains your search query. And this is powering a site I have called seinfeldquote.com, which is basically just a really old Bootstrap template that lets you search for pieces of Seinfeld quotes. I also implemented a chat bot in Discord for some of my friends that uses this. The more serious answer would be the other one that we didn't talk about from Omnilib, which is attribution. It's essentially a quick program to automate the generation of changelogs, and to automate the process of cutting a release for a project. And so I use this on all of the Omnilib projects. Essentially, I type one command — attribution release, I'm sorry, attribution tag — and then a version number, and it will drop a __version__.py in the project directory, it will create a git tag, it lets you then type in what you want the release notes to be (it's assuming Markdown format), and then once it's made that tag, it regenerates the changelog for that tag and re-tags that appropriately. And so you get this really nice thing where the actual tag of the project has both the updated changelog and the appropriate version number file. So you only ever type the version in once; you only ever type the release notes in once.
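The core of that workflow is small enough to sketch — this is not attribution's actual implementation (the `cut_release` helper here is invented, and a real tool would also shell out to `git tag` and regenerate the full changelog), just a toy showing the "write the version file once, write the notes once" idea:

```python
import tempfile
from pathlib import Path

def cut_release(project_dir, version, notes):
    """Toy sketch of a release helper in the spirit of attribution:
    drop a __version__.py and return the new changelog entry.
    (A real tool would also create the git tag and rewrite CHANGELOG.md.)"""
    version_file = Path(project_dir) / "__version__.py"
    version_file.write_text(f'__version__ = "{version}"\n')
    entry = f"## {version}\n\n{notes}\n"
    return entry

with tempfile.TemporaryDirectory() as d:
    entry = cut_release(d, "1.2.0", "- Added async support")
    print((Path(d) / "__version__.py").read_text().strip())  # __version__ = "1.2.0"
    print(entry.splitlines()[0])                             # ## 1.2.0
```

The payoff is the single-source-of-truth property described above: the version and release notes are typed once and everything else is derived.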
And it gives you as much help and automation around that as possible. Oh yeah, okay, very cool. That's a good one. All right, final call to action. People are excited about asyncio, maybe some of the stuff at Omnilib — they want to get started. What do you tell them? If they want to get started on the projects, going to omnilib.dev is the easiest way to find the ones that are currently hosted under the project. We're always welcoming of code review from the community. So even if you're not a maintainer, if you are interested in reviewing pull requests and giving feedback on things, we're always welcoming of that — there's never enough time in my own personal life to review everything or respond to everything. Otherwise, if there are things in these projects that you are interested in adding, like new features, or fixing bugs, or whatever, either open an issue or just create a pull request, and I am more than happy to engage in design decisions or discussions or whatever. Ideally, open an issue first, to make sure you're not wasting your time on a pull request that's going in the wrong direction. Right, right. Because people might have an idea that's really inconsistent with where the project is going — yeah, exactly. So even if it's perfect, you can't accept it, right? Right. If it's just a bug fix or something, then it's probably just worth creating a pull request, and I'm not going to bite your head off. But otherwise, the only other thing I'd say is that LGBTQ things are very personal to me, and so I would ask that, if you're in a position to do so, you please donate to an LGBTQ charity that will help the community. There are two that I really like. One is called Power On, and that's a charity that donates technology to LGBTQ youth that are homeless or disadvantaged, and they're at poweronlgbt.org.
And then the other one is the Trevor Project, which is crisis intervention and a suicide hotline for LGBTQ youth, and that's at thetrevorproject.org. Yeah, awesome. Those are just two examples, but there are plenty — worst case, just donate to a food bank near you. Cool. Yeah, that's great advice, a great collection. Seems like your projects are also really open to new contributors, people getting into open source, so participating in that way seems like a great thing. Fantastic. All right, John. Well, thank you so much for being on Talk Python. It's been great to have you here. Thank you for having me. I really appreciate it. This has been another episode of Talk Python to Me. Our guest in this episode was John Reese, and it's been brought to you by Linode and Talk Python Training. Simplify your infrastructure and cut your cloud bills in half with Linode's Linux virtual machines — develop, deploy, and scale your modern applications faster and easier. Visit talkpython.fm/linode and click the Create Free Account button to get started. Level up your Python: we have one of the largest catalogs of Python video courses over at Talk Python. Our content ranges from true beginners to deeply advanced topics like memory and async. And best of all, there's not a subscription in sight. Check it out for yourself at training.talkpython.fm. Be sure to subscribe to the show: open your favorite podcast app and search for Python.

01:00:00 We should be right at the top. You can also find the iTunes feed at /itunes, the Google Play feed at /play, and the direct RSS feed at /rss on talkpython.fm. We're live streaming most of our recordings these days. If you want to be part of the show and have your comments featured on the air, be sure to subscribe to our YouTube channel at talkpython.fm/youtube. This is your host, Michael Kennedy. Thanks so much for listening. I really appreciate it. Now get out there and write some Python code.
