00:00 SQLAlchemy is the most widely used ORM (Object Relational Mapper) for Python developers. It's been around since February 2006, but we might just be in for the most significant release since the first one: SQLAlchemy 2.0. This version adds async and await support, new context-manager-friendly features everywhere, and even a unified query syntax. Mike Bayer is back to give us a glimpse of what's coming and why Python's database story is just getting stronger. This is Talk Python to Me Episode 344, recorded November 10, 2021. Welcome to Talk Python to Me, a weekly podcast on Python. This is your host, Michael Kennedy. Follow me on Twitter, where I'm @mkennedy, keep up with the show and listen to past episodes at 'Talkpython.FM', and follow the show on Twitter via @talkpython. We've started streaming most of our episodes live on YouTube. Subscribe to our YouTube channel over at talkpython.fm/youtube to get notified about upcoming shows and be part of that episode.
01:12 Mike, welcome back to Talk Python to me, it's been a little while. Hi.
01:16 How are you doing?
01:17 I'm doing really well. When we last spoke, we did speak about SQLAlchemy, of course, but it was in April of 2015. That's been a while.
01:27 Was that, like, version 1.1?
01:30 Something like that? Yeah. And that was episode five; the podcast has been going for six years. So, really early days. I appreciate you helping me kick off the show. And can you believe it's still going? That's crazy, huh? Yeah.
01:41 Doing great. I'm glad Python is popular enough that it can have its own podcast. I'm pretty psyched.
01:46 Yes, I'm Super psyched about that as well.
01:48 Maybe.
01:49 Let's start with that. Actually, back then, Python was already popular, right? 2015. We'd already sort of hit that hockey stick growth curve that started around 2012, but even so, does Python's popularity and growth and appeal surprise you these days?
02:06 No, it doesn't surprise me. I'm pleased that it's growing in popularity. The first thing that I saw about Python when I got into it in the early 2000s was that it's really clear, really unambiguous compared to everything else that I'd used, and I always wanted to be involved in a language that, quote unquote, regular people can get involved with, where people at a low level and a high level can do things with it. I didn't want to be focusing on a computer language that would only be for very high-level math whizzes, because I'm not a math person at all. That's why I don't really do Haskell. I worked in different jobs, and I would often — not always, but often — be the person writing all the frameworks and the architectures, and then all the other people were using my frameworks. But a lot of those people were not at the level that I was, and I wanted to make sure that the stuff I did was approachable. Python is very approachable. And now that the academic world and the media and news world are getting very data oriented — surprise, they're all using Python.
03:06 Yeah. Surprise, surprise.
03:08 Yeah. And that's exactly what I would expect. And of course, it might not have gone that way, but it was a pretty good bet. So it's cool that there's a data science profession now, and Python is at the center of it. So I think that's really good.
03:22 It makes a lot of sense that it's at the center of it. I do think it has this special appeal to people, and I think it's made up of a couple of parts. One is you can be very effective with a partial understanding of Python. Like, you could not even understand what a class is, and you could still do meaningful stuff, right? A lot of languages don't have that feature. It has the packages, right?
03:41 Yeah.
03:41 I haven't even checked. Let me check it out. How many packages are on PyPI these days? 338,000. That's a few things to just grab and go with, Lego-block style, some of which you've contributed to up there. So that's pretty awesome — some of the popular ones. Yeah, I think it's fantastic. So good to see people continuing to embrace it. And you mentioned making it more accessible, right? There are different kinds of accessibility. There's, well, I can write for loops and do a list comprehension, and that's pretty easy to understand. There's another kind: I need to talk to a Postgres cluster and make it do things, right? It doesn't matter how simple Python is — when you talk to external systems, you, to some degree, take on the complexity of that external system. And I do think ORMs, and ODMs in the NoSQL world, if you will, really democratize that as well. You can write the code in the language you already know, and it does the database stuff, including the slight mismatches of language features in SQL, like the @parameter_name versus question mark style variations, right?
04:52 Yeah, well, it's interesting. That particular variation isn't even necessary. That's a quirk of the Python DB-API, that they decided to have different parameter styles. So yeah, we wanted to make it so you wouldn't have to worry about that. As far as democratizing, SQLAlchemy was always aiming to democratize as much as it could, to take care of the things that are trivial and that you shouldn't have to worry about — like question mark versus colon-name; you don't have to worry about that stuff. But at the same time, I always came from all these different database shops where we had to use every feature possible.
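To make the paramstyle point concrete, here is a minimal sketch — the SQLite URL, the users table, and the column names are invented for illustration. You write :name placeholders once, and SQLAlchemy renders whatever style the underlying DB-API driver actually wants (qmark, pyformat, and so on).

```python
from sqlalchemy import create_engine, text

# hypothetical database and table, purely for illustration
engine = create_engine("sqlite:///example.db")

with engine.connect() as conn:
    # written once with :name placeholders; SQLAlchemy translates them to the
    # driver's native paramstyle behind the scenes
    rows = conn.execute(
        text("SELECT id, name FROM users WHERE name = :name"),
        {"name": "ed"},
    )
    for row in rows:
        print(row.id, row.name)
```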
05:23 It's always about exposing the database. SQLAlchemy remains pretty different from most tools I've seen in Python, and probably tools in other languages, in that it's not trying to create this abstracted-away, you-can't-really-tell-there's-a-database, vaguely relational object store thing. We really want you to be writing SQL, and you just happen to be able to write it in terms of Python constructs, where you can also write SQL strings. And we're going to try to get that to have as much flexibility as possible.
05:55 There's a couple of interesting philosophies around SQLAlchemy — or a philosophy of SQLAlchemy that you've imbued upon it — and one of them is not to hide the database, right? Because you often end up in this leaky abstraction where it's like, oh, you can just forget there's a database.
06:10 Is it too slow? And now you're into some weird world where it's not a great fit, right. So maybe we could talk a bit about the philosophies a bit before we get into what's new and where things are going.
06:20 Yeah. The basic philosophy has always been — and I read some articles that gave me some terminology for this — one of automation of working with a SQL database. If you're going to write an application that talks to a database and you're going to have 20 tables, and say you're not using any ORM, you're just going to write the SQL, you're going to find yourself writing the same insert statement, typing it out over and over again. Like, I know that my table has ten columns; I'm going to type them into the statement. That's what we call boilerplate. Anyone writing a program with nothing at all is going to write some function to generate that insert statement. People usually talk about select statements, the ones they want to have more control over, but there are the boring ones too: updates, inserts and deletes, which are really boring. There's all the DDL for creating tables and stuff, which is very redundant. If you want to learn how to do that stuff, you should learn how to do it — but if you have 100 classes, you don't want to type in 100 create table statements. It should be automated. So the idea is that when you're working in this automated environment, you still know everything: you know what a create table statement is, what an insert is, you know what CRUD is in SQL. Ideally, you would know how to write this whole program without using anything at all.
07:37 Right. I would say probably a good rule for "I've studied enough SQL, I feel like I'm not hiding too much from myself by using an ORM" would be if you could get the ORM to print out its statements. So in SQLAlchemy, on the engine, if you set echo=True, for example, and you know what it does when you read the statements — you're like, okay, I'm not sure I would have written the join exactly that way, but okay, that makes sense, I see why it's doing that, I see what this update statement and these parameters mean. Okay, now let's be productive, right? What do you think about that as a rule of thumb?
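For reference, echo is just a flag on create_engine (the connection URL here is a stand-in):

```python
from sqlalchemy import create_engine

# echo=True logs every SQL statement and its parameters as they are sent,
# so you can read exactly what the ORM is emitting
engine = create_engine("sqlite:///example.db", echo=True)
```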
08:12 echo=True was there from when I first wrote SQLAlchemy. Whatever I did for the first two weeks, echo=True was right there. That was like the very first thing you would see, because I had come from — I think I had used Hibernate for a while, for Java. Now it's 20 years later.
08:31 Hibernate, I'm sure is very different, but at that time I didn't know how to see what the heck it was sending to the database. I had no idea.
08:38 I'm sure there was a way to do it, but it wasn't obvious. It was like, there's a set of loggers and this and that, and Java this and Java that. And the idea with Hibernate was, you shouldn't have to know that — we do the SQL for you, why should you know that?
08:52 Use us correctly and we'll solve the problem. Right.
08:55 I wanted a really one-to-one thing, where you're writing this Python select, where, whatever, and it maps one-to-one. If I had more videography skills and also a lot of time, I'd make some kind of cartoon that shows, boom, how it lines up with the statement.
09:12 Absolutely.
09:12 Because it's not meant to hide anything. It's meant to automate. So if you had a soda bottling company — like you're selling soda, which, soda is not good for you — but if you had to fill 4,000 bottles of soda, you would use a machine to do it. It's not to say that you don't know how to pour soda into a bottle yourself, but you need to scale it up.
09:33 Yeah.
09:33 It's about scaling up something that is very repetitive to be typing by hand, and being consistent about it.
09:39 And whatnot absolutely.
09:42 A couple of comments from the audience really quick, I just want to throw out there. Daniele Varrazzo says hello — Daniele, from psycopg. Thank you for your kind wrapping. And JLee is just cheerleading for Python.
09:57 Hello, everyone out there.
09:58 I think this is really a powerful thing to think about. Just the reproducibility. And also in terms of linking into the tooling. I think one thing people often miss about the advantages, because sometimes you'll hear people say you should never use an ORM. It's hiding too much from you.
10:16 Yeah, sure.
10:17 Yeah, sure. Go. I'm going to go do something. You keep working on your strings. So one of the things I can see that sometimes can easily get missed is refactoring tools.
10:30 If you write some kind of select statement on a class and you want to change that field or that column, you change the field, which obviously maps over to the column, but that also changes it across your entire codebase, if you're using a proper editor that understands refactoring, like VS Code or PyCharm or something like that, right?
10:46 Yeah. A big thing with refactoring — and one of the things that we had to adapt to — is PEP 484. I call it mypy, but it's not; mypy is a tool that checks Python annotations. We call it the mypy thing, but it's really PEP 484, that new thing where they're trying to have a layer of typing — a statically typed layer — on top of your Python script. So if you're a Python library, you have to work with that system now, because people expect it. And the IDEs — I use VS Code now, with...
11:23 So I keep trying to say the name of the engine — Pylance, I can never remember the name. Pylance uses the annotations a lot, and I'm actually seeing where it works and where it doesn't work. And I think the refactoring tools are greatly improved by the fact that there is this concept of static type annotation. I think there are a lot of shortcomings in PEP 484; there's a lot that is not great. I'm not 100% optimistic on it, but it's definitely way better than nothing. And there are also some features that are coming, that are in PEPs that have not been implemented yet — this thing called variadic types — that would allow us to actually type your results coming back. If you give it a select, we'll know the types in your result.
12:09 I see. Is that something like a generic or template type?
12:13 It's a generic that works like a tuple type, because you might notice in PEP 484 you can't make your own tuple type. The typing that's applied to Tuple — if you look at the mypy source, it's hard-coded: if it equals Tuple, then do all these special things. So things like that they need to fix, because rows from a database are essentially tuples. That's how we want to type them.
12:34 I do have a question, since we're talking about the typing. I pulled up this super simple example here off of the SQLAlchemy documentation. It doesn't really matter — it's just a class that derives from the declarative base, has a table name, and then it has the stuff that people probably know from many ORMs, these descriptors. Right? So id equals Column of type Integer with primary_key=True, name equals Column of String, and so on. I'm going to tell you what I've been doing with this stuff lately, and you tell me whether this is a good idea or a bad idea or whatever. What I've been doing is writing this code as, like, for this name one I would say name: str = Column(String). So the Python code believes it's the type that's backed by the database column type — name is a str, id: int — even though it's really set to a descriptor of Column(Integer, primary_key=True). Have you seen this? Is this a good idea? Should I be doing this?
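A minimal sketch of the pattern being described (the model and columns are invented): the annotations are aimed at the editor, while SQLAlchemy still sees ordinary Column descriptors.

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "user"

    # annotated with the plain Python types the columns map to, mostly so the
    # editor autocompletes user.id / user.name as int / str on instances
    id: int = Column(Integer, primary_key=True)
    name: str = Column(String(50))
```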
13:26 Yes. I have done a lot of work on this concept, and what you're doing is not incorrect, but there's a lot of complexities to it that require a little more going on. Right.
13:38 And to be clear, I'm not doing this for mypy. I'm doing it just so my editor gives me better autocomplete.
13:44 Yes, for the editor. I don't have the link handy — I'd have to go searching on my computer — but first of all, we do have a mypy plugin where you are able to use annotations like name: str, and it will automatically work them into the correct kind of thing to be recognized, also at the query level. It doesn't work with Pylance, and I have a new approach that I worked out with some other people for Pylance-type stuff where we won't need a plugin. We're not totally sure how we want to do it, but the point is that you would say name colon, and you wouldn't say str; you would use this other construct called Mapped, and then the Mapped type is str. The reason you do that is because if you use the User class in a query, not as an instance, it also has behaviors at the class level.
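A hedged sketch of that annotation-driven direction, written with the names that eventually shipped in SQLAlchemy 2.0 (Mapped and mapped_column) — the exact spelling was still in flux at the time of this conversation.

```python
from sqlalchemy import String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "user"

    # Mapped[int] tells the type checker: an int on instances, but a SQL
    # expression (with .desc(), comparisons, etc.) when used at the class level
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String(50))
```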
14:31 Right. It's got a .desc() for descending.
14:34 Yeah. Like that.
14:35 Kind of.
14:36 Right. So I'm trying to work out the best way to do this where you're not repeating yourself, because right now it's a little bit squirrelly. The class works completely in Pylance — everything that's expected, both at the class level and at the instance level, will work out. It probably requires that when you do the declarative class, you use a slightly different API. The current proposal is, instead of using uppercase Column, it's this new thing called mapped_column. We can always change the names and how it looks. One of the flaws with that declarative model you see there is that when I came up with that declarative idea many years ago, those Column objects are not Python-descriptor compatible at all; they actually get replaced when you map the class. It's not that clean.
15:24 So I have ideas to make declarative cleaner and do more of what it's supposed to do without breaking the rules. The thing about Python is that, as the years have gone by, everyone — even newcomers — we're all much more into being clean about code, and being more verbose. Typing is more verbose, and asyncio is more verbose. When we were doing Python 2.6, 2.7, everything was just magic — object, boom, magic. Nobody wanted to type anything, which was bad; we did it that way partly because it was a scripting language. But nowadays people are way more tolerant of more verbosity, more clarity. And I'm trying to keep SQLAlchemy — me and my team, we're trying to keep it going along with that. So, the PEP 484 thing.
16:04 And the asyncio stuff. That's my knowledge dump for the typing thing.
16:09 That's fantastic. That's a great insight.
16:11 It's a work in progress.
16:12 Do you think that people are more willing to accept a little more structure in the code now because the tooling is more there to help you write it? I type a bit and maybe hit tab; I don't write out Column of whatever by hand, right?
16:26 Right. Maybe it's the tooling, because nowadays people use Python data classes — we have some support for Python data classes now, and the syntaxes they wanted me to implement, which we've done, are pretty verbose. They want to have a data class where you have all your data class fields, and then within the field they want to have the mapping information. And people love it. I'm like, great — because it's actually more verbose than I'd prefer, but people are way more tolerant of that. So yeah, I guess their IDEs are spitting things out for them.
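Roughly what that field-metadata style looks like in the 1.4 dataclass support — a sketch based on the documented pattern, with invented table and column names:

```python
from dataclasses import dataclass, field
from typing import Optional

from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import registry

mapper_registry = registry()

@mapper_registry.mapped
@dataclass
class User:
    __tablename__ = "user"
    __sa_dataclass_metadata_key__ = "sa"  # where SQLAlchemy looks for the Column in field metadata

    # the IDE sees a plain dataclass; the mapping details live inside field(metadata=...)
    id: int = field(init=False, metadata={"sa": Column(Integer, primary_key=True)})
    name: Optional[str] = field(default=None, metadata={"sa": Column(String(50))})
```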
16:57 The values are different now. Kids these days. This portion of Talk Python to Me is brought to you by Toptal.
17:06 Are you looking to hire a developer to work on your latest project? Do you need some help with rounding out that app you just can't seem to get finished? Maybe you're even looking to do a little consulting work of your own. You should give 'Toptal' a try. You may know that we have mobile apps for our courses over at Talk Python, on iOS and Android. I actually used Toptal to hire a solid developer at a fair rate to help create those mobile apps. It was a great experience and I can totally recommend working with them. I met with a specialist who helped figure out my goals and the technical skills that were required for the project. Then they did all the work to find just the right person. I had short interviews with two folks, I hired the second one, and we released our apps just two months later. If you'd like to do something similar, please visit 'Talkpython.fm/Toptal' and click that Hire Top Talent button. It really helps support the show. Is the goal with the data classes to try to have a more pure entity model that doesn't have column information and key and index information, with some other layer that puts those together? I haven't really played with that aspect of data classes.
18:17 I haven't really drunk the Kool-Aid with data classes yet. I think the idea is that it's a very clean data-encapsulating object that has kind of prefab constructors and repr and validation.
18:32 They all have that kind of stuff. That's right. Comparison.
18:34 Yeah.
18:35 I think Pydantic takes it a little further. There's some murky stuff between data classes and Pydantic.
18:41 Yeah, they blend together in my mind a little as well.
18:43 It's all weird.
18:44 Yeah.
18:44 And there's a new project called SQLModel that seems to be pretty popular in that regard.
18:47 Yeah. Built on SQLAlchemy.
18:49 Yeah.
18:51 Great. It's totally awesome.
18:53 It's actually a more opinionated overlay of the ORM, and it's also using Pydantic. So that's good. Yeah. The thing you've got on the screen there is another way of doing it.
19:05 Yeah, it's a data class, but it also has the mapper registry's mapped decorator on it, which is interesting. Its table data has, like, not just the table name, but all the column information as well. It's interesting. It does have the same type declarations that I was trying to impose upon the traditional SQLAlchemy models.
19:24 Yeah, that's one form of the data class model. If you scroll down, the more inline, embedded one is there — let me see if that's what it is. That's the one that's even crazier, with data class fields. So you're fully a data class as far as what your IDE sees, but then you have the SQL mapping stuff inside. Time will tell which approach becomes popular, or whether people just use SQLModel, because the people that like data classes tend to be using SQLModel and Pydantic anyway. Maybe they'll be there.
19:57 Yeah. I'm going to talk to Sebastián Ramírez, maybe in December or January, about SQLModel. SQLModel is interesting because it basically takes Pydantic models and SQLAlchemy and merges them together. And Sebastián has a good feel for what a nice API looks like. So I'm optimistic for this. I haven't done anything with it, but it does look pretty neat.
20:20 What he had to do was modify Pydantic's base class, because when I first looked — maybe this last year — at Pydantic plus SQLAlchemy without having two separate models, I was like, okay, this base class has a lot of things going on that don't work with Python descriptors, so it would have to change. And I think Sebastian did that; he basically went in and changed how that works so that it's compatible. More power to him. Yeah.
20:44 More power to them.
20:45 Also, I have enough to work on. It's too much for me to also work on Pydantic — someone else can do that.
20:51 Yeah. Absolutely.
20:53 One more philosophy thing that I want to ask about, just let you speak to, because this is a key element of how SQLAlchemy works. There are other ORMs that work this way, and then there's also this one — I think it's called Django, people might have used the Django ORM before — no, that one is more of a traditional Ruby-on-Rails active record: I have a thing and I call save on it.
21:19 Sure.
21:19 You went with a thing called unit of work, which is a little more transactional — like, I'm going to do a bunch of stuff and then all together it happens. Maybe just speak to that real quick before we move on.
21:29 When I was going to do SQLAlchemy, the main thing that I had worked on a lot at work was a lot of fancy selects. I hadn't done much work with the persistence side, but I read this book by Martin Fowler called Patterns of Enterprise Application Architecture.
21:40 I read it as well. It was quite an interesting book back then.
21:43 Yeah.
21:44 Even though I'd used Hibernate, I had never heard of unit of work, and I was like, wow, that looks cool, I'm going to write that. Because one of the philosophies was: inserts, updates, deletes are really boring, and even having to say "save" is really boring. I was working on a content management system for Major League Baseball, and when you work with a CMS, you have a lot of tree-based hierarchical structures with lots of self-referential stuff. When you persist self-referential structures in the database with auto-incremented primary keys, you've got to insert one row, get the primary key back, put it in the next one.
22:18 Everything's got to be done.
22:19 It's so hard. Yeah, I tried to call Save on this, but that didn't work because I needed to call Save first there. And it's like, this is crazy. Right.
22:27 So it's like, why would I want to do that? I shouldn't have to. You just hand it everything, the transaction, and just push it. And it took a very long time to get it right. It has a couple of little chinks in the armor, a couple of cases where you might have to drop down to calling flush explicitly, but we never get bugs with the unit of work stuff. If you look at unitofwork.py, it hasn't changed in years. It took a long time in the beginning, and I rewrote it a few times — it was really bad early on. It was very hard to get it right.
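For anyone who hasn't used the unit-of-work pattern, here's a small, self-contained sketch of the self-referential case Mike describes — Node is an invented model. One commit flushes everything in dependency order, and the parent's generated primary key is filled into the child's foreign key automatically.

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Node(Base):
    __tablename__ = "node"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("node.id"))
    name = Column(String)
    children = relationship("Node")   # adjacency list: parent -> children

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    root = Node(name="root")
    root.children.append(Node(name="leaf"))
    session.add(root)          # the child is picked up by cascade
    session.commit()           # one flush: root INSERTed first, its new id becomes leaf.parent_id
```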
22:58 Yeah. A lot of edge cases, right. I can imagine if people can do any sequence of things and then you have to make it right.
23:04 Yeah. Mostly that we support these joined table inheritance models, which were very hard for me to get my head around how to persist. And then it just took a long time because it grew organically — you know, you're coding, you don't know where you're going to end up.
23:15 I'm glad you wrote that instead of me. So thank you.
23:17 Yeah. I wouldn't be able to do it today.
23:20 You got that right. Yeah. A comment from Brendan: hi, Brendan. "I used SQLModel in a demo project, and it was so easy to use." Very nice. All right. So I think probably the big news over here is 2.0 — or, as you put it at your recent talk, the 1.4 and 2.0 thing.
23:42 It's in very early stages. On the site, under the library tab, there's a development section where the development docs are.
23:49 And the current release is 1.4. It has a lot of these features in there already, right?
23:55 Yes.
23:57 Just about everything in SQLAlchemy 2.0, internally, is available in 1.4. We built the internals first and added the other stuff so that 1.4 could be a transitional release, so that all the behind-the-scenes stuff would be getting tested.
24:12 And people use the new APIs and transition over. Is 2.0 going to drop some of the older APIs hard?
24:18 Yeah, many.
24:19 Okay.
24:19 Yeah. So, 1.4 has a big migration thing where, if you're on 1.3, which is pretty common, you can go to 1.4 pretty easily without many problems. But then we have this whole process that is inspired by the Python 2 to 3 transition, and at the same time tries not to make some of what I thought were the mistakes of the Python 2 to 3 process. What this is based on is that 1.4 has an environment variable you can turn on in your console — SQLALCHEMY_WARN_20, something like that. When you turn that on, you're now in warnings mode, and you will get all kinds of warnings about APIs that have either changed, or are going away, or "use this one instead". So a lot has happened. But at the same time, the reason that's maybe not as scary as you might think is that all the APIs being deprecated are APIs that I already took out of the docs years ago — the APIs that I've been telling you for years not to use anymore. So it's a lot of old stuff that is not featured in the docs. Basically, the things we're taking out in 2.0 are things that have annoyed me for years. There are a few changes in 2.0 that are a little more "boom" — like we changed how the engine works, auto-begin instead of autocommit — but 2.0 is basically going to be, I think, better. It's in line with this whole notion that people appreciate more explicitness when it makes things clear. So 2.0 is going to remove a lot of implicit stuff, a lot of five-ways-to-do-the-same-thing patterns. It's going to narrow you down into one or two possible patterns.
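If I'm remembering the 1.4 migration guide correctly, the environment variable is SQLALCHEMY_WARN_20=1 (treat the exact spelling as an assumption here), and while porting you can escalate those warnings into errors so nothing gets missed:

```python
# run as:  SQLALCHEMY_WARN_20=1 python app.py
# then, while porting, turn the 2.0 deprecation warnings into hard errors
import warnings

from sqlalchemy import exc

warnings.simplefilter("error", exc.RemovedIn20Warning)
```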
25:54 Yes. That has been a thing with SQLAlchemy, right? There are a lot of ways to accomplish stuff.
25:59 Yeah. Because I came from Perl. I can't. Sorry.
26:02 I was doing Perl. It's still the case that there's more than one way to do things; you can't really get away from that totally.
26:07 Yeah. Well, often I want the super simple, easy way — just call an ORM method — and then, oh, we've got to rewrite the select statement to use a stored procedure or something. There's usually a backdoor type of thing in the database world that has to happen at some point, right?
26:23 Yeah, stored procedures are a pretty dramatic example, but these days the idea is that when you write your code, it's going to be clear where the SQL is being executed, where the I/O happens. In many ways it actually was inspired by asyncio. I had been talking to a lot of people about asyncio, which I was a little skeptical of. At the same time, what people appreciate about it is that you're very aware of where the actual message to the database is happening — this is where the message gets sent out, here's where it comes back.
26:53 I say you're awake now. How is that?
26:56 I think that's a little heavy-handed, but at the same time I appreciated that notion. So with 2.0, I tried to work with that idea that we want to make it very clear: here's where you're making a SQL statement, here's where it's executing, here's where the results are, and also here's where the transaction is. Those are actually the biggest changes — you're going to know that you're in a transaction, you're going to know when your transaction ended, you're going to know if you committed it or if it just rolled back. This code is going to be a little more verbose in SQLAlchemy 2.0, but people already write code that way, because now I've seen many years of people writing code.
27:29 They're like, I really want to know when this happens. So I'm going to be super explicit about it.
27:34 They don't want excess bells and whistles that don't seem to have any purpose. But if your code is clear — basically, being able to read the code and have the intent be as clear as you can get it, without things that just do stuff you simply have to know about — that's where I'm trying to go. And I'm somewhat freed by the fact that I know people are more tolerant of step A, step B, step C.
27:59 A little less magic. Yeah.
28:00 People are okay with that now. They weren't okay with that twelve years ago — people were like, that's too much typing. Sure.
28:06 I want to talk about this migration thing — what a person does to go from a 1.3 application onward — but I think that's getting a little bit ahead of ourselves. How about we talk a little bit about the major features coming in 2.0?
28:19 First of all, obviously, we're finally Python 3 only.
28:24 Yeah. What is it?
28:29 Yeah. 3.6 is EOL.
28:33 3.7.
28:34 Yeah, just today — or, I didn't merge it yet, but we're ready to merge this gigantic Gerrit review that's going to take all the Unicode conversion crap out. That was all Python 2 / Python 3 stuff. Having native Unicode is tremendous for us, because all the DB-APIs do Unicode natively now. So, yeah, and we have this new full caching system that I've talked about a bunch in some of the talks I did recently. Tell us about that a little bit.
29:03 Yeah. Is that like compiling the SQL statements and then caching those results? What are we talking about here?
29:11 So when you run a SQLAlchemy statement, we have to take your Python code and make a string out of it, which is the string we send to the database. We also have to look at that thing and figure out what kinds of results we're going to get back — are we going to get strings and dates and integers and floats, and then for those strings and dates and floats and whatever, do we have to do any processing on the rows? We want to have that all set up, because if you get 10,000 rows back, you want to make all the decisions up front so that when you run it 10,000 times, everything is as fast as possible — everything is already figured out. When you use the ORM, that whole process becomes way more complicated: there's way more going on, there are eager loaders, there are fancier kinds of types, we might be taking columns and putting them into an object. All of that stuff is time consuming, and it all now, in 1.4, lives behind what I call the cache wall, which means when the caching is working — which it seems to — it's only going to do that stuff once for a statement that's been cached. So when we run your statement, we're going to do a process that's still a little pricey, which is to generate a cache key from what you typed.
30:12 We got to determine some sort of unique hash type of thing out of it.
30:16 It's actually a gigantic tuple right now. Maybe it'll be a hash in some other release, but for now it's a giant tuple, because Python is pretty good at that. It's cheaper than doing the whole compilation. And then we get the whole compilation from a cache if it's there — it's an LRU cache. And it allows us to put more bells and whistles into the compiler. We have this thing where it will now detect if your SELECT will produce Cartesian products, meaning you have joins between tables that are not linked together, and we have all kinds of other fun things the ORM can do that take a little more time to compile, but now they're behind a cache, so the performance is not really impacted too much.
30:53 So that's exciting stuff. And we're going to be looking at 2.0 to start using Cython to speed this stuff up even more.
31:02 SQLAlchemy has for many years had C extensions that we wrote in C years ago. We're going to migrate those to Cython so that we can more quickly add new Cython code for different sections — to speed up, for example, how those cache keys get built.
31:16 It's one thing to say, we're going to write some big chunk in a C layer. It's another to say, just that loop — could that loop be C? With Cython, you can kind of just do that, right?
31:27 You can do that. You can do it quicker and without having to worry about all the memory reference counting. C code in Python is pretty tedious. I'm not really a spectacular C programmer anyway.
31:38 Well, and also a lot of people who might want to come contribute are probably not C developers — they're Python developers first.
31:43 Yeah.
31:47 If you just know Python, you can do it. It's just like a stricter Python. Pretty neat. And we found that some of the Cythonized versions of the functions are actually faster than our C code. Really?
31:59 That's pretty strange.
32:00 Faster, because the people that did Cython optimized the crap out of it. They know all these very esoteric Python C API things. Our C code is not a lot of code — it's like a few hundred lines; we'll call tuple get-item or whatever it's called. Cython has these "well, if we do it this way, it's faster" tricks, and it actually is faster. I can't give you the details, because I don't know them, but we did benchmarking, and it's actually faster than the raw C code.
32:27 I remember, way back when, hearing people say: well, if you want it to be fast, you have to write it in assembler, and then maybe you can use C. If not, it's not going to be that fast — you could just use C, right?
32:39 That's a different kind of fast, right.
32:40 To be clear, of course. But the interesting thing to me was there was a switch when the compilers got good enough that you probably were slower writing assembly for most people. Right. And I feel like the thing you're describing here is kind of like that transformation, like the compiler understands the whole system better than if you were to try to do it yourself at the same level.
33:03 Yeah. I don't know the specifics, but in some cases they've had figured out ways to make it even faster. So there's no reason to not use Cython.
33:10 Think of what people are doing with Cython, right? They're running on supercomputers doing huge calculations, so they have a strong motivation to make it a little bit faster here and there. Right.
33:21 Yeah. Look, here's.
33:22 The thing about Python.
33:23 Python is a high-level scripting language. It's interpreted, always. So it's already — I don't want to say it that way, but if you're really writing high-frequency trading software, you probably want to use something like Rust. Python is a lot of things, but it's not a high-frequency-trading, speed-demon type thing.
33:44 You can make it do a lot. Once you start wanting it to be sub-millisecond, you might want to look elsewhere, but other than that, you're probably fine.
33:53 Yeah.
33:54 I might be wrong. It's not going to control the rockets — but I guess maybe it does, I don't know. The point is that it's so useful for so much stuff.
34:02 Yeah. Absolutely.
34:03 Okay. So that's really cool. So you're thinking about switching some of your C code over to Cython, which is really interesting — I didn't see that one coming. That's awesome. The cache layer is cool. So that way, if I call a function, and in that function I do a query statement in the ORM, the first time that's expensive, and the second time it's not free, but much less expensive, right? Cheapish — much less expensive, much less.
34:27 Okay. Yeah. One thing we haven't solved yet — and I've been doing this for many years — is when you fetch rows from the ORM to create the Python objects that are your classes, that's still more expensive than we'd like it to be.
34:40 Yeah, that's the super expensive part in ORMs and ODMs. If you select 10,000 rows out of a database, it's probably the serialization or deserialization that's the cost, right?
34:50 Completely.
34:52 I've made the loading way faster — I spent years and years making it way faster by doing that whole thing where we pre-calculate everything up front, every loader. We're not going to figure it out every time you get a row; we have these little callables set up. But still, just to make the object, just to make a new instance of a class in Python, is expensive. It's pricey. So, still making progress. But yeah, the caching will not speed that up; it will speed up the overhead per query.
35:18 Which is still great — every area where you can speed it up is great. Daniele out there says Cython is also a great choice to forget about ref counts, in psycopg2.
35:32 Ref counts are tough. I'm pretty proud of myself that I did figure them out to some degree. They're not as hard as malloc and free.
35:38 But an esoteric set of rules, those ref counts. While we're down in the internals, let me ask you this, if you've thought about it or experimented with it at all. One thing I found really interesting as I dug into Python's memory model: you have the gc module, and gc.get_threshold() or set_threshold(), and the thresholds tell you what will trigger a garbage collection — not reference counting, but the cycle-detection type of thing, right? By default — at least last I looked; I haven't looked in 3.10, but in 3.9 — it was 700, 10, 10, which means 700 allocations, and then for every ten gen-0 collections there's a gen-1, and for every ten gen-1 collections there's a gen-2 collection. That 700 means if you allocate 700 more classes or dictionaries or tuples or whatever than have been freed, a GC will run. So if you select 10,000 rows out of a database, how many GCs are running?
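The thresholds Michael is quoting are easy to check; this snippet just inspects them and (as an illustration, not a recommendation) loosens the generation-0 trigger:

```python
import gc

print(gc.get_threshold())      # (700, 10, 10) by default in CPython 3.9
# gen-0 collection runs once net new allocations exceed 700; every 10 gen-0
# runs trigger a gen-1 run, and every 10 gen-1 runs trigger a gen-2 run
gc.set_threshold(50_000, 10, 10)   # e.g. dial back collections during a big bulk load
```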
36:40 That's a lot, right? That's like 14 GCs.
36:43 If you're pulling the 10,000 rows into a gigantic list, then it's not GC'd much at all, because you're keeping it all in a gigantic buffer. You could just watch the size of your process grow.
36:51 Right? You're not freeing any, and you're not ref-count-freeing any either, but you're allocating 700 more.
36:59 I think that might trigger 14 GCs unless I understand it wrong.
37:03 I don't know anything about it. I wouldn't know. All I know about GC is that it's going to happen.
37:07 Yeah, the reference counting stuff is super straightforward, but I've always thought about this around databases. Even if you're just getting dictionaries back, not ORM classes — if you're selecting a whole bunch of stuff, not only are you doing that serialization layer, you're also incurring a bunch of GC, because it's trying to preemptively run around and look for cycles that might have been forming.
37:29 One thing you can do is write the code so that you don't have as many cycles, and we actually do that. We've had people post issues related to this. I don't know much about GC, but I do know that if you can reduce reference cycles, you will have fewer of these asynchronous GC runs happening. So we had someone come to me with a whole lot of use cases where he showed, when you run this little bit of code, all these GCs would happen, and we went in there and got rid of a lot of cycles. So we actually have a test suite — in test/profiling, test_memusage, a cycles test, something like that — and there are all these little whiz-bangs that run: make a session and close it, do this and that, do this and that, and then it runs them in a harness that's actually counting what GC is doing, and asserting that there are only five GC calls and not 20, right?
38:12 Yeah. That's fantastic.
38:13 We have a lot of that happening. Interesting.
38:15 Yeah, it's cool that you're thinking about it. I mean, on one hand this is an external thing, so you can control your data structures, but you can't totally control what people set their GC to do. But anyway, it's just something I always think about when you create lots of these ORM things, because a lot of the signals to the GC are like, oh, I've got to get busy — but really that's a single call into SQLAlchemy, and I'm just waiting; there's nothing else it can do to get out of the way. You know what I mean?
38:39 We've just been around for so many years, and people come to us. We have all kinds of memusage and cProfile stuff in our test suite to make sure the call counts don't grow here, to make sure we don't do too many GCs. We try to work on that as much as we can.
38:53 We have a new course over at Talk Python: HTMX + Flask: Modern Python Web Apps, Hold the JavaScript. HTMX is one of the hottest properties in web development today, and for good reason. You might even remember all the stuff we talked about with Carson Gross back on episode 321. HTMX, along with the libraries and techniques we introduce in our new course, will have you writing the best Python web apps you've ever written: clean, fast, and interactive, all without that frontend overhead. If you're a Python web developer that has wanted to build more dynamic, interactive apps, but don't want to or can't write a significant portion of your app in rich frontend JavaScript frameworks, you'll absolutely love HTMX. Check it out over at talkpython.fm/htmx, or just click the link in your podcast player show notes.
39:42 I would say one of the things that makes me more excited than most of the other stuff here is probably the Async support that you have going on. You want to talk a bit about that?
39:52 Yeah. So when I was doing 2.0, I thought I wanted to be compatible with how asyncio works, because I figured I would eventually do an asyncio API for it. But then I suddenly had this idea: why don't we just do the asyncio API now?
40:07 We use this library called greenlet. The way the asyncio support works is that SQLAlchemy has a blocking API — we call execute, it blocks, it comes back. Usually the way you make that kind of thing work in an async, non-blocking context is you have to throw all the stuff into a thread pool. You probably don't want to be in the wiki — this is more old stuff. Okay, like that one, click on the asyncio ORM.
40:33 This is the one I was looking for.
40:35 This is the one I wanted to show people. Yes, not the session one, but the ORM.
40:40 Yeah. What's important here is the notion that the 2.0 style has a very specific place where execute happens. Because with the query object — SQLAlchemy session.query — you could say query.all(), query.one(), query.first(). I didn't want to have all this await, await, await on everything. With the new API, it's await execute, and then you get your result back, and the result is buffered. And the thing that changed the whole asyncio equation for SQLAlchemy is that this is not a rewrite of anything. It's a layer on top of the completely blocking API stuff, and it does not use threads; it uses greenlet. When I went to look at what greenlet actually does — greenlet is compared to a thread, but when you see what it is, it doesn't really feel like a thread, because it's a little thing of code that you can context switch somewhere else.
41:34 It's more like generators than threads — like the weirdness of generators skipping around.
41:38 Yeah, exactly. So in Python asyncio, the await keyword is where your context switch is. I wanted the internals of SQLAlchemy to work in this context-switching way, but to integrate with await. So we basically made a greenlet wrapper that emulates await without actually using await. You're able to take SQLAlchemy's blocking internals, which ultimately go out to — when you use SQLAlchemy asyncio, you're going to use a DB-API, a database driver, that is also asyncio: currently asyncpg, aiosqlite, there's asyncmy, and also — I was teasing Daniele because he's on here — there's going to be a psycopg3 that we're also going to support, which does asyncio for Postgres. So you have an asyncio driver; SQLAlchemy then adapts the await calls into the synchronous thing using the greenlet trick, and then on the outside, what you see on the screen there, you have asyncio stuff. And when I talked to some of the people who were involved with asyncio in Python, they explained that you could do it the other way too: you can have a synchronous API that calls into an async middle that goes back out to an async API, and that's actually easier to do. But since SQLAlchemy was already written in blocking style, we kept it that way.
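Putting the pieces together, a hedged sketch of the 1.4/2.0 asyncio API — the connection URL is a placeholder, and User is assumed to be a mapped class like the ones shown earlier:

```python
import asyncio

from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine

engine = create_async_engine("postgresql+asyncpg://user:pass@localhost/mydb")

async def main():
    async with AsyncSession(engine) as session:
        # the only await is on execute; the result that comes back is buffered
        result = await session.execute(select(User).where(User.name == "ed"))
        for user in result.scalars():
            print(user.name)

asyncio.run(main())
```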
43:02 Right. It's a lot easier to wrap an async shell around something that's 15 years old and polished than to say, we're going to redo the insides all over again.
43:11 Yeah. And the thing with async is that there were some SQLAlchemy-based things on GitHub that were basically taking our engine and connection and rewriting them as totally brand-new asyncio versions. And I always thought, you can't maintain that — it's copying our code, taking about 30% of what it actually does, and putting it up there. I was like, that can't work; there's got to be a way to do it with greenlets, because I had used gevent and eventlet a lot, and I'm like, okay, I know it can context switch without using await. I know it can do it. I just had to go into greenlet, read the docs, and I did.
43:45 Yeah, that's really cool. I'm super excited to see this, because it opens up using SQLAlchemy with some of the other frameworks that really leverage async — for example, FastAPI, but there are plenty of others as well, right?
43:56 Yeah. Another cool thing about asyncio: when I did SQLAlchemy 1.4, which introduces this new query/ORM interface and keeps the old one — which is not going to go away, because there's just too much code written against the old interface — for asyncio it's totally brand-new, greenfield development. So I said, with asyncio you've got to use our new API. That automatically got a lot of people testing the new way of working, because one thing that makes libraries really good is when a lot of people use them, find problems with them, and add use cases, and the library grows. So the asyncio thing has been enormously helpful in bringing a lot of people to SQLAlchemy who would not have been there, and a lot of people who were probably going to leave SQLAlchemy stayed.
44:38 And people who probably wouldn't have used anything based on SQLAlchemy at all.
44:44 Yeah, exactly.
44:45 Because obviously Sebastián needs that to integrate with FastAPI, and async is a core element of their audience. Daniele says: I wonder if other drivers are moving to expose native async interfaces themselves. Time to go back to the DB-SIG.
44:57 Yeah, that's the database special interest group.
45:00 Yeah, sure.
45:02 Daniele can go there.
45:03 And I've not had much success when I go to the SIG suggesting things. — Let me talk about this async API a little bit, for folks who are probably familiar with SQLAlchemy's traditional API, because there are a couple of things here that stand out to me. Let's start simple. One is that you have this concept of creating a select statement and then executing the statement, as opposed to, like, a query of a class plus a filter. So there's this statement model that you build up.
45:37 Do you want to speak about that? — The original SQLAlchemy query really is 95% the same statement model; it's just that the statement has the fetcher, the getter, the result-fetching methods stuck on it. When query came out, that was a historical thing. The original idea of SQLAlchemy was that there was only going to be a select object, and then you would use the select object with the session to get results. It wasn't really done very well; it wasn't very flexible. So people were coming to me with, why don't we do this generative builder thing? And that's what happened, because I was a little bit rudderless in those days. So we had query, and it took me quite a long time to realize: wait a second, query is basically select. Query got much more fancy — at first it was very simplistic, I could query for one object — and as I had to keep adding to it, it became clear that I was rebuilding select in the middle.
46:31 Yeah, I'm just rebuilding select in exactly the same but slightly different way, and that bothered me for years. So with 2.0 I finally took that on and did a really long, difficult refactoring to take all of the guts of the ORM query and put it into a different module, called context, that you don't deal with. And then the select object you see there can indicate the exact same intent as query — options, filter — and they both go to the same back end that does all the ORM figure-it-out stuff, which is a little more elaborate, to come up with the SQL that will be sent to the database. One of the biggest refactorings I've ever done, and it was quite stressful when it seemed like I was hitting dead ends — this is not going to work.
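Side by side, the legacy Query style and the unified 1.4/2.0 style express the same intent (User is the same assumed mapped class as before):

```python
from sqlalchemy import select

# legacy Query API
users = session.query(User).filter(User.name == "ed").all()

# 1.4 / 2.0 unified style -- same intent, with an explicit execute step
users = session.execute(
    select(User).where(User.name == "ed")
).scalars().all()
```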
47:15 Yeah, I can imagine. This is cool. So basically the query(Class) style is going away, right?
47:23 It's never going to go away, because there's so much code written in that style. But when you go to the website, it's not going to be in the tutorial; it'll be in the reference docs, so people that deal with that code can deal with it.
47:34 It'll be in some sort of middle ground where it's not fully deprecated, but it's not promoted either.
47:38 It's called Legacy. We've had different systems that say Legacy for many years.
47:42 Makes sense.
47:43 There's so much code written that way. I totally agree. Yeah, I probably would never change that.
47:47 But what I did do, when I did that, is try to make that code stand all by itself. Basically, what query does now is build that select inside and send it off. So it's what you'd call eating your own dog food: the legacy API uses the real API internally, so you're testing it either way, right?
48:06 Going back to our patterns talk, it's like an adapter over the select API. Two other things that stand out here. One is, you traditionally create a sessionmaker and you create a session by calling it; you have an async sessionmaker, right, to create them. That's one of the differences, though it's not super different. But the other one that is interesting to me is that now these things are context managers — which, when I used to work with traditional SQLAlchemy, I would create a class called a session context just so I could do this in a with statement. And here, obviously, it's an async with statement, but that's just the async aspect of it.
48:46 It's not just the async aspect, either. I should mention that when SQLAlchemy started in 2006, we were on Python 2.3. There were no context managers — they didn't come out until Python 2.5 — so you couldn't integrate with them.
48:59 So nothing was based on that. SQLAlchemy 1.4 and 2.0, as much as possible, completely support the context manager model.
49:06 Fantastic.
49:07 And the docs and the tutorials — if you look at the new 1.4 tutorial as it is on the site, it's all about context managers, and the older try/excepts and roll-your-own things should go away. You should use context managers now. And what you see there with the async pattern — I've made it so that, if you're using all the new APIs, the context manager pattern with the engine and connection in Core is mirrored by the context manager pattern with the session and the sessionmaker.
49:32 Nice.
49:33 And then it's also mirrored in the asyncio API. There are four different APIs — Core and ORM, sync and async — and they should all look, as much as they can, the same.
49:45 They're the same as much as possible. They share the same context manager patterns, and we want you to use context managers for everything now, for any kind of block.
49:55 And that's also how we can improve the transactional model, so that you're really beginning the transaction all the time, even with Core, because it's not a burden when you have context managers. — Things I see here: I want to ask you about two bits of code, and then take another question from the audience. You have "with session.begin()" and you do some inserts, and then you don't say session.commit(). So the way this works — I'm presuming, tell me if I understand this right — if it makes it through the with block without an error, it commits; if it makes it through the with block with an exception, it just doesn't commit and effectively rolls it back.
50:28 It does roll back, that's correct. And we've had the begin context manager for a long time; it just wasn't totally consistent everywhere, but we've had that for a long time.
50:41 Now I've made sure that the opening and closing of a resource, and the begin/commit of a resource, are consistent. And there's always a way that you can open the resource and begin/commit in one line. Also, the sessionmaker has begin() on it now, so you can do that.
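A sketch of that begin-in-one-line pattern (engine and User are assumed from the earlier examples): the transaction commits when the block exits cleanly and rolls back if an exception escapes.

```python
from sqlalchemy.orm import sessionmaker

Session = sessionmaker(engine)

# sessionmaker.begin() opens a session and a transaction in one line
with Session.begin() as session:
    session.add(User(name="ed"))
# committed here if no exception was raised; rolled back otherwise
```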
50:57 The other thing: I'm super familiar with execute, but then you also have stream, which looks real similar.
51:03 What's the difference there? — Stream is asyncio-only right now. Stream is there because it's important, in many cases, to get a result that's doing async streaming. And when you're doing async streaming, you've got to have the await keyword where you fetch the results. The object returned from session.stream() is called an AsyncResult, and all of the methods have the async annotation on them, so you see "async for a in result.scalars()".
51:26 I see — the things you interact with, the subsequent functions, are themselves all async as well. Okay. Yeah.
51:32 Cool. What that will do is that will try to use a server side cursor.
51:36 Yeah, that's what I imagine.
51:39 Instead of trying to pull it all into memory — if you got 1,000 rows back, you could start pulling them, more generator-like, right?
51:45 When you use a server-side cursor, you genuinely are not pulling everything into memory first. When you use the buffered result, you're getting everything into memory, which is normally okay.
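The streaming variant looks like this — a hedged sketch, again assuming the async engine and User model from above; stream() hands back an AsyncResult, backed by a server-side cursor where the driver supports one:

```python
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

async def dump_users(engine):
    async with AsyncSession(engine) as session:
        result = await session.stream(select(User))   # AsyncResult, not buffered
        async for user in result.scalars():           # rows arrive incrementally
            print(user.name)
```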
51:53 Yeah, most of the time it's fine, but if you want a whole bunch, or you're streaming it along anyway... A lot of cool stuff there. We're getting short on time. Mr. Hyper Magnetic out there asked: can we support paginated results? That also leads us to a little area I want to focus on, just a couple of things as well. So yeah, maybe address that.
52:12 First of all, paginated results is one of those things that seems simple, but it's more of a quagmire than you might expect. There are different ways to do paginated results. The way we all did it years ago was to use these functions called limit and offset, where limit will give you, like, ten rows at a time, and offset will start your result set in the middle. Once we worked with bigger databases, we came to realize that offset is terrible, because offset will actually get the whole result and scroll through it — it does it on the server side, but it's still very slow. So now, if you do pagination, you want to have some kind of approach where the query you're doing is keyed on some column in the data you're querying. So if you're paginating by date, page ten would be where the date is greater than the date from the previous page you just got. That's one way. Another way is to write the SQL query with this thing called window functions, where you can figure out the date, or whatever the thing is, ahead of time. Pagination, on any approach, implies that you have an ORDER BY on something. So it's hard to make pagination be, like, this result here, these scalars, just say .paginate().
53:20 You can make a very crappy version that does that, but it wouldn't really work the way you want, I should say. There is a method on the result called partitions() that will give you chunks of a result at a time. It's not quite the same as pagination, because pagination is stateless — it's usually for web applications where you're going to get a certain page of results, show it on the web page, and then your whole SQL is over, and you're going to come back and run that SQL later.
53:47 So we have features built in for partitioning of a single result set so that you get chunks at a time, which is helpful. Pagination is something you need to go and look at; I think some libraries probably have some helper functions for that. We have some recipes on that wiki page that I mentioned, there are actually some recipes for pagination there. I don't think pagination is a turnkey thing. It's got to be customized, and people write pagination frameworks, and they're complicated and they're hard to do. So we leave those as an exercise for the community, but we do have some recipes.
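For the chunking side, here's a minimal sketch of Result.partitions(), using a hypothetical throwaway SQLite database:

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite://", future=True)  # hypothetical in-memory database

    with engine.begin() as conn:
        conn.execute(text("create table nums (n integer)"))
        conn.execute(text("insert into nums values (1), (2), (3), (4), (5)"))
        result = conn.execute(text("select n from nums order by n"))
        # partitions() hands back one result set in chunks rather than separate pages
        for chunk in result.partitions(2):
            print([row.n for row in chunk])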
54:21 That's a great answer.
54:23 Okay.
54:23 Now for the last thing, I wanted to talk really quickly about this Awesome SQLAlchemy list. It's one of these awesome lists that covers all the SQLAlchemy things, like data structures and types, and I'm just throwing this at you; neither of us has a ton of experience with a lot of these things, but I thought it'd be fun to just kind of go through some of the nicer or more interesting things that I ran across here, so we kind of wrap this up with some extras. All right. So one of them is SQLAlchemy-Continuum, a versioning extension for SQLAlchemy.
54:56 I know what this is.
54:57 Yeah.
54:58 It's probably based on one of the recipes that we have in the source distribution, the versioning ones. Yes, as it says, SQLAlchemy already has a versioning extension, but that extension is very limited. That's correct.
55:11 Everything they wrote there is correct. I had some jobs in the banking industry where I was doing SQLAlchemy, and we had this need to version rows: when has it changed?
55:23 And how do I go back?
55:25 Yeah. There's different models. You can either take the rows as you get them and put them in an archive table, or you can do this thing where you never update or delete a row, you just insert a new row, and you've got a temporal versioning scheme. I have some recipes to do that in the examples section, because I did them at a job. And I believe Continuum builds upon those to make a more robust, supported thing. Fantastic. I don't know, I've heard of Liquibase.
55:49 I don't know.
55:50 Yes.
55:53 I'm not a Java person either.
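Here's a minimal sketch of the "never update, just insert a new row" temporal versioning idea described above; the Price model and values are hypothetical:

    from datetime import datetime
    from sqlalchemy import Column, DateTime, Float, Integer, String, create_engine, select
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class Price(Base):  # hypothetical model for illustration
        __tablename__ = "prices"
        id = Column(Integer, primary_key=True)
        symbol = Column(String, index=True)
        value = Column(Float)
        recorded_at = Column(DateTime, default=datetime.utcnow)

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        # every change is a new row; the full history stays queryable
        session.add_all([Price(symbol="ABC", value=10.0), Price(symbol="ABC", value=11.0)])
        session.commit()
        latest = session.execute(
            select(Price).where(Price.symbol == "ABC").order_by(Price.id.desc())
        ).scalars().first()
        print(latest.value)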
55:55 Okay, maybe. Here's another one that I ran across from that awesome list: SQLAlchemy-Enum34. I'm guessing the name is because it came from Python 3.4. This package provides a SQLAlchemy type to store values as standard enum.Enum members. A lot of times what happens is those things get turned into, like, numbers or other weird things, and here they get a little more type information, right? Yeah.
56:18 This might be obsolete, actually, because we do support regular Python enums.
56:26 Seven months ago.
56:27 If someone is using this, it could be because there are so many enum features. I've been notified that we don't handle them exactly the same way. There are some extra features, I forget what they were, but there were some things that enums do that we don't support.
56:39 So maybe it has that, maybe it does that better.
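For reference, here's a minimal sketch of the built-in support for standard library enums; the Mood enum and Entry model are hypothetical:

    import enum
    from sqlalchemy import Column, Enum, Integer, create_engine
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class Mood(enum.Enum):
        HAPPY = "happy"
        GRUMPY = "grumpy"

    class Entry(Base):  # hypothetical model for illustration
        __tablename__ = "entries"
        id = Column(Integer, primary_key=True)
        # stored by name as a VARCHAR with a CHECK constraint (or native ENUM on Postgres)
        mood = Column(Enum(Mood))

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add(Entry(mood=Mood.HAPPY))
        session.commit()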
56:41 Yeah. Enums are really a pain in the butt.
56:44 Yeah.
56:45 Types.
56:46 All right.
56:47 The Python side is good, but the database side is...
56:49 Yeah. There's also, of course, Alembic. I mean, that's pretty well known to people, but it's obviously important because you've got to keep your database and your models in sync pretty closely, right?
57:01 Yeah. So we maintain Alembic. I wrote Alembic years ago; sqlalchemy-migrate was not really holding up to what we were doing.
57:10 sqlalchemy-migrate is part of OpenStack now, and I actually have maintained that a little bit as well. But Alembic was meant to be more of a bare-bones, straightforward, not-too-fancy tool. It has become fancier, and people that use it and get what it's about like it a lot. It's never going to be as cool as South for Django, maybe in an Alembic 3, but that could happen. It's not as automated as everybody would like. It has a system that will look at your models, look at the database, and do what we call autogenerating migrations, but we don't guarantee those migrations are completely perfect. You've got to go and look at them and fix them, but I still think it just did 95% of the typing for you. That's what it was meant to do.
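For anyone who hasn't seen it, here's a rough sketch of what an autogenerated Alembic migration script looks like after you've reviewed it; the revision identifiers, table, and column here are hypothetical:

    import sqlalchemy as sa
    from alembic import op

    revision = "abc123"       # hypothetical revision id
    down_revision = "def456"  # hypothetical parent revision

    def upgrade():
        op.add_column("users", sa.Column("nickname", sa.String(50), nullable=True))

    def downgrade():
        op.drop_column("users", "nickname")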
57:56 Yeah, I think it's also helpful because it works in that realm, that's the area of SQL I know less well, the DDL: how do I drop a table or column and then re-add it under a different name without losing data? All that kind of stuff can be tricky.
58:10 Yeah, and DDL is where a lot of the database-vendor-specific stuff is exposed, like all these crazy keywords and data types, and it's less declarative and more imperative. For Alembic, we just provide a model to create your migrations, and it's been working pretty well. It's got a lot of features now, but it's again a tool where, if you don't know what DDL is, you're going to have a bad time.
58:34 Yeah, sure.
58:35 You should learn what database migrations look like, ALTER TABLE, ALTER COLUMN, and know what that means, and Alembic will be pretty smooth. Actually, speaking of extensions, I saw it wasn't on that list, but there's an extension called alembic_utils, which is a set of extensions to Alembic for Postgres that also builds upon some recipes that are at least in the docs, and I recommend looking at that too.
58:54 All right. There's a couple we can go quick on. There's one for talking to Amazon Redshift as a dialect. That's pretty cool. There's FormAlchemy.
59:04 Is this still maintained? This is an old one, FormAlchemy.
59:10 Maybe it still works, but let's leave that one alone.
59:12 It probably still works.
59:13 Probably. It looks like HTML and CSS styles have changed a bit since then, though. All right. GeoAlchemy is pretty popular for people doing geospatial work.
59:22 Yes. So there's GeoAlchemy and GeoAlchemy 2. I'm not sure of the relationship, which one is which, or who works on which one. I'm not sure how much it's maintained; I think it's maintained, but I would like it to be much more well maintained. I think it's pretty Postgres-centric, but there are geo features for SQL Server and Oracle and MySQL, and I would like to see that stuff supported as well.
59:46 I don't hear much about it, but I think it's a good project.
59:49 Room for some growth, perhaps. Next is SQLTap: SQL profiling and introspection for applications using SQLAlchemy. Wow, cool. I have not seen this either.
01:00:00 What is it doing?
01:00:01 So when you do a request, I think what it's doing is it stores what the page did. So it says, here there were twelve queries that spent, like, 20 milliseconds, and then you click on each one and it will show you the SELECT statements, like the actual SQL.
01:00:16 So it's probably profiled from the client side.
01:00:18 Yeah, I think so.
01:00:20 That's nice even if it's just timing it. Yeah, that's cool. I wonder if it does EXPLAIN.
01:00:25 I don't totally know, but it looks like it has WSGI integration, which is cool. This is another one that's sort of related.
01:00:32 It's called nplusone.
01:00:37 That's cool. N+1 is a problem that many ORMs run into when people don't realize they're doing lazy loading and they don't do a join, and it can be super indirect. Like, here I got a query for a list, and I sent the list off to the HTML template, and the HTML template did a loop and touched some property on each item, and then there's a bunch more database queries or something crazy, right?
01:00:58 Yeah. Interesting that it seems to work for multiple ORMs.
01:01:02 I guess it's instrumenting them.
01:01:04 Maybe it just captures the APIs of each one, adds a layer to each.
01:01:09 Yeah. I guess you could do it in a kind of distant way, heuristically: if you see the same query to the same table over and over again, maybe that's what it's looking for.
01:01:18 Yeah.
01:01:18 Perhaps you could do that without necessarily hooking into the lazy loading calls. If you just look at the SQL, there's probably ways to do it. That's an intricate problem, but it seems like a really useful tool.
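Here's a minimal sketch of the N+1 pattern and the eager loading fix, using hypothetical Author/Book models:

    from sqlalchemy import Column, ForeignKey, Integer, String, create_engine, select
    from sqlalchemy.orm import Session, declarative_base, relationship, selectinload

    Base = declarative_base()

    class Author(Base):  # hypothetical models for illustration
        __tablename__ = "authors"
        id = Column(Integer, primary_key=True)
        name = Column(String)
        books = relationship("Book", back_populates="author")

    class Book(Base):
        __tablename__ = "books"
        id = Column(Integer, primary_key=True)
        title = Column(String)
        author_id = Column(Integer, ForeignKey("authors.id"))
        author = relationship("Author", back_populates="books")

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        # N+1: one query for the authors, then one lazy-load query per author
        for author in session.execute(select(Author)).scalars():
            print(author.name, [b.title for b in author.books])

        # fix: load the collections up front with selectinload (or joinedload)
        stmt = select(Author).options(selectinload(Author.books))
        for author in session.execute(stmt).scalars():
            print(author.name, [b.title for b in author.books])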
01:01:30 And also, we'll just close out with this, I saw in there they mentioned the Pyramid debug toolbar, the Flask debug toolbar, and the Django debug toolbar. I can't speak to Django, and I don't think I've even run the Flask one, but with the Pyramid one you can open it up and say, what were the SQLAlchemy queries on this page, and actually see how many queries there were. If you're on a page and it says, look, there are 51 queries, and I thought I did one, what just happened? How do I get 51? Yeah, 50 elements plus one, the N+1 bug too.
01:01:58 I haven't really worked with it, but yeah, we have a lot of people dealing with it, or we have to fix issues with it. I don't have to write any web applications anymore, I just do this, so I don't get to see that stuff. Really cool.
01:02:10 I also don't use them very much anymore either. I find that I don't need that support as much as I did in the early days, but I do remember them being quite valuable early on. All right, Mike. Well, there's more stuff we could go into, but what a cool conversation. And thank you so much for releasing the 2.0 stuff and for adding the async support. It really opens up a lot more use cases that I think are going to be interesting for people. So that's fantastic. Yeah, my pleasure.
01:02:34 Async works really well. Great.
01:02:36 Now, before you get out of here, let me ask you the final two questions. You kind of alluded to this already: if you're going to write some code, what Python editor do you use?
01:02:43 I'm on VS Code right now. I was on Sublime for a long time; years ago I used TextMate. I do use Vim a lot, but for having lots of windows open, right now VS Code is fantastic.
01:02:57 And a notable PyPI package, anything come to mind? I mean, we kind of threw out a whole bunch already.
01:03:02 I could point to the...
01:03:07 The list I threw up there for you? Tell me, of those awesome ones, which one stood out the most to you?
01:03:12 Oh, that N plus one thing was really interesting.
01:03:13 All right, awesome. nplusone, with the "plus" spelled out and the "one" also spelled out.
01:03:17 Yes, it's cool.
01:03:18 Yeah, very good. All right, well, final call to action. People have a bunch of SQLAlchemy code that they've written, but it's probably in the older style, it's probably not async and so on. What do you tell them?
01:03:28 Yeah, we have a brand new tutorial on the current website. If you go to the docs and look on the left side, it'll have this 1.4/2.0 tutorial that kind of presents SQLAlchemy all over again, using all the newest concepts. I would look at that and just get to know it. Yeah, this is a new tutorial, so it supersedes the old tutorials. It's going to talk about Core and ORM at the same time. It's a complete brand new rewrite from the ground up; it took many weeks to do. And then there's also the migration guide. But if you go to the tutorial, you'll really see what the new way of working is supposed to look like, and what the idea is supposed to be.
01:04:08 If you read the tutorial, whether you're coming to SQLAlchemy fresh and learning it from scratch or not, see what it's like. You might see, wow, this is different, or you might see, oh, I never knew it was like that, because it really tries to present the library from first principles.
01:04:23 So to speak, all the way up to the ORM stuff.
01:04:26 All right, well, I'm excited about all these new features, even the embracing of context managers everywhere. It looks great to me. So, yeah, thanks for being here and thanks for sharing this with everyone. Yeah, my pleasure. Bye.