
#353: SQLModel: The New ORM for FastAPI and Beyond Transcript

Recorded on Monday, Jan 17, 2022.

00:00 Two frameworks that have taken the Python world by storm lately are FastAPI and Pydantic.

00:06 Once you already have your data exchange models in Pydantic, you might want to use that code for storing and talking to your database. And if you have database models, you might want to somehow use those models to power and document the APIs you're already building with FastAPI. But popular ORMs such as SQLAlchemy far predate Pydantic. Could those two things be put together?

00:29 Sebastian Ramirez is here to tell us the answer is yes. We're covering his project SQLModel, which is the marriage between Pydantic and SQLAlchemy. This is Talk Python to Me episode 353, recorded January 17, 2022.

00:56 Welcome to Talk Python to Me, a weekly podcast on Python. This is your host, Michael Kennedy. Follow me on Twitter where I'm @mkennedy, and keep up with the show and listen to past episodes at talkpython.fm. Follow the show on Twitter via @talkpython. We've started streaming most of our episodes live on YouTube. Subscribe to our YouTube channel over at talkpython.fm/youtube to get notified about upcoming shows and be part of that episode.

01:23 This episode is brought to you by Datadog and us over at Talk Python Training. Please check out what we're offering during our segments. It really helps support the show.

01:31 Sebastian, welcome back to Talk Python to Me.

01:34 Thank you very much. Thank you for inviting me. It's a pleasure to meet you.

01:37 It's great to have you back. When you were on the show last, we were talking about FastAPI, and it seemed like so much had happened. You've done so much. And now there are, like, all these other frameworks that you've built and all sorts of exciting things, right?

01:49 Yeah, very exciting stuff. It's like, very exciting because Python is getting so exciting. Well, it has always been exciting, but there are so many new things that it's great to build things.

02:01 It's interesting. I feel like your frameworks, more than many, take advantage of, almost depend upon, the latest aspects of Python.

02:09 Yeah, absolutely.

02:12 Someone made a meme at some point on Twitter about this guy that was like, type annotations plus another library, and just put it together. And that's what I'm feeling.

02:26 Yeah, absolutely.

02:28 As a community, we sort of muddled our way through the Python 2 to 3 transition, and it took a lot longer than even Guido and many other people expected it to take. But now that we're on the other side of it, stuff like what you're creating and other people are creating, that's what would have been possible had we gone sooner, right? But now it's like, no, everyone's putting their effort into these new ideas and these new aspects that are now possible.

02:56 Yeah, absolutely.

02:58 I feel like the way Python is growing and improving is amazing.

03:04 There are some growing pains, like with any project or with anything, but it's been able to grow in the directions that are needed and to support all the things that users are needing. And we can do very cool stuff that would not even be possible in other languages. And I don't know, for me, it's pretty exciting.

03:22 Yeah.

03:23 Same for me as well. It just gets more exciting. You could see it, you just keep working on the same thing, and you might think, I've been doing that for a long time, I need a change, but I don't feel that way at all. I feel like every day there's something new and amazing. And still the possibility for more incredible things to come is certainly out there. I don't feel like we've hit the limit of what it's possible for framework authors like you to build, or for the core devs to make Python do. There's the whole performance resurgence thing that Guido van Rossum and Mark Shannon are doing, that Sam Gross did, that Anthony Shaw is doing, and others.

04:00 It's good, right?

04:01 Yeah, it's amazing. And I think the energy within the community and the core developers and editors and all the tools, all growing together and also supporting each other, helps each part grow more and better. And it's so exciting to get, for example, support for very recent things and to be able to use them right away in editors. It's like, cool.

04:27 It is absolutely cool. And yeah, the editors are definitely coming along as well. Now, before we get into your latest project, SQLModel, which is very exciting, let's just get a quick update on you. You told us the story of how you got into programming and Python before, so we're not going to ask you that again. But what have you been up to since you were on the show last?

04:46 So when I was on the show last, we were talking about FastAPI, right? And Typer already existed, if I'm not wrong, right?

04:53 Yeah, I think those were the two things you had built, and FastAPI was okay.

04:57 Yeah.

04:58 Had FastAPI made it to the top three web frameworks yet? I'm not sure if it had, but it was right around that time. Yeah, that's incredible.

05:05 I think it was very close to that point. It was mind-blowing that people were able to use it so much and adopt it so much when it was so new like that. It was super cool.

05:20 And yeah, I don't know.

05:22 I have been trying to focus, I have always been trying to focus on whatever is the next thing that I can work on that will have the biggest impact, that can help the most. And I end up just changing areas and trying to improve different areas and different things. And recently, well, recently, I don't know, some months ago, I was working with SQL databases, and I was working with some of the existing libraries, and I wanted to have the benefits of the new features of Python, but I wasn't able to have as many of those things as I wanted, because most of these libraries were built before we had all these new features. I wanted to be able to get that. And I figured that the best way to build it was applying the ideas and the learning that I had from the other tools, and just putting them together, because there were some libraries that were trying to do similar things, but I felt like there was still a bit more that could be done. Yeah, I was just trying to get that, and that's how it ended up starting.

06:24 Yeah, fantastic. Now, I feel like SQLModel, I don't know for sure, I'm asking you, but it seems to me, looking in from the outside, that SQLModel was something like: I need a good ORM for FastAPI, and the things out there didn't click for you in the way that you wanted, so I'm going to build something that fits with this. Right?

06:45 The good thing with FastAPI is that it doesn't have any need for tightly coupling it with any ORM, with any database. So it can be used with anything. But still, there are some things that might not be as convenient in the ORM itself. Like if you use it alone, for example, with FastAPI, you use it to declare all the data models, all the shapes of the data that you want to receive and you want to send back, and to do all the data validation and documentation, then you declare a bunch of those data models with Pydantic. But at the same time, you will end up duplicating a lot of that information in a separate ORM just to connect to the database and to handle the database stuff with Python objects. So you have to duplicate the information in two different ways. And that was not the best developer experience, I guess. And I was trying to make it a bit more user friendly, a bit more developer friendly, I guess, to work with databases and data models, and avoid all that duplication of information, and at the same time make it as easy as possible to write code, just using the same standard type annotations and just using the same intuitive things that we can already use. And that's the point that I was trying to hit, for people to see how that clicks together.
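
To make the duplication concrete, here's a minimal sketch, not from the episode, of the same "hero" shape declared twice: once as a Pydantic model for the API and once as a SQLAlchemy model for the database. The class and field names are illustrative.

```python
from pydantic import BaseModel
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

# API model: drives validation and documentation.
class HeroSchema(BaseModel):
    id: int
    name: str

# ORM model: the same fields, declared a second time for the database.
class HeroTable(Base):
    __tablename__ = "hero"
    id = Column(Integer, primary_key=True)
    name = Column(String)
```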

08:01 I know 12% of the community who builds web APIs and frameworks is using FastAPI, but there's a decent percentage out there who maybe haven't heard of or looked into FastAPI. Maybe they've heard about it, but they don't know the pieces. So given that, that's some of the motivation, maybe give us a sense of how you build data models that match your APIs, and how you do things like generate the OpenAPI, the Swagger documentation, and stuff like that. Set the stage for why just straight SQLAlchemy or some other standard ORM, Pony ORM or something like that, didn't just directly map over to how FastAPI works.

08:39 Awesome. So, just a very quick intro to FastAPI. It's a web framework that is focused a lot on building web APIs. The main idea is that it uses the standard type annotations or type hints. So the way that you, in a function, declare what is the type of one particular variable, that same information that by default will give you some certainty that the code is correct, and will give you the completion and the inline errors in the editor, FastAPI uses that same information to do data validation of the data that you receive in the web API, and to do data serialization of the data that you are returning back, and to do automatic documentation. This is all based on a bunch of standards, OpenAPI, JSON Schema, and a bunch of other things. And because it's based on these open standards, it can also provide Swagger UI, as you were saying, which is this web user interface that shows all the information of the API, all the endpoints, the shapes of the data that you can send, and you can actually interact with the API directly from the browser, without having to go to some documentation site that then has to get updated, and the wiki gets updated, and things like that.
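
As a rough sketch of that idea (not code from the episode), a single type-annotated function gives FastAPI everything it needs for validation, serialization, and docs; the endpoint and names are illustrative:

```python
from typing import Optional

from fastapi import FastAPI

app = FastAPI()

@app.get("/items/{item_id}")
def read_item(item_id: int, q: Optional[str] = None):
    # item_id is validated and converted to an int from the path;
    # q is an optional query parameter. Both show up in /docs automatically.
    return {"item_id": item_id, "q": q}
```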

09:50 Yeah, absolutely. I built a weather service API for one of my courses. It has limited data, so people, don't try to use this as an actual weather service, but it's a FastAPI API. And in addition to all the other cool things, it quickly generates the stuff it needs to. You can just go to /docs, and it'll give you the schemas, it'll give you the API endpoints, the values that go in, the return value, all of this. I guess the most important aspect of this is probably Pydantic, correct?

10:21 Yes, absolutely.

10:22 The most important part, you define your models in Pydantic, and then that drives so many of these things.

10:27 Yeah. So FastAPI is built on top of two tools: Pydantic does all the data stuff, data validation, serialization, documentation, and Starlette does all the web stuff. FastAPI just puts them together in a way that they work together, and adds extra things on top. But Pydantic is the thing that powers all this data validation and all this automatic documentation. Pydantic is also based on the same type annotations, the standard Python type annotations. So you can just use the same intuition that you would have for standard Python, and you get all this data processing.

10:57 Yeah. And FastAPI does several things with the Pydantic models. It does model binding, I guess I'll call it, that term is not super common in the Python world. But you can just say my API function or web function takes this model, and then FastAPI will create the Pydantic model and set the values and do the validation. And then also the return value, you can say, will drive the documentation and so on. So the reason I wanted to set the stage so much around Pydantic is that's one of the core elements of SQLModel, right? And not just using that library, but so that it can be used as the models in FastAPI, right?

11:38 Yes, exactly. So SQLModel is a library, what they usually call an ORM. And if you don't know what an ORM is, it's just a library to connect SQL databases with Python objects and classes. I don't know why we use the term ORM, I feel it's a bit abstract, but it just allows you to connect SQL databases with Python objects and classes. And the thing with SQLModel is that it does a lot of work inside, so that each model that you create is already a Pydantic model. It's not that it internally uses a Pydantic model or returns some additional Pydantic model; each model is itself a Pydantic model. And at the same time, SQLModel is built on top of Pydantic, for data processing and validation and all this stuff, and another library that does all the work to communicate with SQL databases, which is called SQLAlchemy. And each one of these models is both a Pydantic model and a SQLAlchemy model.

12:34 Yeah. It's an interesting marriage between Pydantic and SQLAlchemy. Much of the way that you work with it would be very familiar to people who do SQLAlchemy today, right?

12:46 Yes. That's the idea, that it will be very familiar for people that are already working with Pydantic, probably because they are using FastAPI, but at the same time, it will be very familiar for people working with SQLAlchemy, because it's just the same look and feel. And it's indeed a strange marriage, because these libraries are so different that getting them to connect and work together, in the very different ways they are built, it was very strange. But they actually ended up working quite well.

13:13 Yeah, I imagine that it was pretty tricky. Anytime that you get in the middle of an ORM and its model... I've tried to do that with other frameworks and said, oh, it would be great if I could use inheritance in this way on my model so that there's not duplication. And it's like, oh no, no, you can't do that, because the thing really depends upon the exact class that derives from its, sort of, ORM base class. That's what it uses for determining what columns are there and so on. Right?

13:46 Yeah.

13:47 It was crazy. I spent so much time trying to figure out what was happening underneath, and studying so much about the black magic in Python, the stuff that I always feared, like all the metaclasses and all that weird stuff. I still had to use it to be able to put this thing together.

14:06 But yeah, because they do things in a very different way, at the same time, that facilitates allowing one thing to do its job and the other thing to do its own job in their own particular ways. So yeah, it was fun.

14:22 Yeah, very cool. Whenever I think about an ORM, the thing that I first focus on is the Python classes. Because for me, the whole point of the ORM is to let me talk to my database through those classes and model my application through those classes, right? So let's maybe get started by talking about: how do I create a class, a model, a SQLModel here that is both a Pydantic model and a SQLAlchemy-like model? Talk us through, what does it look like?

14:56 Cool. From SQLModel, you will import this class SQLModel, and you inherit from this class. You can, for example, create a class Hero, and then, let's jump to the internal parts of that, you will define some attributes for this class Hero. For example, you could say that it has an ID and that this ID will be an integer. The way you declare that is with standard Python type annotations. You could say that it has a name, and it's a string.

15:23 If you are familiar with Pydantic, it basically could exactly be a Pydantic model in the simple case, right?

15:31 Yeah, exactly.

15:32 If it really is just an integer and it just has a number, you don't have to make it auto-increment or any weird stuff like that, right?

15:39 Exactly. In the simplest cases, it will look just exactly like a Pydantic model, and in fact, it will be a Pydantic model. And then for some particular cases, where you need a little bit of extra information to tell SQLAlchemy underneath, hey, this does this particular thing with the database, then you can pass additional parameters and additional configurations. So for example, when you create the ID of this class, this will be the ID of the table, and it has to be a primary key. So then you can use the function Field to say, hey, this still has a default value of None, but I need this particular field, or this particular attribute, or this particular column, however you want to call it, I need this to be the primary key. And then that information is passed through to SQLAlchemy underneath, which is the one that does all the work. And there's something particularly interesting here: you are saying, hey, this has a default value of None, and that None, the default value, will be used by Pydantic on the Pydantic side of things, but at the same time, it will be used on the SQL side of things. So in the database, this will also have that particular default value. In the case of the primary key, it's just because when you create a model, you still don't know what the primary key is.

16:58 Most of the time the database generates that. You could say your primary key could be your email address, but it's common to have it just auto-generated by the database, a UUID or an auto-increment integer or something like that.

17:12 Exactly. For those cases, you want to have the type annotations very precise, so that your code can tell you, hey, this could be None at some point. That's just a particular detail. But the important thing is that you use standard type annotations to declare attributes, and then this will be mapped to the data model in Pydantic, but at the same time will be mapped to the table in the SQL database.
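
Here's a minimal sketch of the Hero model being described, following the SQLModel docs; the extra fields are illustrative:

```python
from typing import Optional

from sqlmodel import Field, SQLModel

class Hero(SQLModel, table=True):
    # Optional with a default of None: unknown until the database assigns it.
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str
    secret_name: str
    age: Optional[int] = None
```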

17:36 Nice. So it kind of behaves in the two ways. And that means that what you put into your database is pretty much what your API model is as well, right?

17:45 Exactly.

17:48 This portion of Talk Python to Me is brought to you by Datadog. Are you having trouble visualizing latency and CPU or memory bottlenecks in your app? Not sure where the issue is coming from or how to solve it? Datadog seamlessly correlates logs and traces at the level of individual requests, allowing you to quickly troubleshoot your Python application. Plus, their continuous profiler allows you to find the most resource-consuming parts of your production code all the time, at any scale, with minimal overhead. Be the hero that got that app back on track at your company. Get started today with a free trial at talkpython.fm/datadog, or just click the link in your podcast player's show notes. Get the insight you've been missing with Datadog.

18:29 That's the idea, in the most basic situation. The cool thing is that with this approach and with this tool, you can then create additional models that don't map to one particular table in the database. They are just for handling data with the API. For example, if you have an API that receives data to create a user, it will probably receive a password from the user. It will have the username and the password, and you want to be able to have that information in the model that you receive in the API. But you don't want to save the password as plain text in the database.

19:05 You don't. Isn't that the easiest way? I get these warnings from these various sites, like, oh, your password can't be more than eight characters long.

19:14 Please don't save it in the database. That's a really interesting scenario, right? You need to receive it on one end, but you must not put it into the database.

19:21 Exactly.

19:22 It's not carry on, for example.

19:24 And then in that same situation, you create the user and you want to return the information of the user back to whatever is the client. You don't want to return the plaintext password. You want to say, hey, this is the username, but that's it.

19:37 Yeah, probably not. It's very unlikely that you want to return the hash as well. You don't want to return it at all, right?

19:43 Yeah. How do you handle that? These particular cases are where the multiple models shine, because you can create one base model that will have all the base attributes. For example, it will have the name, the last name, the address, the email, blah, blah, blah. And then you can inherit from that model and have different models for the particular use cases. For example, for creating data, you will have a plaintext password, and for returning data, you will have no password at all. But then one of these models will be the actual model that is in the database, the one that reflects the information in the database. And this one is the one that will have the hashed password. But you didn't have to duplicate all the information for each model, because they all inherited from the same base.

20:25 Is that the section that I've got on the screen here, that says multiple models with FastAPI? Is that how you do that?

20:31 Exactly. Yeah.

20:32 So the idea is, obviously you have got some shared information about the user, like the email and their name and stuff you want to share, probably their ID. But you don't want to share, say, like you said, the password, or whether or not they're an admin on the site, or those kinds of things you probably don't want to exchange over the API. Right, exactly.

20:52 And if you need to duplicate all the information for each one of these particular models, there's a high chance that at some point, whenever you refactor the code, some part will be out of sync, and then you will have a bunch of errors and a bunch of bugs that are very difficult to detect. When you have duplication of code and you have to synchronize it by hand, it creates a lot of potential bugs that are very difficult to detect.

21:18 Yeah. So the way you do your models, this is pretty neat. One of the things that you do is you've got your model hierarchy. You've got SQLModel, which is the base class of all the things that interact with SQLModel, and those are typically the classes that you create, like SQLAlchemy or Django ORM models would be. But in your world, you can have inheritance. And then somewhere in that hierarchy, you set table equals true as you create the class. So it's not necessarily that, oh, you derive from this class, so that's a table. It gives you more flexibility to go, this part is a table, that part isn't a table. Like in the scenario we're talking about, you have a base user where there's a name and a password, a hashed password and stuff. No, sorry, you wouldn't want to put that. You would put your shared stuff into the base class, and then you'd have last name.

22:10 Address, email, and then.

22:11 And then the thing that derives from it would derive from user base, which would say table equals true, and it could have its secrets there.
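
Here's a hedged sketch of that multiple-models pattern, loosely following the "multiple models with FastAPI" section of the SQLModel docs; the field names are illustrative:

```python
from typing import Optional

from sqlmodel import Field, SQLModel

class UserBase(SQLModel):  # shared fields; no table=True, so no table
    name: str
    email: str

class User(UserBase, table=True):  # the actual database table
    id: Optional[int] = Field(default=None, primary_key=True)
    hashed_password: str  # the secret lives only here

class UserCreate(UserBase):  # inbound: the client sends a plain password
    password: str

class UserRead(UserBase):  # outbound: no password fields at all
    id: int
```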

22:19 Exactly. That makes a lot of sense inbound. What about outbound? So I've got a FastAPI endpoint, it could even be Flask or whatever, right? And I've done a query to the database, and I get the table version that has the secrets. I can easily go to FastAPI and say the response model is the base thing, so the documentation is right. But if I go to the object I got from the database and I say as-dictionary or to-dictionary, I forgot exactly what the right term is, but the thing that sends it back, it's going to include everything in it, isn't it?

22:50 This is one of those particular details of FastAPI that I think people in many cases miss. In FastAPI, you can say, hey, this is the response model, so this is the model that I want you to use for the data that I'm sending back. The most obvious result of that is that in the automatic documentation, you will get the schema of what is the response data. That is the most obvious and visible part. But FastAPI will also use that same model to filter out the data. So if you say the response model is UserOut, for example, and the class UserOut, which is a Pydantic class or something like that, doesn't include the hashed password, then from the function you can return an object that includes the hashed password, or a dictionary that includes the hashed password, and FastAPI will omit that field. FastAPI will only return the particular fields that were defined in the response model that you said would be returned.

23:52 Okay, I did not know that that also affected the outbound data, not just the documentation. That's pretty interesting.

23:59 Yes. And in fact, in many cases people ask, why does FastAPI use this parameter response model instead of using the return type annotation? Because in Python, when you create a function, you can define what are the types of the parameters that the function receives, and you can also define what is the return type of that particular function. If FastAPI used the return type, and said, hey, the return value is this UserOut, but then the object that you were returning from that particular function was a different object, then the editor would complain. The tooling and the tools that detect those types of errors, like mypy, would complain, and they would detect, hey, you're saying that you're returning something, but you're returning a completely different thing. So that's the reason why the return type is not what is used to declare that information, and instead it uses this particular configuration, response model, because it's used for filtering data.

24:52 Right. Okay, interesting. So for people who haven't seen this in action, you put a decorator like @app.get, for example, just like you would in Flask or something, and you say, here's the URL. But then you also may put response model equals some Pydantic type in FastAPI. And that drives the Swagger documentation, and, I am learning now, drives the filtering of the allowed return values as well, which is pretty excellent.

25:17 Yeah. In fact, it will also validate the data. So if you're saying, hey, this will return this data, and then whatever you're returning doesn't include that, that will actually be an error on the server. Because you are saying that the contract is, I will return this data, but you are not returning it. Then it will raise an error inside of the server, and it will tell you, hey, the data that you're sending is incorrect, so there's something going on here. There's something wrong with your code, because you're sending something invalid compared to what you said you were going to send.
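
A sketch of that filtering in action, assuming the UserRead model from the earlier sketch is in scope; the fake lookup table is purely illustrative:

```python
from fastapi import FastAPI

app = FastAPI()

# Pretend database row, including the secret field (illustrative only).
fake_db = {1: {"id": 1, "name": "Ada", "email": "ada@example.com", "hashed_password": "x"}}

@app.get("/users/{user_id}", response_model=UserRead)
def read_user(user_id: int):
    # The returned dict includes hashed_password, but FastAPI validates and
    # filters the response down to UserRead's fields before sending it out.
    return fake_db[user_id]
```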

25:47 That's pretty fantastic. Okay. I didn't realize it made such great use of that response model. So that's just a whole other level to bringing Pydantic into that world. So there's a bunch of comments and thoughts here in the audience. I kind of want to bring some of them in, because there's a bunch of great ones. First of all, former guest Waylon says your mustache is fabulous, which is always required when you're on a video.

26:09 Thank you very much.

26:11 Sevdalink says, big like on FastAPI. I think FastAPI is absolutely very much liked. Papun says, I have a question: if the data schema is complex and has a nested JSON structure, in what case would you validate? It's pretty straightforward to just nest the Pydantic models, but this brings us to: if you're going to be in a world where you're nesting Pydantic things, you may want to save them to the database. What's the story on relationships and this? Basically, I've received some data that is nested, related data. What do I do in SQLModel?

26:48 So if you need to receive some complex data structure and you need to extract the information, you can declare models with Pydantic or with SQLModel, saying that, hey, this is just a data model, and then you can manually extract the components and just add them to the database independently, or something like that. There wouldn't be a straightforward way to say, hey, I receive this giant JSON, and automatically generate a bunch of different models that don't exist yet, or something like that, or to automatically infer where to put each piece of information. It wouldn't be as straightforward; it would have a lot of different design possibilities, so it would be easy to get it wrong. So the way that you will do it is that you define the complex data shape that you want to receive, and then once you take it, you extract each part of the information into the particular objects or particular data points that you want to then save to the database. Now, to return data to the user, with SQLModel you can have relationships, and relationships between different tables have automatic joins and all that stuff. This is all thanks, again, to SQLAlchemy, which is the thing that works underneath.

28:01 It already models that.

28:02 Yeah, exactly. But then you can use that information and you can just declare the model. And this again works well with this idea of having inheritance, to be able to declare, hey, I want to return this model and I want it to include this particular relationship model, so it will include the information from other tables. It will just extract that information and return it to the client.
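
A hedged sketch of declaring a relationship in SQLModel, per its documented Relationship helper; the Team/Hero names are illustrative:

```python
from typing import List, Optional

from sqlmodel import Field, Relationship, SQLModel

class Team(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str
    heroes: List["Hero"] = Relationship(back_populates="team")

class Hero(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str
    # foreign_key points at the related table's column in the database.
    team_id: Optional[int] = Field(default=None, foreign_key="team.id")
    team: Optional[Team] = Relationship(back_populates="heroes")
```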

28:23 That's really cool. What about lazy loading? I'll ask this in two aspects. If I've got a relationship, I can do a join or a subquery load on it in SQLAlchemy, so that if I know I'm going to be traversing that relationship, I don't end up with the dreaded N+1 performance problem, where I thought I was doing one query and I'm doing 51 queries, if I got 50 results back, something like that. Does that support flow through SQLModel as well? The joins?

28:53 Yeah. So the thing is that SQLModel actually just exposes the same interface as SQLAlchemy, because it's actually just using SQLAlchemy underneath. And SQLAlchemy supports everything, including lazy loading. SQLAlchemy actually supports things that are not supported by many other ORMs. I forgot the name, having primary keys that are composed of several columns.

29:15 Things like composite keys.

29:19 There's a bunch of things that SQLAlchemy supports. And if SQLAlchemy supports them, then SQLModel automatically supports them, because SQLModel just inherits directly from it. Yes, that's really cool.

29:31 Now the question I was thinking about is: if I have a result from the database that has a relationship, and I return it from a FastAPI endpoint, is that going to go and start iterating the relationship? Do I need to be worried about N+1 problems by returning these models that then are getting serialized in eager ways, where it's tracing through all the relationships so it can build the whole JSON to get it back out and then return the whole data?

30:03 Wow, that took a while.

30:07 Now, by default, FastAPI and SQLModel won't include relationships in the models, won't include them in the data that is returned back. But if you need to include some of those, you can declare, again using inheritance, a different model that defines, hey, this relationship, these particular attributes, should be included. That way, you can define that particular one in the specific endpoints where you want to include the information in the resulting value, right? Okay. That would work, forcing SQLModel, well, forcing FastAPI, to do all the N+1 queries and just extract the information and send it back. But if you are returning that data and its relationships, you will probably want to eagerly load that information, which is something that is supported by SQLModel and by SQLAlchemy. So you will load all the information that you need, including the relationships, and then you just return that object directly, and you define the model, hey, I want this to include the relationship, so it will just include the information that is already there.

31:10 The way I can see that this N+1 issue is happening, without the join or eager load, is through a profiler. If I was doing this in something like Django or Pyramid, I could look into the debug toolbar, and it'll actually show me the SQLAlchemy statements that are running. It'll be like, why are there 50 queries on this page? It's harder, I suspect, in FastAPI, especially if it's operating in API mode, where it doesn't have debug toolbars and stuff like that. Probably one way you could see it is to say echo equals true on the engine.

31:43 Yeah, exactly. Because FastAPI is not integrated with any database, and SQLModel just makes it super easy to work with FastAPI, SQLModel could be used with any other framework. That was the intention. SQLModel doesn't depend on FastAPI, and FastAPI doesn't depend on SQLModel; they just integrate very well. But then you could just enable the echo with SQLAlchemy, and that will show all the particular SQL statements and show you, hey, this is what is running, this is what is happening, right? Yeah.

32:12 So if you're connecting to the database, then this is a SQLAlchemy thing, but obviously it will flow through, right? When you create the engine, you give it the connection string, and you can say echo equals true. And if you are doing queries that are doing a bunch of indirect, behind-the-scenes, lazy queries for you, your console window, your terminal, wherever, is just going to blow up with queries. You're like, why is so much SQL flying by, right?
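
A minimal sketch of turning that on; the connection string is illustrative:

```python
from sqlmodel import create_engine

# echo=True makes SQLAlchemy print every SQL statement it runs, which
# makes accidental lazy-load query storms easy to spot in the console.
engine = create_engine("sqlite:///database.db", echo=True)
```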

32:38 Yeah, exactly.

32:39 That's how it works.

32:40 Let's talk about editor support really quick. So one of the things that's really nice about Pydantic is it requires you to state the types whether those are fundamental types, whether those are nullable types like optional of Int or they're nested types like a user contains an address Pydantic model. All of those scenarios result in really good editor support, right?

33:04 Yeah, exactly.

33:06 What's the story for editors and SQLModel? That's something you specifically call out, that it has good support there.

33:11 So if you check the source code for SQLModel, it's actually super short, but it's a lot of tricks together, and many of those tricks are actually about type annotations, because that is the thing that allows your editor to provide you with the completion and inline errors: the declarations of these types, the type annotations or type hints. And SQLModel does a lot of internal work so that whenever you use any part of SQLModel, you will get that type information in your editor. For example, if you query a table to get data from the database, the result that you get back, the object that you get back, will have, internally, all that type information, so that the editor will be able to provide you with all the completion and all the inline errors and all those things. SQLModel, in fact, sacrifices some of the more advanced or obscure or sophisticated use cases that SQLAlchemy supports, and sacrifices those to instead get very good autocompletion and inline errors everywhere in the code. And there's another thing about SQLModel: it uses some draft standards that are not even finished yet, not even part of the final standards yet, but they are already supported by some editors. For example, Visual Studio Code already supports providing autocompletion when you are creating a new instance of a particular class. Having this autocompletion is not very easy to do with other tools, because the editor doesn't have any information about what are the parameters that you can pass, what are the arguments that you can pass.

34:58 Which, when you create a Pydantic model, it doesn't anywhere indicate: here is the constructor, the initializer, and here are the keyword arguments that happen to be all the fields, the static fields.

35:11 People just see, like, keyword arguments, or like star-star-data, something like that.

35:16 Yeah, but I always think thanks for nothing when I see that.

35:20 Yeah, exactly. But actually, Pydantic 1.9 includes the same trick, so now you get autocompletion in Visual Studio Code. In PyCharm, you already have autocompletion with Pydantic, because they have a plugin for Pydantic to provide the completion for those things, but it requires this particular plugin. Now, with this same standard, you can also get autocompletion in VS Code without needing any plugin. You get autocompletion for SQLModel in Visual Studio Code, and I've seen the people from PyCharm were also checking it out, to maybe support the same standard, which would allow PyCharm to provide automatic autocompletion for SQLModel and other libraries like Pydantic and others.

36:03 Sure, yeah, that's great. Definitely the widespread use of Pydantic is effectively forcing everyone to go, all right, how can we make this work better on the creation side? So RJL out there has a comment, which then leads me to an interesting question: I'm old-fashioned, I use direct SQL statements, no ORM. I really need to take the time to go down this route. Indeed, I do think so.

36:28 It's certainly worthwhile. What are your thoughts on using straight SQL versus not? Then I'll ask my question.

36:33 So I think it's just a lot about taste and how people prefer to code. There are a lot of people that are so comfortable with SQL, and they can do so many things with SQL very easily, that it's just more efficient to use SQL directly. For me, some of the advantages with an ORM are that I get inline errors, that I get autocompletion for what is the name of the attribute. If I forget, the editor will autocomplete that for me, say, secret underscore name. But if I'm typing that inside of just a long string in Python using SQL, then I have to remember, because no one will tell me that I have a syntax error in my SQL, or that I'm using an attribute that doesn't exist.

37:14 One of the things that actually blew my mind is PyCharm. If you set it up so you basically connect the database to your project, it will give you autocomplete and error checking inside strings inside Python for your schema, which is amazing. That said, I never do that, because to me, one of the things that is super valuable, one is this autocomplete, the other is refactoring as well. Did we change the name of that? Well, there was that one query we didn't update, and now it crashes in production, but only sometimes, when it hits this case. And just the way it sticks together and stays consistent, to me, seems a lot stronger with models. And also the ability to swap back ends, right? The way you do parameterized queries is different across different database back ends.

38:02 Yeah. Also, if you write SQL by hand, then you have to be super careful, and probably you have to be a SQL wizard and know how to sanitize all the data that you're putting in, or otherwise you could end up with SQL injection. But, as I'm saying, there are a lot of people that really prefer writing SQL directly. The author of Pydantic, which SQLModel is based on, prefers to write SQL directly. And he's using FastAPI and everything, but he's just more comfortable with that. And the author of psycopg, the driver for Postgres, he just uses SQL directly. It's just more comfortable for him. He's an author I love, but still, it's just more comfortable for him. So I guess it depends a lot. For me, I depend a lot on the tooling and editor support and refactoring. As you were saying, if I change the name, I know that it's changed everywhere, because I won't remember. I don't remember.

39:02 Yeah, absolutely. Martin in the audience asks an interesting question. Down here is a better example: one of the challenges of ORMs is to make set-based operations apply back to the database. Like, I want to change this field, I want to set an is-on-sale flag to true for all products where the price is less than $10, right? Where I'm not going to pull... I don't want to go, let me query all products whose prices are less than $10, change it on the objects, and then push those changes.

39:35 I just want to say update where this, set that. You know what I mean?

39:39 Yeah.

39:39 What's the story about that with SQLModel? Because that's one of the things that can really hammer productivity, or speed, I guess, if you've got to pull back a whole bunch of stuff just to make sort of consistent changes across it, you know what I'm saying?

39:52 Yeah. This is one of the use cases where you will want to interact directly with SQLAlchemy, and you can do that through SQLModel. You can write queries as complex as you want through SQLModel, but using pure SQLAlchemy underneath, and you can use very advanced things with SQLAlchemy. SQLModel focuses a lot on the simplest and most common use cases, providing the best developer experience and certainty that the code is as error-free as possible, because you have all these type annotations and all these things. But for any case that is a little bit more advanced, you can just drop down directly to SQLAlchemy. And because SQLModel is just pure SQLAlchemy, the models are themselves just SQLAlchemy, so you can use SQLAlchemy directly. In fact, you could use one of these models with a SQLAlchemy engine directly, and it would work.
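
As a hedged sketch of that drop-down, here's a set-based UPDATE written with SQLAlchemy against an assumed SQLModel table Product (with price and is_on_sale fields) and an assumed engine, using the session's plain SQLAlchemy execute method:

```python
from sqlalchemy import update
from sqlmodel import Session

with Session(engine) as session:
    # One UPDATE statement runs in the database; no objects are pulled back.
    statement = update(Product).where(Product.price < 10).values(is_on_sale=True)
    session.execute(statement)  # plain SQLAlchemy method, available on the session
    session.commit()
```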

40:47 Interesting. Okay. Sky points out in the audience that you are too humble to mention that the PR that got that autocomplete into VS Code was actually yours. Well done.

40:58 Thank you.

40:58 Keep moving it forward on both fronts. So what about performance? There's extra goodness in the validation and the type conversions and stuff like that of, say, Pydantic. But is there a large overhead for using this, say, over SQLAlchemy, or over, say, raw SQL?

41:16 Yeah.

41:18 When you declare a model and you say, hey, this is table mode, so this is the equivalent of a SQLAlchemy model, then a lot of the validation and that stuff with Pydantic is skipped. So when you create a model, it will not be validated on creation for that particular table, because this will be handled directly by SQLAlchemy. For example, with SQLAlchemy, you can create an instance of a model without setting all the attributes, and then you can set the attributes manually afterwards. If Pydantic was doing validation for that, that would explode and would say, hey, this is invalid.

41:51 Yeah.

41:51 So when you are working with the SQL parts alone through SQLModel, then it's just like using SQLAlchemy directly. Okay.

41:59 So it's not really any different in terms of performance; whatever SQLAlchemy does, this does.

42:05 Exactly.

42:07 The code is so slim, it's so little, that whatever the overhead is, I would think it would be negligible. At the same time, I'm not optimizing for squeezing out the maximum performance, but for getting the maximum correctness in the code and the best developer experience, because I feel it helps a lot more to be productive as a developer, building the tool and making sure that it's all correct, than having the code super fast but very difficult to develop, to understand, and to write correctly.

42:38 Okay. So I guess you've just got to decide, is an ORM the right fit at all?

42:43 And if it is.

42:43 This is a pretty good choice if you like this API.

42:46 Yeah, absolutely. If you really need the maximum performance that you can get, you will probably end up just using the driver directly and writing SQL directly for the particular endpoint that needs this extra boost in performance. But for most of the other cases, this will probably help, making sure that the code is correct, and making sure that you can write code quickly and ship the features that you need quickly.

43:14 This portion of Talk Python To Me is brought to you by Tonic.ai. Creating quality test data for developers is a complex, never-ending chore that eats up valuable engineering resources. Random data doesn't do it, and production data is not safe or legal for developers to use. What if you could mimic your entire production database to create a realistic data set with zero sensitive data? Tonic.ai does exactly that. With Tonic, you can generate fake data that looks, acts, and behaves like production data, because it's made from production data. Using their universal data connectors and a flexible API, Tonic integrates seamlessly into your existing pipelines and allows you to shape and size your data to the scale, realism, and degree of privacy that you need. Their platform offers advanced subsetting, secure de-identification, and ML-driven data synthesis to create targeted test data for all of your pre-production environments. Your newly mimicked data sets are safe to share with developers, QA, data scientists, and, heck, even distributed teams around the world. Shorten development cycles, eliminate the need for cumbersome data pipeline work, and mathematically guarantee the privacy of your data with Tonic.ai. Check out their service right now at talkpython.fm/tonic, or just click the link in your podcast player's show notes. Be sure to use our link, talkpython.fm/tonic, so they know you heard about them from us.

44:42 I've got one more thing I want to talk to you about. Then, I put out a post on Twitter: hey, I'm talking to Sebastian, what are the questions we should be asking? And I got a bunch of great ones on Twitter, so I do want to touch on those as well.

44:54 Nice.

44:55 One of the things, though, that you spoke about, and this is just generally true for SQLAlchemy: these models, they are tied back to what's called a session, or a unit of work, to talk to the database. And you can't just do a query, get a record, and then go to a separate situation and try to jam it back in.

45:17 It's got to be stuck to the session that it comes from, right? So you don't share the models across sessions. But one of the things that would be nice is just to have a single one, and so FastAPI has a dependency injection system, that you talked about, which can be used for basically always providing one and only one session to an API endpoint or a web endpoint. That then could do the database management, like creating a unit of work that is the lifetime of the request, basically. Do you want to talk about that?

45:47 Yeah, exactly.

45:49 I think you described it perfectly. I don't know what else I can add, but let's try. So this dependency injection system, it's just, you declare some function, and FastAPI will make sure to run that function and provide the value to all the things that need that value for one particular request.

46:05 Right? This has nothing to do with the database, by the way. This could be anything. It could be a login framework, whatever. Right?

46:09 Exactly. So this is very useful for doing logging, for doing authentication, for doing authorization with roles and whatnot, for setting up things that log stuff to remote services like Sentry or Datadog or, I don't know, for all those things that you need to do where there is some logic that needs to be shared, and that could run before the request is handled, and maybe after the request is done, and then you can share this information. So in many frameworks, there is a concept of a middleware, which is something that runs before the request and after the request, but then that thing has to run for every request. With dependencies, with this dependency injection system, you can define exactly where you want it to run. You can define, I want this to be run with a group of endpoints, or with a group of path operations, as I like to call them, or for just one particular endpoint, or for a bunch of them. And with this system, you can extract and generate whatever it is that you need for the particular request. And the good thing about the dependency injection system is that if you're extracting information from the request, for example from a header, then this information will also be extracted and included by FastAPI in the OpenAPI schema and all these standards, so you will get that information in the automatically generated user interface.

47:27 Very cool. So what steps do I have to take with dependency injection to get that session to show up? I remember you had it in the documentation, but I don't remember where it is right now.

47:38 I think for the particular case of SQLModel, even though FastAPI and SQLModel are independent, they are made to be very compatible with each other. I have a lot of documentation about writing applications with FastAPI and SQLModel in the SQLModel docs. The way that you will handle a FastAPI dependency in general is that from FastAPI you import this special function, Depends, and then you create some function that will return some value. This function will have the same style as any other function that handles a particular request, so it can have some parameters with some types, and that information will be extracted from the request, and it just returns something. This is just a plain old function, and this will be the dependency. Then you pass that function as a parameter to Depends, and you put Depends as the default value of some parameter in your main function that handles the request.

48:37 I see. So you could say session equals, like, Depends, here's some function that will get called to create the session. Session equals...

48:44 Depends, calling Depends with the function that is named get_session, or something like that.

48:49 Do you have a way to see both sides of that with dependency injection? So does it just return the value? Or can you create a session and then yield the value and then keep processing or something along those lines?

49:01 Exactly like that. You can create a session, you can yield the value, and then after the request is done, you can continue doing more stuff after yielding that particular session. So you can create the session in the dependency, and then the main function that handles the request will have that session, and the same dependency can take care of closing the session after the request is done.

49:26 Exactly: try, yield the session, finally, close it. Maybe, if there's no exception, commit it, something like that.

49:30 Exactly.
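
A sketch of that session dependency, following the pattern in the FastAPI and SQLModel docs; Hero and the connection string are assumptions carried over from the earlier sketches:

```python
from fastapi import Depends, FastAPI
from sqlmodel import Session, create_engine

engine = create_engine("sqlite:///database.db")
app = FastAPI()

def get_session():
    # One session per request; closed automatically when the request finishes.
    with Session(engine) as session:
        yield session

@app.get("/heroes/{hero_id}")
def read_hero(hero_id: int, session: Session = Depends(get_session)):
    return session.get(Hero, hero_id)
```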

49:30 Okay, that's pretty flexible. Yeah.

49:32 And in the main function that you have, you don't have to take care of creating the session or closing it or making sure that there are no exceptions, because you can do all that stuff in the dependency and share that logic throughout your code. And the other thing is that dependencies can themselves depend on other dependencies. So you can create a dependency that gets the session for the database, and then you can use that in another dependency that gets the current user: it extracts the user ID from the header, from the authentication token or whatever, gets the user from the database, and then returns the current user. So you have a whole dependency that just takes care of returning the current user, making sure that it's authenticated. And then you can reuse that code in all your endpoints, or in all the main functions that handle the requests, and all those functions will be able to just get the current user right away, without having to have all the logic to extract the information, process the token, all that stuff. A sketch of that chaining follows.
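
This is a hedged sketch of that chained dependency; get_session and app come from the previous sketch, User and UserRead from the earlier models sketch, and decode_token is a hypothetical helper standing in for real token handling:

```python
from fastapi import Depends, Header, HTTPException
from sqlmodel import Session

def get_current_user(
    authorization: str = Header(...),
    session: Session = Depends(get_session),  # a dependency inside a dependency
):
    user_id = decode_token(authorization)  # hypothetical token-decoding helper
    user = session.get(User, user_id)
    if user is None:
        raise HTTPException(status_code=401, detail="Not authenticated")
    return user

@app.get("/me", response_model=UserRead)
def read_me(current_user: User = Depends(get_current_user)):
    # Every endpoint can reuse get_current_user instead of re-doing the logic.
    return current_user
```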

50:30 Sure. This is very neat. I haven't used this enough during my work with FastAPI, so I've got to check this out. All right. Now let's do a bit of a lightning round here of the Twitter questions, because I've seen some of these questions come up in the live chat.

50:45 I pulled out these ones from Twitter that I thought were pretty good. Matt Webster says, any plans to wrap or replace Alembic to make migrations more developer friendly? So first of all, migrations: you've got to keep your database in sync with your models, otherwise SQLAlchemy, and hence SQLModel, will freak out, because it's going to be a problem. But Alembic, while it works, it's a little bit hard to say, like, here's all the models you need to pay attention to, and here's the scenario where you run it. It's a little bit clunky. It works well, but it's not super smooth. And I think that's what Matt is asking here.

51:19 Alembic is the official tool from SQLAlchemy to do the migrations. And because SQLModel itself is also just SQLAlchemy underneath, Alembic works with it perfectly. Alembic is a great tool. It's super advanced and helps a lot. It can even generate automatic migrations and things like that. I think the main problem with Alembic is that in some cases it's not as intuitive. So yes, what I want to do at some point is to wrap a bit of Alembic. I won't replace it, because it's already doing a magnificent job, and it would be super difficult to rewrite all that logic and all that work that Alembic has been doing for a very long time with SQLAlchemy.

51:59 I will wrap it, and I will try to add a bit more documentation to explain how to handle the simplest cases, which is the same thing that I'm doing with SQLModel. If you need something more complex, you will probably just go to Alembic directly.

52:14 Directly related to this, a not-Twitter question, a Michael question: do you have any thoughts about testing code using these models, and stuff like fake data or mocking out the database? Beyond just the standard stuff you would do with SQLAlchemy, is there anything special about SQLModel that makes testing it easier or different than SQLAlchemy?

52:31 No, the testing will be pretty similar to SQLAlchemy.

52:34 Pretty similar, yes.

52:35 I just have a lot of documentation on how to do testing, and even how to do testing with FastAPI applications using SQLModel, and how to, for example, use a SQLite database for testing that will run in memory, instead of the production database that would be Postgres or MySQL.

52:52 Just change the connection string to the engine and... Exactly. Let it go. Okay.

52:56 And then make it run with a database in memory and then make it work correctly with threads and everything.

53:01 So something like a pytest fixture that initializes the database, but you just use colon-memory-colon for the connection string, so that it just goes away.

53:10 Yes, exactly like that. It's documented exactly like that.
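
A hedged sketch of that fixture, loosely following the SQLModel testing docs:

```python
import pytest
from sqlmodel import Session, SQLModel, create_engine
from sqlmodel.pool import StaticPool

@pytest.fixture
def session():
    # In-memory SQLite; StaticPool shares the single connection across
    # threads so the test client and the test see the same database.
    engine = create_engine(
        "sqlite://",
        connect_args={"check_same_thread": False},
        poolclass=StaticPool,
    )
    SQLModel.metadata.create_all(engine)
    with Session(engine) as session:
        yield session
```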

53:13 All right. Right on. Ricky Lim asks, could SQLModel be part of the standard Python library? I have some thoughts on this, but I want to hear your thoughts first. I have some historical perspective on this.

53:26 Okay. So it will not be part of the standard library. Ideally, it would not be part of the standard library, because if it were, it would mean that it would be available in Python 3.13 or something, and the users of Python 3.10 right now would not be able to use it, on one side. On the other side, having more stuff in the Python standard library adds more inconvenience and more burden to the core maintainers, the core developers of Python, which makes it even more difficult for them to continue supporting Python and all the different versions. And it would also complicate things for Brett Cannon, who is trying to figure out a way to slim down Python so that you can, for example, run it directly in the web browser.

54:14 How do we build it in WebAssembly?

54:15 Yes, WebAssembly. Brett Cannon and friends have been doing a very good job. It's very exciting. A lot of that will probably actually require slimming it down a bit, as far as I understand, but I'm not an expert.

54:28 Yes, I imagine a world where we have, I don't know what the right word for it is, but there's like a standard, cross-environment Python, a minimum set of language features and standard library, where things like stuff that talks on the network or does UI things or whatever, that is not part of this minimum subset of Python that we are guaranteed to have, so that we can put it on WebAssembly, we can put it on mobile devices, we can put it on servers. And as long as you program to this minimum set, the places where your Python can exist are broader, like MicroPython, potentially.

55:08 I think that that's the trend, not the trend towards putting more stuff there.

55:12 Yeah, exactly.

55:14 And it's fun that it's already happening. MicroPython is already that. It's just that it cannot say it's a standard Python, because it has to leave out a lot of things. But being able to have a microcontroller and write code in Python that is running on the microcontroller, that's mind-blowing. Yes.

55:31 The most mind-blowing thing for me is that you can hook a lambda expression directly to a hardware interrupt.

55:39 That is amazing, what you can do. The historical perspective I want to bring up here is, I believe the core developers actually considered this for Requests, and they decided that, no, they're not going to put Requests in the standard library to replace urllib, because it would limit Requests' ability to grow. Changes could only come once a year; they couldn't come three times a week if there were important changes, right? The speed of development would be hindered. So they said, you know what, no, we don't want it.

56:08 Yeah. Good point; that settles it.

56:09 All right. Next, Dimitri Figo asks: are you considering working on generating TypeScript declaration files based on what's defined on the FastAPI back end? That was the documentation I showed, where it has the schema, the endpoints, and all that.

56:24 Yeah. To explain: when you go to the automatic interactive documentation for the API, that is all based on this standard schema of the API called OpenAPI. This is just a huge JSON that defines all the data shapes that you're using, all the endpoints, everything. And because it's a standard, you can use that same thing to generate code for clients that communicate with your back end. In fact, there's a bunch of client generators for many languages, including TypeScript. I have used some of them, and they are actually very good. You can achieve things like defining in the back end the data shapes that you're using; then you update something, and then you regenerate the client in the front end. And after that, the front end team will be able to access this new API endpoint with auto-completion in their editor and everything.

57:17 It works very well. It's super exciting. I just haven't had the time to document the whole recipe to make it work. But it's already there. It's already working and it already does a great job.
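As a rough illustration of the raw material those generators consume, here is a hedged sketch that dumps a FastAPI app's OpenAPI schema to a file; the endpoint is a stand-in, and the generator tooling itself is left out since there are several options:

```python
# Hedged sketch: write out the OpenAPI schema that TypeScript client
# generators consume. The endpoint here is just a stand-in example.
import json
from fastapi import FastAPI

app = FastAPI()

@app.get("/items/{item_id}")
def read_item(item_id: int) -> dict:
    return {"item_id": item_id}

# The same JSON is served live at /openapi.json when the app runs.
with open("openapi.json", "w") as f:
    json.dump(app.openapi(), f, indent=2)
```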

57:28 Maybe if somebody wanted to contribute a PR or help out there, they could, right?

57:33 Yeah.

57:34 Even a blog post would be a lot faster to get out, and that would help a lot of people.

57:41 Indeed. Zach Code says, I'd love to hear how you approach figuring out the integration with SQLAlchemy. I mean, we talked a bit about this, but any other lessons you've learned from basically getting in the middle of SQLAlchemy and all of that? It does, yeah.

57:56 It's very interesting. SQLAlchemy was created at a time of, I don't know, Python 2.something, when there were no context managers. So that thing that you do with a lock, where you say "with something as" blah, blah, blah, and then the code goes inside of that block, that was not available; that didn't exist. SQLAlchemy was made before that. So SQLAlchemy had to do a lot of sophisticated tricks to make everything work. And then getting down inside of it and trying to understand, why is this thing doing this and working like this? It's because of those things. I think I ended up learning a lot about those little details, and a lot about how classes work internally, how a class is an instance of what, and all those things, and how you can configure all that. But the idea was to take all this and make it super easy for you to use, without having to deal with all the internal complexity. Yeah.
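For reference, this is the context-manager style being described, sketched here with SQLModel's Session; the connection string is illustrative:

```python
# The `with` pattern being discussed: the session is opened and reliably
# closed by the context manager, even if the block raises an exception.
from sqlmodel import Session, create_engine

engine = create_engine("sqlite:///database.db")  # illustrative connection string

with Session(engine) as session:
    ...  # queries and commits go here; no manual session.close() needed
```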

58:55 Right, you don't have to know it; only you had to know it and deal with it. So I think it's worth pointing out, I did have Mike Bayer on the show recently to talk about SQLAlchemy 2.0 and how they're moving to have basically the client-side view of that be everything as a context manager, and sort of change it up a bit. So how close is this to the 2.0 model, or is it the 1.0 model API?

59:18 Yeah. So Mike Bayer did a lot of work to make the compatibility transition as easy as possible. And SQLAlchemy, the latest available version, which is 1.4, is compatible with the previous style and with the new style. So code that is written with the new style will be compatible with SQLAlchemy 2.point-whatever and above. SQLModel is based on this new style. So for example, if you have an old application with SQLAlchemy, the first thing that you will want to do is migrate to SQLAlchemy 1.4, make sure that it's compatible with the new style, and make sure that you don't have any warnings. That's the main thing you will do to make sure that it's compatible. And then after that you can migrate to SQLModel. The migration is also super simple. It's just changing some classes to use type annotations.
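To make "changing some classes to use type annotations" concrete, here is a hedged before/after sketch; the Hero model is invented for illustration:

```python
# Hypothetical illustration of the migration. The old declarative style:
#
#     class Hero(Base):
#         __tablename__ = "hero"
#         id = Column(Integer, primary_key=True)
#         name = Column(String)
#
# becomes a single annotation-driven SQLModel class:
from typing import Optional
from sqlmodel import Field, SQLModel

class Hero(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str
```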

01:00:41 Yeah, absolutely. There was a question.

01:00:46 The question by Python at Night was: what would be the level of effort, and the benefit if any, of converting SQLAlchemy models and schema to SQLModel? Sounds like the effort is small and the benefit is all the features we spoke about, right?

01:01:00 Yeah.

01:01:02 Okay. Would you recommend it? Say people are using SQLAlchemy and they're like, I really would like to have some of that Pydantic magic. When would you say, okay, the benefit is worth the trouble of making the change?

01:01:14 So if it were me, I would just use it right away. Right now it's in version 0.something; I will release 0.1.0 once I have 100% test coverage. Right now it's at 97%, just because I'm not quite there yet. But most of it should already work. It's actually very simple, and if anything, the work that SQLModel itself does is very small, because all the heavy lifting is done by Pydantic and SQLAlchemy. If anything went wrong, you could also just switch back to SQLAlchemy directly. It's just that you would lose the benefits.

01:01:48 Right. Basically change your class back to deriving from the SQLAlchemy base and you're good to go.

01:01:53 Yeah. The benefit that you get is auto-completion and inline errors everywhere you are using these classes, which you would normally not get.

01:02:01 And the integration with response_model, and all of that.

01:02:04 Yes, of course, the integration with response_model. Actually, that's a lot of code that you save if you can share the models between Pydantic and SQLAlchemy. Just make sure that you follow all the information about how to pin and how to upgrade versions, because it's laid out in detail how you should go about that. Because things are still changing and there's still a little bit of extra testing to do, you should be careful and not just install whatever version comes. Make sure that you pin the right version and you have tests; then, when you upgrade, make sure that the tests are passing, and then you can do the upgrade.
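In practice, that pinning advice looks something like this hypothetical requirements file; the version numbers are placeholders, not recommendations:

```
# Pin exact versions while the 0.x API is still moving; bump deliberately,
# run the test suite, and only then commit the new pins.
sqlmodel==0.0.6
fastapi==0.70.1
```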

01:02:40 Yeah, good advice. Already talked about that one.

01:02:44 So Latto asks: are you going to stretch modern Python conventions from the back end, which in part you already did, we talked about using the types and such for model binding, to the front end as well? Should we expect something like ReactPy, React but in Python? Do you care about the front end, in the sense that you have any intention to build stuff for it?

01:03:09 Yes, I care a lot about the front end. Actually, I think I started in that area; I have worked with Angular, Vue.js and all that stuff. I think it would be amazing to be able to write Python for the front end. But if someone is going to make that happen at some point, it's probably that work by Brett Cannon and Christian Heimes making WebAssembly work for Python. Yes.

01:03:30 You need the runtime there first, and then it'll go much more easily. I absolutely think that it is not quite negligence, but it's close, that the browser makers don't package other runtimes that are WebAssembly compatible. Right? They should go to Ruby, they should go to Java, they should go to .NET, and they should go to Python and say: are you willing to provide us a runtime that does X, Y, and Z, that we can integrate in a generic way and include in our browser? So you don't have to say, oh well, you can't use these other advanced things because the WebAssembly download is ten megs. If Firefox, Chrome and Safari all shipped, say, the five most common languages as binaries, you would just have it, and it would just be there on the web.

01:04:19 Why does this not happen?

01:04:21 But that's the real blocker for ReactPy; that piece isn't there yet, right?

01:04:25 Yeah.

01:04:25 All right. Quasi asks: what is he doing to address the bus factor? That is, if you get hit by a bus. And related to that, someone in the audience asks: will you add a moderator to the project so it can become a community-driven project and there's less burden on you? I think those are kind of similar questions from different perspectives.

01:04:47 Yeah. So the thing is, for most of these projects, most of the work can already be done by the community. It's not that the work cannot be done. It's just that I don't want to simply enable a bunch of permissions for a lot of people to just go and merge pull requests very quickly, because I like to make sure that everything works. For example, for yesterday's FastAPI release, the pull request had like four approvals, but it still had a couple of bugs and a couple of things that needed to be solved. And I need to make sure that the code quality is kept and that everything is working correctly. So for now, I'm still making sure that I review each one of the pull requests. But if people went and checked those pull requests, reviewed the code, tested it, and told me, hey, this is working, I use it and it's working in my application, things like that would of course help a lot.

01:05:43 That's great. Obviously it's open source. People can fork it, they can run with it. If you actually got hit by a bus, I think FastAPI would keep going. There would just be a period of figuring out, all right, well, where's it going to center back around, before it settles down. Not that anybody wants that. I think these questions are a reflection of how significant an impact you're having on the community, right?

01:06:03 Yeah.

01:06:04 I find the bus factor funny. I have been wanting to write a blog post about that for a while, because I think the bus factor is something that matters a lot to investors or to founders that are not developers, when they are associating with someone who is the only one that knows the product but they want to own half of it. If this person dies, they just lose all their investment.

01:06:29 We can keep it going. Yeah, absolutely.

01:06:31 For example, many of the projects from Encode are mainly Tom Christie, which is one person. The maintainer of Flask, which is huge, is mainly David Lord, and he's just suffering through all of this, through all the abuse from developers, and doing a lot of the work. Probably, I don't know, he has another job or something like that as well. In the case of FastAPI, there are people like Marcelo who are helping a lot, and other contributors too. That helps a lot with keeping the community, maintaining it, and doing all the work that needs to be done underneath. That doesn't really affect how it's working; the fact that the repository is not under another GitHub owner or a GitHub organization is just because it's easier to handle. But beyond that, there's a lot of people already contributing, and that's the work that actually keeps it maintained and sustained. Exactly.

01:07:26 That's the stuff that matters. Yeah, absolutely. Olupia out there says: I just want to say thanks to you, your work is so important and just great. So yeah, there are definitely people out there loving it. Okay, let's get back to SQLModel. The roadmap for the future, what are the plans? So, migrations?

01:07:45 I want to have a small wrapper, a command line interface built on top of Typer, so that you can get auto-completion in the terminal as well, to handle migrations. Then documentation for using async with SQLModel; async is already supported by SQLAlchemy.
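That CLI doesn't exist yet, so as a flavor of the kind of Typer tool being described, here is a purely hypothetical sketch; the command and its behavior are entirely invented:

```python
# A toy Typer app, purely illustrative of the kind of CLI being described.
# Typer derives arguments, help text, and shell auto-completion from the
# function signature's type annotations.
import typer

app = typer.Typer()

@app.command()
def migrate(message: str, dry_run: bool = False):
    """Pretend to create a new migration."""
    action = "Would create" if dry_run else "Creating"
    typer.echo(f"{action} migration: {message}")

if __name__ == "__main__":
    app()
```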

01:08:03 Yeah. And that's a new thing, right? That is, you've now got to create an async engine, or an async session rather, instead of a regular session in SQLAlchemy. But that's one of the big 2.0 changes, and it might just push down so that it flows through.

01:08:18 Yeah. You can already use it. In fact, there are people using it in production applications right now; it's just that I don't have it documented yet. SQLAlchemy already supports both the normal blocking interface, the regular interface, and the async interface, and you can already use both with SQLModel. I want to document all that.

01:08:38 Right. So what you should do is change your dependency injection based on whether you have a def method or an async def method in FastAPI, and create an async session rather than a regular session. And then boom, off you go, right?

01:08:52 That is one of the things that I think is so smart about the design of SQLAlchemy, which SQLModel follows: the thing that handles whether it's async or not is the engine, not the models themselves. So you can use the same models whether it's async or not.
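A minimal sketch of that pattern, assuming SQLAlchemy 1.4's asyncio extension and the aiosqlite driver; names like get_session are illustrative, not an official recipe:

```python
# Hedged sketch: the engine/session is what's async, and a FastAPI
# dependency hands each request a session. Names here are illustrative.
from fastapi import Depends, FastAPI
from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine
from sqlalchemy.orm import sessionmaker

engine = create_async_engine("sqlite+aiosqlite:///database.db")
make_session = sessionmaker(engine, class_=AsyncSession, expire_on_commit=False)

app = FastAPI()

async def get_session():
    # async generator dependency: the session is closed after the response
    async with make_session() as session:
        yield session

@app.get("/ping")
async def ping(session: AsyncSession = Depends(get_session)):
    return {"ok": True}
```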

01:09:07 I agree. Very nice. There's another good question there, but I'm going to keep going because we're short on time. David Smith asked: are there plans to add async? Yes; so the question really is, are there plans to document async?

01:09:16 Yeah, exactly.

01:09:17 At this point.

01:09:18 Right.

01:09:18 We kind of touched on this one about whether you should use it for an existing project, whether you should be migrating; we touched on that one. Brandon, who I think I saw in the audience earlier, hey Brandon, asks: could you go ahead and make a NoSQL model? This is a chance for me to mention Beanie, and I wanted to ask if you've had a chance to look at it. Roman Right was also out in the audience; I saw him. So Beanie is an ODM for MongoDB, basically very similar, also based on Pydantic. It immediately came to mind when I thought about SQLModel: here's the MongoDB version that also tries to do the same thing. Have you thought about a NoSQL story? Have you looked at Beanie? What are your thoughts here?

01:10:00 Yeah, I really like both of the alternatives for MongoDB. One is Beanie; the other one is ODMantic. I think they are both doing a great job. The particular thing I like about ODMantic is that it uses the same style of interface as SQLAlchemy: the thing that decides if it's async or not is the engine and not the model. Both Beanie and ODMantic are async right now, but having the engine be the thing that is async or not would allow implementing a regular, blocking version, so that you could have MongoDB models that are shared and reused for async code and for blocking code in the same application, and you could migrate more gradually and things like that. But I think both do a great job and have a very nice interface that is very close to Python.
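For a taste of the MongoDB side, here is a hedged Beanie sketch; the model and database names are invented, and API details may vary between versions:

```python
# Hypothetical Beanie model: a Pydantic-style Document plus async setup.
import anyio
from beanie import Document, init_beanie
from motor.motor_asyncio import AsyncIOMotorClient

class Hero(Document):
    name: str
    secret_name: str

async def main() -> None:
    client = AsyncIOMotorClient("mongodb://localhost:27017")
    await init_beanie(database=client.heroes_db, document_models=[Hero])
    await Hero(name="Deadpond", secret_name="Dive Wilson").insert()

anyio.run(main)
```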

01:10:56 Yeah, this is a sidebar: I do wish it were easier in Python to convert an async call to a synchronous call, knowing that it would block. Just go, okay, here's an async method; if I could just call wait on it and make it execute, basically stop the async running right there, then you could just do the query, like, give me the answer. Whereas it's a little more tricky: you've got to get it into the loop and run the loop to completion.

01:11:24 Yeah. That's why I just built Asyncer, which is built on top of AnyIO. The idea is that you have these functions, asyncify and syncify, to do exactly that. You can pass one function that is async and it will be run inside of the main event loop, or you can say, hey, asyncify this thing, and it will run the blocking function in a thread pool so that it doesn't block the main event loop. But actually the work is done by AnyIO; Asyncer is again just doing the same thing of adding type annotations, auto-completion, and all that stuff on top. That is cool.
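Here is a small sketch of the asyncify side of that, assuming Asyncer's published API; the slow_add function is just a stand-in for blocking work:

```python
# Minimal sketch of Asyncer's asyncify: run a blocking function in a
# worker thread from async code, keeping full type information.
import time
import anyio
from asyncer import asyncify

def slow_add(a: int, b: int) -> int:
    time.sleep(1)  # stands in for blocking work
    return a + b

async def main() -> None:
    # the event loop stays free while slow_add runs in a thread pool
    result = await asyncify(slow_add)(2, 3)
    print(result)

anyio.run(main)
```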

01:12:04 I definitely came across this not long ago, and it looks very exciting. I wanted to talk to you about it, but as you can see, we're way over time already for just our main topic. So let's get back to it; maybe next time you're on, we can just talk about async stuff all day. Mike out there asks: what is the risk of using it in production?

01:12:20 The risk is the risk that you have for using any software in production.

01:12:26 Let me rephrase it: what is the readiness for production? That's probably what he was thinking.

01:12:32 Yeah. So the thing is, most of the work is done by Pydantic and SQLAlchemy, which have been used for years, are doing an amazing job, and are already used by a lot of tools. SQLModel only does a little bit of extra stuff on top, just so that you can get all the documentation. Most of the work is done by those two. The other thing is that the test coverage is at 97%, so you have some certainty that it's working as intended, at least. I want to have it at 100%, and I want to have tests in continuous integration against several databases, because right now the tests only run against SQLite. But all the SQL stuff is actually done by SQLAlchemy, which is already tested against all the databases. So there's a trade-off in trying this thing that still has a little bit of extra work to do, but most of the extra things I still have to do on SQLModel are actually documentation, not that much code change.

01:13:28 And the ability to flip from one to the other pretty quickly means if you had to say, oh, it's not working out, we're switching back. It's not like, oh, well, you're completely rewriting. It's not that much work.

01:13:38 You just have to make sure that you pin the versions and upgrade carefully, and you get more certainty about the code's correctness because you have all the types.

01:13:49 All right, we got one question left. You have this ability to take what are often somewhat existing APIs and then improve them in ways that people really connect with.

01:14:01 FastAPI didn't start from an open TCP socket and go from there. It started on top of Starlette, right? And you already mentioned Tom Christie. SQLModel is on top of two very important libraries that really should go together.

01:14:15 The question Peter asked is: how did you learn to come up with APIs like you have? What's your recipe for building them?

01:14:22 I think the thing is that I have always been trying to solve a problem, and I have always been trying to improve my developer experience and the way things work for me. I ended up just trying to understand the best way to achieve those things. At some point I learned a little bit about type annotations, and I realized, hey, this can be super powerful; I can reuse it.

01:14:47 After looking and looking at different frameworks that did what I wanted, I ended up saying, okay, I just have to build it, because it doesn't exist yet. But it was just about achieving the thing that I wanted. For example, I wouldn't go and build a NoSQL model for MongoDB, because there are already Beanie and ODMantic; they are already solving the problem. I try to avoid building new things, but when there's a case where nothing is really solving the thing I want to have, then I go and try to do it. The other thing is, I think one of the main features of these tools is just the documentation. The thing about it is that I write it the way I would have liked to learn these things when I was just starting and struggling to understand what they all meant. I always have in mind, how would that newbie learn these things and understand them? That's probably the main thing: trying to make it as easy to use as possible.

01:15:46 Yeah. I feel there's a strong blend of: let's take the new things that are really useful, that maybe not everyone's using, and make them very accessible, make them easy and the default, and so on. Yeah, yeah.

01:15:56 That's the spirit of it, I think.

01:15:57 Yeah, absolutely. All right. Well, that's all the questions. We've been going a little bit long, but I really appreciate the time. Let's just real quickly ask you the final two questions to let you get out of here. All right. So if you're going to write some Python code, work on SQL Model or something else, what editor do you use these days?

01:16:14 I think both main editors, Visual Studio Code and PyCharm, are doing an amazing job at supporting all these tools right now. My main one is Visual Studio Code, but yeah, it's just one of those two.

01:16:26 Okay, very good. And then notable PyPI packages. I feel like we touched on these a little bit.

01:16:31 Yeah, we just mentioned them: ODMantic and Beanie, for doing the same SQLModel-style stuff, but for MongoDB.

01:16:38 Yeah, right on. I agree, those are both great. All right, Sebastian, it was great to have you back, and congratulations on SQLModel. Maybe next time we'll talk async, what do you think?

01:16:47 Awesome. Sounds great. Thank you very much, Michael, for inviting me. Thank you everyone for staying.

01:16:52 Yeah, you bet.

01:16:53 See you later.

01:16:55 This has been another episode of Talk Python to Me. Thank you to our sponsors; be sure to check out what they're offering. It really helps support the show.

01:17:03 Datadog gives you visibility into the whole system running your code. Visit 'talkpython.fm/datadog' and see what you've been missing. They'll throw in a free t-shirt with your free trial. Want to level up your Python? We have one of the largest catalogs of Python video courses over at Talk Python. Our content ranges from true beginners to deeply advanced topics like memory and async. And best of all, there's not a subscription in sight. Check it out for yourself at 'training.talkpython.fm'. Be sure to subscribe to the show: open your favorite podcast app and search for Python. We should be right at the top. You can also find the iTunes feed at /itunes, the Google Play feed at /play, and the direct RSS feed at /rss on talkpython.fm.

01:17:46 We're live streaming most of our recordings these days. If you want to be part of the show and have your comments featured on the air, be sure to subscribe to our YouTube channel at talkpython.fm/Youtube. This is your host, Michael Kennedy. Thanks so much for listening. I really appreciate it. Now get out there and write some Python code.
