
#433: Litestar: Effortlessly Build Performant APIs Transcript

Recorded on Wednesday, Aug 30, 2023.

00:00 We all know about Flask and Django.

00:02 And of course, FastAPI made a huge splash when it came on the scene a few years ago.

00:06 But new web frameworks are being created all the time.

00:09 And they have these earlier frameworks to borrow from as well.

00:12 On this episode, we dive into a new framework gaining a lot of traction called Litestar.

00:17 Will it be the foundation of your next project?

00:19 Join me as I get to know Litestar with its maintainers, Jacob Coffee, Janek Nouvertné, and Cody Fincher.

00:26 This is episode 433, recorded August 30th, 2023.

00:31 Welcome to Talk Python to Me, a weekly podcast on Python.

00:49 This is your host, Michael Kennedy.

00:51 Follow me on Mastodon, where I'm @mkennedy and follow the podcast using @talkpython, both on fosstodon.org.

00:59 Keep up with the show and listen to over seven years of past episodes at talkpython.fm.

01:04 We've started streaming most of our episodes live on YouTube.

01:07 Subscribe to our YouTube channel over at talkpython.fm/YouTube to get notified about upcoming shows and be part of that episode.

01:15 This episode is brought to you by Sentry and us over at Talk Python Training.

01:20 Please check out what we're both offering during our segments.

01:22 It really helps support the show.

01:24 Everyone, Janek, Jacob, Cody, welcome to Talk Python to Me.

01:29 Hey, thanks for having us.

01:31 Good to be here.

01:32 Yeah, it's great to have you all here.

01:34 Really excited to talk about one of my favorite topics, web frameworks, APIs, async performance, design patterns, like, let's do this.

01:43 Let's do it.

01:43 Cool.

01:44 So we're going to talk about Litestar, which is somewhat new to me.

01:49 I haven't known about it that long, but looking at the several thousand GitHub stars and the 2.0 release, it's definitely been going for a while.

01:58 So it's a really cool framework that I think people will definitely be excited to learn about.

02:03 But before we get to that, let's start with you all.

02:05 Just a quick introduction for each of you.

02:08 Go around the Brady Bunch squares of our video here.

02:12 Janek, you want to go first?

02:13 Yeah.

02:14 Yeah, sure.

02:14 My name is Janek.

02:15 Obviously, I'm a Python developer from Germany, currently living in Cologne.

02:20 So I'm a bit behind the other guys when it comes to the time zones.

02:26 I would put it a different way, Janek.

02:28 I'd say you're living in the future.

02:29 You're living hours ahead of time.

02:31 You know what's already happened.

02:32 Oh, yeah.

02:32 That sounds much nicer.

02:34 Yeah.

02:35 Well, I currently work as a Python developer.

02:39 Before I got into Python, I worked as a carpenter.

02:43 So I've built furniture and other cool stuff.

02:46 I think that's all we need to know right now.

02:49 So, Cody, why don't you continue?

02:52 Yeah, I'll be happy to.

02:53 So hey, Michael.

02:54 Hey, guys.

02:55 I'm Cody.

02:56 Really, I've kind of had an interesting journey into the Python space.

02:59 And so I'm kind of probably atypical from the rest of the team here.

03:03 And so I've actually been a long time database guy, specifically in Oracle.

03:07 And so lots of big, nasty data warehouses, large transaction systems, and building all the glue that you've got to do to make that stuff run.

03:17 And so I guess really my intro to Python was around DevOps: how do you make the whole environment stay running?

03:25 How do you keep it efficient?

03:27 And it's really focused on the database side of things.

03:29 And so about 10 years ago, I moved to Dallas from Alabama, where I'm originally from, and joined a small team of Oracle developers.

03:36 And so we got acquired by one of the big four consulting firms.

03:39 And so at that point, I shifted into development, and from development into cloud migrations.

03:45 So did quite a bit of just Oracle database migrations into the various cloud providers.

03:49 And now, about six years later, I've wound up at Google as part of the database black belt team there.

03:57 And I still try to figure out exactly what a database black belt is.

04:00 But a year in, really what I can tell you is that what I do is talk to all of our biggest customers and figure out what are the features and things that they need to make their enterprises run on Google Cloud.

04:10 And we work with the engineers to make that happen.

04:12 What an interesting background.

04:14 I would say having a really good background in databases, and especially the DevOps side of databases, is a pretty unique view for building a web framework.

04:22 A lot of people are all about, oh, I got to have something for my front end code, my JavaScript I'm writing, right?

04:27 And that's really not the same.

04:29 Yes, and my career actually originally started using, well, I guess now they're called low code tools, but we used to call them rapid application development.

04:37 And so there's just lots of database builders where there really wasn't any actual Python or any Java or those types of code involved.

04:44 It was all PL/SQL.

04:45 So I was actually happy to get involved in the Python world and move out of that space.

04:50 So yeah, excellent. Awesome. Jacob.

04:52 Hey, I'm pretty new to being a developer.

04:54 I spent the last four or five years in the system space, more on like the IT side of things, building systems and helping users.

05:04 The last year though, I've gotten into DevOps on my team at O'Reilly Auto Parts, not the book people.

05:10 Been really interesting, but recently I got to join this team and I'm learning a lot.

05:15 So this is really exciting for me to be here.

05:18 Yeah, well, it's awesome to have you here.

05:20 And like I said, a really exciting web project that I think people will appreciate.

05:25 So let's go ahead and jump into it.

05:28 So Litestar at Litestar.dev, effortlessly build performant APIs.

05:34 So who wants to give us the elevator pitch here?

05:38 The 30 second view of Litestar.

05:40 Janek, I think Cody should do that.

05:42 Oh, Janek, I'd like to hear what you think.

05:45 And then I can give you my perspective of how I kind of joined the team and what it means to me.

05:51 But I think it would be helpful to hear how you think about it.

05:54 All right. So what is Litestar?

05:56 Well, I think our tagline is pretty, puts it pretty well.

06:00 We definitely have a focus on building APIs.

06:04 So not, well, not typical monolithic HTML applications.

06:09 It's often compared to FastAPI, and it definitely has similarities; with FastAPI, it's already in the name.

06:17 It's also focused on building APIs, but what's really important for us is the effortless part.

06:23 So what we strive to do is take away all the, well, not all, but as much as we can of the boilerplate that developers usually have to write anyway when building any project of size, which includes lots of stuff like authorization, caching, ORM integration, and all these kinds of things that you usually have to do.

06:49 With micro frameworks like Flask or FastAPI or Starlette or any of the other ones out there, and there are a lot of them, all really great at what they're doing, you do have to build a lot of that boilerplate yourself, which can be good because it gives you a lot of control over what you're doing, and you can build it exactly how you want to.

07:13 But it's also not, well, it's not completely effortless, which is what we are trying to achieve.

07:20 Yeah, when I think about the web frameworks, I have this sort of bimodal distribution at like two ends.

07:27 On one end, you have Django, where it comes with all of these helpers and all of these things.

07:31 Like you just say, yes, I want a whole back end to manage my database with a UI, for example, right?

07:37 Well, there's a bunch of those kinds of things, form creation and the ORM migrations, all that stuff is kind of just, you get it all.

07:45 And the other end, you have Flask and you have FastAPI and a whole bunch of others, Sanic, you name it.

07:50 And there's a lot, and they're all about, we're going to go and handle the request, and then it's up to you.

07:58 Do you want a database?

07:58 You can have a database.

07:59 If you don't want one, don't have one.

08:01 In that regard, FastAPI itself is kind of almost prescriptive in that it talks about having like model exchange and defining models that create how the website works.

08:13 Whereas Flask doesn't even have that.

08:15 You just kind of nailed it.

08:16 And this is actually how I kind of came into it; it was actually Starlite at the time.

08:20 But I guess about four years ago, I built a pretty large scale Django app for some consulting work.

08:26 And it was really around data quality and data migration.

08:30 And it worked really well.

08:31 But I was reading all this stuff about FastAPI and I really liked what I was seeing.

08:36 And the developer experience of that was really incredible.

08:38 And the tools that those guys put together were just kind of second to none.

08:42 It was really refreshing to see that kind of built in, over what you see in Django.

08:48 And so I started working with FastAPI and really liked it.

08:51 But I got to where I felt like I had a lot of boilerplate that I added on top of that to get to that working app.

08:57 And so when we joined up with Google, one of the things that I did was say, okay, I've got a lot of like boilerplate for things that I need to run this app.

09:05 And maybe there's somewhere that I can contribute.

09:08 And so at that point, I started looking around at all the web frameworks.

09:12 And that's when I got introduced to Starlite, at the time.

09:15 And just to give a little bit of history, it was originally called Starlite because it was based on Starlette, just like a lot of the other ASGI frameworks out there.

09:24 And obviously Starlette is an awesome tool.

09:27 And we were kind of paying respect to that by naming it Starlite.

09:31 But obviously there are very few letters of difference between that and Starlette.

09:35 And so what we found is that on many of the posts that we made, people were confused: hey, did you mean Starlette?

09:40 Because I don't know what a Starlite is.

09:42 And so long story short, we said, okay, it's time for us to rename.

09:46 And I guess we're all not too original because we just flipped the wording around.

09:50 And that's how we came up with Litestar.

09:51 It's a cool name. I like it.

09:53 Yeah. And just for people who don't necessarily know, much of FastAPI's magic is that it's built on Starlette.

10:00 And so in a sense, you're running on the same foundation as FastAPI in that regard, right?

10:06 No, we're actually not anymore.

10:07 So...

10:08 Okay. That was the original. All right.

10:10 I think we dropped Starlette as a dependency about six or seven months ago, before version 2.0.

10:18 Yeah.

10:18 So in the beginning, we were...

10:20 Well, Starlette is built to be very, very modular.

10:23 You can, like FastAPI, use the whole thing and just extend the router without caring about anything else.

10:30 But it's also designed in such a way that you can just take certain things of it.

10:35 So you can say, okay, I just like the routing and the rest I'll do myself.

10:40 And what we originally did was we had our own router, our own routing system, and plugged that into Starlette and built our own application on top of that.

10:52 But over time, we diverged quite a lot from the way Starlette wanted to do things, or well, not wanted to do things, but Starlette became a bit restrictive because we wanted to do things very differently at very deep parts of the Starlette stack.

11:09 And so it kind of made sense to us that we basically just wrote our own and filled in the gaps that Starlette left behind, which wasn't an easy decision because Starlette is a great piece of technology and it's very well done.

11:25 And it's got a lot of credibility to it, right?

11:27 You know, there's a lot of people that run it in production, and that means something when you have a tool that is known to work well.

11:33 It was a bit of a challenge to get that going.

11:36 But yeah, at the moment, on the ASGI side, we are our very own thing.

11:42 We don't depend on anything else in that regard anymore.

11:46 Okay. And is that all Python, or has it got some other technology making it go in there?

11:52 So that part, the ASGI part is all Python.

11:55 We have some other non-Python parts, but they are not at the web serving side, let's say.

12:03 So we've rustified, I guess that's the term, if you will, at least one place.

12:07 That's the URL parsers, right?

12:08 Yeah, the query parsers.

12:10 Okay. That certainly is a strong trend these days.

12:12 Although I'm surprised for a framework that hasn't been around that long that it's already got rust.

12:16 No, just kidding.

12:16 We experimented about a year ago.

12:20 We experimented actually with more Rust, and that was to do the routing in Rust.

12:25 So we use a radix-based router.

12:29 That's something Sanic does as well.

12:32 And we experimented with a Rust implementation, but it was decided that the speed-up we got from using Rust wasn't really worth the trade-off of it being harder to maintain and less accessible for other contributors.

12:47 Because most of the people who are using a Python web framework will know Python, but they're not necessarily that fluent in Rust.

12:55 And well, basically the router wasn't really that big of a bottleneck for Starlite at the time.

13:02 So it didn't make a lot of sense to write that part in Rust.

13:06 Yeah. It's a big trade-off, isn't it?

13:08 Even things like shipping wheels and just pushing out a version.

13:12 Once you start to go into, well, a per-platform compilation step.

13:16 That has a lot of friction, right?

13:18 And I'm sure you could do a whole podcast just on packaging, but Python packaging in and of itself is not always the easiest or most intuitive process.

13:26 And so, yeah, it definitely gets complicated when you add in another language.

13:30 Yeah, I can imagine.

13:31 And in a lot of cases, there is more low-hanging fruit that you can grab and just optimize things there, before you say, okay, now we have optimized everything so well that the only way we can get faster is if we use a language like Rust.

13:44 And I don't think we're at that part yet.

13:48 So we still have a lot of things to do.

13:50 That's excellent.

13:51 It's a really good philosophy too, I think.

13:52 There's an interesting new way to make your Python code faster. It used to be the case, when Moore's law was really in effect,

14:00 that you went from a 486 to a Pentium to whatever, from megahertz to gigahertz and all those things; you just waited and the hardware got faster.

14:08 So your code went faster.

14:10 But with the faster CPython initiative and Guido and Mark Shannon and team over there, they're making Python quite a bit faster constantly with every release.

14:19 It's really impressive what they've done.

14:22 It was noticeable for the projects where I'm currently using Litestar; there was a noticeable increase in performance when I went to 3.11.

14:29 And yeah, looking forward to seeing what all they do over the next couple of releases.

14:32 – Yeah, we just last week, I think, no, this week perhaps, had the release candidate for 3.12.

14:39 So it's kind of final besides bug fixes, which means, you know, people can start testing it and see what's coming as well.

14:45 – Yeah, we haven't actually tested yet with 3.12 because we're still waiting on some of our dependencies to be compatible with that.

14:54 Jacob, do you want to say something?

14:56 – We just have two more.

14:58 I think it's gRPC and greenlet.

15:01 Greenlet actually, I think, is ready, but they need to do a release to PyPI.

15:06 But gRPC is, I think, one of our stragglers.

15:10 – Okay.

15:10 – I've been eager to test that and see what kind of performance we can get.

15:13 I guess we could do the RC one now, but we'll wait.

15:17 – We should probably test it out at some point.

15:19 – Okay, interesting.

15:20 Yeah, I mean, that's always the constant struggle, right?

15:22 Is you've got a lot of dependencies here on and on.

15:25 – A huge number of them are optional, but yes, it can get a little crazy.

15:29 – Yeah.

15:30 – That's actually a good point to make.

15:31 You know, one of the things you'll see is that there are quite a lot of dependencies, but you'll see that a lot of them are tied to optional groups.

15:38 And so one of the things that we wanted to do was make it quick for a user to kind of pip install one thing and have all the pieces they need to get started.

15:46 And so you can say pip install Litestar, and you can add the CLI or the standard group, and it'll automatically install jsbeautifier and the command-line utilities and Rich and a couple of other libraries.

15:58 And so there's a lot of helpers to kind of make that a little bit more easy to just jump right in.

16:03 – Right.

16:03 This portion of Talk Python to Me is brought to you by Sentry.

16:07 Is your Python application fast or does it sometimes suffer from slowdowns and unexpected latency?

16:14 Does this usually only happen in production?

16:17 It's really tough to track down the problems at that point, isn't it?

16:20 If you've looked at APM, Application Performance Monitoring products before, they may have felt out of place for software teams.

16:27 Many of them are more focused on legacy problems made for ops and infrastructure teams to keep their infrastructure and services up and running.

16:35 Sentry has just launched their new APM service.

16:39 And Sentry's approach to application monitoring is focused on being actionable, affordable, and actually built for developers.

16:46 Whether it's a slow running query or latent payment endpoint that's at risk of timing out and causing sales to tank, Sentry removes the complexity and does the analysis for you, surfacing the most critical performance issues so you can address them immediately.

17:00 Most legacy APM tools focus on an ingest-everything approach, resulting in high storage costs, noisy environments, and an enormous amount of telemetry data most developers will never need to analyze.

17:12 Sentry has taken a different approach, building the most affordable APM solution in the market.

17:17 They remove the noise and extract the maximum value out of your performance data while passing the savings directly onto you, especially for Talk Python listeners who use the code Talk Python.

17:28 So get started at talkpython.fm/sentry and be sure to use their code talkpython, all lowercase, so you let them know that you heard about them from us.

17:38 My thanks to Sentry for keeping this podcast going strong.

17:42 And it looks like you've got, say, Oracle or DuckDB or asyncpg, all those types of things where you probably only need one of them, right?

17:54 You're probably not doing MySQL, Postgres, and Oracle.

17:58 Maybe, but probably not.

17:59 We actually do all of them.

18:00 And so with the same, and this is one of the things that I think is probably good to point out, is that with the repository contrib module that we've created, you actually can use the same repository, the same models, the same JSON type, and it will automatically select the best data type for whatever engine you're running.

18:18 So for instance, if you're on, let's just say that today you're running on AsyncPG with Postgres, and you've got a JSONB data type using the built-in custom Litestar JSON type, and tomorrow you convert to Oracle, all you need to do is change your connect string, and it'll automatically deploy that to Oracle with the correct JSON type.

18:36 And so you really don't have to do anything additional to make your code work between that.

18:41 So honestly, a lot of that came from my time with Django, where you got, you know, one set of utilities that worked with quite a few databases.

18:48 And so I spent quite a bit of time kind of making sure that that worked.

18:52 And so you'll see that, including with the Alembic migrations that are coming out in 2.1 in a couple of weeks.

18:59 And so through the CLI, you'll be able to actually manage your entire database migrations and configurations and generate them, as well as, you know, launch and use your app through the same single CLI.

19:10 I don't think I'd ever want to migrate to Oracle.

19:12 Well, when you need Oracle, you do need Oracle.

19:16 I'm aware of the stigma, but yeah, there's some times and places for all of them.

19:21 I think for those who are unaware what you're talking about, you're talking about the SQLAlchemy repository patterns that we offer.

19:29 So the things you mentioned, they are built on top of SQLAlchemy, which in itself is already really flexible and makes it easy to change databases.

19:38 But there are still a few gaps that you need to bridge yourself, like the ones you've mentioned.

19:43 And this is an example of the things that we try to take care of.

19:48 One last note on that.

19:49 So a lot of the repositories that you might see for cookie cutter apps or the other existing templates out there usually stop at the basic CRUD operations.

19:59 But what we've done here is actually implemented all the basic CRUD operations and efficient bulk operations based on whatever database you're using.

20:07 And we choose the most optimal method for that.

20:09 So that includes bulk add, bulk update, a merge statement, bulk delete, as well as all of the standard CRUD operations.

20:16 And so one of the things that we really have focused on is making sure that this is an incredibly feature-complete repository that has all of the functionality that you might want to use just right out of the box.
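For illustration, here is a minimal sketch of the repository pattern being described, based on the litestar.contrib.sqlalchemy module as it shipped around Litestar 2.0; the Author model and repository names are hypothetical, not from the show:

```python
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import Mapped

from litestar.contrib.sqlalchemy.base import UUIDBase
from litestar.contrib.sqlalchemy.repository import SQLAlchemyAsyncRepository


class Author(UUIDBase):
    # UUIDBase supplies a UUID primary key and derives the table name
    name: Mapped[str]


class AuthorRepository(SQLAlchemyAsyncRepository[Author]):
    # exposes get/list/add/update/delete plus bulk variants like add_many
    model_type = Author


async def seed(session: AsyncSession) -> None:
    repo = AuthorRepository(session=session)
    # bulk insert; the repository picks an efficient strategy per dialect
    await repo.add_many([Author(name="Ann"), Author(name="Ben")])
```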

20:27 That's really excellent that you all are handling that for people.

20:29 And it gives me a sense of what you mean by the helping people do this stuff effortlessly, bringing a little bit of those batteries included feel of Django without...

20:38 Putting in the batteries.

20:39 Without the very prescriptive way that say Django does.

20:43 I guess keeping the micro framework feel, but bringing along a lot of the stuff that people would otherwise have to choose and configure like, oh, okay, we're going to use...

20:50 I guess we'll use SQLAlchemy.

20:52 Oh, did you call an async function?

20:54 Well, then you're going to also need the async SQLite library installed.

20:57 How do I find that?

20:58 Those series of steps you've kind of got to go through.

21:02 And it sounds like you've taken care of some of that for people.

21:05 We try to.

21:06 Obviously, we'll continue to evolve it over time.

21:08 But I've now used it a year at work at Google.

21:11 And it's really satisfied,

21:13 I'd say, 95% of the use cases I need.

21:16 And so I typically don't have to drop back into raw SQL anymore, which I think is a huge thing.

21:21 And that's where I would like to get everybody else to.

21:24 And so that's where the focus is there.

21:26 And the one thing I'll add about it is that we still kind of maintain that micro framework philosophy, because all this work is actually packaged up in something called a plugin.

21:35 And so you can configure this one class and it automatically registers the route handlers and the on-startup and on-shutdown handlers that need to happen.

21:44 It'll register the ASGI lifespan things that you need.

21:48 And so basically you get this one piece where you can just set up your entire app, and you don't have to add the pieces in several parts of your application.
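A rough sketch of that one-class setup, assuming the SQLAlchemy plugin from litestar.contrib as of Litestar 2.0 and a placeholder connection string:

```python
from litestar import Litestar
from litestar.contrib.sqlalchemy.plugins import (
    SQLAlchemyAsyncConfig,
    SQLAlchemyPlugin,
)

# one config object; the plugin registers the engine, a session dependency,
# and the ASGI lifespan (startup/shutdown) hooks for you
db_config = SQLAlchemyAsyncConfig(connection_string="sqlite+aiosqlite:///app.sqlite")

app = Litestar(route_handlers=[], plugins=[SQLAlchemyPlugin(config=db_config)])
```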

21:57 Yeah, excellent.

21:58 So before we dive into the features, which we have been doing a little bit already, at least some of the philosophy, I want to talk about benchmarks.

22:04 And I know benchmarks are a little controversial in the sense that, well, the way I'm using the framework is different than the way you use it.

22:11 The way you use it is really fast.

22:12 The way I, you know, whatever, right?

22:13 Like putting that out there and just giving a sense that this is a really fast framework.

22:19 You know, how does it compare to things like FastAPI or Quart, which is the async version of Flask-ish, right?

22:25 They're working on unifying those more, but basically the async version of Flask for now.

22:29 Sanic and then Starlette, who wants to give us a quick summary of this graph you got here in the benchmarks page?

22:35 Before we get into the benchmarks, you said it already, but I want to add another disclaimer here.

22:41 Like, as you said, benchmarks are a really, really controversial topic and they're insanely hard to get right.

22:47 And it's even harder to actually get benchmarks that are useful for your use case

22:53 and that show you what you actually want to measure, because most of the time they don't.

23:00 They measure something, but they often don't translate one-to-one or even somewhere close to that to real-world performance.

23:09 And I have spent a lot of time on these benchmarks.

23:12 And I want to say that the benchmarks didn't come about as us trying to compare to other frameworks; we were experiencing some performance regression internally after a major change somewhere.

23:26 And we were trying to track that down.

23:28 And for that, I developed quite a comprehensive benchmark suite that tried to get us close to real-world usage of how we expected the framework to be used.

23:39 And then that grew to compare other frameworks as well.

23:43 And when I added the other frameworks, I tried to follow a very, very simple philosophy, which is not necessarily, well, some might say it's unfair.

23:53 I think it's one way to get a comparable result.

23:57 What I tried to do is to not optimize anything.

24:01 I built the same app on every framework, with the framework as it comes out of the box, and just took the straight-up approach that's shown in the documentation.

24:12 And I did that because for almost all of the frameworks, there is, in every case, some way to make it a little bit more performant in this special case or that special case.

24:23 And I'm not an expert in all of these frameworks.

24:26 And I'm sure if you start optimizing, there's no point where you can say, okay, now it's completely optimized.

24:34 So I just took the completely opposite approach and didn't optimize anything at all.

24:38 And that includes Litestar.

24:39 We also do things that could be made more performant in Litestar, but we don't do them in the benchmarks.

24:45 Well, that's just our baseline of what we are comparing to.

24:49 And I just think it's an important context to have.

24:52 Yeah, that seems fair.

24:53 So we know what we are comparing.

24:55 Right. Okay.

24:55 So if we look at the benchmarks, one thing we can see there, so we have the synchronous and asynchronous performance.

25:02 And one thing that we can see there is that for Litestar, it's almost identical compared to, for example, Starlette, where it's not.

25:10 The reason for that is our model of execution for synchronous operations.

25:17 What Starlette does, and what you could argue is the safe way, is to run these in a thread pool by default, which is good because if you have a synchronous function in an asynchronous framework, and it's blocking, you might potentially block your main thread and all other requests that are coming in at the same time.

25:34 You don't want that.

25:36 So definitely the safest option is to just put that in a thread pool, let it run, and you're good.

25:43 The thing is threads are slower than async IO.

25:46 And so what we do is we force our users to make a deliberate choice when they have a synchronous function.

25:54 So we say, do you want to run that synchronous function in a thread pool or not?

26:00 And if not, we just don't do that.

26:02 – Is that done by a parameter to the, you know, like @get or something, and then you say thread pool, yes or no, or something like that?

26:09 – You can set that as a parameter.

26:11 And if you don't, and you use a synchronous function, you get a very nasty warning about that.

26:17 You can shut that off globally because some, yeah, that.

26:21 – sync_to_thread=False. Okay, cool.

26:24 – Yeah, you can shut that off globally if you don't want to be warned about it.

26:28 But so we made the decision that it should be a deliberate choice if you want that behavior or not, because in many cases, you don't actually need that behavior because you're not doing any blocking IO operations or any other blocking or CPU bound operations or whatever.

26:44 So, in fact, those synchronous functions are only as blocking as the async functions.

26:51 So there's no benefit to be had from running it in a thread.
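As a sketch of the deliberate choice being described, with hypothetical handlers:

```python
from litestar import Litestar, get


@get("/compute", sync_to_thread=True)
def heavy(n: int) -> int:
    # blocking or CPU-bound work: run it in a thread pool so the event loop stays free
    return sum(i * i for i in range(n))


@get("/fast", sync_to_thread=False)
def fast() -> dict[str, str]:
    # non-blocking sync handler: opt out of the thread pool (and the warning)
    return {"status": "ok"}


app = Litestar(route_handlers=[heavy, fast])
```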

26:54 – Yeah, and also a lot of times in production, the production server like Gunicorn or whatever is already using multiple threads or things to deal with that.

27:05 And when, or at least multiple processes.

27:08 And then when you're talking to things like a database or something, you're doing a network call, which deep down is probably releasing the GIL while it's waiting anyway, right?

27:17 There's a lot of subtleties that are happening down there that maybe you don't want to juggle, right?

27:22 – One point to add to that is that the sync_to_thread option applies to the dependency injection as well.

27:27 And so it's not just the routes that you can add that flag to.

27:30 And so to your point about databases, all those pieces can have that same kind of behavior.

27:36 – There's a benchmark for that as well, somewhere that shows the same difference for the dependency stuff, I think.

27:42 Yeah, so that's one difference and another difference.

27:46 And so just to add, this is one choice that Starlette, and by extension FastAPI as well, makes for you that you can't easily turn off.

27:56 But if you look, for example, at the Sanic example, you see that it doesn't suffer from the same problem.

28:02 So you can attribute that to this decision.

28:06 The other big difference is because what we're looking at here is serializing a dictionary or a list of dictionaries into JSON.

28:15 And one of the reasons why Litestar is so much faster at this than FastAPI, for example, is because we use msgspec, which is a JSON validation and parsing library.

28:30 Well, not just JSON, it's also for MessagePack, which is an insane thing.

28:37 It's an insanely great piece of technology, which we have been using, I think, for almost a year now.

28:45 When we started to introduce it, yeah, that one.

28:48 And it's super fast.

28:50 It's written in C.

28:51 The code can be a bit hard to get into because it's like one massive 12,000 line C file.

28:57 So if you're not very familiar with C and the Python C API, it's not going to be an easy read.

29:03 Yeah, but it's insanely fast.

29:05 And it supports a lot of things out of the box.

29:09 So for example, well, JSON, so all built-in Python data types, but it also supports dataclasses and TypedDicts, which helps us a lot.

29:18 And FastAPI, on the other hand, by default uses, for one, the standard library JSON module, which isn't as fast as any of the third-party JSON libraries that you can get.

29:34 And it also uses Pydantic to validate the data, which I have to point out is something that we do not do by default.

29:43 So that's the reason why there's such a big difference.

29:47 And that's even after Pydantic 2 was released, which has been rewritten in Rust and has had a significant gain in performance.

29:58 Yes, Samuel Colvin says something like 22 times faster, which is remarkable.

30:03 Yeah, but still, if you just don't do that step at all, it's obviously going to be faster.

30:09 So yeah, that's true.

30:10 Can you, do you remember this graph here, whether this is FastAPI based on Pydantic 1 or 2?

30:16 This is Pydantic 2.

30:18 Okay.

30:18 You could see that it's noticeably faster now with Pydantic 2.

30:23 So there has been a huge gain.

30:24 And to be fair to Pydantic and FastAPI, mostly FastAPI, you could also use FastAPI's ORJSONResponse, which uses orjson to serialize that.

30:37 And it would be a lot faster.

30:39 But as I said earlier, that would, to me, fall into the category of optimization.

30:45 You could do similar things for Litestar.

30:47 And what we wanted to compare is performance out of the box.

30:51 And this is what you get.

30:53 Talk Python to me is partially supported by our training courses.

30:56 Python's async and parallel programming support is highly underrated.

31:02 Have you shied away from the amazing new async and await keywords because you've heard it's way too complicated or that it's just not worth the effort?

31:09 With the right workloads, a hundred times speed up is totally possible with minor changes to your code.

31:15 But you do need to understand the internals.

31:17 And that's why our course, Async Techniques and Examples in Python, shows you how to write async code successfully as well as how it works.

31:25 Get started with async and await today with our course at talkpython.fm/async.

31:31 That's some of the stuff you were talking about, right?

31:33 You could include new JSON parsers.

31:35 You could include uvloop, for example, and lots of optimizations, right?

31:39 The benchmarks are on uvloop.

31:41 I think that's one optimization we did across the board for everybody.

31:44 Everyone uses a single Uvicorn worker.

31:47 Yes. So the environment is the same for all frameworks that we test.

31:52 It's Uvicorn with uvloop and the Cython dependencies, and one worker pinned to one CPU core that's shielded.

32:01 So you just sort of get something comparable.

32:04 And that's awesome, actually.

32:05 That's really cool. I like it.

32:06 And I guess I'd just like to point out though, that often there's other things that are in your bottleneck in your application, right?

32:12 And so obviously benchmarks, take them with a grain of salt.

32:15 And the other thing is that message pack, or msgspec, excuse me, is awesome, but it's not as feature-complete as something like Pydantic, which is really great.

32:25 So I think there's some differences there.

32:28 And so we wanted to make sure that you have the ability to use both.

32:30 But in the context of benchmarks, sometimes I guess it's worth noting that Pydantic is probably doing more, or can do more, than msgspec.

32:38 But I don't think the serialization piece that you see here is necessarily always going to be your slowest part.

32:44 I agree. As a database guy, you might have database indexes and the lack thereof coming to mind or something, right?

32:50 Well, that's one of the things, right?

32:51 You know, it's, and you kind of touched on it.

32:53 It's the network latency and those kinds of things in between that are really going to consume quite a bit of the time.

32:59 Yeah.

32:59 And I think we do have a benchmark, which is serialization of complex objects, like Pydantic models or dataclasses or something like that, which actually I think is very interesting because it shows that if you're using Pydantic with Litestar, it's actually not faster than FastAPI, because then what you're measuring is the speed of Pydantic, which in both cases is the same.

33:23 And you can swap that in, it sounds like, which is interesting.

33:26 Okay. So quick takeaway, Litestar is quite fast.

33:29 One of the reasons you might choose it is the speed.

33:31 And it sounds like there's a lot of good options there.

33:34 All right.

33:34 But not the only one.

33:35 I want to point out, on that note, if you allow me to point out one more thing.

33:38 Of course.

33:39 We are quite fast.

33:40 And I think for the feature set that we have, we are probably among the fastest, but we are far from the fastest ASGI framework out there.

33:50 That would be, to my knowledge, BlackSheep, which is insanely fast.

33:55 And we actually don't include that in the benchmarks because it makes the benchmarks absolutely useless; then you just have one gigantic bar, that's BlackSheep.

34:04 And then you have very, very teeny, tiny bars, which is everything else. BlackSheep is another micro framework, one that's written in Cython.

34:12 I have not heard of BlackSheep.

34:14 That's something I have to look into.

34:15 Okay, cool.

34:16 But obviously, speed is interesting.

34:19 Speed is important.

34:20 It's certainly something where, if it was really poor, people might say, well, it's interesting, but it's not that fast.

34:25 But given the speed, it's certainly an advantage, not a drawback.

34:30 But I think a lot of the advantages come from a bunch of the features.

34:34 So maybe we could talk through some of these and whoever wants to just jump in on them as we go, feel free to.

34:39 So I think it probably came through already from the conversation, but the programming API is very micro-framework, Flask and FastAPI like, right?

34:47 You create a function, you put an @get decorator on the front, give it a URL, and you've got an endpoint on the web that you can do things with.

34:57 So that's pretty straightforward.
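For readers following along, a minimal route handler in that style might look like this (the path and names are illustrative):

```python
from litestar import Litestar, get


@get("/hello/{name:str}")
async def hello(name: str) -> dict[str, str]:
    # the path parameter is parsed and validated from the type annotation
    return {"greeting": f"Hello, {name}"}


app = Litestar(route_handlers=[hello])
```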

34:58 At its core, exactly.

34:59 And we take it one step further.

35:01 So all of the patterns you know and love from Django.

35:04 So some of the things that you see from Django REST framework.

35:07 So we have controllers that are very similar to that, where you can define a class and have multiple methods in it.

35:12 And so, you know, that's really kind of where things start to differentiate.

35:15 But at its core, we definitely wanted to make sure you had that exact micro framework experience that you see everywhere.
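A rough sketch of the controller pattern Cody describes, with hypothetical routes:

```python
from litestar import Controller, Litestar, get, post


class BookController(Controller):
    # every handler in the class is mounted under this path
    path = "/books"

    @get()
    async def list_books(self) -> list[str]:
        return ["book-1", "book-2"]

    @post()
    async def create_book(self, data: dict[str, str]) -> dict[str, str]:
        # `data` is the parsed request body
        return data


app = Litestar(route_handlers=[BookController])
```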

35:21 So the first one, let's just touch on some of the main features here.

35:24 The first one is data validation and parsing.

35:27 So leveraging the power of type hints, which is very, very nice.

35:30 Who wants to highlight that feature?

35:32 Take yourself off mute.

35:33 Good that you pointed that out.

35:35 So that's definitely one of the areas that was directly inspired by FastAPI.

35:40 Because FastAPI, a few years ago, came up with this brilliant idea of just leveraging the type hints and the emerging Pydantic stuff, and building your APIs based around that as your core.

35:57 And it's been increasingly popular to build your APIs like that.

36:03 So it's definitely directly inspired and influenced by this.

36:07 We are approaching things a bit differently though.

36:09 So for example, you are not tied to Pydantic.

36:12 You can use any data modeling, not any, but a lot of data modeling libraries that you might want to choose are supported out of the box.

36:21 Pydantic is supported.

36:23 You can also use msgspec, which supports data modeling like Pydantic, not as featureful, but very, very fast.

36:32 You can use attrs, you can use plain dataclasses or TypedDicts to validate your data and to transform your data, which is what you are currently looking at: our DTOs, data transfer objects, which were written by the brilliant Peter Schutt, who isn't here with us today.

36:56 So they are a way for you to define how your data should be transformed on the way in or on the way out.

37:03 So you have incoming data that's unstructured JSON data, and you have a target model, and you might want to apply certain transformations to that, say rename fields from snake case to camel case, a very common thing to do, while you are validating on the fly that it conforms to a certain schema, for example, a Pydantic model or a dataclass.

37:27 And so DTOs are basically an abstraction layer between that where you can say, okay, this is my source model, this is my Pydantic model, and it has a user ID that's an integer and it has a name that's a string.

37:41 And by default, Litestar, if you give it that, will validate that the incoming data conforms to that schema, and will have Pydantic run all the validation and parsing on it like you would normally, which is quite similar to how FastAPI does it, or how you would do it by hand.

38:01 The DTOs come in where you have one data model that has different representations.

38:07 So for example, you might have a database model that's a SQLAlchemy model, but on the way out, you don't want to include the password field because of reasons.

38:17 But you want it on the way in when you create the user to sign up.

38:23 So one way to do that manually would be if you're using Pydantic to create two models, one for the way in, one for the way out, or to create one base model with all the properties that are the same and then two additional models, whatever.

38:37 DTOs basically do that, but they do it for you.

38:41 So you don't have to actually write out those two models.

38:45 They can take in one of the models, one of the supported model types.

38:49 I think at the moment, we support Pydantic, SQLAlchemy, msgspec, attrs, and dataclasses.

38:56 Correct me if I'm wrong.

38:57 No, I think you got all of them.

38:58 So if you have a class of that type, you can create DTO from it.

39:03 And then you have a DTO config where you can say, exclude these fields or only include these fields and rename these fields.

39:12 And all you have to do is create a type annotation with that DTO.

39:16 And Litestar will take it, use it to transform your data, and then give you back your original model in the form you specified.

39:25 I see.

39:26 And you say in the decorator, you set the DTO model that does that conversion for you.

39:31 Got it.

39:32 Yeah, that's a good point.

39:33 You set it in the decorator and not at the point where you receive or return the data.

39:37 So the data you receive and return will always be the actual model that you're dealing with, which has the great benefit that your type annotations are always correct.

39:48 And you don't have to worry about that, about, you know, casting something to something else or doing the serialization in your route handler directly, because otherwise the type annotations for the return type won't match because you have excluded the field or whatever.

40:01 So you just set it completely separately from that, just as information for Litestar to say, okay, use this to do the transformations.

40:10 But the end result is my original model, whatever you want it to be.
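Pulling that together, a minimal sketch of the DTO pattern as documented for Litestar 2.x; the User model is illustrative:

```python
from dataclasses import dataclass

from litestar import Litestar, post
from litestar.dto import DataclassDTO, DTOConfig


@dataclass
class User:
    name: str
    email: str
    password: str


class UserReadDTO(DataclassDTO[User]):
    # strip the password on the way out; handlers still deal in plain User objects
    config = DTOConfig(exclude={"password"})


@post("/users", return_dto=UserReadDTO)
async def create_user(data: User) -> User:
    # `data` arrives validated as a User; the return value is filtered by the DTO
    return data


app = Litestar(route_handlers=[create_user])
```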

40:15 Okay, so this is kind of the model equivalent of FastAPI.

40:20 The DTO, that's really neat.

40:22 There's a lot of, maybe a little bit of overlap with something like SQLModel, right?

40:25 Where you can declare your SQLAlchemy model as a Pydantic model.

40:29 And you're welcome to use SQLModel with Litestar, but in this case, you can now just use the normal SQLAlchemy model and declare a DTO, and it'll automatically convert that to, you know, a msgspec struct on the way out and serialize it that way.

40:45 Very cool.

40:45 That's a good point.

40:46 You bring up the msgspec struct.

40:48 So that's one other area where we use msgspec to create these models, because msgspec is extremely fast and has this Struct type, which is sort of like an attrs class or a Pydantic model, but it has the benefit of being, as far as I know, the fastest library for that type of stuff that exists for Python at the moment.

41:12 So what we are building there, the transformation layer is as performant as it can be.
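For a sense of what that looks like, a tiny msgspec example (the User struct is illustrative):

```python
import msgspec


class User(msgspec.Struct):
    name: str
    user_id: int


# decode and validate JSON straight into the struct in one step
user = msgspec.json.decode(b'{"name": "Ann", "user_id": 1}', type=User)
print(user)                        # User(name='Ann', user_id=1)
print(msgspec.json.encode(user))   # b'{"name":"Ann","user_id":1}'
```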

41:18 Excellent.

41:19 In fact, I think, and I had to go look up the actual quote, but I think the struct is actually faster than a dataclass in a lot of scenarios.

41:27 And so they've done an incredible job with that library.

41:29 That is incredible, actually.

41:30 And you're beating the built-in stuff, right?

41:32 That's cool.

41:33 All right.

41:33 We talked a little bit about the open ecosystem, right?

41:36 The ability to use Pydantic versus other custom DTOs, other libraries, OpenAPI, Swagger, the whole generate-your-documentation-for-you thing.

41:47 That sounds pretty excellent.

41:48 I'm guessing it's based a little bit on the DTOs as well to describe the schema.

41:52 Every class has the ability to export what that output looks like.

41:56 And so the DTO knows how to output its signature so that it can generate the correct OpenAPI schema.

42:03 And I guess really the main thing to point out, and we obviously do the typical Swagger schemas, but one other thing that we add in is ReDoc and Stoplight Elements as well.

42:13 And so you've got a couple of options for your documentation host.

42:17 Middleware.

42:18 So middleware is things that can see the request before your view function runs, or make changes after it runs, for CORS or logging or other things.

42:29 Want to talk about that?

42:30 Cody, do you want to talk about the compression stuff, for example?

42:34 I'll happily do that.

42:34 So, you know, you kind of nailed what the core of the middleware is, but really it's all those pieces that you need to add in to maybe add in security or add in some type of additional functionality, compression, for instance.

42:47 And so a lot of, you know, outside of the plugin system, a lot of that functionality is included in the middleware.

42:53 And so you'll see built in stuff for most of the things that you're going to want to do out of a normal application.

42:59 There's probably a few things that you may need to roll on your own, but we've got all the core things.

43:04 And so you've got compression, both Brotli and gzip.

43:07 You've got OpenTelemetry and Prometheus integration.

43:11 You've got several different types of authentication backends that get integrated here, including session backends: there's a cookie-based backend and a server-side session backend where it stores the session, you know, on the actual server itself.

43:25 And we also have a JWT auth configuration that you can use here.

43:29 And so I encourage all the listeners to just check out what we have as part of the default middleware.

43:34 But, you know, most of the things that they're going to want to do from a web app are going to be built right in.

43:39 We also have the logging, the CORS, all the basic stuff.

43:44 CSRF, like cross-site reference, rate limiting.

43:47 Cross-site request forgery, yeah, for forms.

43:50 Yeah.

43:50 And then you can add your own, and they're all just little ASGI apps that you can plug in as you need them.

43:55 On before and on after requests, something like that, right?
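A sketch of how a few of those built-in pieces get wired up in Litestar 2.x; the origin and rate limit values are placeholders:

```python
from litestar import Litestar
from litestar.config.compression import CompressionConfig
from litestar.config.cors import CORSConfig
from litestar.middleware.rate_limit import RateLimitConfig

app = Litestar(
    route_handlers=[],
    # gzip (or brotli) response compression
    compression_config=CompressionConfig(backend="gzip"),
    # CORS for a single allowed origin
    cors_config=CORSConfig(allow_origins=["https://example.com"]),
    # 60 requests per minute, enforced by the rate limit middleware
    middleware=[RateLimitConfig(rate_limit=("minute", 60)).middleware],
)
```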

43:58 This is really cool here where you could say in a particular view decorator, you can say, for example, exclude from CSRF for just this form, for example.

44:08 And actually, this is something that you'll see as a feature.

44:10 You're going to see this all throughout the code and it's layered permissions.

44:15 And so this exclude-from-CSRF, and several other things.

44:20 You may see that.

44:21 It is a mouthful.

44:22 You're going to see that in several different places, right?

44:24 And so you can apply that at the controller or you can apply that at the route level that you see here.

44:30 And so that's one of the helpful features that you'll see where you can put it in one spot.

44:36 It'll cascade down.

44:37 Yeah, I saw that for the DTOs.

44:38 Yeah, we have a lot of that layered stuff, like dependency injection, which you can also do at the app level or just at the controller level.

44:48 There's the application, controllers, routers, and the route handlers.

44:53 These are our basic layers.

44:55 And most of these types of configurations, so middleware, dependencies, header configurations, they're all layered.

45:04 So you can apply them on every layer you want and they will affect the layers below that.

45:10 So it's quite flexible how you want to or where you want to configure your stuff.
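A minimal sketch of that layering, with an app-level dependency cascading down into a controller's handlers; provide_db is a stand-in for a real factory:

```python
from litestar import Controller, Litestar, get
from litestar.di import Provide


async def provide_db() -> str:
    # stand-in for a real connection or session factory
    return "db-connection"


class ItemController(Controller):
    path = "/items"

    @get()
    async def list_items(self, db: str) -> str:
        # `db` is injected from the app-level dependency declared below
        return db


app = Litestar(
    route_handlers=[ItemController],
    dependencies={"db": Provide(provide_db)},  # app layer; cascades downward
)
```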

45:16 Well, the ORM integration, we talked a little bit about SQLAlchemy as well.

45:20 So that's pretty cool.

45:21 And I'll be happy to elaborate on that.

45:23 But, you know, we've covered quite a bit of what you'll see here.

45:26 I think the only thing that I haven't mentioned that we've integrated in, and that will be coming in 2.1, is the use of lambda statements.

45:35 And I'm not sure if you have even seen that, or if your listeners have, but it's a relatively new function in SQLAlchemy to help with statement caching.

45:44 And so the repository has been converted over to that.

45:46 Actually, I see some great things in the chat.

45:48 There's HTMX integration.

45:50 Obviously, we want to make sure we touch on that.

45:52 And I really want to let somebody talk about the WebSockets and the channels integration too.

45:58 So there's some really cool stuff I'd love for your listeners to hear.

46:01 Before we move on to the WebSockets, which I also want to talk about, I do want to give, since we're on the ORM integration, I can see some comments out there from, for example, Roman behind Beanie, which is a MongoDB ORM or ODM.

46:16 Says, I like the DTO concept.

46:18 Having such tools separately would be great.

46:20 I mean, things such as SQLAlchemy objects when needed, pandas DataFrames, Pydantic models, depending on the context, is really cool.

46:27 We actually have talked about that.

46:29 Well, not we, the people present here, but Peter Schutt, the person who created the DTO implementation, and me; we have actually talked about that, making the DTOs a separate library, because it's a very useful concept.

46:44 So it's not something we have planned.

46:48 It's something that has crossed our minds as well.

46:50 – That's very cool.

46:51 The question was, what about the MongoDB people or the other NoSQL folks, whom SQLAlchemy doesn't necessarily want to talk to because it's relational?

47:02 What's the story there?

47:03 Like, is it still pretty easy to use Litestar?

47:05 – It is.

47:05 And I think that there's actually a native integration that's maybe not totally finished, but there is an open PR for a Mongo-based repository.

47:13 So there's going to be that much tighter coupling coming soon for those that want to use it.

47:18 But there's nothing that would limit compatibility now.

47:22 So if you want to go ahead and configure that with your application, you're certainly free to do so.

47:26 But there will be a first party kind of clean integration for those things coming soon.

47:30 – Oh, that's excellent.

47:31 So for right now, you know, Beanie is based on Pydantic.

47:35 You all work with Pydantic.

47:36 It sounds like, can I just use that as the go-between maybe?

47:39 – Absolutely.

47:40 And you're free to use Pydantic with Litestar just as you can with FastAPI, and it'll just work.

47:45 And so there's no reason to change everything to msgspec if you don't want to.

47:50 You can mix and match and leave everything in Pydantic if that's what you prefer as well.

47:54 – That's a good point.

47:55 So all these integrations with Pydantic and others and whatever, they are not baked in somewhere deep into the application.

48:05 They are all plugins, plugins that you could write yourself if you wanted to and write for every library that you desire.

48:13 That's one of the larger things that we tackled with the 2.0 release, where we tried to decouple ourselves from Pydantic, because we were based on Pydantic before and we wanted to be more open.

48:25 So we basically ripped out everything Pydantic in Litestar's core and put it into a plugin and at the same time made sure that the plugin API was so versatile that it could support all the features that we had supported before.

48:40 And now we're at the point where it's very trivial actually to add support for a library like Pydantic with everything from DTOs to OpenAPI to serialization, validation, parsing fully supported by a fairly trivial plugin that you have to add.

48:59 So even if it's not provided out of the box, it's fairly easy to just do it yourself.

49:06 It's really cool.

49:07 All right, WebSockets.

49:08 Let's talk about WebSockets here a little bit.

49:10 Janek was the mastermind behind this.

49:12 All right, Janek, first tell people what WebSockets are and why they care about them.

49:16 Why is this not just another HTTP request?

49:18 WebSockets, explaining WebSockets in a few sentences.

49:22 Probably not that easy.

49:22 Yeah, you have three sentences, go.

49:26 So for people who have been around longer in the web development space, they might remember long polling.

49:33 Where you faked, well, back-and-forth communication between the server and the client by having a request that never terminates.

49:42 And then you can always send more data from the server because the request wasn't actually done yet.

49:48 And I would say WebSockets is kind of like that concept but evolved.

49:54 So you can easily send bidirectional data from the server to the client with a very, very minimal overhead.

50:01 And WebSockets are a core functionality of ASGI.

50:05 There were several ways you could do WebSockets with WSGI, but none of them were very easy and straightforward, because WebSockets are asynchronous by nature.

50:17 This is what they are.

50:18 They are an asynchronous communication channel.

50:20 So baking that into a synchronous protocol is always a bit tricky.

50:25 And I think there's no ASGI framework that I know of that does not support WebSockets in some way.

50:32 So it is a core functionality of that type of Python framework, I would say.

50:37 And so our WebSocket implementation has kind of like two layers.

50:43 You have the low-level layer where, basically, you receive the socket object, which is just the connection.

50:51 And then you can act on that connection and you have to accept it and you can terminate it and you can send data or whatever.

50:57 And then you have what we call WebSocket listeners, which are an abstraction over WebSockets.

51:04 And they basically work like you would normally define a route handler in Flask or FastAPI or Litestar, where you receive some data and then you return some data, and that is your function.

51:18 And the rest will be handled by the listeners.

51:20 So you define a handler function for an event that might occur.

51:25 One of the cool aspects of this is that these support all the features that Litestar supports in other layers of the application.

51:32 So you can use DTOs with them.

51:34 You can use validation with them.

51:36 So if you define a DTO and you say, okay, so this is my model and this is what I want to receive, the incoming data from the WebSocket will be parsed as this model.

51:45 It will be validated and then it will be presented to your function.

51:49 So functionally, these WebSocket listeners, they look and work exactly the same as a regular HTTP route.
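A minimal sketch of such a listener, using Litestar 2.x's websocket_listener; the echo behavior is illustrative:

```python
from litestar import Litestar
from litestar.handlers import websocket_listener


@websocket_listener("/echo")
async def echo(data: str) -> str:
    # called once per received message; the return value is sent back
    return f"echo: {data}"


app = Litestar(route_handlers=[echo])
```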

51:58 Yeah, that's really cool.

51:59 It enables another thing that a lot of ASGI frameworks don't have, which is handling WebSockets synchronously, because we do the async stuff in the background.

52:09 And so you can use an asynchronous function, but you can also use a synchronous function, because all the dealing with the actual WebSocket itself is handled somewhere else.

52:20 It's deeper, yeah.

52:21 So my thought was, this is probably what I'd want to write in a standard case, like I want to receive a message from the client on the server and process it there.

52:32 But if you want to do all the weird multicast stuff, different listeners or groups of listeners, that you can do with WebSockets.

52:39 That's probably the lower level version you're talking about, right?

52:42 Did I get that right?

52:43 Yeah, perhaps not.

52:45 It depends on how weird you want to get.

52:47 So a fairly standard use case would be for something like, let's say, a chat room where you have some sort of predefined channels and then you have multiple clients that want to send data over the same channel and then fan it out to all the other clients.

53:03 And for stuff like that, we actually have a full integration, which we call channels, which themselves aren't necessarily tied to WebSockets.

53:13 They are basically a distributed message bus sort of thing.

53:18 I have yet to come up with a good, short description of what channels actually are.

53:24 It changes a bit depending on what I'm talking about.

53:26 I think message bus is the way to think about it, right?

53:30 And it keeps the history of the events.

53:32 And so a message bus is perfectly fine.

53:34 At the moment, they are backed by Redis via different methods.

53:38 So you can use PubSub or other methods that are quite a bit more involved.

53:43 And they can also handle WebSockets for you.

53:47 So you can say, OK, so I want to create a channel named Chat.

53:53 And every time someone signs up to that, please accept the WebSocket request and then add the client to this subscriber list.

54:01 And then every time a message comes in, I want to do this.

54:06 And then I also want to distribute the message they send to all these other clients in those channels.

54:12 And, you know, if they send a special message, then I want to unsubscribe them.

54:17 And so for this kind of standard use case, we have that built in.

54:22 You can, of course, build your own logic on top of that.

54:25 And as you said, if you want to go into really weird stuff, you will have to use the low-level interface, which is also there, which you can also access from the WebSocket listener.

54:36 So you can, via dependency injection, just receive the raw socket object if you want to deal with that for some reason.

54:43 So it's available if you need it, but you can do the easy thing by default.
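For the chat-room shape described above, a wired-up channels plugin might look roughly like this sketch; we're assuming the Litestar 2.x channels API and a memory backend here, so treat the parameter names as illustrative and check them against the docs:

```python
# A rough sketch of the channels plugin for the chat example, assuming
# the Litestar 2.x channels API; parameter names are to the best of
# our reading.
from litestar import Litestar
from litestar.channels import ChannelsPlugin
from litestar.channels.backends.memory import MemoryChannelsBackend

channels_plugin = ChannelsPlugin(
    backend=MemoryChannelsBackend(),
    channels=["chat"],
    # Generate WebSocket handlers that accept connections, subscribe
    # clients to the channel, and fan incoming messages back out.
    create_ws_route_handlers=True,
)

app = Litestar(plugins=[channels_plugin])
```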

54:47 Oh, awesome.

54:47 That sounds really cool.

54:48 And the channels sound great.

54:49 Also, Chris out in the audience asked, does that also mean server-side events are supported?

54:54 Yes.

54:54 Server-sent events, rather, excuse me, are available.

54:56 And I saw that you have a dependency on httpx-sse, which is like a lightweight WebSocket type of thing.

55:04 It's very cool.

55:05 That's a development dependency for testing.

55:07 Ah, for testing.

55:08 OK.

55:08 So we do have server-sent events support built in.

55:12 You can do that.

55:13 And you can, for example, use that in combination with channels.

55:17 So instead of fanning out the messages via WebSockets, you could do that with a server-sent event as well.
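A server-sent events endpoint in this vein might look like the following minimal sketch, assuming Litestar's ServerSentEvent response type; the generator and route are invented for illustration:

```python
# A small sketch of a server-sent events endpoint, assuming Litestar's
# `ServerSentEvent` response; the route and event stream are made up.
from asyncio import sleep
from typing import AsyncGenerator

from litestar import Litestar, get
from litestar.response import ServerSentEvent


async def ticker() -> AsyncGenerator[str, None]:
    # Each yielded value is sent to the client as one SSE event.
    for i in range(5):
        yield f"tick {i}"
        await sleep(1)


@get("/events")
async def events() -> ServerSentEvent:
    return ServerSentEvent(ticker())


app = Litestar(route_handlers=[events])
```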

55:24 Excellent.

55:24 All right.

55:25 We're getting quite short on time.

55:27 I want to close this out with maybe what would be the last thing in the process of building out an app in Litestar?

55:34 That would be deployment.

55:36 And I didn't see a lot of conversation about how I should be deploying this stuff on the website.

55:41 But I saw on the benchmarks that you said, we use Gunicorn with the Uvicorn worker and so on.

55:46 And it sounds like maybe using Gunicorn is a good option.

55:50 What's the deployment story?

55:52 What do you tell people that want to put this in some real scaled-out production environment?

55:56 That's still a good option.

55:58 But personally, I think Cody can speak to this well.

56:00 We've just been using Uvicorn, not as a worker, just by itself.

56:06 That has worked really well.

56:08 It really just depends on how you're going to run it.

56:09 So if you're using Docker or Kubernetes or something else that's managing that process, then it's possible that Gunicorn may not be something that's really needed in your environment.

56:21 And in fact, it might actually just add overhead.

56:24 And so if you've got something like Cloud Run or Docker or Kubernetes or anything like that, what we've realized is that sometimes it's quite a bit faster to just run it with Uvicorn.

56:32 And if the process dies, then your container management, whatever you're using to manage those things, will automatically start and scale those processes out.

56:41 It notices that the container exited anyway, and so it's going to create a new one, right?

56:45 OK, got it.

56:46 Correct, yeah.

56:46 But if I'm going to a Linux machine directly?

56:50 Then I think there's more of a decision to make on whether or not you want to host it through something like Gunicorn.

56:54 And I guess the other thing I'd like to add is that really, any of the ASGI servers can run Litestar.

57:02 So you've got Daphne, Hypercorn.

57:05 There's work that we're doing right now with Socketify.

57:08 It's a mouthful as well.

57:09 But they've got some cool stuff going on there.

57:11 And so hopefully we'll have compatibility with that soon.

57:13 And so yeah, the idea is that the same way that you would host any other ASGI app would apply to how you would manage a Litestar app.
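For the container scenario described above, running the app directly under Uvicorn can be as simple as this sketch; the "app:app" import string assumes your Litestar instance is named app inside app.py, which is our assumption, not something from the show:

```python
# A minimal sketch of running a Litestar app directly under Uvicorn,
# as described for Docker/Kubernetes setups. "app:app" assumes your
# Litestar instance is named `app` inside app.py.
import uvicorn

if __name__ == "__main__":
    uvicorn.run("app:app", host="0.0.0.0", port=8000)
```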

57:20 Excellent.

57:21 And maybe Nginx or something like that in front for SSL and Let's Encrypt and all those things.

57:26 Absolutely.

57:26 I think that's probably about it for time.

57:29 Final thing here.

57:30 So you all said you just released a week ago or so.

57:33 Last week, you released version two.

57:36 I want to give a quick shout out to the changes for people maybe already using it.

57:39 I don't know if that's even possible.

57:41 This has been in development for over seven months now.

57:44 Okay.

57:44 And there have been substantial changes to basically every part of the application that you could think of.

57:52 A lot of the features that we have talked about today, they are new in version 2.0.

57:57 The DTOs, they are new in 2.0.

58:00 The SQLAlchemy integration is new.

58:02 Channels are new.

58:02 The WebSocket listeners are new.

58:04 The msgspec integration is new.

58:06 Pydantic being optional is new.

58:08 So I think it doesn't make a lot of sense to compare it to version 1.0 in this context.

58:13 Strongly encourage people to use 2.0.

58:16 Yeah, definitely.

58:17 Please use version two.

58:19 We also have the new stores interface.

58:22 As well.

58:22 And so many other features that I forgot about or don't have time to list.

58:28 HTMX request support.

58:30 All sorts of good stuff.

58:30 Okay.

58:31 Yeah, there have been so many people, amazing contributors over the last several months spending time on this and delivering awesome features for us, for the community.

58:42 There's been so much work going into this.

58:44 And, well, I was relieved when I was finally able to hit the publish button on GitHub and we could get it out.

58:54 It's a big, big project.

58:55 That's right.

58:56 A lot of people have already been using version 2.0 in production.

58:59 Cody, you have.

59:00 Jacob, you have as well.

59:01 Basically, since we started development, I actually haven't.

59:05 I've stuck with 1.5 for a long time.

59:11 Now it's out.

59:12 One of the things I'll add is that the velocity of the project seems to be really high.

59:16 And it's encouraging to see all the contributions and all the edits that everybody's making.

59:22 And so I, for one, I'm really excited about what it's going to look like in a year.

59:26 I think we've got a lot of opportunity ahead of us and looking forward to seeing everybody jump in and try things out.

59:33 And if something doesn't work the way that you want it to, feel free to open up an issue or hop on Discord.

59:38 We're all very responsive and would love to kind of hear what our users are thinking about as they build their applications.

59:45 It seems like a great framework.

59:46 I really like the balance you're striking between the micro frameworks and some of the batteries included.

59:52 So congrats to all of you.

59:54 Now, before we get out of here, I'll just ask one quick question, you know, that I usually ask at the end, and that is a quick shout out to some package or library out there that you like and want to spread the word about.

01:00:05 I guess I'll start.

01:00:07 DuckDB for me.

01:00:08 So I have used a massive amount of DuckDB and you can kind of think about it as like an analytical SQLite.

01:00:15 And so it's in process.

01:00:17 And so you can start it up and just run SQL directly from your Python process.

01:00:22 And so the project that I'm working on at Google actually has quite a bit of DuckDB as kind of this like middle ETL piece where data gets ingested.

01:00:30 We do things in DuckDB and then that actually gets exported to BigQuery or other database engines.

01:00:36 And so this really has kind of opened up the flexibility for us to be able to do quite a bit of transformations, just in RAM without having to write to disk.
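As a tiny illustration of that in-process model (the query and data are made up):

```python
# A tiny illustration of DuckDB's in-process model: SQL runs inside
# your Python process, no server required. The query is made up.
import duckdb

# duckdb.sql() returns a relation; DuckDB can also query Parquet, CSV,
# and pandas DataFrames in place.
print(duckdb.sql("SELECT 21 * 2 AS answer").fetchall())  # [(42,)]
```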

01:00:43 That's cool.

01:00:44 An in-process SQL OLAP database management system.

01:00:47 Cool.

01:00:48 He's been recruiting us to use DuckDB for quite a while now.

01:00:52 And he succeeded.

01:00:53 That's cool.

01:00:55 I love what the standard library has, but Click is like one of my favorite CLI building tools.

01:01:01 Rich is the thing that makes your terminal beautiful.

01:01:06 But there's this really cool package.

01:01:07 I mean, you can use them both separately, but rich-click.

01:01:11 And it's what we use for our CLI.

01:01:15 You have the great Click CLI building stuff, but then Rich on top of it automatically makes everything pretty.

01:01:21 That's cool.

01:01:22 So it's like all the magic and niceness of Rich, but available for Click, like colors in your help documents.
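The drop-in pattern rich-click documents is to import it in place of Click; here's a minimal sketch, with the command itself being our own made-up example:

```python
# A short sketch of rich-click's documented drop-in usage: import it
# in place of click and --help output gets Rich formatting. The
# command below is a made-up example.
import rich_click as click


@click.command()
@click.option("--name", default="world", help="Who to greet.")
def hello(name: str) -> None:
    """Say hello, with a nicely formatted help page for free."""
    click.echo(f"Hello, {name}!")


if __name__ == "__main__":
    hello()
```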

01:01:29 I haven't gotten to check out Sebastian's Typer yet, but I've seen some screenshots and it's sort of similar.

01:01:34 I don't know a whole lot about that, but I think it also uses Rich, right?

01:01:38 I think so.

01:01:39 Nice.

01:01:39 All right, Janek.

01:01:40 Well, for me, it's got to be msgspec because it's just...

01:01:44 So if you do any kind of JSON parsing or serialization, or MessagePack parsing or serialization, or data modeling that you might usually want to do with a data class and want to add a bit of validation on top, or even if you're just curious, you should absolutely check this library out because it's super amazing.
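A quick sketch of that struct-plus-validation style; the model and payload here are invented for illustration:

```python
# A quick sketch of msgspec's struct-plus-validation style; the model
# and payload are invented for illustration.
import msgspec


class User(msgspec.Struct):
    name: str
    age: int


# Decoding validates against the annotated types and raises on mismatch.
user = msgspec.json.decode(b'{"name": "Ada", "age": 36}', type=User)
print(user)  # User(name='Ada', age=36)
```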

01:02:06 The author is really, really great.

01:02:09 I can give him just a huge shout out because he's done such a great job at supporting our integration with it.

01:02:16 It has been quite a tight collaboration at some points because when we started integrating it, there were a lot of things where we felt like, okay, so, well, we kind of can't use it right now because of this reason or that reason.

01:02:31 And he's been so responsive and helpful in finding ways for us to work around that or just straight up implementing features that were missing for us.

01:02:40 And it's really...

01:02:40 That's pretty awesome.

01:02:41 I can't thank him enough.

01:02:43 It's really great.

01:02:44 It's a pleasure to work with him.

01:02:46 And it's an awesome library that everyone should check out, I think.

01:02:50 Cool. That's news to me.

01:02:52 So I will definitely check it out.

01:02:53 All right. Well, thank you all for being here.

01:02:55 Final call to action.

01:02:56 People want to get started with Litestar.

01:02:58 What do you tell them?

01:02:59 Go to litestar.dev.

01:03:00 You can read the docs.

01:03:01 Use it 15 minutes, 30 minutes.

01:03:03 You'll know if you like it or not.

01:03:04 Join us on Discord if you have questions.

01:03:06 We're happy to help answer anything that may come up.

01:03:09 I don't think I can add anything valuable to that anymore.

01:03:12 All right.

01:03:14 Well, yeah.

01:03:15 Sounds good, guys.

01:03:16 Thank you for being here.

01:03:17 Congrats on the project.

01:03:18 Thank you, Michael.

01:03:18 Bye-bye.

01:03:20 This has been another episode of Talk Python to Me.

01:03:22 Thank you to our sponsors.

01:03:24 Be sure to check out what they're offering.

01:03:26 It really helps support the show.

01:03:27 Take some stress out of your life.

01:03:30 Get notified immediately about errors and performance issues in your web or mobile applications with Sentry.

01:03:36 Just visit talkpython.fm/Sentry and get started for free.

01:03:41 And be sure to use the promo code talkpython, all one word.

01:03:44 Want to level up your Python?

01:03:46 We have one of the largest catalogs of Python video courses over at Talk Python.

01:03:50 Our content ranges from true beginners to deeply advanced topics like memory and async.

01:03:55 And best of all, there's not a subscription in sight.

01:03:58 Check it out for yourself at training.talkpython.fm.

01:04:01 Be sure to subscribe to the show.

01:04:03 Open your favorite podcast app and search for Python.

01:04:06 We should be right at the top.

01:04:07 You can also find the iTunes feed at /iTunes, the Google Play feed at /Play, and the direct RSS feed at /RSS on talkpython.fm.

01:04:17 We're live streaming most of our recordings these days.

01:04:19 If you want to be part of the show and have your comments featured on the air, be sure to subscribe to our YouTube channel at talkpython.fm/YouTube.

01:04:27 This is your host, Michael Kennedy.

01:04:29 Thanks so much for listening.

01:04:30 I really appreciate it.

01:04:32 Now get out there and write some Python code.

01:04:34 Bye.


