#463: Running on Rust: Granian Web Server Transcript

Recorded on Tuesday, May 7, 2024.

00:00 So you've created a web app using Flask, Django, FastAPI, or even Emmett.

00:04 It works great on your machine.

00:06 How do you get it out to the world?

00:08 Well, you'll need a production-ready web server, of course.

00:10 On this episode, we have Giovanni Barillari to tell us about his relatively new server named Granian.

00:17 It promises better performance and much better consistency than many of the more well-known ones today.

00:23 This is Talk Python to Me, episode 463.

00:26 Are you ready for your host? Here he is!

00:29 You're listening to Michael Kennedy on Talk Python to Me.

00:33 Live from Portland, Oregon, and this segment was made with Python.

00:36 Welcome to Talk Python to Me, a weekly podcast on Python.

00:43 This is your host, Michael Kennedy.

00:45 Follow me on Mastodon, where I'm @mkennedy, and follow the podcast using @talkpython, both on fosstodon.org.

00:52 Keep up with the show and listen to over seven years of past episodes at talkpython.fm.

00:58 We've started streaming most of our episodes live on YouTube.

01:01 Subscribe to our YouTube channel over at talkpython.fm/youtube to get notified about upcoming shows and be part of that episode.

01:09 This episode is sponsored by Neo4j.

01:11 It's time to stop asking relational databases to do more than they were made for

01:16 and simplify complex data models with graphs.

01:20 Check out the sample FastAPI project and see what Neo4j, a native graph database, can do for you.

01:26 Find out more at talkpython.fm/Neo4j.

01:31 And it's also brought to you by us over at Talk Python Training.

01:35 Did you know that we have over 250 hours of Python courses?

01:40 Yeah, that's right.

01:41 Check them out at talkpython.fm/courses.

01:44 In fact, I want to tell you about our latest course we just released last week,

01:48 Getting Started with NLP and spaCy.

01:51 This one is written by Vincent Warmerdam.

01:53 You may know him from many of his educational projects and channels, but he also worked at Explosion AI, the makers of spaCy.

02:01 So it's safe to say he knows his stuff when it comes to NLP and spaCy.

02:05 If you have text you need to analyze, pull entities from, understand the sentiment, and so much more,

02:12 then spaCy is one of the best frameworks out there for this.

02:15 And now we have an awesome course you can use to get way better at NLP.

02:20 During the course, you need a fun project, right?

02:23 Well, Vincent uses the past nine years of Talk Python transcripts, along with a few data science programming bits of magic,

02:29 to process them all with spaCy and ask awesome questions like, which frameworks are we talking about over the years?

02:37 Sign up for the course at talkpython.fm/spacy.

02:40 And if you hurry and get it in the month of May, 2024, we're doing a special 10% off to celebrate the launch.

02:47 That's talkpython.fm/spacy.

02:50 The link is in your podcast player show notes.

02:52 Enjoy the course.

02:53 Now, on to that interview.

02:55 Giovanni, welcome to Talk Python to Me.

02:57 Hello, Michael.

02:58 Thank you for having me on the show.

03:00 It's great to have you on the show.

03:02 Some people you learn about just from like their public speaking or their writing,

03:07 and other people you meet through their projects, right?

03:09 I got to know you through Granian, your Rust-based web server for Python and other things,

03:16 that I thought was really awesome.

03:17 Started playing with it, and we started talking on GitHub around some ideas.

03:20 And then here you are, to sort of explore more, learn more about some of the frameworks that

03:24 you've created.

03:25 So I'm excited to talk about Emmett, Granian, and a bunch of other things that you built

03:30 that kind of all go together in a big mix there.

03:32 Yeah, I'm excited as well.

03:34 Yeah, it should be a lot of fun.

03:35 Before we get into all the details of all that stuff, just tell us a bit about yourself.

03:40 I'm Giovanni Barillari.

03:41 I was actually born in Italy, but today I'm living in Vienna, in Austria.

03:47 I'm actually a physicist.

03:48 So yeah, I graduated in physics at the university.

03:51 And let's say I started working as a software engineer, focused especially on web software,

04:00 pretty soon after the university.

04:02 So it's like 10 years, something like that,

04:04 that I'm working as a software engineer, also like as a site reliability engineer.

04:10 So let's just say I'm quite like on the backend side of the things usually.

04:16 And I also started, I actually started like contributing to open source software projects,

04:24 even before actually starting working as a software engineer.

04:29 And particularly I started like contributing to the Web2Py project.

04:34 It's a quite old project by Massimo Di Pierro.

04:38 And yeah, today I'm working as a site reliability engineer for Sentry.

04:42 I bet that pretty much all of the people know about Sentry.

04:47 Awesome.

04:48 Yeah, I didn't even know that you worked for Sentry until just a few minutes ago.

04:52 That's pretty awesome.

04:53 Obviously, people know Sentry.

04:55 They're big supporters of the show and sponsor some of the episodes.

04:59 But yeah, how's it like to work at Sentry?

05:02 Must be fun.

05:02 Well, it's super nice.

05:04 A lot of talented people.

05:06 They're super nice.

05:08 It's a really nice environment to be within.

05:10 So yeah, I'm super happy.

05:13 Yeah.

05:13 Awesome.

05:14 What does a site reliability engineer do?

05:17 So let's say it might be a complicated question because like actually the original title comes from Google.

05:25 So let's say it's kind of related to infrastructure and monitoring in software.

05:35 So let's say, to simplify, that it's about being sure that everything runs smoothly with no incidents and stuff like that.

05:45 I see.

05:45 Make sure you can monitor bugs, slowdowns.

05:48 Yeah.

05:49 work on failover type of situations, that kind of stuff.

05:52 Exactly.

05:52 I imagine you probably use Sentry to monitor Sentry for reliability.

05:57 Is that right?

05:59 Yes.

05:59 Yes.

06:00 We have this project called like Sentry for Sentry.

06:04 Okay.

06:04 Which is like a separated Sentry instance that monitors the actual SaaS instance of Sentry.

06:12 That's pretty interesting because of course, if Sentry went down, you're using it to monitor it.

06:16 Yeah.

06:17 Everyone else uses Sentry to monitor their thing.

06:19 It's not about when their code goes down, it doesn't affect it.

06:22 But when your code goes down, it might actually affect your ability to know that it's down.

06:26 So a separate copy.

06:27 That's wild.

06:27 Okay.

06:28 I hadn't even thought of that.

06:29 Exactly.

06:30 Super cool.

06:31 All right.

06:31 Now, first of all, there's a little bit of love out in the audience for your whole larger project, Emmett.

06:36 So Tushar says, did you say Emmett?

06:39 Emmett is amazing, which is super cool.

06:42 Tools like that encourage him to work on his dev tooling, which is really great.

06:45 Before we get into the details of that, though, why create another web framework?

06:49 I don't mean this in a negative way.

06:50 It's just like there's already, there's Flask and Django, and then we have FastAPI and so on.

06:56 So why not just go, oh, I'm just going to use this one?

06:59 Like what inspired you to go, like, I think I'll make one of them.

07:01 So I think we should go back a bit in time because actually like this year will be like the 10th birthday of like Emmett.

07:11 So let's just say it's like a long time.

07:15 So it's not that new.

07:16 Okay.

07:17 Out there.

07:18 Yeah.

07:19 I see.

07:20 Yeah.

07:20 So originally it was released as, it had like a different name.

07:27 It was called Weppy and I changed the name in 2020, I think.

07:32 like when I moved from synchronous paradigm to the asynchronous one.

07:41 So let's say at the time I designed Weppy, so the original version in 2014, the main thing was about, so in that time, it was like the time of Ruby on Rails being super popular.

07:57 And I originally started working in web development using Ruby on Rails.

08:04 And when comparing, let's say, the amount of, let's say, batteries included in the box of Ruby on Rails to the Python ecosystem.

08:13 So let's say that the major competitor at that point in time was Django.

08:17 But let's say the feeling I got from Django at that time compared to Ruby on Rails was completely different.

08:24 In a sense that I found myself like spending much more time on building stuff compared to Ruby on Rails.

08:33 And this is also what brought me to the Web2Py project, the Web2Py community, because it was, in a sense, pretty similar in some of the design decisions to RoR.

08:46 But at the same time, like once you start contributing to a web framework, you have time to like to dig into a lot of the internals and decisions.

08:56 And so Web2Py at that time, so I used Web2Py to build my first, the code behind my first startup, actually.

09:04 And it had quite a lot of scaling issues at that time.

09:09 So let's say at that point in time, I just was looking out for the options and I started like digging into the code internals of Django and also Flask, which I mean, I really loved like the Flask approach of things.

09:26 But at the same time, it was so micro.

09:30 Yeah.

09:31 I mean, like, to build an actual project, it required, like, tons of extensions and other pieces, let's say other libraries, to add to the project. So yeah, I think I ended up just, you know, saying, okay, let's just rebuild Web2Py the way I wanted.

09:49 And that's eventually how Weppy, and Emmett today, came out.

09:53 Yeah, that's pretty much the story behind it.

09:56 Yeah.

09:56 Okay.

09:57 Yeah, I didn't realize it went that far back.

09:59 How about Granian?

10:00 Is that newer?

10:01 Yeah, Granian is, I think like the first public release is like from one year ago or something.

10:08 Yeah.

10:08 And I, because I learned about Emmet through Granian and like, oh, it's kind of all, probably all the same project.

10:14 I didn't realize the history.

10:15 Why the new name?

10:16 Why Emmett?

10:17 So the thing was to support, let's say, the upgrade between Weppy and Emmett.

10:23 So since, like, all the interfaces had to be changed to support, like, async code, the idea was to provide, let's say, a quick way to do that.

10:35 Meaning, to make it possible for developers to avoid, you know, installing like a new version of Weppy and getting like everything broken because of, you know, the new interfaces.

10:45 So yeah, I just decided to, you know, change the interface and also change, like, the package name in order to say, sure.

10:54 Okay.

10:54 If you want to upgrade, you can upgrade safely.

10:57 Otherwise, it's like a super mega version change.

11:01 Not only you change the version, but you change the name.

11:03 Yeah.

11:04 I see.

11:05 Exactly.

11:05 That's interesting.

11:08 All right.

11:09 Well, let's dive into it.

11:11 So I like the title, Emmett, the web framework for inventors.

11:15 And yeah, maybe give us a sense of like, what are some of the core features of Emmett?

11:20 And what are your goals with building it?

11:22 From an API perspective.

11:23 The idea was to have like all in one, let's say, framework to build web application.

11:28 All in one, let's say, in a sense of, again, when the project actually started.

11:34 So like even 10 years after that, I still usually prefer to develop web projects without relying too much on front-end frameworks.

11:46 So this is like a big, let's say, preamble to the thing.

11:50 Like this is originally from an era where, like, front-end web frameworks barely existed.

11:56 Like I think it was just AngularJS and maybe Ember at that time.

12:01 Yeah.

12:01 I mean, you're basically describing my life in 2024.

12:04 So I'm a big fan of the server-side frameworks, you know?

12:07 Yeah.

12:08 Also because, like, it sometimes seems that we reinvent, like, a lot of stuff only to end up back at the beginning.

12:17 Like, yeah, I saw, like, all of the themes about, you know, server-side rendering with front-end frameworks and server-side rendered components and all that kind of stuff.

12:26 So sometimes it just feels, you know, we're getting back to the origin.

12:30 But yeah.

12:32 So the idea behind Emmett is to have, like, an all-in-one solution to develop web applications.

12:39 So you have all the standard features you have with the web framework.

12:43 So like routing and middlewares and that kind of stuff.

12:48 You have an ORM.

12:49 You have a templating system, plus a few, let's say, tools embedded with Emmett.

12:56 So for instance, it's very easy to use, I don't know, sessions or to have an authentication system.

13:04 It's all like provided inside the box.

13:08 So yeah, the idea was to have like, let's say, a battery of tools like in one place to do the most common things when you start developing a web application.

13:19 Yeah, very nice.
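To give a feel for that all-in-one shape, here is a minimal sketch in the style of Emmett's docs; the Task model and the route body are illustrative assumptions, not taken from the show:

```python
# Minimal Emmett app sketch (hypothetical model and route, per Emmett's docs).
from emmett import App
from emmett.orm import Database, Model, Field

class Task(Model):
    title = Field.string()
    is_done = Field.bool()

app = App(__name__)
db = Database(app)          # the ORM comes in the box
db.define_models(Task)
app.pipeline = [db.pipe]    # database pipe (middleware) wraps every request

@app.route("/")
async def index():
    # the returned dict becomes the context for this route's template;
    # the selected rows are "row" objects, not Task instances (discussed below)
    return {"tasks": Task.all().select()}
```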

13:20 So yeah, like you said, it has an ORM built in and it feels, I guess, SQLAlchemy-ish in a sense, but not exactly the same.

13:29 Or Django ORM would be, you know, another way in some ways there.

13:33 Yeah, I think it's more near to SQLAlchemy in that sense.

13:38 You tend to have, like, an API for using Python objects to build queries rather than, how to say, using, like, a lot of string attributes like you usually tend to do in Django.

13:52 Yeah, I mean, it's more close to SQLAlchemy in that sense.

13:57 I think like the major difference with the ORMs out there is that the model class you define are not like, so when you, for example, select records from the database, the single, let's say, rows you select are not instances of the model class.

14:16 So let's say like the model class acts more like management class.

14:21 Like a schema definition sort of thing.

14:24 Yeah.

14:24 I mean, it adds, like, a lot of helpers on top of that.

14:28 But yeah, I think like it's definitely the major difference between like the vast majority of ORMs out there for Python.

14:35 When you usually have like the model class, which is also like the class of all the records you select and work on from the database.

14:42 Yeah.

14:43 So what do you get back in this world here?

14:45 What do you get if you do a query?

14:47 Like in your example on the homepage, you have a time traveler.

14:50 So what do you get back when you get a group of them, a set of them?

14:53 So you get like a different class.

14:55 So there's like a separated class.

14:58 Every model has, it's called like row class.

15:01 So it's an instance of that class.

15:04 And this design, it's mostly made for two reasons.

15:11 Like the first one is performance in a sense, meaning that when you select records or operate on records,

15:18 it avoids, you know, filling all those objects with the actual model class attributes or functions or methods.

15:28 And the validation and stuff.

15:31 Yeah.

15:31 Yeah.

15:31 And on the other hand, it was also to kind of remind the developer that they're working with actual data from the database

15:43 and not like real Python objects in a sense, which is.

15:46 Yeah.

15:47 Yeah.

15:48 I think, like, over the years that's, like, the first reason why people tend to object to ORMs.

15:55 So.

15:56 Yeah.

15:56 Those two were the main reasons behind this design.

15:59 It's something like, you know, in the between of an ORM and just some database abstraction layer.

16:06 This portion of Talk Python to Me is brought to you by Neo4j.

16:12 Do you know Neo4j?

16:13 Neo4j is a native graph database.

16:16 And if the slowest part of your data access patterns involves computing relationships,

16:21 why not use a database that stores those relationships directly in the database,

16:26 unlike your typical relational one?

16:28 A graph database lets you model the data the way it looks in the real world,

16:32 instead of forcing it into rows and columns.

16:35 It's time to stop asking relational databases to do more than they were made for

16:39 and simplify complex data models with graphs.

16:42 If you haven't used a graph database before, you might be wondering about common use cases.

16:47 You know, what's it for?

16:48 Here are just a few.

16:50 Detecting fraud.

16:51 Enhancing AI.

16:53 Managing supply chains.

16:54 Managing a 360 degree view of your data and anywhere else you have highly connected data.

17:00 To use Neo4j from Python, it's a simple pip install Neo4j.

17:06 And to help you get started, their docs include a sample web app demonstrating how to use it both from Flask and FastAPI.

17:13 Find it in their docs or search GitHub for Neo4j movies application quick start.

17:18 Developers are solving some of the world's biggest problems with graphs.

17:22 Now it's your turn.

17:23 Visit talkpython.fm/neo4j to get started.

17:28 That's talkpython.fm/neo, the number four, and the letter j.

17:32 Thank you to Neo4j for supporting Talk Python to me.

17:37 I like the query syntax.

17:39 You know, people visit the homepage, you'd see something like time travel dot where, then lambda of t goes to t dot return equal equal true.

17:48 And while some of the ORMs let you write code in terms of like the class fields or whatever, it's never looked quite right because you're working with, say, the static value out of the class.

18:03 Whereas what you really are trying to talk about is the instance level of the record, right?

18:07 So instead of saying t, you'd say time travel dot return, but we'd never test that because it's the global value of it, right?

18:15 And stuff like that.

18:15 Or you just use strings, which is basically, in my mind, no good.

18:19 But what's cool, you know, also, do you want to do an OR or an AND?

18:23 And then what weird thing do you import to do the OR?

18:27 And like, you know, how do you wrap the query in it?

18:29 All that kind of stuff, whereas if it's a lambda, you can just express the conditions how you want.

18:34 Yeah, yeah.

18:34 That's pretty much the idea.

18:36 So I like to use, you know, special methods from Python objects and translate those expressions into actual SQL code.

18:44 So, yeah.

18:45 Nice.
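As a hedged illustration of that lambda style (the model and field names here are hypothetical): since Python's `or`/`and` keywords can't be overloaded, combined conditions in this kind of ORM are typically written with the `|` and `&` operators inside the lambda:

```python
# Hypothetical TimeTravel model; | means OR and & means AND on query expressions
rows = TimeTravel.where(
    lambda t: (t.returned == True) | (t.age > 30)
).select()
```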

18:46 For my apps, I have a combination of Beanie and Mongo Engine, depending on which one you're talking about.

18:52 And for Mongo Engine, you do things that are pretty funky.

18:56 Like, if you want to say greater than, you would say time travel dot, I don't know, it doesn't have a value, but age.

19:01 Let's say there's an age.

19:02 Like, time travel dot age underscore underscore GT equals value.

19:08 And you're like, yes.

19:10 Well, it's not equal to it, and that's not the name of it.

19:14 But okay, I guess that means, you know what I mean?

19:17 Like, there's a real weird way it's, like, jammed into a syntax, whereas, like, here you say greater than whatever, right?

19:22 Yeah.

19:23 Yeah, it's, like, the same; it's one of the things I dislike still today about the Django ORM.

19:30 Yeah.

19:30 In that sense.

19:31 I mean, it has, like, a lot more capabilities.

19:35 Because, for instance, like, when you want to represent, like, complex queries, it tends to be more powerful in that sense.

19:44 Meaning that special methods are limited.

19:46 So, at some point, you start making custom methods.

19:51 So, like, I don't know, starts with, for example.

19:54 Yeah, starts with, or in this set, or the set includes this, and something like that, right?

19:59 Exactly.

20:00 So, I think, yeah, there are pros and cons in both, let's say, approaches.

20:04 Yeah, cool.
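For contrast, a short sketch of the two styles side by side (hypothetical model, as in the conversation above; the first line follows MongoEngine's documented keyword filtering):

```python
# MongoEngine / Django-style: the operator is encoded in the keyword name
older = TimeTravel.objects(age__gt=30)

# Emmett-style: the same condition expressed at the instance level
older = TimeTravel.where(lambda t: t.age > 30).select()
```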

20:05 All right, so, we have a lot to talk about, even though all this code fits on one screen.

20:08 The other part is to define an endpoint.

20:11 This is about an API, right?

20:12 So, you have an async def, which is awesome.

20:15 Supports async and await.

20:16 I think it's super valuable.

20:18 Yeah, one note is that the ORM is still synchronous.

20:22 Yeah, yeah.

20:23 So, what about that?

20:25 Are you planning on adding an async thing, or are you just saying it's just synchronous?

20:29 So, it's like a very long story, in a sense, because, like, I started asking myself the same question, like, several years ago.

20:40 And I think, like, at some point, probably, I will end up doing that, in the same way SQLAlchemy did that.

20:50 Even if I remember, like, a super nice blog post from the author of SQLAlchemy, stating that asynchronous code and databases are not necessarily the best fit for each other.

21:03 So, yeah, let's say, like, in the last few years, I just waited, in a way, to see what everyone else was doing.

21:10 But, yeah, I think, like, at some point, it will be inevitable, in a sense.

21:16 I just don't feel the time has come yet.

21:19 So, we'll see.

21:20 Yeah, cool.

21:21 And then, I guess, the last thing to talk about is you have a decorator app.route, pretty straightforward.

21:28 Yeah.

21:28 But then, you also have an at service.json.

21:32 What's this decorator do?

21:33 So, you can think about that decorator, like, the service decorator, as, like, the JSONify function in Flask.

21:42 So, yeah, in Emmett, you have, like, both the JSON service and the XML service.

21:48 Because, like, in old times, I had to write stuff to talk with XML endpoints and stuff like that.

21:56 Yeah, yeah, yeah.

21:57 So, yeah, it's just an easy way to wrap and say everything that returns from this function, just serializing JSON or XML or whatever.

22:07 If I return, rather than a response, just return a dictionary and it'll do the serialization, right?

22:13 Yeah, exactly.

22:13 Nice.
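A minimal sketch of that pattern, following the service decorator described here (the import path and route are assumptions based on Emmett's docs):

```python
from emmett import App
from emmett.tools import service

app = App(__name__)

@app.route("/status")
@service.json
async def status():
    # return a plain dict; the decorator serializes it to JSON
    return {"status": "ok"}
```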

22:14 And the audience asks, does it generate OpenAPI documentation?

22:19 Like, does it automatically generate documentation?

22:22 So, from standard routes, no.

22:26 There's an extension, though, meaning that if you plan to design REST, let's say, APIs with Emmett, there's an extension for that.

22:37 It's called Emmett-REST, which, let's say, gives you, like, more tools to structure your routes and serialization and deserialization and all that kind of stuff.

22:48 And that extension also brings OpenAPI documentation generation.

22:53 Eventually, let's say, the OpenAPI documentation generation will come also to plain routes in Emmett.

23:01 But there are quite a few design implications to doing that.

23:06 Meaning that, so Emmett, it's, like, not designed to have a strong type system.

23:12 Because, again, it comes from the days where, like, typing.

23:15 That didn't exist?

23:17 Let's not, yeah.

23:18 So, let's say that, for instance, for frameworks like FastAPI, which are practically designed on top of something like Pydantic.

23:27 So, you have, like, a strong type system.

23:29 So, everything that comes in and out from the majority of routes you write has types.

23:36 And so, it's really easy for the framework to inspect the code and understand what's going on.

23:42 On, let's say, general frameworks like Emmett, where you, I mean, you might have, like, I don't know, HTML routes or other kinds of stuff going on.

23:53 There's no, let's say, design behind that to support, in the first place, like, strong typing.

23:59 So, yeah.

24:00 Making, like, OpenAPI documentation out of standard Emmett routes involves, like, quite a lot of decisions.

24:08 So, yeah.

24:09 We'll see.

24:09 We'll see.

24:10 Yeah.

24:10 Okay.

24:11 Yeah.

24:11 Very cool.

24:12 Yeah.

24:12 We'll come back and talk about Emmett-REST in a minute.

24:15 That's one of the fun things.

24:17 That also has WebSocket support, right?

24:19 Yep.

24:19 Okay.

24:20 Absolutely.

24:20 WebSockets are these things that I'm always like, man, they're so cool and you can do all this interesting stuff.

24:25 And then I never, ever, ever have a use case for it in my world.

24:29 I just haven't yet.

24:31 And so, I'm like, well, they're very cool, but I don't have it yet.

24:33 Yeah.

24:34 So, I mean, I'm not building Slack.

24:35 Yeah.

24:36 The thing is that usually, like, when you work with WebSockets, it's also pretty common that you need some broadcast facility.

24:46 Yeah.

24:47 So, usually you want to do channels or that kind of stuff, which usually tends to involve, like, other software.

24:57 Like, you usually have Redis or something like that in order to, since Python is not exactly good in, let's say, managing threads or communicating across different processes.

25:08 That's probably why it's not so easy in the Python world to actually rely on WebSockets a lot.

25:14 I don't know, for instance, if you take, like, languages like, I don't know, Elixir, you have, like, tons of stuff based on the fact that everything is actually communicating over sockets.

25:26 So, yeah.

25:27 And I think, like, one single thing to say on WebSockets, it's, I think Emmett is the only one, or one of the few frameworks, that allows you to write middlewares for WebSockets.

25:41 So, you can.

25:42 Okay.

25:43 So, if you have, like, your chain of middlewares on the application, you can also define behaviors for the same middlewares to behave on WebSockets.

25:51 So, you can probably reuse, like, a lot of code.

25:54 Like, I don't know, if you are in a WebSocket and need to talk with the database, you can use the same middleware for the database connection you use on the standard request.

26:05 So, I think that might be worth noting.

26:08 Yeah, absolutely.
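A hedged sketch of what a WebSocket route looks like in Emmett (based on its docs; the echo logic is illustrative). The point above is that the pipes/middlewares in `app.pipeline`, such as a database pipe, can wrap these routes too:

```python
from emmett import App, websocket

app = App(__name__)

@app.websocket("/echo")
async def echo():
    # the application pipeline (e.g., a database pipe) also applies here
    while True:
        message = await websocket.receive()
        await websocket.send(message)
```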

26:09 Another thing that's interesting that I don't see in a lot of ORMs, they kind of just leave it to, well, raw SQL, so you write it yourself, is aggregation.

26:18 Right?


26:18 The aggregation stuff you have here is pretty interesting where you do a bunch of calculation type stuff in the database and then get a, sort of, the results back.

26:28 Right?

26:28 So, here you can say, like, select, talking about an event, like, event.location.

26:32 Get the counts.

26:33 And then group by this thing, order by that.

26:35 Having these sort of properties.

26:37 That's pretty unique.

26:38 I don't see that in a lot of ORMs.

26:40 Yeah, I think, like, you can do pretty much the same with SQLAlchemy, but probably, like, the syntax is less sugary, let's say.

26:49 I mean, again, this design comes from the fact that with my first startup, we had to do, like, a lot of aggregation over the database.

26:58 So, that's why I wrote all of that, you know?

27:01 Yeah, that's cool.

27:02 The same code.

27:04 Yeah, nice.
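A hedged sketch of that aggregation style (hypothetical Event model; the grouping, ordering, and having keywords follow the select conventions of Emmett's pyDAL lineage):

```python
# Count events per location, order by the count, filter on the aggregate
count = Event.id.count()
rows = db(Event.location != None).select(
    Event.location,
    count,
    groupby=Event.location,
    orderby=~count,        # descending by the aggregate
    having=(count > 10),   # keep only the busy locations
)
```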

27:05 You know, I'm familiar with it from all the MongoDB stuff that I've done, like, that big aggregation pipeline over there as well.

27:11 Yeah, I'm also familiar.

27:13 Like, I'm not a huge fan of Mongo, though.

27:16 Probably because, like, being an SRE, like, making Mongo reliable is, like, a mess sometimes.

27:23 I think it depends on how people rely on it, right?

27:27 Yeah.

27:28 For me, it's been absolutely fine.

27:29 I've run my stuff on it for over 10 years, and it's been perfect.

27:31 However, that's because I use a lot of structured code to talk to Mongo from one tech stack, right?

27:38 But if some people are using dictionaries to talk to it, other people are using this framework, other people are using that framework,

27:44 then the lack of schema structure, I think, becomes a problem.

27:49 So, I think it really depends on how you use it.

27:52 But, yeah, I hear what you're saying for sure.

27:54 I think that that's not even necessarily a Mongo challenge.

27:57 That's a document database challenge, generally, right?

27:59 Yeah.

28:00 Just Mongo is primarily the way people do document databases.

28:03 Yeah.

28:03 I tended to, like, use it for separated stuff.

28:07 So, in several projects I worked on, I had, like, for instance, like the main database with Postgres, for instance,

28:13 like another database with Mongo for specific stuff.

28:16 Maybe stuff you don't need transactions on, or maybe you want to store, like, time series data or, you know, that kind of stuff.

28:24 So, for that, I think it's really cool.

28:26 Yeah, nice.

28:27 All right.

28:28 All right.

28:28 I guess one final thing here that is worth covering, then I want to dive into Granian as well, is the template syntax.

28:35 So, you've got your own template syntax that's kind of like...

28:39 That's not a syntax.

28:40 All right.

28:42 You tell people about it.

28:44 You tell them about it.

28:44 Yeah.

28:45 So, the template system embedded in Emmett is called Renoir.

28:49 And the idea behind it is to not have a syntax at all.

28:56 So, the idea behind Emmett's template system was, why...

29:01 So, the question I had is, like, why do I have to learn a new language to write server-side rendered templates?

29:09 Like, why it came out?

29:10 Yeah.

29:10 And those languages are...

29:12 Yeah.

29:12 And they're very, very Python-like, but they're not Python.

29:15 Exactly.

29:15 So, I just said, well, I guess I'll try to do just Python, you know, wrap it in the same brackets every other templating language has.

29:25 So, it's just plain Python.

29:27 You can do pretty much everything you can do in Python.

29:31 You can even do imports inside.

29:34 Not that I suggest to do that, but you can...

29:36 Still, you can do that.

29:37 Don't go create PHP, people.

29:40 Come on now.

29:41 Exactly.

29:42 The only, let's say, major difference from standard Python code is that you have to write the pass keyword after a block of code.

29:52 So, if you write, like, a for loop or an if statement, the template engine has to know when that block ends, given that Python relies on indentation to understand that.

30:06 But in templates, you don't have like the same indentation level you have in Python.

30:10 So, that's the only major difference from plain Python code, plus a few, let's say, keywords added to the game.

30:19 So, you have extend and include in order to extend and include.

30:25 So, there are partial templates, let's say, and blocks.

30:29 That's it.

30:30 Right.

30:30 Blocks for the layout sections, right?

30:33 Exactly.
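A short sketch of what a Renoir template looks like under those rules (file name and variables are illustrative): plain Python inside the delimiters, `=` for output, `pass` to close blocks, and `extend` for layouts:

```html
{{ extend "layout.html" }}

<ul>
  {{ for post in posts: }}
  <li>{{ =post.title }}</li>
  {{ pass }}
  {{ if not posts: }}
  <li>No posts yet.</li>
  {{ pass }}
</ul>
```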

30:33 Yeah, that's really nice.

30:35 I like the idea.

30:36 I'm sure there are people listening that'd be like, I would like to try this stuff out, but I've got 10,000 lines.

30:42 Of Jinja.

30:43 Jinja, or I've got 10,000 lines of Chameleon or.

30:48 Yeah, yeah, yeah, I know.

30:49 Whatever.

30:49 What's the story?

30:51 I mean, I'm working in the office with Armin Ronacher every day.

30:55 So, the amount of Jinja code we have in Sentry is like huge.

30:59 So, yeah, I perfectly understand the point.

31:03 I don't have, like, let's say, a marketing line for selling Renoir.

31:08 It's just something that, so today, I'm equally familiar with Jinja templates and Renoir templates.

31:17 I'd say it really depends on how you usually structure your application code.

31:24 So, I think one good way to try out Renoir is if you tend not to use a lot of, like, Jinja filters or stuff like that.

31:37 That might be a good scenario for trying out Renoir.

31:42 Yeah.

31:42 But, of course, it has to be like a new project, because converting, I mean, there's no sense in moving, translating code from one system to another once you've picked one.

31:53 It's not super different.

31:54 So, I think you change an end if to a pass, for example, or end for into a pass.

32:00 But, I was thinking more, is there a way to use Jinja within Emmett instead of using Renoir?

32:07 I mean, there's no plain, there's no ready-made extension for that.

32:12 But, I mean, if you create like a Jinja instance over the Emmett application, you can call it in your routes.

32:20 You can even create a middleware for that.

32:23 So, I think it's pretty easy also to set up Emmett to work with Jinja.

32:26 Yeah, I would think so.

32:27 I created fastapi-chameleon, which lets you basically put a decorator on FastAPI endpoints.

32:32 And it does Chameleon template rendering with a dictionary instead of REST responses.

32:36 It wasn't that much.

32:37 You basically just have to juggle it from behind.

32:39 So, I imagine you could probably, someone could create a Jinja decorator.

32:43 Like you have service.json, like a template.jinja or whatever, something like that, right?

32:47 Probably?

32:48 Yeah, yeah, absolutely.

32:50 That said, I'm not a fan of Jinja.

32:51 I think it's overly complicated.

32:53 So, I'm not encouraged.

32:54 I'm not suggesting it.

32:55 But the reality is, even as much as I've tried to fight against it, is that the majority of Python web HTML, dynamic HTML, is done in Jinja these days, right?

33:05 Yeah, probably true.

33:07 Yeah, you kind of got to live in that space, even if you don't want to.

33:10 All right.

33:11 And let's talk about the thing that I talked about at the opening, which is Granian.

33:16 Where's Granian gone?

33:18 There we go.

33:19 So, this is how, as I said, I got to learn about this framework and what you're doing and stuff with Granian.

33:26 Tell us about Granian.

33:26 And as a way to sort of kick this off, Cody, who I've had on the show before from Litestar, says,

33:32 thanks for the work on Granian.

33:33 I've had an excellent time using it with Litestar.

33:35 Litestar is also awesome.

33:36 Yeah, thanks, Cody.

33:38 Yeah, so tell us about it.

33:39 Yeah, so as the description suggests, it's just an HTTP server for Python applications.

33:47 So, it has the same scope as Uvicorn, Gunicorn, Hypercorn, and all those libraries.

33:55 The main difference compared to every other HTTP server for Python application is that it's not written in Python.

34:03 It's written in Rust.

34:06 And it supports natively both WSGI and ASGI, so both synchronous and asynchronous applications.

34:16 Plus, a new protocol I also wrote with Granian, which is called RSGI.

34:22 But the only existing framework using it that I'm aware of is Emmett, indeed.

34:27 Yeah, I think there's a lot of things that are nice about this.

34:30 And I have actually most of the things, including Talk Python itself, running on Granian, which is pretty cool.

34:37 Cool.

34:39 Yeah, yeah, absolutely.
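For orientation, the basic CLI shape looks like this (a hedged sketch; the `module:attribute` target and the `--interface` flag follow Granian's README):

```bash
granian --interface wsgi myapp:app   # serve a WSGI application
granian --interface asgi myapp:app   # serve an ASGI application
granian --interface rsgi myapp:app   # serve an RSGI application (Granian's own protocol)
```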

34:40 So, single correct HTTP implementation.

34:43 Sounds awesome.

34:44 Support for version 1, 2, and 3, I guess, when it's ratified, right?

34:47 Yeah, so HTTP3, let's say, so since Granian is actually based on a Rust library, which is called Hyper, which is a super cool library.

34:59 It's like vastly adopted, like everywhere in the world.

35:03 Like, I don't know how many thousands, hundreds of thousands of libraries in the Rust ecosystem use it.

35:11 It's used in Cloudflare for a lot of their production systems.

35:15 So, super strong library.

35:17 But yes, it doesn't yet support HTTP3.

35:22 So, yeah, I guess when Hyper will support HTTP3, that could be easily added to Granian.

35:31 Right, right.

35:32 That's cool.

35:32 Yeah, with things like Gunicorn, you've then got to also integrate Uvicorn workers, and you kind of have a lot of stuff at play, right?

35:39 So, here, you've just got one thing, which is cool.

35:43 Yeah, I mean, like, I tended to find annoying the fact that if you want, like, to squeeze out, like, performance out of Uvicorn, you usually need to, yeah, pile up different libraries together.

35:57 Like, oh, wait, I need to add the httptools dependency, so it can use, like, the C-written parsers for HTTP.

36:08 Oh, wait, and probably I want some process management, so I need also Gunicorn.

36:13 Yeah.

36:14 It's not super easy, like, for starters, at least.

36:18 Yeah.

36:19 I guess maybe we should just set the stage a little bit for people that don't live and breathe Python web deployment.

36:25 Apologies.

36:27 So, typically, you would have something like Nginx or Caddy that the browser actually talks to, and then behind the scenes, you set up, let's just say, Nginx so that when there's a request for dynamic content or Python-based content, as opposed to, like, a CSS file or something, it hands that request off to the Python app server.


37:55 So, I mean, having Nginx above Granian makes sense only if you want to route something outside of Granian and not serve it from Granian.

38:05 But, yeah, in general, I'd say that you can use it in both ways, behind Nginx or not, up to the specific needs of the application, let's say.

38:16 Yeah.

38:17 I have one Nginx Docker server container handling, like, 15 different apps.

38:23 And so, for me, that's kind of the setup.

38:26 But typically, the SSL that I do is over Let's Encrypt using Certbot.

38:31 If I want to do HTTPS with Granian, how do I do it?

38:34 You can keep, like, the Let's Encrypt and Certbot certificate generation thing, because Granian supports the SIGHUP signal.

38:42 So, whenever you need to refresh the certificate, you can issue a SIGHUP to the Granian process.

38:48 And that process will reload the workers picking up the new certificate.

38:55 So, I think it's pretty straightforward.

38:57 I mean, if you already manage, like, SSL certificates and, like, renewal chain and all that kind of stuff, it's pretty straightforward to do the same in Granian.

39:08 You can just pass, you know, the paths to the certificates to the CLI command or even use environment variables up to you.

39:17 Got you. Okay.
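A hedged sketch of that setup (flag names as documented around this release; the certificate paths are illustrative):

```bash
# Point Granian at the certificates that Certbot maintains
granian --interface asgi \
  --ssl-certificate /etc/letsencrypt/live/example.com/fullchain.pem \
  --ssl-keyfile /etc/letsencrypt/live/example.com/privkey.pem \
  myapp:app

# After a renewal, SIGHUP makes the workers reload and pick up the new certificate
kill -HUP "$(pgrep -f 'granian --interface asgi')"
```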

39:18 One thing that I thought was pretty interesting was the performance.

39:21 Not necessarily that it's so, so, so much faster, but that it's so, so much more consistent.

39:28 You want to talk about that a little bit?

39:29 Yeah.

39:30 So, I think if you want to show to the YouTube audience the comparison thing, you'll find the link in the bottom of the page.

39:40 Because the first page of benchmarks in the repository contains just benchmarks of Granian itself.

39:47 Whereas in the versus page, you can find, like, comparison with other servers.

39:52 So, the thing behind, let's say, the stable keyword I used in describing, like, the performance of Granian was about the fact that usually, when people look at benchmarks, they just look at the, you know, number of requests.

40:09 Yeah, yeah, yeah, yeah.

40:11 What's the max request per second you can get with this thing or whatever?

40:14 Yeah, exactly.

40:15 But another, like, very important value of this to me is, like, the latency, because you can still serve, like, a lot of requests in parallel, but the amount of time each request will take to be served.

40:31 It's also like important.

40:32 I mean, I can serve, like, a thousand requests per second, but if those requests take a second each, or, like, 10 milliseconds, it's, like, a huge difference for the end user.

40:43 Yeah.

40:44 And so, the thing is that, at least from benchmarks, it appears that the way Granian works, which relies on having, like, all the network stack separated from Python.

40:58 So all the IO, the real IO part involving the network communication is not tied to the Python interpreter.

41:08 And so it doesn't suffer from the global interpreter lock and threads getting blocked between each other.

41:15 It seems to make, like, Granian more, let's say, predictable in response time, meaning that both the average latency and the maximum latency you have in the benchmarks are much lower compared to other, let's say, implementations, other HTTP servers.

41:36 So yeah, it's not like super faster.

41:37 It won't make like, obviously it won't make the Python code of your application faster.

41:42 We can shut down all of our servers except for one $5 digital ocean server and just.

41:49 Yeah, no, not really.

41:50 Yeah.

41:50 No, not really.

41:52 But at least it should normalize in a way the response time of your application.

41:59 Yeah.

42:00 Yeah.

42:01 Yeah.

42:02 Yeah.

42:03 So the standard deviation of the request time is way, way tighter.

42:05 Exactly.

42:06 The distribution of the request time is way, way tighter, even though you do seem to have generally the fastest times.

42:12 But if you look at the difference of the average times and the max times.

42:16 Yeah.

42:17 The difference is a lot smaller.

42:19 It's like two and a half times variation versus some of the other ones are many.

42:26 Yeah.

42:27 10 X or something.

42:28 Yes.

42:28 Yeah.

42:29 Maybe a hundred X or some of them.

42:30 Yeah.

42:31 Yeah, absolutely.

42:32 Okay.

42:33 Yeah.

42:33 That's what really I thought was pretty interesting is the super predictability of it.

42:37 Yeah.

42:38 One thing I want to ask you about is you did say it does this RSGI.

42:41 Do you want to call it G?

42:43 G SGI, the Granian server interface?

42:45 No, whatever.

42:46 No, it's like a Rust server gateway interface.

42:49 Yeah.

42:50 Yeah.

42:51 Yeah.

42:52 Yeah.

42:52 That's what I figured.

42:56 And you said Emmett uses this, which is awesome.

42:56 What's the advantage?

42:57 Is there a significant advantage to doing things differently rather than ASGI or something?

43:02 Would it be worth, like, things like Flask saying, hey, should we support this if we're running on top of Granian?

43:07 Or things like that, is what I'm getting at.

43:08 So I don't actually know if Flask today also supports asynchronous.

43:13 With Quart they do.

43:14 Requests.

43:15 Yeah.

43:16 Okay.

43:17 Okay.

43:18 So Quart might take advantage of RSGI, meaning that it's still an asynchronous protocol.

43:23 So you have to be in an asynchronous context to use RSGI.

43:27 But the main difference, let's say between ASGI and RSGI is that it's in the, how to say, the communication mechanism or let's say the communication entities, meaning that.

43:42 So in ASGI, you usually have two methods, two awaitable methods, which are, like, send and receive.

43:54 So you have to push, let's say, dictionaries to those methods, which are referred to as messages.

43:59 So you usually have, like, a dictionary, which has a type key, which contains, like, the type of message, which might be, I don't know, HTTP request or HTTP body or WebSocket message.

44:15 And all the intercommunication between like the server and application relies on those dictionaries with specific keys and strings.

44:24 And since you have, like, a single, let's say, interface to rely on, and that interface is asynchronous, it means two things.

44:37 The first thing is that every time you want to say something to the server or to the client, you have to await for that message, even if there's no actually IO involved in that operation.

44:51 So, right. Which is a context switch and overhead and all of that stuff, right?

44:55 Exactly. So for example, when you, so sending back a response in ASGI, it involves typically at least two messages.

45:04 So the first one is to start the response. So you instruct the server with the response code and the headers you want to send back to the client.

45:14 So the following message or messages are the body of that response.

45:21 So the final fact is that the response start event doesn't involve any IO at all. It doesn't use the network.

45:29 So what happens is that you're delaying the operation that you are supposed to do, which is just saying, okay, I'm gonna send some data.

45:42 And here's, like, some text. Yeah. You're gonna delay that operation to the next cycle of the event loop in your Python code.

45:50 So that adds quite a lot of overhead. And I mean, I understand like why the interface is made in this way, because it's like super straightforward.

46:00 It's very simple. You have, like, the same interface to do everything, but at the same time, it feels very unperformant in a way, because we are wasting, like, tons of cycles. I don't understand why we need to waste event loop cycles to do something that is actually synchronous code.

46:18 Yeah, sure. And so RSGI changed this in a way that you have interfaces which are synchronous or asynchronous depending on what you're actually planning to do.

46:32 For example, like, if you have the entire body. So if your route returns, I don't know, a JSON payload, okay, you don't need to actually await for sending the body, because you already have, like, all the body. So the interface in RSGI...

46:47 Right, right. It's all in memory. Yeah, there's no IO.

46:50 Yeah, the interfacing.

46:51 It's not like a file stream pointer or whatever you set to return. Yeah, exactly. So in that case, in RSGI, you can use a synchronous method to just move the body

47:01 to the server and just let the response go. Nice. Whereas if you want to stream content, then you can use a specific interface for that in RSGI, which is response stream.

47:11 And that gives you like an interface to send chunks of body or iterate over something as you're supposed to do.

47:19 Oh, yeah. So that's the major thing. The other thing, like the other reason why RSGI exists is that.

47:26 Yeah, ASGI is designed based on the fact that the network communication happens under Python, which is something that Granian can emulate, because it supports ASGI.

47:40 So that's also a waste. And if you have a different implementation underneath, that design makes, like, things a lot more difficult to implement, meaning reasoning like you're working in Python, but you're actually in a different language.

47:57 So yeah, that's the other reason why RSGI exists.

48:01 Okay. Yeah, that's very interesting.
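To make the difference concrete: the ASGI half below is the standard interface; the RSGI half is a hedged sketch of the protocol as described here, so the method name may differ from the actual spec:

```python
# ASGI: two awaited messages, even though no network IO happens at
# "http.response.start"; each await costs an event loop cycle.
async def asgi_app(scope, receive, send):
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"application/json")],
    })
    await send({"type": "http.response.body", "body": b'{"ok": true}'})

# RSGI (sketch): when the whole body is already in memory, the protocol
# object exposes a synchronous call, so nothing is scheduled on the loop.
async def rsgi_app(scope, protocol):
    protocol.response_bytes(
        200,
        [("content-type", "application/json")],
        b'{"ok": true}',
    )
```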

48:02 Yeah, maybe some of the other frameworks could look at that and go, well, if it's available, it's an option.

48:07 Okay, a couple of things I want to talk about before we run out of time here.

48:11 One is Jazzy Coder out in the audience asks, how did you validate your library following the correct spec?

48:17 Did you reference the RFCs or another library, or did you go down to first principles with the Unix Network Programming book?

48:26 And for background, interested in this approach because I'm building my own WSGI server.

48:31 Okay, cool. So the idea, I mean, WSGI protocol is like documented in a PEP.

48:38 So I just implemented tests that respect what is defined in the original PEP about WSGI, with just one exception.

48:50 So the only exception in Granian in the WSGI protocol is that it's able to serve HTTP/2 over WSGI,

49:00 which is not supposed to happen. But with Granian, you can serve your WSGI application directly with HTTP/2.

49:06 But yeah, that's the way I was sure to respect the protocol.
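For reference, WSGI is specified in PEP 3333, and a conforming application is as small as this (serving it over HTTP/2 is the Granian extension just mentioned):

```python
# Minimal WSGI application per PEP 3333
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
    return [b"Hello from WSGI"]
```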

49:11 Yeah. How about like the HTTP/2 protocol? Are you using just a library that already has it all figured out or?

49:18 Yes, yes. I mean, reinventing the wheel like also for HTTP handling was something I wasn't looking for.

49:25 No, I wouldn't want to do it either.

49:27 So yeah, Hyper is, I mean, I don't know.

49:29 It's a Rust crate or something like that?

49:34 Yeah, exactly.

49:35 Awesome. All right. Very cool.

49:36 The other thing I want to ask you about or just let you speak to real quick is there's a bunch of features like specifying the HTTP interface level.

49:51 Like, do you want to restrict it to one or two? Yeah, you might care because there was a vulnerability in HTTP/2 that created, like, some kind of too much work or too many retries or something recently.

49:58 So maybe you want to switch it to one for a while. I don't know.

50:04 Fun fact that Granian wasn't affected by that because hyper the library behind it wasn't affected by that.

50:10 Oh, nice. That's awesome.

50:13 Yeah, I figured basically just in this case, you wait until hyper either fixes it or hyper is not a problem, right?

50:19 Which is great. But maybe just talk about some of the things that we haven't touched on that are interesting like blocking threads or threading mode or specifying the loop or so on.

50:27 So yeah, in Granian, so since Granian has this unique architecture where you have an event loop running on the Rust side.

50:38 So for instance, if you're like deploying your ASGI application with Granian, you will have two event loops, like the Python one, the one that runs your code, and also a Rust event loop,

50:52 which is actually the Tokio runtime, which is another super popular crate in the Rust ecosystem.

50:59 There are different ways to run the Rust runtime, meaning that Rust is not limited to having a single thread running the loop.

51:12 And thus you can have an event loop running on several different threads on the Rust side.

51:20 And so the threading mode option in Granian lets you specify that behavior, meaning that if you use the runtime option, you will end up having, like, multi-threaded runtimes on the Rust side.

51:34 Whereas if you specify the workers option for the threading mode, it will still use a single-threaded runtime also on the Rust side.

51:44 If you set the runtime mode, do the workers themselves each get multiple threads?

51:49 Is that how that works?

51:50 Yes, exactly.

51:51 So in runtime mode, every worker has a multi-threaded runtime, whereas in workers mode, the worker is also the runtime.

52:01 Yeah, got it.

52:02 And the option is there because like depending on the load of your application, like one of the two might work better.

52:10 Sure.

52:11 Depends on the IO and CPU boundness of your application.

52:14 So, yeah.

52:15 I don't want to go too much into these.

52:16 But if I set threading mode to runtime, is it reasonable to have just one worker?

52:21 Or does it still make sense to have multiple workers for Python app?

52:26 So the thing is that with a single worker, so the workers will spawn their own Python interpreters.

52:33 So every worker is limited to the global interpreter lock, meaning that even if you spawn like a single worker with, I don't know, 12 threads, those 12 threads will run in the Rust code.

52:49 Yeah, yeah.

52:50 But they share a single Python runtime, which means all the things that that means.

52:55 Exactly.

52:56 Got it.

52:57 Okay.

52:58 So the only way to scale.

52:59 So the workers is the way to scale, let's say the Python code of your application.

53:03 Okay.

53:04 Threads are useful to scale the Rust runtime side of stuff.

53:10 Well, the Rust side of things, meaning that those will be, like, the amount of threads used by Rust to handle your requests.

53:20 So for example, if your application opens, like, tons of WebSockets, maybe you have, like, a WebSocket service.

53:29 It might be helpful to spawn more threads for the Rust side.

53:34 So it can actually handle more of those requests in the WebSocket land.

53:40 And the blocking threads are mostly relevant only for the WSGI protocol, meaning that the blocking threads are the amount of threads used by Granian to interact with Python code.

53:56 So on ASGI, since you will have, like, the event loop, there's not so much difference in how many blocking threads you spawn, because those blocking threads will still have to schedule stuff on the Python event loop.

54:11 But on WSGI, since you don't have that, you're not limited to the main thread of Python.

54:18 So if, I don't know, maybe your application is using psycopg to connect to the database, and those libraries are able to release the global interpreter lock.

54:31 So having multiple blocking threads on WSGI might still be helpful, because, like, all the code which doesn't involve the GIL will be able to run in parallel.

54:45 Right, right.

54:46 Maybe one part, one thread, one request is waiting on a database call, which it hits the network, which releases the GIL, for example.

54:52 Right.

54:53 Exactly.

54:54 Yeah.

54:54 Okay.
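A hedged sketch of how those options combine on the CLI (option names as documented around this release):

```bash
# "runtime" mode: each worker process gets a multi-threaded Rust runtime
granian --interface asgi --workers 4 --threads 2 --threading-mode runtime myapp:app

# "workers" mode: each worker keeps a single-threaded Rust runtime
granian --interface asgi --workers 4 --threading-mode workers myapp:app
```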

54:55 What about this loop optimizations, this opt/noop?

54:58 Yeah, that's, that's a good thing.

55:00 Yeah, that's, that's...

55:01 What kind of magic is in there?

55:03 that's a bit complicated, meaning that, so I think, like, writing Granian was, like, super helpful for me, at least to understand, like, the internals of asyncio in the Python world.

55:16 And if I have to be honest, I don't really like how asyncio is implemented under the hood.

55:23 But anyway.

55:24 Yeah, I feel like you have to juggle, you have to be so aware of what loop is running.

55:29 Has a loop been created?

55:30 Is there a different one?


55:32 Have I got the wrong loop?

55:33 Like all of that stuff, it should be utterly transparent.

55:36 And I just, I should just tell Python, I want to run stuff in a loop.

55:39 Yeah.

55:40 You know, I don't want to, it's not like I'm managing the memory or juggling, you know, the GC.

55:46 Like I feel like Async IO should be the same.

55:48 You should say, I want to run stuff asynchronously.

55:50 And maybe somewhere I've configured that, or maybe it's automatic or whatever, but just do it. But you're always kind of, like, you know, for example, if you talk to the database asynchronously at the start of your web app, and then you use FastAPI and you try to use it in your request there, it'll say, well, you're on the wrong event loop.

56:07 Like, well, why do I care about this?

56:09 Just run it on the loop.

56:11 Like, you know what I mean?

56:12 Like that's kind of, that's been a complaint of mine since 3.5, but hey.

56:15 Yeah.

56:16 Yeah.

56:17 Nobody asked me, so.

56:18 Yeah.

56:19 So yeah.

56:20 And those, let's say, optimizations are actually a few hacky parts.

56:25 In fact, like, I think like FastAPI doesn't work with loop optimization enabled with Granian because it skips.

56:33 So those optimizations just skip, like, one of the first iterations in running asynchronous code.

56:39 I think going more deeper than this in details would be.

56:43 Yeah.

56:44 Yeah.

56:45 Okay.

56:46 Not, not worth it, right?

56:47 Long and hard to follow.

56:48 But let's just say it skips some steps in how the task runs.

56:52 Yeah.

56:53 Okay.
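
For reference, the switch being discussed here is Granian's loop optimization flag, --opt/--no-opt on the CLI. Below is a hedged sketch of turning it off for a FastAPI app; the loop_opt parameter name is an assumption about how the embedded API spells that flag.

    # Hypothetical sketch: FastAPI reportedly doesn't work with the loop
    # optimizations enabled, so disable them; loop_opt is assumed to be
    # the embedded-API spelling of the CLI's --no-opt switch.
    from granian import Granian
    from granian.constants import Interfaces

    Granian("main:app", interface=Interfaces.ASGI, loop_opt=False).serve()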

56:54 You can do things like specify the SSL certificates and stuff.

56:56 This one right here is pretty excellent.

56:57 I mean, I didn't work on that one, but "I inspired it, I requested this one" is the right way to put that.

57:04 You can set the process name, which is nice.

57:06 Yeah.

57:07 Yeah.

57:08 If you're running multiple sites, all on Granian on the same server.

57:11 You can differentiate which one is using the memory again.
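
As a sketch of those two options together, again via the embedded API: the parameter names are assumed to mirror the CLI's --ssl-certificate, --ssl-keyfile, and --process-name flags, and the file names are placeholders.

    # A sketch, not verified against your Granian version: TLS plus a
    # per-site process name so each server is identifiable in ps/top.
    from granian import Granian
    from granian.constants import Interfaces

    Granian(
        "main:app",
        interface=Interfaces.ASGI,
        ssl_cert="cert.pem",      # assumed name for --ssl-certificate
        ssl_key="key.pem",        # assumed name for --ssl-keyfile
        process_name="my-site",   # assumed name for --process-name
    ).serve()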

57:14 And while we're looking at this, can I ask, can I propose an idea and just get your feedback on it?

57:19 Get your thoughts.

57:20 What about a lifetime management type of feature where you say after 10,000 requests, just recreate the worker or after an hour recreate the worker or something like that?

57:32 Is this something of any interest to you?

57:34 I understand the need for that. I think it's one of those requests that's based on using WSGI, or at least it's coming more from the Django user land.

57:49 So I think making that kind of lifetime check would be kind of hard, in the sense that there's a lot involved in the process management of Granian: you have the Python process, then you have Python threads, then you have the Rust threads, and then you have the runtimes.

58:10 Yeah.

58:11 So reasoning about lifetime is probably kind of hard.

58:14 I think fixing a maximum number of requests per worker is something that can be done, let's say, pretty easily.

58:23 There's an issue for that, opened a few times, if I recall correctly.

58:29 The thing is that, in the prioritization queue, let's say...

58:33 It's not at the top.

58:34 So at the moment, I'm talking with some people who proposed to join as contributors on Granian, but for now I'm still the single main contributor.

58:48 So yeah, sure.

58:49 I need to make, you know, some priority queue of the issues.

58:53 And look after your wellbeing.

58:55 Yeah.

58:56 Yeah.

58:57 I think the one which is most requested right now is the access log.

59:01 Yeah.

59:02 So I think like that would be the next one probably.

59:06 Yeah.

59:07 I think honestly, the access log is more important than this one.

59:10 Yeah.

59:11 Something changed in one of my apps, and all of a sudden it just slowly keeps growing in memory.

59:17 And I'm pretty sure I've not done anything to make those changes.

59:21 And it's something in a third-party library, data access, database, or something else.

59:26 I don't know what it is, and it's fine, but it just consumes so much memory.

59:30 And so I ended up setting up some other thing to just say, look, after a day, just give it a refresh.

59:38 Let it refresh itself.
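
One way to approximate that daily refresh today, while per-worker max requests remains an open issue, is to let the init system recycle the whole service. A minimal sketch for a hypothetical systemd unit named granian-app.service:

    # Hypothetical drop-in at /etc/systemd/system/granian-app.service.d/override.conf
    # RuntimeMaxSec stops the unit after 24 hours and Restart brings it
    # back up, so any memory the app has accumulated is released once a day.
    [Service]
    Restart=always
    RuntimeMaxSec=86400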

59:39 You know that with the Sentry SDK, now we also have profiling.

59:44 So you can actually look into the stacks and memory allocations, even live on your application.

59:51 So that's true.

59:52 If you need something like that, you can try it with Sentry.

59:57 Like distributed tracing or something like that.

59:59 Yeah.

01:00:00 Or the APM stuff.

01:00:01 I mean, distributed tracing is more about chaining together, like, different sources.

01:00:05 This is profiling, like...

01:00:08 Yeah.

01:00:08 Yeah.

01:00:09 Okay.

01:00:09 Yeah.

01:00:09 Flame graphs and stuff to see where your application spends its time.

01:00:13 Maybe I'll put it in.

01:00:14 Because it's driving me crazy.

01:00:15 And I would love to just do a PR to somewhere and just go, Hey guys, this change, here's the problem.

01:00:21 Or if it is my problem.

01:00:22 There's nothing I really changed in the core of this thing, but it seems to have started going weird. Maybe, I don't know.

01:00:27 But anyway, it would be great.

01:00:28 I'll have a look at it.

01:00:29 Thanks.

01:00:30 All right.

01:00:30 What's next for Granian?

01:00:31 Yeah.

01:00:32 I think fulfilling a couple of feature requests, like the access log, the worker max requests, or a few, let's say, minor things in that sense.

01:00:43 I think in terms of major features, it's pretty solid at the moment as a server. After these feature requests, the idea, and it's just an idea at the moment, is that I'd like to try to add some features to the RSGI protocol.

01:01:07 So for example, we talked before about channels and web sockets.

01:01:12 So as I said before, I find it very annoying that every time I want to make even just a chat room, I need to, you know, put Redis there and manage Redis and whatever.

01:01:26 And I don't like adding that kind of complexity to my project.

01:01:30 And so I was thinking about embedding some broadcasting features into the RSGI protocol, because, you know, all the other servers for Python are written in Python.

01:01:42 And so they're still bound to, you know, the process paradigm of Python. But on the Rust side of things...

01:01:50 That's not true anymore. So, right. Yeah.

01:01:53 The idea would be to have something to broadcast messages between processes and even different Granian servers.

01:02:01 So that's cool.

01:02:02 Yeah.

01:02:03 That's what I have on my table at the moment.
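
For context, here is a minimal sketch of the status quo that such an RSGI broadcasting feature would replace: fanning chat messages out across processes with Redis pub/sub via redis-py's asyncio API. The channel name, connection URL, and send callback are illustrative.

    # The Redis pub/sub pattern that Granian's proposed RSGI broadcasting
    # would make unnecessary; names here are made up for illustration.
    import redis.asyncio as redis

    CHANNEL = "chatroom"

    async def publish(message: str) -> None:
        r = redis.Redis.from_url("redis://localhost:6379")
        await r.publish(CHANNEL, message)  # fan out to all subscribed processes

    async def relay(send) -> None:  # send: an app-provided WebSocket send callback
        r = redis.Redis.from_url("redis://localhost:6379")
        async with r.pubsub() as ps:
            await ps.subscribe(CHANNEL)
            async for msg in ps.listen():  # every worker receives every message
                if msg["type"] == "message":
                    await send(msg["data"].decode())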

01:02:05 All right. Well, excellent.

01:02:07 And thanks for working on this.

01:02:08 It's an excellent project, and it's really cool to see the kind of innovation you were describing just there: if it's not in Python, if it could be in Rust, what would we change that would make it, you know, more capable, even for us?

01:02:19 More capable even for the Python people, right?

01:02:21 Yeah, exactly. And I think it's the baseline philosophy of people like Samuel Colvin with the Pydantic project: to try to empower Python as much as possible while keeping the simplicity and the syntax we all love about Python.

01:02:43 And I think it's a very good way of evolving even the Python language.

01:02:50 Yeah, absolutely. You know, sometimes you'll hear people say Python is slow, and in some sort of pure sense, that's true. But then you put it on top of things like Granian and all of a sudden it's awesome, right? So thanks for playing your part in that.

01:03:04 Thank you too.

01:03:05 Yeah, you bet. And thanks for coming on the show. See you next time.

01:03:08 Thank you. Bye.

01:03:09 Bye.

01:03:10 This has been another episode of Talk Python to Me. Thank you to our sponsors. Be sure to check out what they're offering. It really helps support the show.

01:03:18 It's time to stop asking relational databases to do more than they were made for and simplify complex data models with graphs. Check out the sample FastAPI project and see what Neo4j, a native graph database, can do for you. Find out more at talkpython.fm/Neo4j.

01:03:37 Want to level up your Python? We have one of the largest catalogs of Python video courses over at Talk Python. Our content ranges from true beginners to deeply advanced topics like memory and async. And best of all, there's not a subscription in sight. Check it out for yourself at training.talkpython.fm.

01:03:54 Be sure to subscribe to the show. Open your favorite podcast app and search for Python. We should be right at the top. You can also find the iTunes feed at /itunes, the Google Play feed at /play and the direct RSS feed at /rss on talkpython.fm.

01:04:09 We're live streaming most of our recordings these days. If you want to be part of the show and have your comments featured on the air, be sure to subscribe to our YouTube channel at talkpython.fm/youtube.

01:04:20 This is your host, Michael Kennedy. Thanks so much for listening. I really appreciate it. Now get out there and write some Python code.

01:04:26 I'll see you next time.
