
#463: Running on Rust: Granian Web Server Transcript

Recorded on Tuesday, May 7, 2024.

00:00 So you've created a web app using Flask, Django, FastAPI, or even Emmet.

00:04 It works great on your machine.

00:06 How do you get it out to the world?

00:07 Well, you'll need a production ready web server.

00:09 Of course, on this episode, we have Giovanni Barillari to tell us about his

00:14 relatively new server named Granian.

00:17 It promises better performance and much better consistency than many of the more well-known ones today.

00:23 This is Talk Python to Me, episode 463.

00:27 Are you ready for your host? Here he is!

00:29 You're listening to Michael Kennedy on Talk Python to Me.

00:33 Live from Portland, Oregon, and this segment was made with Python.

00:36 Welcome to Talk Python to Me, a weekly podcast on Python.

00:43 This is your host, Michael Kennedy.

00:45 Follow me on Mastodon, where I'm @mkennedy, and follow the podcast using @talkpython, both on fosstodon.org.

00:53 Keep up with the show and listen to over seven years of past episodes at talkpython.fm.

00:57 We've started streaming most of our episodes live on YouTube.

01:01 Subscribe to our YouTube channel over at talkpython.fm/youtube to get notified

01:06 about upcoming shows and be part of that episode.

01:09 This episode is sponsored by Neo4j.

01:11 It's time to stop asking relational databases to do more than they were made

01:16 for and simplify complex data models with graphs, check out the sample FastAPI

01:22 project and see what Neo4j, a native graph database, can do for you.

01:26 Find out more at talkpython.fm/neo4j.

01:31 And it's also brought to you by us over at Talk Python Training.

01:35 Did you know that we have over 250 hours of Python courses?

01:40 Yeah, that's right.

01:41 Check them out at talkpython.fm/courses.

01:44 In fact, I want to tell you about our latest course we just released last week,

01:48 Getting Started with NLP and spaCy.

01:51 This one is written by Vincent Warmerdam.

01:53 You may know him from many of his educational projects and channels, but

01:58 he also worked at Explosion AI, the makers of spaCy, so it's safe to say he knows

02:03 his stuff when it comes to NLP and spaCy.

02:05 If you have text you need to analyze, pull entities from, understand the

02:10 sentiment, and so much more, then spaCy is one of the best frameworks out there for

02:14 this.

02:15 And now we have an awesome course you can use to get way better at NLP.

02:20 During the course, you need a fun project, right?

02:22 Well, Vincent uses the past nine years of Talk Python transcripts, along with a few

02:27 data science programming bits of magic, to process them all with spaCy and ask

02:33 awesome questions like, "Which frameworks are we talking about over the years?"

02:37 Sign up for the course at talkpython.fm/spacy.

02:40 And if you hurry and get it in the month of May, 2024, we're doing a special 10%

02:46 off to celebrate the launch.

02:47 That's talkpython.fm/spacy.

02:50 The link is in your podcast player show notes.

02:52 Enjoy the course.

02:53 Now onto that interview.

02:54 Giovanni, welcome to Talk Python.

02:57 Hello, Michael.

02:58 Thank you for having me on the show.

03:00 It's great to have you on the show.

03:02 Some people you learn about just from like their public speaking or their writing,

03:07 and other people you meet through their projects, right?

03:09 I got to know you through Granian, your Rust-based Python (and more) web

03:15 server, that I thought was really awesome.

03:17 Started playing with it and we started talking on GitHub around some ideas.

03:20 And then here you are; I sort of explored more, learned more about some of the

03:23 frameworks that you'd created.

03:25 So I'm excited to talk about Emmet, Granian, and a bunch of other things that

03:29 you built, like kind of all to go together in a big mix there.

03:32 Yeah, I'm excited as well.

03:33 Yeah, it should be a lot of fun.

03:34 Before we get into all the details of all that stuff, you know, just tell us a bit about yourself.

03:39 I'm Giovanni Barillari.

03:41 I was actually born in Italy, but today I'm living in Vienna, in Austria.

03:47 I'm actually a physicist.

03:49 So yeah, I graduated in physics at the university.

03:51 And let's say I started working as a software engineer, focused especially

03:58 on web software pretty soon after the university.

04:02 So it's like 10 years or something.

04:04 I'm working as a software engineer, also like as a site reliability engineer.

04:09 So let's just say I'm quite like on the backend side of the things usually.

04:16 And I also started, I actually started like contributing to open source software

04:23 projects, even before actually starting working as a software engineer.

04:29 And particularly I started like contributing to the Web2Py project.

04:33 It's a quite old project by Massimo Di Pierro.

04:38 And yeah, today I'm working as a site reliability engineer for Sentry.

04:42 I bet that pretty much all of the people know about Sentry.

04:47 Awesome.

04:48 Yeah.

04:48 I didn't even know that you worked for Sentry until just a few minutes ago.

04:52 That's pretty awesome.

04:53 Obviously people know Sentry, they're big supporters of the show and sponsor

04:58 some of the episodes, but yeah.

05:00 How's it like to work at Sentry?

05:02 Must be fun.

05:02 Well, it's super nice.

05:04 A lot of talented people.

05:06 They're super nice.

05:08 It's a really nice environment to be within.

05:11 So yeah, I'm super happy.

05:13 Yeah.

05:13 Awesome.

05:14 What does a site reliability engineer do?

05:17 So let's say it might be a complicated question because like actually the original title comes from Google.

05:25 So let's say it's kind of related to infrastructure and monitoring in software.

05:34 So let's say, to simplify, it's about making sure that everything runs smoothly

05:42 with no incidents and stuff like that.

05:44 I see.

05:45 Make sure you can monitor bugs, slowdowns, work on failover type of situations, that kind of stuff.

05:52 Exactly.

05:53 I imagine you probably use Sentry to monitor Sentry for reliability.

05:57 Is that right?

05:59 Yes.

05:59 Yes.

05:59 We have this project called Sentry for Sentry.

06:04 Okay.

06:04 Which is like a separated Sentry instance that monitors the actual SaaS instance of Sentry.

06:12 That's pretty interesting because of course, if Sentry went down, you're

06:15 using it to monitor it.

06:16 Yeah.

06:16 Everyone else uses Sentry to monitor their thing.

06:19 So when their code goes down, it doesn't affect their monitoring.

06:22 But when your code goes down, it might actually affect your ability to know

06:25 that it's down.

06:26 So a separate copy, that's wild.

06:27 Okay.

06:28 I hadn't even thought of that.

06:29 Exactly.

06:30 Super cool.

06:31 All right.

06:31 Now, first of all, there's a little bit of love out in the audience for your

06:34 whole larger project, Emmet.

06:36 So Tushar says, "Did you say Emmet? Emmet is amazing," which is super cool.

06:42 Tools like that encourage him to work on his dev tooling, which is really great.

06:45 Before we get into the details of that though, why create another web framework?

06:49 I don't mean this in a negative way.

06:50 It's just like there's Flask and Django and then we have FastAPI and so on.

06:56 So why not just go, "Oh, I'm just going to use this one." Like what inspired you to go, "I think I'll make one of them."

07:01 So I think we should go back a bit in time because actually like this year will be

07:08 like the 10th birthday of like Emmet.

07:11 So let's just say it's like a long time.

07:15 So it's not that new.

07:16 Okay.

07:17 Out there.

07:17 Yeah.

07:19 I see.

07:20 Yeah.

07:20 So originally it was released as, it had like a different name.

07:27 It was called weppy, and I changed the name in 2020 I think, when I

07:34 moved from synchronous paradigm to the asynchronous one.

07:40 So let's say at the time I designed weppy, so the original version in 2014, the main

07:49 thing was about, so that time it was like the time of Ruby on Rails being super

07:56 popular and I originally started working in web development using Ruby on Rails.

08:03 And when comparing, let's say the amount of, let's say batteries included in the

08:09 box of Ruby on Rails to the Python ecosystem.

08:13 So let's say that the major competitor at that point in time was Django, but let's

08:18 say the feeling I got from Django at that time compared to Ruby on Rails was completely

08:24 different in a sense that I found myself like spending much more time on building

08:31 stuff compared to Ruby on Rails.

08:33 And this is also what brought me to the Web2Py project and its community, because

08:39 it was, in a sense, pretty similar in some of the design decisions to Ruby on Rails.

08:46 And, but at the same time, like once you start contributing to a web framework,

08:51 you have time to like to dig into a lot of the internals and decisions.

08:56 And so Web2Py at that time, so I used Web2Py to build my first, the code behind

09:02 my first startup actually, and it had quite a lot of scaling issues at that time.

09:08 So let's say at that point in time, I just was looking out for the options and I

09:15 started like digging into the code internals of Django and also Flask, which, I mean, I

09:22 really loved like the Flask approach of things, but at the same time it was so

09:30 micro.

09:30 Yeah.

09:31 I mean, like to build an actual project, it required like to have like tons of

09:36 extensions and other pieces, let's say other libraries to add it to the project

09:40 that, yeah, I think like I ended up just, you know, saying, okay, let's just

09:45 rebuild Web2Py the way I want it.

09:49 And that's eventually how weppy came out, and today Emmet.

09:54 Yeah.

09:54 That's pretty much the story behind it.

09:56 Yeah.

09:56 Okay.

09:57 Yeah.

09:58 I didn't realize it went that far back.

09:59 How about Granian?

10:00 Is that newer?

10:01 Yeah.

10:01 Granian is, I think like the first public release is like from one year ago or

10:07 something.

10:08 Yeah.

10:08 And I, because I learned about Emmet through Granian and like, oh, it's kind

10:12 of all, probably all the same project.

10:14 I didn't realize the history.

10:15 Why the new name?

10:16 Why Emmet?

10:17 So the thing was that to support, let's say, the upgrade between weppy and Emmet.

10:23 So since like all the interfaces had to be changed to support async code,

10:29 the idea was to provide, let's say a quick way to, to do that.

10:35 Meaning, to make it impossible for developers to, you know, install like a

10:40 new version of weppy and get like everything broken because of, you know,

10:44 the new interfaces.

10:45 So yeah, I just decided to, you know, changing the interface and also changing

10:51 like the package name in order to say, sure.

10:54 Okay.

10:54 If you want to upgrade, you can upgrade safely.

10:57 Otherwise, it's like a super mega version change.

11:01 Not only do you change the version, but you change the name.

11:03 Yeah.

11:04 I see.

11:05 Exactly.

11:05 That's interesting.

11:08 All right.

11:09 Well, let's dive into it.

11:11 So I like the title Emmet, the web framework for inventors.

11:15 And yeah, maybe give us a sense of like, what are some of the core features of

11:19 Emmet and what are your goals with building it?

11:22 From an API perspective.

11:23 The idea was to have like an all-in-one, let's say, framework to build web applications; all-in-one, let's say, in the sense of, again, when the project

11:33 actually started.

11:34 So like even 10 years after that, I still usually prefer to develop web projects without relying too much on front-end frameworks.

11:46 So this is like a big, let's say a preamble to the thing.

11:50 Like this is originally from an era where like front-end web framework didn't exist.

11:56 Like, I think it was just AngularJS and maybe Ember at that time.

12:01 Yeah.

12:01 You're basically describing my life in 2024.

12:04 So I'm a big fan of the server-side frameworks, you know?

12:07 Thanks, Jack.

12:08 Yeah.

12:08 Also because like, it seems sometimes that we reinvent like a lot of stuff to catch

12:15 up, like the beginning at the end.

12:16 Like, yeah, I felt like all of the theme about, you know, server-side rendering

12:21 with front-end frameworks and server-side render components and all that kind of

12:26 stuff.

12:26 So sometimes it just feels, you know, we're getting back to the origin.

12:31 But yeah, so the idea behind Emmet is to have like all in one solution to develop

12:38 web applications.

12:39 So you have all the standard features you have with the web framework.

12:43 So like routing and middlewares and that kind of stuff.

12:48 You have an ORM, you have a templating system plus a few, let's say, tools

12:55 embedded within.

12:56 So for instance, it's very easy to use, I don't know, sessions or to have an

13:03 authentication system.

13:04 It's all like provided inside the box.

13:07 So yeah, the idea was to have like, let's say a battery of tools, like in one place

13:13 to do the most common things when you start developing a web application.

13:19 Yeah, very nice.

13:20 So yeah, like you said, it has an ORM built in and it feels, I guess,

13:26 SQLAlchemy-ish in a sense, but not exactly the same.

13:29 Or Django ORM would be, you know, another way in some ways there.

13:33 Yeah, I think it's nearer to SQLAlchemy in that sense.

13:38 You tend to have like an API for using Python objects to build queries rather than

13:44 how to say, use like a lot of string attributes like you usually tend to do in

13:52 Django.

13:53 Yeah, I mean, it's closer to SQLAlchemy in that sense.

13:58 I think like the major difference with the ORMs out there is that the model class

14:05 you define are not like, so when you, for example, select records from the database,

14:11 the single, let's say rows you select are not instances of the model class.

14:16 So let's say like the model class acts more like a management class.

14:21 Like a schema definition sort of thing.

14:23 Yeah, I mean, it adds like a lot of helpers on top of that, but yeah, I think like

14:29 it's definitely the major difference between like the vast majority of ORMs out

14:34 there for Python when you usually have like the model class, which is also like

14:38 the class of all the records you select and work on from the database.

14:42 Yeah.

14:43 So what do you get back in this world here?

14:45 What do you get if you do a query, like in your example on the homepage, you have a

14:49 time traveler.

14:50 So what do you get back when you get a group of them, a set of them?

14:53 So you get like a different class.

14:55 So there's like a separated class.

14:58 Every model has, it's called like row class.

15:01 So it's an instance of that class.

15:04 And this design, it's mostly made for two reasons.

15:10 Like the first one is performance in a sense, meaning that when you select records

15:16 or operate on records, it avoids, you know, filling like all those objects with

15:23 the actual model class attributes or functions or methods.

15:28 And the validation and stuff.

15:30 Yeah.

15:31 Yeah.

15:31 And on the other hand, it was also to kind of remind the developer that he is working

15:40 with actual data from the database and not like real Python objects in a sense,

15:46 which is, yeah.

15:47 Yeah.

15:47 I think like over the years that's like the first reason why people tend to object against

15:54 ORMs.

15:55 So those two were the main reasons behind this design.

16:06 It's something, you know, in between an ORM and just some database abstraction layer.
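To make that design concrete, here's a toy sketch of the split Giovanni describes. This is not Emmet's actual code; `TimeTravel` and `Row` are invented names, and a real ORM would query a database rather than take records as an argument:

```python
# Toy sketch of the split Giovanni describes (not Emmet's actual code):
# the model class acts as a manager/schema, while selected records come
# back as instances of a separate, lightweight row class.

class Row(dict):
    # Rows are plain data holders with attribute-style access.
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

class TimeTravel:
    tablename = "time_travels"

    @classmethod
    def select(cls, raw_records):
        # A real ORM would run a query here; we just wrap the records.
        return [Row(r) for r in raw_records]

rows = TimeTravel.select([{"pilot": "Doc", "year": 1885}])
print(rows[0].pilot)                    # Doc
print(isinstance(rows[0], TimeTravel))  # False
```

The point is the last line: the things you iterate over are not model instances, which keeps them cheap and reminds you they're database data.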

16:06 This portion of Talk Python to Me is brought to you by Neo4j.

16:11 Do you know Neo4j?

16:13 Neo4j is a native graph database.

16:16 And if the slowest part of your data access patterns involves computing relationships, why not use a database that stores those relationships directly in the

16:25 database?

16:26 Unlike your typical relational one, a graph database lets you model the data the way it

16:31 looks in the real world, instead of forcing it into rows and columns.

16:35 It's time to stop asking a relational database to do more than it was made for

16:39 and simplify complex data models with graphs.

16:43 If you haven't used a graph database before, you might be wondering about common use

16:47 cases.

16:47 You know, what's it for?

16:48 Here are just a few.

16:50 Detecting fraud, enhancing AI, managing supply chains, gaining a 360 degree view of

16:56 your data, and anywhere else you have highly connected data.

17:00 To use Neo4j from Python, it's a simple pip install Neo4j.

17:06 And to help you get started, their docs include a sample web app demonstrating how to

17:10 use it both from Flask and FastAPI.

17:13 Find it in their docs or search GitHub for Neo4j Movies Application Quickstart.

17:18 Developers are solving some of the world's biggest problems with graphs.

17:22 Now it's your turn.

17:23 Visit talkpython.fm/neo4j to get started.

17:27 That's talkpython.fm/neo, the number four, and the letter J.

17:32 Thank you to Neo4j for supporting Talk Python To Me.

17:35 I like the query syntax.

17:39 You know, people visit the homepage, you'd see something like time travel dot where,

17:44 then lambda of T goes to T dot return equal equal true.

17:47 And while some of the ORMs let you write code in terms of like the class fields or

17:55 whatever, it's never looked quite right because you're working with, say, the

18:00 static value out of the class.

18:02 Whereas what you really are trying to talk about is the instance level of the record,

18:07 right?

18:07 So instead of saying T, you'd say time travel dot return, but we'd never test that

18:12 because it's the global value of it, right?

18:15 And stuff like that.

18:15 Or you just use strings, which is basically in my mind, no good.

18:19 But what's cool, you know, also, do you want to do an OR or an AND?

18:23 And then what weird thing do you import to do the OR?

18:27 And like, you know, how do you wrap the query and all that kind of stuff where if

18:30 it's a lambda, you can just express the conditions how you want.

18:33 Yeah.

18:34 Yeah.

18:34 That's pretty much the idea.

18:36 So like to use, you know, special methods from Python objects and translate those

18:42 expression like in actually SQL code.

18:44 So yeah.

18:45 Nice.
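The mechanism Giovanni describes, special methods on proxy objects turning a lambda into SQL, can be sketched in a few lines. This is an illustration of the general technique only, not Emmet's implementation:

```python
# Toy illustration: call the lambda with a proxy object whose special
# methods (__gt__, __eq__, ...) build SQL text instead of booleans.

class Field:
    def __init__(self, name):
        self.name = name

    def __gt__(self, other):
        return f"{self.name} > {other!r}"

    def __eq__(self, other):
        return f"{self.name} = {other!r}"

class RowProxy:
    # Any attribute access yields a Field that knows its own name.
    def __getattr__(self, name):
        return Field(name)

def where(cond):
    # The "query" here is just a string, to keep the sketch minimal.
    return f"SELECT * FROM t WHERE {cond(RowProxy())}"

print(where(lambda t: t.age > 30))  # SELECT * FROM t WHERE age > 30
```

Because the lambda is ordinary Python, you can express conditions with normal operators instead of learning a query mini-language.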

18:46 For my apps, I have a combination of Beanie and Mongo engine, depending on which one

18:51 you're talking about.

18:52 And for Mongo engine, you do things that are pretty funky.

18:56 Like if you want to say greater than, you would say time travel dot, I don't know,

19:00 it doesn't have a value, but age.

19:01 I'll say there's an age, like time travel dot age, underscore, underscore GT equals

19:07 value.

19:08 And you're like, well, it's not, it's not equal to it.

19:12 And it's not that that's not the name of it, but, but okay.

19:15 That, I guess that means, you know what I mean?

19:17 Like there's a real weird way it's like jammed into a syntax, whereas like here

19:20 you just say greater than whatever.

19:22 Right.
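The keyword style Michael is describing (Django- or MongoEngine-style `field__gt=value`) works by splitting the argument name on the double underscore. A toy parser, purely for illustration:

```python
# Toy parser for "field__op=value" filter keywords; the real libraries
# do much more, but the core trick is splitting on the double underscore.

OPS = {"gt": ">", "lt": "<", "eq": "="}

def parse_filter(**kwargs):
    clauses = []
    for key, value in kwargs.items():
        field, _, op = key.partition("__")
        # No "__" suffix means plain equality.
        clauses.append(f"{field} {OPS.get(op or 'eq', '=')} {value!r}")
    return " AND ".join(clauses)

print(parse_filter(age__gt=30))  # age > 30
```

This is why the syntax reads oddly: the operator is smuggled into the keyword name rather than written as an actual comparison.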

19:22 Yeah.

19:23 Yeah.

19:23 It's like the same of, of it's one of the things I dislike still today of Django or

19:29 ORM in that sense.

19:31 I mean, it has like a lot of, a lot more capabilities because for instance, like

19:36 when you want to represent like complex queries, it tends to be more powerful in

19:43 that sense, meaning that special methods are limited.

19:46 So at some point you start making custom methods.

19:51 So like, I don't know, starts with, for example.

19:54 Yeah.

19:55 Starts with, or in this set or the set includes this and something like that.

19:59 Right.

19:59 Exactly.

20:00 So I think, yeah, there are pros and cons in both, let's say approaches.

20:04 Yeah.

20:05 Cool.

20:05 All right.

20:05 So we have a lot to talk about, even though all this code fits on one screen.

20:08 The other part is to define an endpoint.

20:11 This is about an API, right?

20:12 So you have an async def, which is awesome.

20:15 Supports async and await.

20:16 I think it's super valuable.

20:18 Yeah.

20:18 One note is that your ORM is still synchronous.

20:22 Yeah.

20:22 Yeah.

20:23 So what about that?

20:25 Are you planning on adding an async thing or are you just saying it's just synchronous?

20:29 So it's like a very long story in a sense, because like I started asking myself the

20:36 same question like several years ago.

20:40 And I think like at some point probably I will end up doing that in the same way.

20:48 SQLAlchemy did that.

20:50 Even if I remember like a super nice blog post from the author of SQLAlchemy stating

20:57 that asynchronous code and databases are not the best match.

21:03 So yeah, let's say like in the last few years I just waited in a way to see what

21:09 everyone else was doing.

21:10 But yeah, I think like at some point it will be inevitable in a sense.

21:16 I just don't feel the time has come yet.

21:19 So we'll see.

21:20 Yeah.

21:20 Cool.

21:21 And then I guess the last thing to talk about is you have a decorated app.route.

21:27 Pretty straightforward.

21:28 Yeah.

21:28 But then you also have a @service.json.

21:32 What does that stacked decorator do?

21:33 So you can think about that decorator like the service decorator as like the JSONify

21:40 function in Flask.

21:42 So yeah, in Emmet you have like both the JSON service and the XML service because like in

21:50 old times I had to write stuff to talk with XML endpoints and stuff like that.

21:55 So.

21:56 Yeah.

21:56 Yeah.

21:57 So yeah, it's just an easy way to wrap and say everything that returns from this

22:03 function just gets serialized to JSON or XML or whatever.

22:07 If I return rather than a response, just return a dictionary and it'll do the

22:12 serialization, right?

22:13 Exactly.

22:13 Nice.
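The behavior they're describing, return a plain dictionary and have the decorator serialize it, can be sketched like this. It's a hypothetical stand-in, not Emmet's `service.json` implementation:

```python
import json
from functools import wraps

# Hypothetical stand-in for the idea behind Emmet's service decorator
# (not its real code): the handler returns a plain dict, and the
# decorator serializes whatever comes back to JSON.

def json_service(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        return json.dumps(fn(*args, **kwargs))
    return wrapper

@json_service
def travelers():
    return {"name": "Emmett", "year": 1955}

print(travelers())  # {"name": "Emmett", "year": 1955}
```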

22:14 And the audience asks, does it generate OpenAPI documentation?

22:19 Like auto, does it automatically generate documentation?

22:22 So from standard routes?

22:25 No.

22:26 There's an extension though, meaning that if you plan to design REST let's say

22:33 APIs with Emmet, there's an extension for that.

22:37 It's called Emmet REST, which let's say gives you like more tools to structure

22:43 your routes and serialization and deserialization and all that kind of stuff.

22:48 And that extension also brings OpenAPI documentation generation.

22:54 Eventually, let's say, the OpenAPI documentation generation will come also to plain routes in Emmet, but there's quite a bit of design implied to do that.

23:06 Meaning that so Emmet it's like not designed to have a strong type system

23:12 because again, it comes from the days where like typing.

23:15 That didn't exist.

23:17 Was not.

23:17 Yeah.

23:19 So let's say that for instance, for frameworks like FastAPI, which are practically designed on top of something like Pydantic, so you have like a strong

23:28 type system, so everything that comes in and out from the majority of routes you

23:34 write has types and so it's really easy for the framework to inspect the code and

23:41 understand what's going on.

23:42 On let's say general frameworks like Emmet where you, I mean, you might have

23:47 like, I don't know, HTML routes or other kinds of stuff going on.

23:53 There's no, let's say, design behind that to support, in the first place, like strong

23:59 typing.

23:59 So yeah, making like OpenAPI documentation out of standard Emmet routes involves

24:06 like quite a lot of decisions.

24:08 So yeah, we'll see.

24:09 We'll see.

24:10 Yeah.

24:10 Okay.

24:11 Yeah.

24:11 Very cool.

24:11 Yeah.

24:12 We'll come back and talk about Emmet REST in a minute.

24:15 That's one of the fun things.

24:17 It also has a WebSocket support, right?

24:19 Yep.

24:19 Okay.

24:20 Absolutely.

24:20 WebSockets are these things that I'm always like, man, they're so cool and you

24:24 can do all this interesting stuff.

24:25 And then I never, ever, ever have a use case for it in my world.

24:29 I just haven't yet.

24:30 And so I'm like, well, they're very cool, but I don't have it yet.

24:33 Yeah.

24:34 So, I mean, I'm not building Slack.

24:35 Yeah.

24:36 The thing is that usually like when you work with WebSockets, it's also pretty

24:43 common that you need some broadcast facility.

24:47 Yeah.

24:47 So usually you want to do channels or that kind of stuff, which usually tends

24:55 to involve like other software, like you usually have Redis or something like that,

24:59 since Python is not exactly good in, let's say, managing

25:05 threads or communicating across different processes, that's probably why it's not

25:10 so easy in the Python world to actually rely on WebSockets a lot.

25:14 I don't know, for instance, if you take like languages like, I don't know,

25:17 Elixir or you have like tons of stuff based on the fact that everything is

25:23 actually communicating over Socket.

25:26 So, yeah.

25:27 And I think like one single thing to say on WebSockets, it's, I think Emmet is

25:35 the only, or one of the few frameworks that allows you to write middlewares with

25:40 WebSockets, so you can, so if you have like your chain of middlewares on the

25:45 application, you can also define behaviors for the same middlewares to behave on

25:51 WebSockets.

25:51 So you can probably reuse like a lot of code.

25:54 Like, I don't know if you are in a WebSocket and need to talk with the database, you can use the same middleware for the database connection you use on

26:03 the standard request.

26:05 So I think that might be worth noting.

26:08 Yeah, absolutely.

26:09 Another thing that's interesting that I don't see in a lot of ORMs, they kind of

26:13 just leave it to, well, SQL, so you write it yourself, is aggregation, right?

26:18 The aggregation stuff you have here is pretty interesting where you do a bunch

26:23 of calculation type stuff in the database and then get a, sort of the results back.

26:27 Right?

26:28 So here you can say like, select, I'm talking about an event, like event.location,

26:32 get the counts and then group by this thing, order by that, having these sort of

26:36 properties.

26:37 That's, that's pretty unique.

26:38 I don't see that in a lot of ORMs.

26:40 Yeah, I think like you can do pretty much the same with SQLAlchemy, but

26:45 probably like the syntax is less sugary, let's say.

26:49 I mean, again, this design comes from the fact that with my first startup, we had to

26:55 do like a lot of aggregation over the database.

26:58 And so that's why I wrote all of that, you know.

27:01 Yeah, that's cool.

27:02 Yeah.

27:05 Nice.

27:05 I, you know, I'm familiar with it from all the MongoDB stuff that I've done,

27:08 that like that, the big aggregation pipeline over there as well.

27:11 Yeah.

27:12 I'm also familiar, like I'm not a huge fan of Mongo though, probably because like

27:18 being an SRE, like making Mongo reliable is like a mess sometimes.

27:23 So I think it depends on how people rely on it.

27:26 Right.

27:27 Yeah.

27:28 For me, it's been absolutely fine. I've run my stuff on it for over 10 years and it's been

27:31 perfect, however, that's because I use a lot of structured code to talk to Mongo

27:36 from one tech stack, right?

27:38 But if some people are using dictionaries to talk to it, other people using this

27:42 framework, other people using that framework, then the lack of the schema

27:47 structure, I think becomes a problem.

27:49 So I think it really depends on how you use it, but yeah, I hear what you're

27:53 saying for sure.

27:54 I think that that's not even necessarily a Mongo challenge.

27:56 That's a document database challenge generally, right?

27:59 Yeah.

28:00 Just Mongo is primarily the way people do document databases.

28:03 Yeah.

28:03 I tended to like use it for separate stuff.

28:07 So in several projects I worked on, I had like, for instance, like the main

28:11 database with Postgres, for instance, and like another database with Mongo for specific

28:16 stuff, maybe stuff you don't need transactions on, or maybe you want to store like time series data or, you know, that kind of stuff.

28:24 So for that, I think it's really cool.

28:26 Yeah.

28:26 Nice.

28:26 All right.

28:27 I guess one final thing here that is worth covering, then we'll, I want to dive

28:32 into Granian as well, is the template syntax.

28:35 So you've got your own template syntax.

28:37 That's kind of like, that's not a syntax.

28:40 You tell people about it.

28:43 You tell them about it.

28:44 Yeah.

28:45 So the templating system embedded in Emmet is called Renoir.

28:49 And the idea behind it is to not have a syntax at all.

28:56 So the idea behind Emmet's template system was: why?

29:01 So the question I had is like, why do I have to learn a new language to write

29:07 server-side rendered templates?

29:10 Like, why?

29:10 Yeah.

29:10 And those languages are, yeah.

29:12 And they're very, very Python like, but they're not Python.

29:15 Exactly.

29:15 So I just said, well, I guess I'll try to do just Python, you know, wrap it in the

29:22 same brackets every other templating language has.

29:26 So it's just plain Python.

29:27 You can do pretty much everything you can do in Python.

29:31 You can even do imports inside.

29:34 Not that I suggest to do that, but you can still, you can do that.

29:37 Ah, you're going to recreate PHP, people.

29:40 Come on now.

29:41 Exactly.

29:42 The only, let's say major difference from standard Python code is that you have to

29:47 write the pass keyword after a block of code.

29:52 So if you write like a for loop or an if statement, the template engine has to know

29:59 when that block ends, given that Python relies on indentation to understand like

30:06 that, but in templates, you don't have like the same indentation level you have in

30:10 Python.

30:10 So that's the only major difference from plain Python code, plus a few, let's say

30:17 keywords added to the game.

30:19 So you have extend and include in order to extend and include.

30:25 So there are partial templates, let's say, and blocks.

30:29 That's it.

30:30 Right.

30:30 Blocks for the layout sections, right?

30:33 Exactly.

30:33 Yeah.
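For listeners curious what that looks like, here's a small illustrative snippet in the web2py-style double-brace syntax Renoir inherits. The variable names are invented, and the exact delimiters should be checked against the Renoir docs:

```html
<ul>
{{for traveler in travelers:}}
  {{if traveler.year < 1985:}}
  <li>{{=traveler.name}}</li>
  {{pass}}
{{pass}}
</ul>
```

Plain Python inside the braces, with `{{pass}}` closing each block in place of indentation.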

30:34 That's really nice.

30:35 I like the idea.

30:36 I'm sure there are people listening that'd be like, I would like to try this stuff out,

30:40 but I've got 10,000 lines of Jinja or I've got 10,000 lines of chameleon or yeah.

30:48 Yeah.

30:49 I know.

30:49 Whatever.

30:50 What's the story?

30:51 I mean, I'm working in the office with Armin Ronacher every day.

30:55 So the amount of Jinja code we have in Sentry is like huge.

30:59 So yeah, I perfectly understand the point.

31:03 I don't have, let's say, a marketing line for selling Renoir.

31:07 It's just something that I... so today I'm, let's say, equally familiar with Jinja

31:15 templates and Renoir templates.

31:17 I'd say it really depends on how you usually structure your application code.

31:24 So I think one good way to try out Renoir is if you tend not to use a lot

31:34 of Jinja filters or stuff like that; that might be a good case scenario to try

31:40 it out.

31:42 Yeah.

31:42 But of course it has to be like a new project because.

31:46 Yeah.

31:46 Converting, I mean, there's no sense into moving, translating code from one

31:51 system to another once you pick.

31:53 It's not super different.

31:54 I think you change an end if to a pass, for example, or end for into a pass.

32:00 But I was thinking more, is there a way to use Jinja within Emmet?

32:05 Right.

32:06 Instead of using Renoir.

32:07 I mean, there's no plain, there's no ready-made extension for that.

32:12 But I mean, if you create like a Jinja instance over the application, you can

32:18 call it in your routes.

32:20 You can even create a middleware for that.

32:23 So I think it's pretty easy also to set up Emmet to work with Jinja.

32:26 Yeah, I would think so.

32:27 I created FastAPI Chameleon, which lets you basically put a decorator on FastAPI

32:32 endpoints.

32:32 And it does Chameleon template rendering with the dictionary instead of REST

32:36 endpoints.

32:36 It wasn't that much work.

32:37 You basically just have to juggle it from behind.

32:39 So I imagine you could probably, someone could create a Jinja decorator, like you

32:43 have for serving JSON, like a template.jinja or whatever, something like that.

32:47 Right?

32:47 Probably.

32:48 Yeah, yeah, absolutely.
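A decorator like the one described can be sketched in a few lines. To be clear, this is not the actual FastAPI Chameleon API: the `template` decorator, the `TEMPLATES` registry, and the use of `string.Template` as the engine are all stand-ins for illustration.

```python
from functools import wraps
from string import Template  # stand-in for a real template engine

# Hypothetical registry of template sources keyed by name.
TEMPLATES = {"hello.txt": Template("Hello, $name!")}

def template(name):
    """If the endpoint returns a dict, render it through the named template;
    anything else (e.g. a prebuilt response) passes through untouched."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            if isinstance(result, dict):
                return TEMPLATES[name].substitute(result)
            return result
        return wrapper
    return decorator

@template("hello.txt")
def hello(name):
    # The endpoint just returns the model dict; the decorator renders it.
    return {"name": name}
```

Calling `hello("World")` renders the dict through the template, which is the same juggling the real decorator does behind FastAPI endpoints.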

32:49 That said, I'm not a fan of Jinja.

32:51 I think it's overly complicated.

32:53 So I'm not encouraged, I'm not suggesting it, but the reality is, even as much as I've

32:57 tried to fight against it, is that the majority of Python web HTML, dynamic HTML

33:03 is done in Jinja these days.

33:04 Right.

33:05 Which.

33:05 Yeah, probably true.

33:07 Yeah.

33:07 You kind of got to live in that, that space, even if you don't want to.

33:10 All right.

33:11 And let's talk about the thing that I opened and talked about at the opening is

33:15 Granian.

33:16 Where's Granian gone?

33:18 There we go.

33:18 So this is how, as I said, I got to learn about this framework and what you're doing

33:24 and stuff with Granian.

33:26 Tell us about Granian.

33:26 And before, as a way to sort of kick this off, Cody, who I've had on the show before

33:31 from Litestar says, thanks for the work on Granian.

33:33 I've had an excellent time using it with Litestar.

33:35 Litestar is also awesome.

33:36 Yeah.

33:37 Thanks to Cody.

33:38 Yeah.

33:38 So tell us about it.

33:39 Yeah.

33:39 So as the description suggests, it's just an HTTP server for Python applications.

33:47 So it has the same scope as Uvicorn, Gunicorn, Hypercorn and all those libraries.

33:56 The main difference compared to every other HTTP server for Python applications is that

34:02 it's not written in Python.

34:03 It's written in Rust.

34:06 And it supports natively both WSGI and ASGI.

34:12 So both synchronous and asynchronous applications.

34:16 Plus a new protocol I also wrote with Granian, which is called RSGI.

34:22 But the only existing framework using it that I am aware of is Emmet, indeed.

34:27 Yeah.

34:28 I think there's a lot of things that are nice about this.

34:30 And I have actually most of the things, including Talk Python itself running on

34:35 Granian, which is pretty cool.

34:37 So.

34:38 Cool.

34:38 Yeah.

34:39 Yeah, absolutely.

34:40 So, a single, correct HTTP implementation.

34:43 Sounds awesome.

34:44 Support for version one, two and three, I guess when it's ratified, right?

34:47 Yeah.

34:48 So HTTP/3, let's say since Granian is actually based on a Rust library, which is

34:56 called Hyper, which is a super cool library, it's like vastly adopted, like

35:01 everywhere in the world, like, I don't know how many thousands, hundreds of

35:07 thousands of libraries in the Rust ecosystem use it.

35:10 It is used in Cloudflare for a lot of their production systems.

35:15 So super strong library, but yes, it doesn't yet support HTTP/3.

35:22 So yeah, I guess when Hyper supports HTTP/3, that could be easily added to Granian.

35:31 Right.

35:31 Right.

35:32 That's cool.

35:32 Yeah.

35:32 With these things like Gunicorn, you've then also got to integrate Uvicorn workers, and you kind of have a lot of stuff at play, right?

35:39 So here you've just got one thing, which is cool.

35:42 Yeah.

35:43 I mean, I tended to find it annoying that if you want to squeeze

35:49 performance out of Uvicorn, you usually need to pile up different

35:56 libraries together, like, oh wait, I need to add the httptools dependency

36:02 so it can use the C-written parsers for HTTP.

36:08 Oh wait, and probably I want some process management.

36:12 So I also need Gunicorn.

36:14 Yeah.

36:14 It's not super easy, like for starters, at least.

36:18 Yeah.

36:19 I guess maybe we should just set the stage a little bit for the people that

36:22 don't live and breathe Python web deployment, apologies.

36:26 So typically you would have something like Nginx or Caddy that browser actually talks to.

36:33 And then behind the scenes, you set up those, let's just say Nginx to when

36:38 there's a request for a dynamic content or Python based content, as opposed

36:42 to like a CSS file or something, then it will talk to this category of servers

36:47 that then maybe is juggling multiple processes so that it can increase the

36:52 scalability of your Python apps and stuff like that.

36:54 Right.

36:55 So Granian lives in that sort of bubble behind Nginx typically, right?

37:00 Use it other ways?

37:00 Yes and no, meaning that you can also deploy it like on the edge.

37:07 So I think it really depends how your structure, let's say your code.

37:14 So, for instance, for full applications, like in Django, we tend to, I mean, we often offload, let's say, static file serving to

37:27 Nginx, since we already have Nginx somewhere, relying on some, you know, headers we send in the response that Nginx actually parses to understand what to do.

37:40 So in general, if you don't need that kind of optimization, let's say you can

37:47 still use Granian like even on the edge, because I mean, it supports like SSL.

37:52 It supports like HTTP2 directly.

37:54 So, I mean, having Nginx above Granian makes sense only if you want to route

38:01 something out of Granian and not, you know, serve it from Granian.

38:05 But yeah, in general, I'd say that you can use it in both ways behind Nginx or

38:11 not, up to the specific needs of the application, let's say.

38:15 Yeah.

38:16 I have one Nginx Docker container handling like 15 different apps.

38:23 And so for me, that's kind of the setup, but typically the SSL that I do is over

38:29 Let's Encrypt using Certbot.

38:31 If I want to do HTTPS with Granian, how do I do it?

38:34 You can keep the Let's Encrypt and the acme.sh generation thing, because

38:40 Granian supports the SIGHUP signal.

38:42 So whenever you need to refresh the certificate, you can issue a SIGHUP to

38:47 the Granian process, and that process will reload the workers, picking up the new

38:54 certificate.

38:55 So I think it's pretty straightforward.

38:57 I mean, if you already manage like SSL certificates and like renewal chain and

39:03 all that kind of stuff, it's pretty straightforward to do the same in Granian.

39:08 You can just pass, you know, the paths to the certificates, to the CLI command or

39:14 even use environment variables up to you.
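As a sketch of that renewal flow — the flag names here are from memory of the Granian CLI and may differ across versions, so check `granian --help` before relying on them:

```shell
# Start Granian with the Let's Encrypt certificate paths (flag names assumed)
granian --interface asgi \
  --ssl-certificate /etc/letsencrypt/live/example.com/fullchain.pem \
  --ssl-keyfile /etc/letsencrypt/live/example.com/privkey.pem \
  app.main:app

# After certbot/acme.sh renews the files, reload the workers in place;
# they pick up the new certificate without a full restart.
kill -HUP <granian-master-pid>
```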

39:16 Gotcha.

39:17 Okay.

39:18 One thing that I thought was pretty interesting was the performance.

39:21 Not necessarily that it's so, so, so much faster, but that it's so, so much more

39:27 consistent.

39:28 You want to talk about that a little bit?

39:30 Yeah, so I think if you want to show it to the YouTube audience, the comparison thing, you'll find the link at the bottom of the page, because the first

39:41 page of benchmarks in the repository contains just benchmarks of Granian

39:46 itself.

39:47 Whereas in the versus page, you can find like comparison with other servers.

39:52 So the thing behind, let's say, the stable keyword I used in describing

39:59 the performance of Granian was the fact that usually, when people look at

40:05 benchmarks, they just look at the, you know, number of requests.

40:09 Yeah, yeah, yeah, yeah.

40:10 What's the max request per second you can get with this thing or whatever.

40:14 Yeah, exactly.

40:15 But the, another like very important value, at least to me, it's like the

40:22 latency because you can, you can still serve like a lot of requests in parallel,

40:27 but the amount of time each request will take to be served, it's also like

40:32 important.

40:32 I mean, I can serve like a thousand requests per second, but whether those requests take a second or 10 milliseconds, it's a huge

40:41 difference for the end user.

40:43 Yeah.

40:44 And so the thing is that, at least from benchmarks, it appears that

40:50 the way Granian works, which relies on having all the network stack

40:56 separated from Python.

40:57 So all the IO, the real IO part involving the network communication is not tied to

41:06 the Python interpreter.

41:08 And so it doesn't suffer from the global interpreter lock and threads getting

41:13 blocked between each other.

41:15 It seems to make Granian more, let's say, predictable in response time,

41:22 meaning that the, both the average latency and the maximum latency you have in the

41:29 benchmarks is much lower compared to other, let's say implementations, other

41:35 HTTP servers.

41:36 So yeah, it's not like super faster.

41:39 It won't make like, obviously it won't make the Python code of your application

41:44 faster.

41:45 We can shut down all of our servers, except for one $5 DigitalOcean server and

41:49 just.

41:51 Yeah, no, not really, but at least it should normalize in a way the response

41:57 time of your application.

41:58 Yeah.

42:00 Yeah.

42:00 So the standard deviation of the request time is way, way tighter.

42:05 Exactly.

42:05 The distribution of the request time is way, way tighter, even though you do seem

42:09 to have generally the fastest times.

42:11 But if you look at the difference of the average times and the max times, the

42:17 difference is a lot smaller.

42:19 It's like one, two and a half times variation versus some of the other ones

42:24 are many.

42:25 Yeah.

42:26 10X or something.

42:27 Yes.

42:28 Yeah.

42:28 Maybe a hundred X or some of them.

42:29 Yeah.

42:30 Yeah, absolutely.

42:31 Okay.

42:31 Yeah.

42:32 That's what I really thought was pretty interesting: the super predictability of it.
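That consistency point can be made concrete with a toy calculation. The latency numbers below are made up for illustration; they are not Granian's benchmark figures.

```python
import statistics

# Hypothetical per-request latencies in ms (made-up numbers, not benchmarks)
consistent = [10, 11, 10, 12, 11, 13, 10, 11]   # tight distribution
spiky      = [8, 9, 8, 10, 9, 250, 8, 9]        # similar average, huge max

for name, samples in [("consistent", consistent), ("spiky", spiky)]:
    avg, worst = statistics.mean(samples), max(samples)
    # The max-to-average ratio is the "spread" being discussed: a server can
    # look fine on averages while its worst requests are 10x-100x slower.
    print(f"{name}: avg={avg:.1f}ms max={worst}ms ratio={worst/avg:.1f}x")
```

Both servers would report similar requests-per-second, but the spiky one makes some users wait far longer than the average suggests.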

42:37 Yeah.

42:37 One thing I want to ask you about is you did say it does this RSGI.

42:41 You want to call it G, GSGI, the Granian server?

42:45 No, whatever.

42:46 No, it's like R-S, Rust Server Gateway Interface.

42:49 Yeah.

42:51 Yeah.

42:52 That's, that's what I figured.

42:53 And you said Emmet uses this, which is awesome.

42:56 What's the advantage?

42:57 Is there a significant advantage to doing things differently rather than

43:00 ASGI or something?

43:02 Would it be worth like things like Flask saying, Hey, should we support this if

43:05 we're running on top of granian or things like that's what I'm getting at?

43:08 So I didn't actually know if Flask today also supports asynchronous requests.

43:13 With Quart they do.

43:14 Right.

43:15 I'm not.

43:16 Yeah.

43:16 Okay.

43:17 So Quart might take advantage of RSGI, meaning that it's still an asynchronous

43:22 protocol, so you have to be in an asynchronous context to use RSGI.

43:27 But the main difference, let's say between ASGI and RSGI is that it's in the, how to

43:36 say the communication mechanism, or let's say the communication entities, meaning

43:42 that, so in ASGI you usually have two methods, two awaitable methods, which

43:49 are send and receive, and you get or push, let's say, dictionaries to those

43:56 methods, which are referred to as messages.

44:00 So you usually have a dictionary which has a type key, which contains the

44:07 type of message, which might be, I don't know, an HTTP request or HTTP body or web

44:14 socket message, and all the intercommunication between the server and the application relies on those dictionaries with specific keys and strings.

44:24 And since you have a single, let's say, interface to rely on, and that

44:31 interface is asynchronous, it means two things.

44:36 The first thing is that every time you want to say something to the server or to

44:42 the clients, you have to await for that message, even if there's no actually

44:48 IO involved in that operation.

44:51 So, right.

44:52 Which is a context switch and overhead and all of that stuff, right?

44:55 Exactly.

44:56 So for example, when you, so sending back a response in ASGI involves typically at least two messages.

45:04 So the first one is to start the response.

45:07 So you instruct the server with the response code and the headers you want to send back to the client.

45:14 And the following message or messages are the body of that response.

45:21 The fun fact is that the response start event doesn't involve any IO at all.

45:28 It doesn't use the network.

45:29 So what happens is that you're delaying the operation that you're supposed

45:37 to do, which is just saying, okay, I'm going to send some data, and these are the,

45:43 like, here's some text.

45:45 Yeah.

45:45 You're going to delay that operation to the next cycle of the event loop in your Python code.

45:50 So that adds quite a lot of overhead.

45:53 And I mean, I understand like why the interface is made in this way, because

45:57 it's like super straightforward, it's very simple, you have like the same interface to do everything.

46:03 But at the same time, it feels very unperformant in a way, because we are wasting a ton of... I mean, I don't understand why we need

46:12 to waste event loop cycles to do something that is actually synchronous code.

46:17 Yeah, sure.
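Concretely, a minimal ASGI app has to await both messages, even though the first one touches no network at all:

```python
# A minimal ASGI app: sending a response takes at least two awaited
# messages, even though "http.response.start" performs no IO by itself.
async def app(scope, receive, send):
    assert scope["type"] == "http"
    # Message 1: status + headers. Nothing hits the network yet, but the
    # app still awaits, handing control back to the event loop for a cycle.
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    # Message 2: the body, where real IO can finally happen.
    await send({"type": "http.response.body", "body": b"hello"})
```

Each `await send(...)` is the context-switch overhead being discussed: the response start is synchronous work forced through the asynchronous interface.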

46:19 And so RSGI changed this in a way that you have interfaces which are synchronous or asynchronous, depending on what you're actually planning to do.

46:32 For example, if you have the entire body, so if your route returns, I

46:37 don't know, a JSON string, okay, you don't need to actually await sending

46:43 the body, because you already have the whole body.

46:45 So the interface in R...

46:47 Right, right.

46:48 It's all in memory.

46:48 Yeah.

46:49 There's no IO.

46:49 Yeah.

46:50 The interfacing.

46:51 It's not like a file stream pointer or whatever that they set to return.

46:55 Yeah.

46:55 Exactly.

46:56 So in that case, in RSGI, you can use a synchronous method to just move the body

47:01 to the server and just let the response go.

47:03 Nice.

47:04 Whereas if you want to stream content, then you can use a specific interface

47:09 for that in RSGI, which is the response stream.

47:11 And that gives you an interface to send chunks of body or iterate over

47:17 something, as you're supposed to do.

47:19 Oh, yeah.

47:19 So that's the major thing.
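By contrast, an RSGI app can hand over an in-memory body with a plain synchronous call. This sketch follows the RSGI spec as I understand it from Granian's docs; the `__rsgi__` entry point and the `response_str` signature are assumptions to verify against the current documentation.

```python
class App:
    # RSGI entry point (name and signature assumed from the spec).
    async def __rsgi__(self, scope, protocol):
        # The whole body is already in memory, so no await is needed just to
        # hand it to the server: this is an ordinary synchronous method call,
        # not a trip through the event loop.
        protocol.response_str(
            200,
            [("content-type", "text/plain")],
            "hello",
        )
```

The route is still asynchronous overall, but the no-IO step of handing a finished response to the server costs no event loop cycle.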

47:22 The other thing, like the other reason why RSGI exists is that.

47:25 Yeah.

47:26 ASGI is designed based on the fact that the network communication happens in

47:32 Python, which is something that Granian can emulate, because it

47:38 supports ASGI, but that also creates waste.

47:44 So if you don't have the chance to have a different implementation, that makes

47:49 things a lot more difficult to implement, because you're reasoning like you were in

47:54 Python, but you're actually in a different language.

47:57 So yeah, that's the other reason why RSGI exists.

48:00 Okay.

48:01 Yeah.

48:01 That's very interesting.

48:02 And maybe some of the other frameworks could look at that and go, well, if

48:05 it's available, it's an option.

48:07 Okay.

48:07 A couple of things I want to talk about before we run out of time here.

48:11 One is Jazzy Coder out in the audience asks, how did you validate your library following the correct spec?

48:17 Did you reference the RFCs or another library, or did you go back

48:23 to first principles with the Unix Network Programming book? For background, I'm

48:28 interested in this approach, because I'm building my own WSGI server.

48:30 Okay, cool.

48:32 So the idea, I mean, the WSGI protocol is documented in a PEP.

48:37 So I just implemented tests that respect what is defined in the original PEP

48:46 about WSGI, with just one exception.

48:49 So the only exception for Granian in the WSGI protocol is that it's able to serve

48:58 HTTP/2 over WSGI, which is not supposed to happen, but with Granian you can serve

49:03 your WSGI application directly with HTTP/2.

49:06 But yeah, that's the way I was sure to respect the protocol.

49:11 Yeah.

49:12 How about like the HTTP/2 protocol?

49:14 Are you using just a library that already has it all figured out or?

49:18 Yes.

49:18 Yes.

49:19 I mean reinventing the wheel, like also for HTTP handling was something I wasn't looking for.

49:25 No, I wouldn't want to do it either.

49:27 So yeah.

49:28 Hyper, Hyper is, again, super battle-tested; it's used by, I don't know, something

49:34 like Cloudflare in production.

49:35 So.

49:35 And this is a Rust crate or something like that?

49:38 Yeah, exactly.

49:39 Awesome.

49:40 All right.

49:40 Very cool.

49:41 The other thing I want to ask you about, or just let you speak to real quick is

49:45 there's a bunch of features like specifying the HTTP interface level.

49:51 Like, do you want to restrict it to one or two?

49:53 Yeah, you might care, because there was a vulnerability in HTTP/2 recently, creating

49:59 some kind of too much work or too many retries or something.

50:02 So maybe you want to switch it to one for a while.

50:04 I don't know.

50:04 Fun fact, Granian wasn't affected by that, because Hyper, the library behind

50:09 it, wasn't affected by that bug.

50:11 Oh, nice.

50:12 That's awesome.

50:13 Yeah.

50:14 I figured basically you just, in this case, you wait until hyper either fixes

50:18 it or hyper is not a problem, right.

50:19 Which is great.

50:20 But maybe just talk about some of the things that we haven't touched on that

50:23 are interesting, like blocking threads or threading mode or, or specifying the loop or so on.

50:28 So yeah, Granian.

50:30 So since Granian has this unique architecture where you have an event loop running on the Rust side.

50:38 So for instance, if you're deploying your ASGI application with Granian, you will have two event loops: the Python one, the one that

50:47 runs your code, and also a Rust event loop, which is actually the Tokio runtime,

50:55 another super popular crate in the Rust ecosystem.

50:58 There are different ways to run the Rust runtime, meaning that Rust is not

51:07 limited to having a single thread running the loop, and thus you can have an event

51:16 loop running on several different threads on the Rust side.

51:20 And so the threading mode option in Granian lets you specify that behavior,

51:26 meaning that if you use the runtime option, you will end up having like multi-threaded runtimes on the Rust side.

51:34 Whereas if you specify the workers option for the threading mode, it will still use

51:41 a single threaded runtime also on the Rust side.

51:44 If you say though, the runtime mode, did the workers themselves each get multiple threads?

51:49 Is that how that works?

51:50 Yes, exactly.

51:51 So in runtime mode, every worker has multi-threaded runtimes.

51:56 Whereas on the worker side, you have like the worker is also the runtime.

52:01 Yeah.

52:02 Got it.

52:02 And the option is there because like depending on the load of your application, like one of the two might work better.

52:10 Sure.

52:10 Depends on the IO and CPU boundness of your application.

52:14 So yeah.
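Putting the two knobs together as a CLI sketch — again, the flag names are assumed from memory of the Granian CLI, so verify with `granian --help` for your version:

```shell
# Workers scale the Python side (each worker owns its own interpreter and
# GIL); threads scale the Rust/Tokio side inside each worker.
granian --interface asgi \
  --workers 4 \
  --threads 2 \
  --threading-mode runtime \
  app.main:app
```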

52:15 I don't want to go too much into these, but if I set the threading mode to runtime, is it reasonable to have just one worker?

52:21 Or does it still make sense to have multiple workers for an app?

52:26 So the thing is that with a single worker, so the workers will spawn their

52:31 own Python interpreters, so every worker is limited to the global interpreter

52:38 lock, meaning that even if you spawn like a single worker with, I don't know,

52:44 12 threads, those 12 threads will run the Rust code, but they share a single

52:52 Python runtime, which means all the things that that means.

52:55 Exactly.

52:56 Got it.

52:57 Okay.

52:57 So the only way to scale, so the workers is the way to scale, let's say the Python code of your application.

53:03 Okay.

53:04 And threads are useful to scale the Rust runtime of stuff?

53:10 The Rust side of things, meaning that those will be the amount of threads used by Rust to handle your requests.

53:20 So for example, if your application runs, opens like a tons of websocket,

53:27 maybe you have like a websocket service, it might be helpful to spawn more threads

53:33 for the Rust side, so it can actually handle more of those requests in the

53:40 websocket lane and the blocking threads are mostly relevant only for the WSGI

53:47 protocol, meaning that the blocking threads are the amount of threads used

53:52 by Granian to interact with Python code.

53:56 So on ASGI, since you will have the event loop, there's not so much difference in how many blocking threads you spawn,

54:05 because those blocking threads will still have to schedule stuff on the Python

54:11 event loop, but on WSGI, since you don't have, you're not limited to the main thread of Python.

54:18 So if you're, I don't know, maybe your application is using

54:24 psycopg to connect to the database, and those libraries are able to release

54:29 the global interpreter lock.

54:31 So having multiple blocking threads on WSGI might still be helpful, because all the code which doesn't involve the GIL will

54:43 be able to run in parallel.

54:45 Right.

54:46 Maybe one part, one thread, one request is waiting on a database call, which

54:50 it hits the network, which releases the GIL, for example.

54:52 Right.

54:53 Exactly.
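That effect is easy to demonstrate: blocking calls that release the GIL (here `time.sleep`, standing in for a database driver waiting on the network) overlap across threads instead of serializing.

```python
import threading
import time

def blocking_call():
    # time.sleep releases the GIL while it blocks, like a database driver
    # waiting on the network, so other threads can run in the meantime.
    time.sleep(0.5)

start = time.perf_counter()
threads = [threading.Thread(target=blocking_call) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# Four half-second waits overlap: total wall time stays near 0.5s, not 2s.
print(f"{elapsed:.2f}s")
```

Pure-Python CPU work in those threads would not overlap this way, which is why workers, not threads, are the lever for scaling Python code.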

54:54 Yeah.

54:54 Okay.

54:54 What about these loop optimizations?

54:57 This opt, no op.

54:58 Yeah, that's, that's.

55:01 What kind of magic is in there?

55:02 That's a bit complicated, meaning that, so I think writing Granian was super helpful for me, at least to understand the

55:13 internals of asyncio in the Python world.

55:16 And if I have to be honest, I don't really like how asyncio is implemented

55:23 under the hood, but anyway.

55:24 I feel like you have to juggle, you have to be so aware of what loop is running.

55:29 Has a loop been created?

55:31 Is there a different one?

55:32 Have I got the wrong loop?

55:33 Like all of that stuff, it should be utterly transparent.

55:36 And I just, I should just tell Python, I want to run stuff in a loop.

55:39 Yeah.

55:40 You know, I don't want to, it's not like I'm managing the memory or juggling,

55:44 you know, the GC, like, I feel like asyncio should be the same.

55:48 You should say, I want to run stuff asynchronously and maybe somewhere I've

55:51 configured that, or maybe it's automatic or whatever, but just do any, you're

55:55 always kind of like, you know, for example, if you talk to the database asynchronously at the start of your web app, and then you use FastAPI and you

56:03 try to use it in your requests there, it'll say, well, you're on the wrong

56:06 event loop.

56:06 It's like, well, why do I care about this?

56:09 Just run it on the loop.

56:11 Like, you know what I mean?

56:12 Like that's kind of, that's been a complaint of mine since 3.5, but

56:15 Hey, nobody asked me.

56:17 So.

56:17 Yeah.

56:18 So yeah.

56:20 And those, let's say, optimizations are actually a few hacky parts.

56:24 In fact, I think FastAPI doesn't work with loop optimization enabled in Granian, because it skips...

56:32 So those optimizations just skip some of the first iterations in running

56:38 asynchronous code.

56:39 I think going more deeper than this in details would be.

56:43 Yeah.

56:44 Yeah.

56:44 Okay.

56:44 Not, not worth it.

56:45 Right.

56:45 Long and hard to follow, but let's just say it skips some steps in the task running in the event loop.

56:52 Yeah.

56:52 Okay.

56:53 You can do things like specify the SSL certificates and stuff.

56:56 This one right here is pretty excellent.

56:57 I mean, I don't know who worked on that one, but I didn't work on it. I

57:00 inspired it, I requested this one. The right way to put it is that you can set the

57:04 process name, which is nice.

57:06 Yeah.

57:06 Yeah.

57:07 If you're running multiple sites all on Granian on the same server, so you can

57:11 differentiate which one is using the memory again.

57:14 And while we're looking at this, can I ask, can I propose an idea and just get

57:18 your feedback on it and get your thoughts?

57:19 What about a lifetime management type of feature where you say after 10,000

57:25 requests, just recreate the worker or after an hour, recreate the worker

57:31 or something like that?

57:32 Is this a thing that is in any interest to you?

57:34 I understand the need for that; I think it's one of those

57:40 requests that is based on the fact of using WSGI, or at least it's

57:45 coming more from the Django user land.

57:48 So I think making a lifetime, that kind of check would be kind of hard in a sense

57:57 that there's a lot like involved into the, you know, process management of

58:02 Granian because like, you know, the Python process and then you have Python

58:05 threads and then you have like the Rust threads and then you have like the

58:09 runtimes and then reasoning of lifetime probably it's kind of hard.

58:14 I think fixing a maximum number of requests per worker is something that can be done, let's say, pretty easily.

58:22 There's an issue for that, opened a few

58:27 times, if I recall correctly.

58:28 The thing is that like in the, let's say in the prioritization queue, it's not at the top.

58:34 So let's say at the moment, I'm talking with some people who proposed

58:39 themselves to join as contributors on Granian, but let's say at the moment,

58:45 I'm still the single main contributor.

58:48 So yeah, sure.

58:49 I need to make, you know, some priority queues into issues.

58:53 I'll look after your wellbeing.

58:55 Yeah.

58:55 Yeah.

58:56 I think like the one which is more requested right now is like the access log.

59:01 Yeah.

59:02 So I think like that would be the next one probably.

59:06 Yeah.

59:06 I think honestly, the access log is more important than this one.

59:10 Yeah.

59:10 There's something changed in one of my apps and it just all of a sudden slowly just keeps growing in memory.

59:17 And I'm pretty sure I've not done anything to make those changes.

59:21 And it's something in a third party library, data access database or something else.

59:26 I don't know what it is, but it's just, and it's fine, but it's consumed so much

59:30 memory and so I ended up setting up some other thing to just say, look, after like

59:35 a day, just, you know, just give it a refresh, let it refresh itself.

59:39 You know that with the Sentry SDK now we have also profiling, so you can actually look

59:46 into the stacks of memory allocations, even live on your application.

59:51 So if you need to debug something like that, you can try with Sentry.

59:57 Like distributed tracing or something like that.

59:59 Yeah.

01:00:00 Or the APM stuff.

01:00:01 No, I mean, distributed tracing is more like about chaining together, like

01:00:05 different sources, like this is profiling, like.

01:00:07 The APM.

01:00:08 Yeah.

01:00:08 Okay.

01:00:09 Yeah.

01:00:09 Flame graphs and stuff to see where your application spends its time.

01:00:13 Maybe I'll put it in because it's driving me crazy and I would love to just do a PR

01:00:17 somewhere and just go, Hey guys, this changed, here's the problem.

01:00:20 Or if it is my problem and I just like, there's nothing I really changed in the

01:00:24 core of this thing, but it seems to have started going weird that maybe I don't

01:00:27 know, but anyway, it would be great.

01:00:28 I'll have a look at it.

01:00:29 Thanks.

01:00:30 All right.

01:00:30 What's next for Granian?

01:00:31 Yeah, I think fulfilling a couple of feature requests, like the access log,

01:00:36 like the worker max requests, or a few, let's say, minor things in that sense.

01:00:43 I think in terms of major features, it's pretty solid at the moment as a

01:00:50 server. After, let's say... the idea is, after these feature requests, the idea

01:00:57 was to, I mean, it's just an idea at the moment, but I'd like to try to add some

01:01:03 features to the RSGI protocol.

01:01:07 For example, we talked before about channels and WebSockets.

01:01:11 So as I said before, like I find very annoying, like every time I want to make

01:01:17 even just a chat room, I need to, you know, put Redis there and manage Redis

01:01:25 and whatever, and add like that kind of complexity to my project.

01:01:29 And so I was thinking about embedding some broadcasting features into the RSGI protocol, because, you know, while all other servers for Python

01:01:41 are written in Python, and so they're still bound to, you know, the process

01:01:46 paradigm of Python, on the Rust side of things, that's not true anymore.

01:01:52 So.

01:01:52 Right.

01:01:53 Yeah, it would be to have something to broadcast messages between processes

01:01:59 and even different Granian servers.

01:02:01 So.

01:02:02 Yeah, that's cool.

01:02:03 Yeah.

01:02:03 That's what I have on my table at the moment.

01:02:05 All right.

01:02:07 Well, excellent.

01:02:07 And thanks for working on this.

01:02:09 It's an excellent project and it's really cool to see like kind of the innovation, like you were saying just there, you know, if it's not in Python,

01:02:15 if it could be in Rust, like what would we change that would make that more

01:02:19 capable even for the Python people, right?

01:02:21 Yeah, exactly.

01:02:22 And I think it's like the, I think it's like the baseline philosophy of people

01:02:27 like Samuel Colvin with the Pydantic project, like to, you know, to try to

01:02:34 empower Python, like the most keeping like the simplicity and the syntax we

01:02:41 all love about Python, but I think it's like a very good way of evolving

01:02:48 even the Python language.

01:02:50 Yeah, absolutely.

01:02:52 You know, sometimes you'll hear people say Python is slow and then like in

01:02:55 some sort of pure sense, that's true.

01:02:57 But then, you know, you put it on top of things like Granian, and all of a sudden it's awesome.

01:03:01 Right.

01:03:01 So thanks for playing your part in that.

01:03:04 Thank you too.

01:03:04 Yeah, you bet.

01:03:05 And thanks for coming on the show.

01:03:06 I'll see you next time.

01:03:07 Thank you.

01:03:08 Bye.

01:03:09 This has been another episode of Talk Python to Me.

01:03:13 Thank you to our sponsors.

01:03:15 Be sure to check out what they're offering.

01:03:16 It really helps support the show.

01:03:18 It's time to stop asking relational databases to do more than they were made for and simplify complex data models with graphs. Check out the

01:03:27 sample FastAPI project and see what Neo4j, a native graph database, can do for you.

01:03:33 Find out more at talkpython.fm/neo4j.

01:03:37 Want to level up your Python?

01:03:39 We have one of the largest catalogs of Python video courses over at Talk Python.

01:03:43 Our content ranges from true beginners to deeply advanced topics like memory and async.

01:03:48 And best of all, there's not a subscription in sight.

01:03:50 Check it out for yourself at training.talkpython.fm.

01:03:53 Be sure to subscribe to the show.

01:03:55 Open your favorite podcast app and search for Python.

01:03:58 We should be right at the top.

01:03:59 You can also find the iTunes feed at /iTunes, the Google Play feed at /play,

01:04:05 and the direct RSS feed at /rss on talkpython.fm.

01:04:09 We're live streaming most of our recordings these days.

01:04:12 If you want to be part of the show and have your comments featured on the air,

01:04:15 be sure to subscribe to our YouTube channel at talkpython.fm/youtube.

01:04:20 This is your host, Michael Kennedy.

01:04:22 Thanks so much for listening.

01:04:23 I really appreciate it.

01:04:24 Now get out there and write some Python code.

01:04:27 [MUSIC]
