#388: Python 3.11 is here and it's fast Transcript
00:00 Python 3.11 is here! Keeping with the annual release cycle, the Python Core devs have released
00:05 the latest version of Python, and this one is a big one. It has more friendly error messages
00:10 and is massively faster than 3.10, being between 10 and 60% faster in general, which is a big deal
00:18 for a year-over-year release of a 30-year-old platform. On this episode, we have Irit Katriel,
00:23 Pablo Galindo Salgado, Mark Shannon, and Brandt Bucher, all of whom participated in releasing
00:28 Python this week, here on Talk Python to tell us all about that process and some of the
00:32 highlight features. This is Talk Python to Me, episode 388, recorded October 28, 2022.
00:39 Welcome to Talk Python to Me, a weekly podcast on Python. This is your host, Michael Kennedy.
00:57 Follow me on Twitter, where I'm @mkennedy, and keep up with the show and listen to past
01:01 episodes at talkpython.fm. And follow the show on Twitter via @talkpython. We've started streaming
01:08 most of our episodes live on YouTube. Subscribe to our YouTube channel over at talkpython.fm
01:13 /youtube to get notified about upcoming shows and be part of that episode. This episode is sponsored
01:19 by Sentry. Don't let those errors go unnoticed. Use Sentry. Get started at talkpython.fm
01:25 /sentry. And it's brought to you by Command Line Heroes, an original podcast from Red Hat that
01:30 chronicles the history of the software industry. Listen to an episode at talkpython.fm
01:36 /heroes. Transcripts for this episode are sponsored by Assembly AI, the API platform for state-of-the-art
01:43 AI models that automatically transcribe and understand audio data at a large scale. To learn more,
01:49 visit talkpython.fm/assemblyai. Hey, everyone. Welcome to Talk Python to Me. It's
01:55 great to have you all here. Irit, Brandt, Pablo, and Mark. It's going to be super fun to speak with
02:00 all of you about Python 3.11. Before we get into it, I guess, just real quickly, I know some of you
02:06 have been on the show before, but not all of you. So let's just do a quick introduction about who you
02:11 are and how you ended up here on the show. Irit, you want to start first?
02:14 Yeah. Hi, I'm Irit, I'm a Python Core Dev. Earlier in the week, we streamed the release
02:20 of Python 3.11. And on the back of that, Michael just invited us all here for a chat.
02:25 Fantastic. Yeah. That was a great live stream. And we'll talk about that for sure in a second.
02:30 But Brant, welcome back.
02:32 Hello. My name is Brandt Bucher. I have been a Core Dev for like two years now. And I work with
02:37 Mark and Irit on the Faster CPython team at Microsoft.
02:40 Right on.
02:40 And I was on the show like a month ago.
02:42 Yeah, you were talking about the Faster Python stuff, which we'll touch on again.
02:46 Hello. I'm Pablo Galindo. I'm the infamous release manager. I released Python 3.11. And you can
02:53 redirect all your complaints to my email address. No, please don't do that. So I'm a CPython
02:58 core dev. I'm also serving this year and last year on the Python steering council. And I
03:04 was also the release manager for Python 3.10 and 3.11, which is now the best version of Python.
03:10 Download it today. Apart from that, I do a bunch of parser stuff. But we are not talking about that now.
03:15 Yeah, fantastic. Well, welcome. Mark, welcome back.
03:17 Hi there. I'm Mark Shannon. I'm the tech lead of the Faster CPython team. I work with Irit and Brandt. I've been a Core Dev for some number of years. I don't
03:25 recall. You've been spending a couple years working on this Faster CPython thing and very excited to see some of the fruits of those labors, you know, starting to show up and get in the hands of everyone with this release.
03:37 Yeah, it's good to have the stuff out actually in public and in people's hands. It's really rewarding to know that stuff you're working on is actually used and used by a lot of people.
03:45 Yeah, that is totally true. It's one thing to build software just by itself, and that's fun. But all of you are working on code that touches so many people. Think about it, there's layers, right? One layer is how many people use Python? Many millions. Does anyone know a reasonable estimate of this number?
04:03 I don't remember who came up with the number, but I think they were estimating like 6 million Python developers, something like that. I mean, it's probably between zero and 10 million, let's say.
04:15 Yeah, that's a massive impact. But also maybe nervousness about pushing code out to that group. And then, you know, those people will build software for others, right? If you're using Instagram or YouTube or other things, right? It's also having massive knock-on effects there.
04:32 So thanks for putting all this together. Thanks for improving the tools that we all get to use. So yeah, big news. The big news is that Python 3.11 is out.
04:42 And as Irit had said, you all live streamed that release. So here we're all together, having an awesome chat about the features, what people can do to take advantage of them, and why they might care about new features and want to learn them.
04:58 But there you did a little bit of that. But also Pablo, you actually step by step did the release of CPython mostly live, right?
05:07 Yeah, I did, except the boring parts. This is something that I started last year because apparently I didn't have enough things to worry about. And I decided to make my life even more difficult. I'm an expert on that. Quite proficient.
05:22 I'm also an expert. I'm very bad at doing too many things.
05:25 And you could be a release manager; that's the only requirement. So yeah, the idea is that releasing Python is a process that is quite complicated. It's also quite boring. So it's not like, you know, you need to have a galaxy brain kind of thing to do it. But it's just a lot of steps and it's very easy to do it wrong.
05:44 And it's very glamorous. So I said, oh, wow, I'm sure people really would like to see a very glamorous process happen live. And then I said, let's do it.
05:53 And I asked around, and I was surprised by how many people enjoy such glamorous processes. And then I did the release of Python 3.10 beta 1, which turned out to be much funnier than I thought, because we just broke GitHub. That happened live. Yes.
06:06 Was that when you imported all the, you imported all the issues and did that migration or was that separate?
06:13 You will think that that is a good candidate, but no, that was not the thing that broke GitHub. We renamed master to main on the CPython repo and the whole GitHub platform was down. What about that?
06:22 Wow.
06:23 Yeah, you can see those Ruby workers really struggling with the renaming, all those forks. I think we were the, I don't know, someone at GitHub may confirm this, but I think we were the first big project to do the renaming. Something went wrong.
06:38 And it was very funny because I literally said, How funny will it be if now I get a 500? There you go. A 500 on the screen. Yeah. Yeah. It's recorded. There is someone actually recorded that clip. Yeah. Yeah. So I said, Wow, man, this has been a such exciting thing that I can break such a big project. Let's do it more.
06:54 So I decided also to stream the 3.10 release itself. And I said, well, technically the final release is even more boring and longer. So that is actually probably not going to be something that someone would want to see. So I said, okay, let's not do it alone.
07:11 So I invited a bunch of friends and core developers, so they could actually talk about, you know, the things that they worked on for the Python 3.10 release, and Brandt and Irit were there. So they can probably tell you how they found it. But like, apparently, it was something that a lot of people enjoyed. Because, you know, it's not only an opportunity to see how the sausage is made. Because, you know, I was just explaining all the commands and all the phases and whatnot. But like, when something became very boring, then, you know, Brandt and Irit were there to save the day.
07:41 And explain the cool things they work on. So, you know, which is a very good opportunity. Because, you know, when is the last time you could hear the author of the feature that you love talk about the feature that you love? That is fantastic. And it happened.
07:54 Right. And not only did it happen, but as they were explaining the feature that they built, the action of it being delivered to the entire world was right. It was like all coming together in a pretty awesome way.
08:06 Exactly. And I could only do that just to be fair also. And, you know, credit where credit is due.
08:11 I could only do that because the first time I did the live thing, I was also doing all the, you know, pushing all the buttons and at the same time doing all the video stuff with, I don't know what is the software to the stream or like whatever.
08:23 And the second time we used the help of the Python Discord team, which are fantastic. And they help us a lot. They have this fantastic UI where, you know, all the questions that were asked on the chat are on the screen and we couldn't use it.
08:40 Do you know why? Because Facebook or now Meta decided to break DNS globally. What an incredible feat.
08:47 Just in time.
08:48 Just in time.
08:48 I think wanna.
08:49 So what I'm learning is if we need some sort of like big cloud global outage, you all just need to live stream.
08:55 Just call Pablo.
08:56 Yes. Exactly. Exactly.
08:57 Exactly. Just hire me today.
08:59 Okay.
09:00 So yes, now we have had two big outages on Python releases. You know, there is always a line that passes through two points, but, you know, it's a good, a good statistic already.
09:13 So we said, what can I, what else can we break? So there you go. We decided to do the 3.11 release again. And Mark was there as well, which increases the probability of things being broken by a lot.
09:26 Sorry, Mark. I had to do the joke. He also fixes them. So, you know, it's fine. And nothing broke. So, so kudos to Mark. Everything. Thanks to that.
09:34 And we did the release. So, so we did the same thing. We explained the whole thing. So people could see from the authors themselves, like why all the switches are very cool. And I did the non boring parts of the release.
09:46 And then we had a bit of drama backstage because the YubiKey that I use to sign the release broke and I freaked out quite a lot, but thankfully I have a backup YubiKey. So yeah. So crazy.
10:00 Because if I didn't have that, then I would have had to stop the whole thing, but we didn't have to do that. It was just backstage. So yeah, quite, quite exciting. Nothing broke except my YubiKey. I suppose that's the third thing that broke. It's not global, you know, software, but I still mourn it. It's here.
10:15 Yeah. It served you well, but now it's, it gave its life for Python 3.11.
10:20 Too much power. Like 3.11 was too powerful. It just broke.
10:23 This is a dangerous job that you got.
10:27 Yeah.
10:28 But you've handed it off, right? This is your last time, last main release.
10:33 Yeah. Yeah. I need to do the security and bug fix releases, but I don't need to do the ones where, you know, you need to chase people down and ask for, like, cherry-picking. And there were a bunch of things in this release that were quite boring. Like normally we release from the previous version. Before the final version, there is something called the release candidate, which is, you know, like the last version that people need to try out before we do the final release.
10:56 And ideally that is the last version that we publish. Normally it means that you publish from that commit.
11:01 But this was not the case. This is the first release that had 130-something commits on top of that,
11:07 which I had to painstakingly cherry-pick. It was not fun, but I did that before the release. It took like two hours.
11:13 Yeah.
11:14 You need to fix conflicts and things like that. Yeah. Very, very boring. But yeah, I started the stream with that already done. So, so it was fine.
11:22 Yeah. Fantastic. Now, before we get into all the features, and I want to maybe just talk a little bit about some of the tools for actually doing the release, let me maybe start with you: what does 3.11 mean for you all, getting this out? What does that mean for the Python community from your perspective?
11:40 Well, 3.11 is, it is a huge release. There's a lot packed into it compared to the last few releases. There are new features, there's the performance work. It's just massive changes internally. It's just a huge release. And personally, I started working on, you know, exception groups about two years ago.
11:59 So for me, this is, it almost feels like finishing another PhD or something. It's, it's a massive kind of effort and here it is. It's done. It's, you know, it was a big day Monday. I had a bottle of champagne ready for the stream. It was a celebration.
12:13 Yeah, it was. Brandt, how about you?
12:15 I'm really excited about 3.11 because I think there's something for everyone. And I think you'd be hard pressed to find someone who doesn't want their code to run faster and who doesn't want better error messages. And then you have all these other improvements on top of that.
12:28 It's really nice to see both these like new features, which are something that we get in most Python releases, but also just the stuff that's there for everyone else who just wants to upgrade Python and just have a better experience all around.
12:40 Yeah, I totally agree with that.
12:41 It's cool to see people's responses to that too, because responses have been really, really positive, which is another thing that I liked about the live stream, because we did, you know, live Q&A and we had the chat and everything going on. And when you're staring at the same code base for like a year, you're like, okay, I'm pretty sure that what we've done here is really, really
12:58 cool. But, you know, like, is it actually as awesome as I think it is? You know, or have I just been staring at it for too long? And then you release it to the world and people are even more stoked about it than you are. And that's a really good feeling.
13:08 Yeah, it is. Awesome. Mark?
13:10 Yeah, well, I guess I started on trying to get Python faster 15 years ago, I guess, early PhD time.
13:16 Yeah, with HotPy, right?
13:17 Yeah, yeah. So that was a long, long time ago. This has been a long time coming. So yeah, it's amazing to have it actually out and starting to see the speed-ups. And obviously we're keeping working on it. So it's pretty good.
13:29 Yeah, fantastic. You must be really proud because, like you said, you have been proposing this for a really long time. You've had a lot of ideas. And finally, you've got a group of people working on it. And you're all on the same team, with Guido.
13:42 Yes.
13:43 Yeah. And just making legitimate, serious progress here. So it's, you must be really proud to just sort of see this actually go out the door.
13:51 Yes, definitely.
13:52 Especially in mainline Python, too. It's really nice that we're able to, you know, deliver this for everyone.
13:58 Yeah. For me, I see basically three things, kind of like you said, Brandt. I see that obviously there's these new features like exception groups, which are lovely and make the language better.
14:08 But it also gets friendlier, especially for beginners, but for everyone, of course, with the better error messages and better reporting in tracebacks. And it gets faster. So it improves on all the axes that seem to matter. It's really fantastic.
14:22 Okay. So let's dive in. Pablo, let's go back just a little bit to the release process, because people got to watch you do it, but they didn't actually, you know, see exactly what you were typing on your screen the whole time. It was more about the event of it. Sometimes your screen was up, sometimes it wasn't. But there's an official PEP that talks about, like, here's the recipe for doing this, right?
14:44 That's correct. It's PEP 101, Doing Python Releases. And that is a curious document. It's a peculiar document. It talks about how it's done, but it's kind of weird. The document is up to date, like you can actually, you know, search PEP 101 and it will show you the thing.
15:01 So what is there is the actual process. It's just, it also contains these weird sentences. Like if you search through it, there is a bunch of places that say stop, stop, stop, stop, stop. Quite funny. And that was, if I recall correctly, Larry Hastings
15:17 who wrote those things, and the idea is that he could search for those places, and he knows that at that stage, he needs to wait for something to happen or something. And we left it there. So there's a bunch of weird artifacts and, you know, it's full of bullet points, because at some stages you need to do some things and some others and things like that.
15:37 And, you know, it says, okay, if you're running a beta release, then you need to do this bunch of things. And if you're running an alpha release, you need to do this bunch of things. And I have drawn a state machine that goes through the whole thing, because if you actually write this down, the, how is this called, the maintainability index of this process is insane. It just rejects your thing.
15:58 Right. And I said like, yeah, I'm not doing this by reading. So one thing I did, which is the thing that I was using at the stream, my first work as a release manager, was to say, I'm not going to do this by hand. And that is the vision. And then I did this script that is on github.com/python/release-tools. And it's my attempt at automating this process as much as possible, which unfortunately, you know, still requires a bunch of manual steps, because automating everything is a lot of work.
16:28 That's life and things happen. But it's quite automatic, at least for the things that are not final releases, so alphas and release candidates. And now that we are in bugfix releases, it mostly runs automatically, except that, you know, in the final release, everything fails, because that's the final release for you. And then you need to fix things manually. So I think you saw me, you know, executing a bunch of those fixes at some point. And I added a division by zero just to know that a certain point was hit, and that was seen on the screen, and people were
16:58 like, division by zero? Why do you need that to release Python? I don't know. That's very complicated.
17:03 I could have asserted false. Come on. Anything would have worked.
17:06 No, no. We divide by zero. I'm a physicist. So that's what I do.
17:09 Okay.
17:10 You studied black holes, right?
17:13 Yeah.
17:14 You were looking for some sort of like infinite sort of thing there. Divide by zero. Yeah.
17:18 I'm too tired for today. Let's just collapse the universe. Divide by zero. Oh, but Python was too friendly. Instead of collapsing the universe, it threw me an exception. You know, quite nice. Only in 3.11. No, no, I'm joking.
17:32 This portion of Talk Python To Me is brought to you by Sentry. How would you like to remove a little stress from your life? Do you worry that users may be encountering errors, slowdowns, or crashes with your app right now? Would you even know it until they sent you that support email? How much better would it be to have the error or performance details immediately sent to you, including the call stack and values of local variables and the active user recorded in the report? With Sentry, this is not only possible, it's simple.
18:01 In fact, we use Sentry on all the Talk Python web properties. We've actually fixed a bug triggered by a user and had the upgrade ready to roll out as we got the support email. That was a great email to write back. Hey, we already saw your error and have already rolled out the fix. Imagine their surprise. Surprise and delight your users.
18:21 Create your Sentry account at talkpython.fm/sentry. And if you sign up with the code Talk Python, all one word, it's good for two free months of Sentry's business plan, which will give you up to 20 times as many monthly events as well as other features. Create better software, delight your users, and support the podcast. Visit talkpython.fm/sentry and use the coupon code Talk Python.
18:47 Anyway, so yes, you can follow this PEP and, you know, just enjoy the whole process in all its glory, or you can see the script. But yeah, it's quite verbose. You can see that there are lots of places where everything can go wrong and you can panic.
19:01 Now we know one more.
19:02 Apparently your YubiKey can break.
19:04 So that's something that can happen as well.
19:05 But like, you know, it's quite annoying.
19:07 And that's the main job of the release manager.
19:10 I go through this annoying process.
19:12 So yeah, I see that there's some parts in here.
19:15 You should have a few more stops.
19:16 I should say stop, stop, stop.
19:17 Make sure GitHub still works.
19:19 Stop, stop, stop.
19:19 Make sure Azure still works.
19:21 Stop, stop, stop.
19:22 Yeah, don't cry.
19:23 Don't cry at this stage.
19:24 Everyone is looking at you.
19:26 But yeah, the one annoying thing is that you are also in charge,
19:30 in theory, of this extremely abstract mandate, which is that you're in charge
19:34 of the stability of the release, whatever that means.
19:37 That translates mostly into chasing people because they broke things.
19:41 Another unfortunate thing that we are trying to also fix a bit for the releases
19:45 is that most people turn to the release manager to solve problems.
19:49 So they say, hey, this person says that we should do X while this other person says
19:54 that we should do Y.
19:55 We need someone to decide.
19:57 Let's ask the release manager.
19:59 But the release manager is this guy on the corner.
20:01 Like he doesn't know shit.
20:02 So, like, you know, he's not the best person to ask.
20:06 But everybody was like, what do you think, Pablo?
20:08 So we merge this.
20:09 I like, I don't know, man.
20:09 This is some internals thing.
20:11 Like, I don't know about this.
20:12 I have no context whatsoever.
20:14 Your only concern is, will it still build and ship?
20:17 Exactly.
20:17 Like how?
20:18 I like it.
20:19 Yeah.
20:19 What about these 2000 lines?
20:20 Oh, of course, it's all this tiny bug.
20:22 And I was like, well, maybe let's not merge that.
20:24 But yeah, like we are trying to also like, you know, redirect all of these to the steering
20:28 council, which also I am in the steering council.
20:30 So apparently I'm not going to get rid of these questions.
20:32 I'm joking.
20:33 I enjoy all these questions.
20:34 But as a release manager, I don't.
20:35 So the key here is that the release manager should not take unilateral decisions
20:39 on the evolution of these things, because it's just the release manager.
20:42 That's the reason the steering council is five people.
20:44 But you are the one who delivers the code.
20:46 You could kind of, you could sneak a feature in there.
20:48 No, no, no.
20:49 Come on.
20:49 I don't decide important things.
20:50 I just execute and chase people.
20:52 And I'm this annoying guy that says, you broke this, fix it.
20:54 But like then if there is some important decisions to be taken, you know, that's the steering
20:59 council job, which is five people because, you know, one person shouldn't decide these
21:03 things.
21:03 It's like, and this happens.
21:05 Like sometimes I say, hey, there is this PR when people are asking, what should we do?
21:09 And then this is my opinion as the member of the steering council and the other four
21:13 members, maybe they say, well, actually that's not a good opinion.
21:16 So what about this?
21:17 You know, we ended up in a much better place because it was five people making
21:21 a decision instead of one.
21:22 But yeah.
21:23 Yeah.
21:23 Amazing.
21:24 Okay.
21:25 So if people want to follow along with the process, they can check out PEP 101.
21:28 Let's keep over here.
21:29 You also talked about the Python build bot that people can check out, but I think maybe we
21:33 want to jump into our first feature.
21:36 There's, as Irit said, there's a ton of features and things in here, but there's also maybe
21:41 some top level ones that'll be really important for a lot of folks.
21:44 And Irit, you want to tell us about your work?
21:47 You mentioned before the exception groups and exception star.
21:49 This is kind of a major new feature that we added.
21:52 And the idea is that sometimes you'll have a situation where you did several things and
21:59 maybe more than one of them raised an exception.
22:01 And now you need to report that there was more than one error in whatever you did.
22:05 And what you did could have been a bunch of asynchronous tasks, which is that that was
22:10 the use case that motivated this whole thing.
22:12 But there are also kind of situations where you just iterate over a few things and repeat
22:17 them and accumulate exceptions.
22:19 And you want to kind of report all of them.
22:22 And the PEP lists a bunch of examples of where this can happen.
22:26 So people typically what they do is they'll take a list of exceptions, wrap it in another
22:32 exception, multi-error, some other kind of wrapper and throw that.
22:36 And then you have to catch it.
22:37 And then you have to iterate over the list and look at the exceptions.
22:40 But you don't have a mechanism to handle the exceptions.
22:44 Like you have try/except, which catches the wrapper, but not the exceptions inside it.
22:47 Right.
22:48 Right.
22:48 Because in except, you might have like except socket error, or you might have except like
22:53 file not found type of thing.
22:54 But if those both happen, neither of those would run in Python 3.10, right?
23:00 Because it's some kind of weird wrapper and it's not a socket exception.
23:03 It's not a file exception, but it kind of contains both.
23:05 And so in a sense, should they both run?
23:07 I don't know.
23:08 And then if you catch the wrapper, even you do something with some of the exceptions,
23:12 you better not forget to erase the rest because you're not handling them.
23:15 So yeah, there are a lot of problems when you try to work around this and like what happened
23:20 with Trio.
23:21 So Trio had multi-error, would raise this wrapper and it had to do a lot of complicated
23:27 acrobatics just to have some error handling.
23:30 So the motivation was, yeah, we have task groups in Python 3.11, which are kind of like Trio
23:35 nurseries, kind of a structured collection of asynchronous tasks.
23:41 And task groups were on the cards.
23:43 It started with Yury Selivanov, who was kind of maintaining asyncio in the beginning.
23:48 He wrote a lot of asyncio.
23:49 He wanted to add task groups since 2017, 2018, something like that.
23:55 And what was holding it up was error handling.
23:58 There was no good way to handle errors.
24:00 So now we have except*, which is what generalizes except and works with groups.
24:05 So you can say except* socket error.
24:08 And then it will just extract all the socket errors from the group and give you those and
24:13 automatically re-raise everything else.
24:15 That's basically the idea.
24:17 This is pretty interesting.
24:18 We have try, do your thing, and then except* one error type, except*
24:24 another error type, except* a set of errors potentially.
24:27 So what happens if I'm in this situation and say the first error type and maybe something
24:34 from the third error catch clause is thrown in one of these task groups, exception groups?
24:40 Each exception in the group will be handled by at most one of the clauses.
24:44 So the first clause that matches its type will consume it.
24:48 And each clause executes once.
24:49 So if there is more than one error of that type, then what gets kind of bound in the except*
24:56 FooError as e clause, what gets bound to e, is a group of FooErrors.
25:00 So you get all the FooErrors in a group, execute that clause, and then move on to the next clause
25:05 with whatever is not handled yet.
25:07 Interesting.
25:08 So it might run two of the clauses.
25:10 Exactly.
25:11 Whereas in traditional exception handling, it goes from top to bottom and it looks for an
25:16 inheritance type of match.
25:17 And the first one that matches, that's it.
25:19 But in this case with the star, you could get multiple ones.
25:22 I guess the star to me, when I look at this, the star is reminiscent of Args star where you
25:29 have...
25:29 Unpacking.
25:30 Yeah.
25:30 Yeah.
25:31 Yeah, exactly.
25:32 It's not exactly unpacking, but it was kind of the intention to make it look a bit like
25:37 unpacking.
25:37 Nice.
25:37 Yeah.
25:38 This looks like a really cool feature.
25:39 You talked about the task groups and trio and those things.
25:43 So when I saw this, concurrent errors obviously come to mind because if I try to both write
25:48 something to a database and call a web service asynchronously and I start both of those and
25:53 they both crash or multiple ones crash, which error do you want?
25:56 The database error or do you want the API error?
25:58 You probably want to know about both of them, right?
26:00 So that's a real natural reason to bring these together.
26:03 But maybe you'll also list out some of the other reasons that you might run into this.
26:07 Maybe give people some other ideas.
26:09 So the example: in the socket module, we have the create_connection function.
26:13 And that function, I was showing it in the stream.
26:16 It iterates over all the configurations that you could try to connect with.
26:20 And then depending on what's going on on the other side, hopefully one of them works.
26:24 But if none of them work, you have to report errors.
26:27 And what we do in Python 3.10 is we just raise the last exception.
26:30 So you don't know what happened really.
26:33 You only know why the last attempt failed.
26:35 You don't even know how many attempts were made to connect to.
26:38 How many configurations did we try?
26:40 So that was a long-standing open problem.
26:43 Can we do better than just report the last error?
26:45 And we closed it.
26:47 We just added a parameter for that.
26:48 It raises them all in a group.
26:50 Another place that comes to mind is maybe you're all familiar with some of these retry libraries.
26:54 Yeah.
26:55 Like, I think there's others as well.
26:57 Where you put a decorator onto some function.
27:00 You say, try this multiple times.
27:03 And if it fails, do some sort of exponential backoff because maybe the server is overloaded.
27:07 Right?
27:08 Those types of things would be really great.
27:10 Like, if it retries all the times it's supposed to and it fails, it'd be good to get all the errors.
27:15 Not just the last one or the first one or whatever it decided it was going to give you.
27:18 Yeah.
27:19 It's the kind of thing.
27:20 Yeah.
27:21 Nice.
27:21 Okay.
27:22 Well, congratulations on getting that feature out.
27:24 That's great.
27:24 All right.
27:25 What do we got next here?
27:26 I think also related to this, I wanted to talk about PEP 678.
27:32 That's a very small and simple feature. Zac Hatfield-Dodds wrote this PEP.
27:39 He was trying out Exception Groups.
27:40 He was the first kind of user.
27:42 Even before the PR was merged, he was trying it out.
27:45 He was trying to integrate it with the hypothesis library.
27:48 So there you write a test and the library executes it many times with different inputs and you
27:53 get failures in some of the inputs and you want to report all of them.
27:56 So Zac had an exception group, kind of an exception wrapper, kind of like Trio's MultiError.
28:01 He had his own version that he built in his library, and for each exception attached
28:08 to it, he could associate which input generated the error, which is very important.
28:13 You need to tell people what the input was and what happened with it.
28:17 And he couldn't do that in a convenient way with Exception Groups.
28:22 So this was added to BaseException.
28:27 You can add strings.
28:29 You call it add note, give it a string and you can call it as many times as you want and add notes to the exceptions and they will appear in the default traceback that the interpreter prints.
28:40 So that's all it is.
28:41 It's a very simple feature, but it was received surprisingly well.
28:45 The idea is simply that you can enrich an exception after you catch it.
28:49 So you have the information that, you know, the error message and the type, you decide that when you raise the exception.
28:55 But then sometimes when you're catching, there's some more information, some context, like what was I trying to do when this error happened?
29:02 Sure. Yeah.
29:02 Because often you'll see: except some exception type, you deal with what you can, but you can't really handle it fully there.
29:09 So you got to raise it again.
29:10 And this is a place to add more information without completely wrapping it.
29:14 Right.
29:14 Right.
29:15 Exactly.
29:15 A lot of people have to chain it.
29:18 Say this raised from that.
29:20 So there will be situations where maybe you won't need to do that.
29:23 Yeah.
29:24 I'd love to see that go away.
29:25 I see it with template libraries and stuff in the web all the time.
29:28 I see like there's all these different errors and you got to hunt through a bunch of stuff to figure out what happened.
29:33 Yeah.
29:34 Also think about, for instance, like I think this is super useful actually for end users.
29:38 Like, say you're doing some query to the database.
29:42 Right.
29:42 And then, I don't know, it may fail for 6 million reasons.
29:45 And then you want to add what you're asking for.
29:48 Right.
29:48 So you add your query or your user or whatever.
29:51 Because probably the exception that the Postgres thingy that is underneath is not going to contain your actual thing.
29:59 So this actually may save you hours.
30:01 Right.
30:02 Because in many enterprise environments, you don't even have easy access to that.
30:07 Sorry, to prod.
30:08 So you cannot just go there and see what's going on.
30:11 So it would be super cool that you say, oh, if something fails, you know, I was trying to do this with this data and like with these things.
30:18 So like if it fails now, you can know what's going on and you don't even need to log in, which is, I think it's a.
30:23 Yeah.
30:24 Yeah.
30:24 It's a great idea.
30:25 Or if you know, look, here's probably why this happened as a library developer.
30:30 You're like, look, this is the error.
30:32 But here's a note.
30:33 This is probably because you didn't initialize the connection before you called this.
30:37 So make sure, you know... Another area where I see this could be useful is when I want to raise, like the example you have in the docs, a TypeError.
30:47 But it could also, you know, be a ValueError or some other built-in low-level type.
30:51 You know, really, I just want to raise that error, but it doesn't have a place for me to put additional information.
30:56 And so I want to kind of enrich that with more.
30:59 And so not just catch, add the data and then raise it again.
31:02 But actually, I want to use a base error type that doesn't let me put more details in it and then just raise that.
31:07 Right.
31:07 That would also work.
31:08 I think so.
31:08 I mean, I think the intention was there was some discussions about using notes in the interpreter and I pushed back on it because I said this is owned by the application.
31:17 The interpreter shouldn't touch the notes, you know, because people can wipe out the notes.
31:21 They can change the order.
31:22 They can do what they want.
31:23 It's the applications, at least the way I see it.
31:26 The application owns it.
31:27 You put whatever context you want to put.
31:29 Is there only one note?
31:31 When I say add note, does that set the note or can I have a list of notes?
31:34 It's a list of notes.
31:36 Okay, got it.
31:36 Yeah.
31:36 And you can wipe it out if you want.
31:38 You can.
31:38 It's just a list.
31:40 It's attached to the exception.
31:41 You can do what you want with it, really.
31:43 Yeah.
31:43 Cool.
31:43 Okay.
31:44 Yeah.
31:44 It's a great...
31:45 It's a really great feature.
31:46 I mean, I'm sure it was way less work than except*, but it's also going to be really valuable.
31:51 I think.
31:52 This portion of Talk Python to Me is brought to you by Command Line Heroes, an original podcast from Red Hat.
32:01 Command Line Heroes chronicles the history of the software industry.
32:04 If you want to get the big picture on how macOS and Windows became dominant players, as well as how Linux brought its radical open source opinion to the scene, then Season 1 is a fascinating lesson.
32:15 In particular, check out OS Wars Part 1 and OS Wars Part 2, The Rise of Linux.
32:21 Season 3 is all about programming languages, and it kicks off with a topic near and dear to all the Talk Python listeners, Python's tale.
32:28 I even got to be a guest on that episode, as well as Emily Morehouse, who makes a connection between Python's technical extensibility and its inclusive community.
32:37 I talk about how Python is both easy to learn and powerful enough to build apps like YouTube and Instagram.
32:43 And Diane Mueller highlights how the Python community took the lead on so many inclusive practices that are spreading in tech, including the rise of community-led decision-making.
32:51 This award-winning podcast is hosted by Saron Yitbarek.
32:54 Saron is a fantastic host, and the show is highly polished.
32:58 Seriously, if you love the history of software development, jump over to talkpython.fm/heroes and check it out for yourself.
33:04 Thank you to Command Line Heroes for keeping this show going strong.
33:11 So, Mark, I had you and Guido on back, wow, almost to the day a year ago.
33:18 We're only off by a few days: November 1st, 2021, so not that long ago.
33:24 Let's talk a little bit about the work that you're doing there.
33:27 I guess the headline is that Python 3.11 is 10 to 60% faster than previous versions,
33:38 Is that the story?
33:39 Yeah, somewhere between minus a few percent and plus 100, but it varies a huge amount.
33:45 I mean, if you've got some application that basically spends all its time in NumPy or something like that, you're not really going to speed up at all.
33:53 But if it's pure Python, you'd expect it to be a good 40, 50% faster.
33:58 But it depends.
34:00 Right, that's a good point because a lot of people do make Python faster by writing C or Rust or other languages.
34:06 And at that point, it's out of your hands, right?
34:08 Yeah.
34:09 So, I mean, we're looking, hopefully, for 3.12 to start looking at the sort of interface between Python and C code.
34:16 So, we should speed up code even though there's quite a lot of C code.
34:18 We won't speed up the time spent in the C code doing the actual work, but there's still quite a lot of sort of marshalling of data that happens.
34:26 And hopefully, we'll streamline that.
34:27 The existence of C extensions sort of, in some ways, limits our opportunity to speed things up, but it's also why Python is so popular in the first place, or one of the main reasons.
34:37 So, definitely need to acknowledge it.
34:39 Yeah, absolutely.
34:40 So, Brandt, I'll definitely have you talk about the specializing interpreters, but Mark, maybe give us a rundown of some of the things from your plan that made it in here.
34:48 I know some were aimed for 3.10, but they didn't make it until here, right?
34:52 Yeah, so the whole thing, oh, that original plan I put up, that was more of a just to get the discussion going sort of thing.
34:59 And it's basically, it's more or less a year off.
35:01 So, if you just shift everything one forward.
35:03 I mean, there was a lot of discussion on speeding up the interpreter in the first iteration and then looking more to the data structures in the second thing.
35:11 It's much more jumbled than that.
35:12 We're doing sort of a bit of everything.
35:14 So, obviously, I was planning on, you know, expecting a smaller team.
35:17 So, things are being a bit shuffled.
35:19 So, yeah, there's a specializing interpreter, obviously, that's kind of key.
35:23 There's also quite a lot of stuff we've done with data structures.
35:25 I mean, we shrunk the Python object.
35:28 So, I mean, the Python object, you know, has been shrinking for years.
35:31 I mean, I've got some numbers here.
35:34 So, like in 2.7 and 3.2, like an object with just four attributes would take 352 bytes on a 64-bit machine.
35:42 And for 3.11, we've got it down to 112.
35:44 And for 3.12, it would be 96.
35:46 Well, before you get too excited, it's only 32 in C++.
35:49 So, you know, we've got a bit of a way to go.
35:51 Yeah, but, you know, it's going in the right direction for sure.
35:55 And, you know, sure, some people out there listening just say like, okay, well, it's half the size roughly and it's going to be less than that.
36:01 So, yay, we can use less memory.
36:04 But maybe you could talk a little bit about how that affects things like L1, L2, L3 cache hits and other sort of, like, it's more important than just I need less RAM, right?
36:14 Yeah, yeah.
36:14 So, there's two things that happen.
36:17 There's, yeah, things are faster because the hardware is just happier.
36:20 If you pack everything together, it's in a high-level cache.
36:23 So, you're not getting these sort of long pauses as you hit main memory.
36:27 And the other thing is just the data structures are, because there's less of them, there's less indirection.
36:31 So, for example, to load an attribute, we've got it down for basically an old, you know, older versions of Python.
36:37 It was sort of effectively five memory reads.
36:40 And they were dependent memory.
36:41 You just have to read one before the next one and so on.
36:43 Go to the class, find its, go to the object, find its dictionary.
36:47 Then find the pointer that's in the dictionary and then go to that, right?
36:50 Like, it's...
36:50 Yeah, yeah, very much that.
36:52 And it's down to more or less two now.
36:54 So, I mean, obviously, there's still interpretive overhead on that.
36:58 So, it's not quite that much faster, but it's getting there.
37:03 So, yeah, there's a data structure.
37:04 And then the frames, the Python frames, every time you call a Python function, we used to just allocate a heap object for the frame and all this stuff would go in there.
37:13 And now they're all basically in a big contiguous sort of block of memory.
37:17 So, it's just bumping a pointer rather than allocating, which is also faster.
37:21 And frames are just smaller anyway because of zero-cost exceptions, which I think we mentioned on the release stream.
37:29 But, yeah, this is...
37:30 Well, let's tell people about zero cost exceptions.
37:33 Okay.
37:33 Well, zero cost...
37:34 You shouldn't have to pay for errors if you're not raising errors, right?
37:37 Yeah, that's the idea.
37:38 And that's why they're called zero cost.
37:39 But zero cost is in quotes in this.
37:41 And the reason for that is it's just...
37:43 That's the name it has got.
37:44 They're definitely not zero cost.
37:46 The idea is that they're pretty low cost if you don't have an exception.
37:50 But they tend to be even more expensive if you do get an exception because you have to do more lookup.
37:54 The important thing here is that just there was lots of runtime information we need to maintain and we don't now.
38:00 So, that, again, shrinks the frames and just makes calls faster because calls in Python were notoriously slow.
38:06 So, that's one thing we've sped up significantly.
38:08 Yeah.
38:08 So, the idea was in previous releases of Python, if you...
38:13 Just to enter a try block, even if it was successful, there was a little bit of overhead to set up the mechanisms of potentially handling the errors and the information you needed, right?
38:22 Yeah.
38:22 And this wasn't just the overhead.
38:24 You defer more of that, right?
38:25 Yeah.
38:25 I mean, it's actually not so much that overhead as just the space you had to put that data in had to be allocated every time you made a call in case there was an exception.
38:35 And then we had to...
38:36 It was massively over-allocated to the amount of space anyone ever needed.
38:39 So, just that was the big sort of advantage.
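One way to see the "zero cost" shape for yourself on 3.11+ (a small illustration using only the standard library): the handler ranges live in a side table on the code object, and no setup instruction runs on the happy path.

```python
import dis
import sys

def divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        return float("inf")

if sys.version_info >= (3, 11):
    # The exception table is consulted only when something is raised...
    assert divide.__code__.co_exceptiontable
    # ...and the old SETUP_* bookkeeping opcodes are gone from the
    # instruction stream, so a non-raising try costs almost nothing.
    opnames = {instr.opname for instr in dis.get_instructions(divide)}
    assert not any(name.startswith("SETUP_") for name in opnames)
```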
38:41 Nice.
38:42 Yeah.
38:42 This is fantastic.
38:43 You don't want to discourage people from putting proper error handling in their code.
38:48 Yeah.
38:49 What do you think...
38:50 I see your name on this feature here in GitHub.
38:53 What are your thoughts on it?
38:54 Yeah, I think it's cool.
38:56 And I was kind of...
38:57 You know, it was a nice touch that Mark implemented it between when I wrote the prototype for exception groups and when the PEP was approved.
39:06 So, that got in the way a little bit.
39:10 But it was good.
39:11 I got intimately acquainted with zero-cost exceptions through that exercise.
39:17 Well, it's zero-cost for some people.
39:20 Yeah.
39:21 I tease Mark a lot about that.
39:24 No, I think it's a cool feature.
39:25 And I mean, I followed up on that.
39:28 We now have...
39:30 After we removed that, we still had a...
39:32 I was talking about this on Monday.
39:34 We had to jump over the exception handler.
39:36 And then I told them, wait a minute, there's a jump.
39:39 It's not zero.
39:39 You have to jump over the exception handler if there's no exception.
39:42 So, now we have...
39:44 We did...
39:45 We identify exception handlers as cold blocks.
39:48 And before we lay out the code of the function, we put all the cold blocks at the end.
39:52 So, now if there's no exception, there isn't even an exception handler to jump over.
39:57 That will be in 3.12.
39:58 Excellent.
40:00 You made zero-cost even faster.
40:02 So, now it's...
40:03 Zero even smaller.
40:04 Yeah.
40:05 It's asymptotically approaching zero.
40:08 Yeah.
40:09 So, but it's kind of nice that we have this notion of cold blocks and hot blocks and we
40:14 can maybe do other things with it.
40:15 Kind of nice that all the happy path...
40:17 The fast code is kind of at the beginning of the function's bytecode block.
40:23 And, you know, in terms of caches and all that, you don't have to...
40:26 I think it will bring a few benefits beyond just not having to jump.
40:30 Yeah.
40:30 Yeah.
40:31 This is excellent.
40:31 It's a really great feature and pretty straightforward.
40:33 All right.
40:34 Brent, tell us about the specializing adaptive interpreter.
40:37 That's a big deal.
40:38 You and I spoke about that about six weeks ago, I think.
40:41 Yeah.
40:41 Yeah.
40:42 Basically, the headline is the bytecode changes while it's running to adapt to your code,
40:46 which is really neat.
40:47 So, it's kind of finding places where we can do the same thing, but using less work by like
40:54 cheating a little bit.
40:55 But cheating in a way that is not visible at all.
40:58 A good example is something like a global load or a load from the built-in.
41:04 So, if I'm looking at like the len function, that requires two dictionary lookups.
41:09 Every time I want to look at the len function anywhere, I first need to check the global namespace
41:12 and that's going to be a failed lookup.
41:14 Then I need to check the built-ins dictionary and that's going to be a successful lookup.
41:17 So, every time I want to use len or range or list or any of those built-ins, that's the
41:23 cost that I have to pay.
41:25 But people don't change the global namespace that often.
41:27 And people change the built-ins namespace even less often, or at least they shouldn't be
41:31 changing it very often.
41:32 I'm going to make false true and true false.
41:35 Let's see what that is.
41:36 And so, you can make these observations where it's like, okay, well, if the set of keys in
41:43 the global namespace hasn't changed since last time this bytecode instruction ran, then I
41:47 know that that lookup is going to fail.
41:48 Because if it failed last time and the keys are the same, then it's going to fail this time
41:52 as well.
41:52 So, we can just skip that.
41:54 And same for the built-ins dictionary.
41:56 You know, if we know that the keys in that dictionary haven't changed, that actually means that the
42:01 internal layout of the dictionary is the same.
42:03 And we don't even need to look up len in the built-ins dictionary.
42:07 We can reach directly to the last location where it was before and give you that instead.
42:12 So, you obviously see this in a lot of older code as a kind of micro-optimization.
42:18 Whenever someone was using a built-in in like a very hot Python loop, sometimes you'd see
42:23 them like do this kind of quirky trick where they make it a local variable by saying like
42:28 len equals len or something like that as part of the function's arguments.
42:31 So that you turn it into a fast local load.
42:34 And what we've essentially done is, you know, made ugly hacks like that totally unnecessary.
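For illustration, here is the trick being described next to the plain version that 3.11's LOAD_GLOBAL caching now handles about as well (hypothetical function names):

```python
# The classic micro-optimization: bind the builtin to a default
# argument so the hot loop pays a fast local load instead of
# two dictionary lookups per call.
def total_lengths_old(items, len=len):
    return sum(len(item) for item in items)

# On 3.11 the straightforward version is specialized automatically
# after a short warm-up, so the trick mostly just obscures the signature.
def total_lengths(items):
    return sum(len(item) for item in items)
```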
42:39 Yeah.
42:40 Which is really cool.
42:40 You do that behind the scenes transparently.
42:42 Yeah.
42:42 Exactly.
42:43 And so that's just, you know, one example.
42:45 We've done tons of specializations for all sorts of things ranging from calls to attribute
42:50 lookups to attribute stores, et cetera.
42:52 So yeah, it's a really, really powerful thing.
42:56 What was it?
42:57 It's the 569?
42:59 Yeah.
43:00 Mark wrote it.
43:00 It was 659.
43:01 659.
43:02 Almost there.
43:03 Yeah.
43:03 Yeah.
43:03 So this interpreter is Mark's baby.
43:06 He could tell you much more about it than I could.
43:07 Yeah.
43:08 I just want to give you a chance to give a shout out about specialist.
43:10 Yeah.
43:10 Yeah.
43:11 So this is why I was on your show.
43:12 A couple of weeks ago.
43:13 So looking at bytecode disassemblies is not fun.
43:16 And so one thing that's kind of cool is, you know, if you upgrade to Python 3.11, you run
43:20 your code and you saw it got, you know, 10, 20, 30% faster.
43:23 You might be wondering like, okay, where did it get faster?
43:26 Like what is faster about my code?
43:28 And so specialist is basically a package that I made.
43:32 It's pip installable.
43:33 It only works on 3.11.
43:34 And basically if you run your code using specialist instead of Python.
43:39 So you just type specialist my_project.py or whatever.
43:42 It will open a web browser and show you your code, but color highlighted to show you where
43:48 the interpreter was able to specialize your code, where it wasn't.
43:52 And that's really neat.
43:53 So you can see like, oh, actually, you know, these are the attribute loads that got faster.
43:56 These are the places where my global loads are being cached.
43:59 That's perfect.
44:00 Yeah.
44:00 That's awesome.
44:00 Yeah.
44:01 This is a really cool project.
44:02 And it has some proactive features, not just informational aspects.
44:06 I think anyway, you know, if you run a profiler, it'll show you where your code's spending time,
44:11 but it doesn't mean you should go change everything to make it faster.
44:15 You should look at like, oh, well, this loop or this one function is like the thing that maybe
44:19 we should think about slightly changing the algorithm or the way we do a loop or something.
44:24 And it's a little bit similar here because the specializing adaptive interpreter only specializes
44:29 some things like it doesn't specialize floats interacting with ints or those types of things.
44:35 And, or I think division as well.
44:37 And so there's certain ways you might be able to slightly change inside of a really hot loop,
44:41 you know, make something a float ahead of time
44:43 if you know it's going to be involved in floating-point operations.
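A sketch of that kind of tweak (hypothetical names; both functions return identical results, the difference is only in which bytecode shapes the specializer can handle):

```python
def scale_mixed(xs, scale):
    # If scale is an int, each multiply is float * int, a mixed
    # shape the specializing interpreter leaves generic.
    return sum(x * scale for x in xs)

def scale_uniform(xs, scale):
    # Convert once, outside the hot loop, so every multiply is
    # float * float, which BINARY_OP can specialize.
    scale = float(scale)
    return sum(x * scale for x in xs)
```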
44:45 Right.
44:46 Yeah.
44:47 The idea is that this shows us how we can fix things so that you don't need to mess with
44:52 your code.
44:52 I see.
44:54 So this is in the future.
44:56 Okay.
44:56 Yeah.
44:56 Awesome.
44:57 I would not necessarily encourage people to start tuning individual bytecode instructions
45:02 in their code due to our implementation details.
45:04 Otherwise you will end up coding in C.
45:06 Oh, you mean I've got to take all those decimal points back out of my code.
45:10 No, just kidding.
45:10 Yeah.
45:10 I want to get every single bytecode instruction green.
45:13 Some things will never specialize and that's just an artifact of programs.
45:17 But, you know, if we can specialize enough and we typically do, you know, one line, maybe
45:21 20 bytecode instructions if, you know, four of them get specialized successfully and two
45:26 of them don't generally, that will still be faster.
45:28 Do you know what you should do for April Fool?
45:31 Like you should do a pytest plugin that shows you the percentage of specialized instructions
45:36 in your code and people can fix the percentage so they can say, fail my test suite if my code
45:41 is not specialized more than 50%.
45:43 If you de-specialize it, it's like a performance regression.
45:47 Drop it.
45:48 Yeah.
45:49 Yeah.
45:50 It's like a coverage thing.
45:51 Yeah.
45:51 No, I was kind of thinking about this.
45:53 So Pablo can tell you more about this, but his cool new tracebacks, the whole reason
45:58 specialist is able to do these cool, you know, column level highlighting of your source code
46:03 is because we do have that fine grain position information under the hood.
46:07 So it's, we kind of just piggybacked on that feature in order to give you that.
46:11 But I was kind of thinking another, another thing, another April Fool's project could
46:16 be, you know, column level coverage information.
46:19 So to get to a hundred percent coverage, you have to cover every single column.
46:21 Exactly.
46:22 Yeah.
46:22 I feel like people might take that too seriously.
46:24 Even the white space, all this white space is not covered.
46:27 Yeah.
46:28 You think you're intense by having branch coverage turned on, just wait till you have column coverage
46:32 turned on.
46:32 Yeah.
46:32 You can only cover two white spaces per line.
46:35 So you got to call that a lot.
46:36 All right.
46:36 I think that's a perfect segue over to one of the most tangible contributions from Pablo
46:42 here.
46:43 Maybe tell us about this new fine grained error locations and tracebacks.
46:47 This is fantastic.
46:48 This will save people being in debuggers or rewriting their code with tons of print statements to
46:53 figure out what's going on.
46:54 Yeah.
46:55 Thank you very much.
46:55 We put a lot of effort into this.
46:57 So this is... man.
46:59 I don't even remember my PEP number.
47:01 So I don't know.
47:01 It's PEP something, something.
47:03 And it has a horrendous name.
47:04 Six, five, seven.
47:06 And let's see.
47:07 Six, five, seven.
47:08 Thanks.
47:08 Include fine grained error locations and tracebacks.
47:11 Yeah.
47:12 The worst name.
47:16 Even... I think I was talking with Mark at the Python Core Dev sprint.
47:16 And he was saying like, what it means like fine grained, like, you know, like, is this
47:21 very fine grained?
47:22 Like, so I think we are renaming the PEP to fancy tracebacks.
47:25 I think that's much better.
47:26 Anyway, so this is a project I worked on together with Batuhan Taskaya and Ammar Askar.
47:32 So kudos to them as well, because they participated equally on this.
47:36 And the idea is that we were like, we started this project to make, you know, to improve the
47:41 error messages in the interpreter and the general experience.
47:45 Not only for, you know, people, because when people talk about this, they normally refer
47:49 to people starting to learn Python.
47:51 But like, to be honest, most of these things also affect people that are experts.
47:55 Like, I always say that when I implemented the suggestions, I was the first one benefiting
48:00 from them because like, I make a lot of typos and, you know, like, this is odd.
48:03 You mean this?
48:04 So the idea that we have is that most of the time, you know, the interpreter
48:09 shows you kind of the position where the error happens, but it's quite limited, because
48:13 due to Python's flexible syntax, people tend to have a huge amount of
48:19 complexity, even in a single line.
48:21 In the pep, there is a bunch of examples.
48:23 Like you access a bunch of keys in a dictionary and some of them don't work, or aren't there,
48:28 or are None, or something like that.
48:29 Right.
48:30 And then it fails.
48:31 Or sometimes you have like several function calls or several additions.
48:35 And, you know, it's quite difficult.
48:37 And most of the time fixing these things involve going into a debugger like PDB and then trying
48:43 to inspect every single object and say, okay, this dictionary doesn't have this key at this
48:47 level.
48:48 And like, you know, that sucks.
48:49 Like, it's not... because, like, debuggers are cool, but it's cooler not to use them.
48:55 Right.
48:55 And, you know, we thought, what can we do here?
48:58 And we, we arrived at this idea. Actually, also to mention everyone involved, this was
49:04 originally inspired by a prototype that Carl from the PyPy team made very long
49:10 ago, a kind of minimal version of this.
49:14 And then I said, okay, can we do this?
49:17 And what we do now is that we propagate because the parser, our super cool PEG parser knows
49:23 position of all the tokens and things like that.
49:25 So we are propagating those, that information through the interpreter and we store this information
49:31 now in code objects.
49:33 So a side effect of this PEP actually is that code objects are slightly bigger, although,
49:37 you know, because code objects don't tend to be a huge percentage of your application,
49:40 it doesn't really matter that much.
49:42 Maybe PYC files are a bit bigger, but you know, you have a lot of this disk space, I'm
49:46 sure.
49:47 And the idea is that, you know, we, we store this information in code objects.
49:51 So when you raise an exception, we say, well, what is the instruction that is raised this
49:55 exception?
49:56 And then once we know which is the instruction that raised the exception, then we go and say,
50:00 okay, what is the position information that generated this instruction?
50:04 And because we propagated it, we know, and then we can say, okay, here is kind of like
50:08 the, like the lines, the columns that this instruction spans.
50:12 So that kind of allows us to underline the specific location, but we go a bit further.
50:17 Oh, sorry.
50:18 Sorry.
50:18 Go ahead, Michael.
50:19 I was going to say, this is super valuable.
50:20 The example you have in the PEP is you have a dictionary, you say bracket of key A, and
50:26 then the thing that comes back is another dictionary.
50:28 So you say bracket B and then another dictionary, bracket C and then bracket D.
50:32 And if you're on 3.10, if one of those is None, the error just says,
50:36 NoneType object
50:38 is not subscriptable, or maybe, you know, does not contain that key or some weird thing
50:43 like that.
50:44 But is it A, B, C, or D?
50:46 You have no idea.
50:47 You're in a debugger printing them out separately or something, but now it just goes, nope, it's
50:51 the C one.
50:51 That's it's the third.
50:53 Exactly.
50:53 Subscript one.
50:54 And that's just, just jump right to it.
50:56 Oh, okay.
50:56 Yeah.
50:57 Also this error, 'NoneType' object is not subscriptable.
50:59 It's kind of like, thanks for the info.
51:02 Like, it's like, you know, water is wet.
51:04 Okay.
51:05 Thanks.
51:05 It's not, it's not super useful.
51:07 No, exactly.
51:08 So tell me when it's going to rain.
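The example under discussion, as runnable code (the underline drawing in the comment is approximate):

```python
# Nested data like the example in PEP 657: "b" maps to None,
# so it's the third subscript that fails.
data = {"a": {"b": None}}

try:
    value = data["a"]["b"]["c"]["d"]
except TypeError as exc:
    # 3.10 could only point at the whole line; 3.11 underlines
    # the exact failing subscript, roughly:
    #     data["a"]["b"]["c"]["d"]
    #     ~~~~~~~~~~~~~~^^^^^
    # TypeError: 'NoneType' object is not subscriptable
    message = str(exc)
```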
51:09 Anyway, we did like the first version of this and then we realized that there was
51:14 some kind of like, you know, it was cool.
51:16 Like most people really like it, but like, especially for instance, with the example, with the dictionary
51:21 that has many dictionaries inside.
51:23 There was some confusion because like, you know, it underlines the whole thing.
51:26 And then, you know, the order of operations and, you know, also with complex mathematical
51:31 expressions, like you do a plus b plus c and the last addition fails.
51:35 It needs to underline a plus b plus c because what happened is that it first added a plus b
51:40 and that gives you something that then you added to c.
51:42 And what happened is that the last addition failed, but that includes a plus b.
51:47 So you need to underline the whole thing.
51:48 If you know the order of operations and I just underline a plus b plus c, you know that
51:53 what will fail is the last one because that's the last one that is executed.
51:57 But it's still confusing because, you know, specifically also with the dictionary, people
52:01 were saying, yeah, okay, but like you're underlining three keys here, which is the one that failed.
52:05 I mean, you know, you can learn by experience that is the last one, but it's kind of like,
52:08 it was not a great experience.
52:10 So we went a step farther.
52:12 So what we do is that once we know the kind of range in the line that shows the problem,
52:17 we reparse that chunk of expression.
52:20 And then we know, okay, so we know now that this expression has this AST.
52:24 And then we analyze the AST and then we say, okay, is this AST something that we can further
52:29 improve the error message?
52:31 Like for instance, is this AST a bunch of key access in a dictionary or a bunch of attribute
52:36 access or a bunch of function calls or maybe binary operations?
52:40 And if it's the case, then we use a specialized underline, like, you know, I don't know,
52:45 tildes or squiggles or whatever it's called.
52:47 And, you know, the dictionary ones have a different one that marks which key access it
52:53 was, and the same thing for binary operations and things like that.
52:56 So we do this extra step at the end that, you know, does a bunch of extra work, but it
53:01 tries to improve even upon the kind of just underlining the line just so we can offer even more
53:07 rich information.
53:08 And I'm quite happy.
53:09 I'm very pleased about this.
53:11 Sorry, Mark, but I think it's the best feature of 3.11.
53:14 Yeah.
53:16 This is probably the second stream when I said this, but it's true.
53:18 Totally, totally true.
53:20 100% true.
53:21 So I'm very excited about this.
53:22 I literally use this every day.
53:24 Today I was deploying Python 3.11.
53:25 Well, this week, sorry.
53:27 I was deploying Python 3.11 at Bloomberg and something went wrong.
53:30 And literally, this thing saved my day.
53:32 This thing saved me from logging into some forsaken machine to understand what's going
53:36 on.
53:36 What about that?
53:37 So super cool.
53:38 Very happy.
53:39 I hope that everybody that uses this and is happy reaches out to us and says, I am happy.
53:43 Because normally people reach out to us when they are not happy.
53:47 And they say, evil core developers, you broke everything.
53:49 But instead of that, you should reach out to us and say, nice.
53:52 I did this cool thing.
53:55 You should tweet at Pablo or something, though.
53:57 Don't open issues saying you're happy.
53:59 Exactly.
54:00 Exactly.
54:00 Just tweet a couple of tildes if you care.
54:04 It's in a smiley face, Adam.
54:05 Exactly.
54:05 Email happy@python.org.
54:07 I will take that email address.
54:10 Awesome.
54:10 And we improved it a bit further.
54:13 One of the things that happened is that, you know, sometimes the whole line is wrong,
54:17 because this example you have there, if you, sorry, for the ones listening to the podcast,
54:21 we have here some, we are seeing some output, but doesn't matter.
54:25 Don't worry.
54:25 I will describe it.
54:26 So, for instance, you're calling a function and that's the whole thing that is in the line.
54:30 We used to underline the whole thing.
54:32 So, we'll say, okay, even if the whole line is failing, so it's not like a part of the line is failing,
54:38 the whole thing is failing, we used to underline that.
54:40 And that apparently is still in the PEP.
54:42 Maybe I should change that because that is not like that anymore.
54:45 Because someone suggested, I mean, come on, if the whole line is failing,
54:49 underlining the whole line is actually not that useful.
54:51 And, you know, you are spending vertical space.
54:54 So, you need to scroll a lot.
54:56 And at the beginning, I say, yeah, but it's inconsistent.
54:59 I don't like it.
55:00 And I push back a bit, but like then, you know, more people say, Pablo, you're wrong.
55:05 And then I say, okay, okay, I'm wrong.
55:06 We improve this further.
55:08 So, you say, but don't take this as an advice.
55:11 Don't tell me that I'm wrong collectively, please.
55:14 But, right, so now if the whole line would be underlined, we don't underline it,
55:20 because it doesn't really add any new information, right?
55:22 We only underline if a part of the line contains the error, not the whole line.
55:26 So, this means that we are not going to, you know, consume a lot of vertical space for no reason.
55:31 And the last thing I wanted to say is that, you know, there are some people somewhere in the universe
55:35 that may care about that extra disk space on their PYC files, or they just really, really hate squiggles.
55:43 I don't know if that is even physically possible, but, you know, there are very different and diverse set of people.
55:48 You are one of those.
55:50 There is a collection of different ways you can deactivate this feature.
55:54 There is an environment variable with a super long name, and there is a -X option when you launch the interpreter.
56:00 So, you can say python -X something, something.
56:03 I don't know how it's called.
56:04 I think it's called no_debug_ranges.
56:06 What about that?
56:07 What an incredible naming.
56:09 And then you set no_debug_ranges, and it deactivates the feature.
56:13 Incredible.
56:14 Like magic.
56:15 It's gone.
56:15 And you can reclaim your PYC files, and you can even generate PYC files without this information
56:21 if, when you are compiling PYC files, you set this evil environment variable.
56:25 But don't do that, listeners.
56:27 Don't do that.
56:27 It's evil.
56:28 Don't do that.
56:29 Just use it.
56:29 It's great.
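Concretely, the switches being described look like this — a sketch, not from the episode, assuming a Python 3.11+ interpreter available as python3 (on older interpreters the unknown -X option is simply stored and ignored):

```shell
# Default: the traceback includes fine-grained ~/^ location markers.
python3 -c 'd = {}; d["missing"]' || true

# Suppress the markers for one run with the -X option (3.11+):
python3 -X no_debug_ranges -c 'd = {}; d["missing"]' || true

# Or via the long-named environment variable; this also works when
# compiling .pyc files, which then omit the extra column data:
PYTHONNODEBUGRANGES=1 python3 -c 'd = {}; d["missing"]' || true
```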
56:30 Yeah.
56:30 There's another type of error that I think we're going to get, which is about edge cases
56:37 where the compiler doesn't get the line numbers right, because all these kind of fine-grained
56:42 locations, it's all new.
56:44 And, you know, we're still ironing out.
56:46 The __future__ imports.
56:47 There is a __future__ thing, I think, where you put a bunch of things with from __future__ import,
56:51 and it just complains in a random place.
56:53 Yeah.
56:53 Today I found that one.
56:55 I've been looking at the compiler and line numbers, location information, and it's a
57:00 bit off here and there.
57:01 And we have received bug reports from other people as well.
57:04 The range here doesn't look right.
57:06 The range here looks too broad.
57:08 So, yeah, we're going to be ironing that out, I guess, for 3.12.
57:13 Yeah, it's really nice when people are using betas and release candidates, though, because
57:16 we were able to catch a lot of those before the release.
57:19 Because there were a couple people, I forget exactly the name of the project, but they were
57:23 working on like a code animation tool where it animates your code while it's running.
57:27 And they were using these new ranges to identify AST nodes and stuff.
57:31 And so they did this thing, I guess, where they like run their tool on the entire standard
57:34 library, make sure it's correct.
57:36 And so we got a bunch of bug reports that basically say, oh, you know, this column information
57:40 is off for this weird multi-line attribute access or something.
57:43 If you recall, I think you fixed an error.
57:46 That was super weird because it was like a method access, like, you know, my instance of
57:52 foo.
57:52 And if the method access has like some like vowel or something like that, it was wrong.
57:57 And if you added some extra letter, it was fine.
58:00 Yeah, it was like if you split your method access across two lines, if you do like x dot
58:06 method or x dot method or x dot method on three lines or two lines or something, the way
58:13 we trace those lines, we always trace the method when we're actually loading the method, even
58:18 if it's on a different line, it's like where the actual method load started.
58:22 And then we were doing some weird math to like figure out where the dot is.
58:25 So we would try to put it on the same line as the dot.
58:28 And so we just like subtract one from the length of the name.
58:31 So there's all sorts of crazy stuff.
58:33 And that came from the grave because we fixed that.
58:35 And then it was wrong again because like we were like miscalculating the name.
58:39 It's just so easy.
58:41 Oh my goodness.
58:42 Yeah.
58:42 So all sorts of fun stuff like that.
58:44 Yeah.
58:44 Amazing.
58:45 Well, yeah, this is definitely one of the highlight features for sure.
58:49 And also the performance work that you're all doing.
58:51 All right.
58:51 We're getting very, very short on time.
58:53 So I think maybe a super, super lightning round here.
58:56 Let me just say we also got tomllib support built in.
58:59 We've got the asyncio task groups à la Trio nurseries.
59:03 We've got new features for atomic grouping in regular expressions, a Self type.
59:09 A lot of typing things have been added.
59:11 So we've got the Self type, variadic generics, literal strings, which is very interesting.
59:17 Łukasz did a talk about that on the release live stream.
59:20 Stuff for TypedDict and dataclass transforms.
59:23 So great stuff.
59:24 Now let's just really quickly round out.
59:27 What's the Python 3.11 story for PyScript, Pyodide?
59:31 Is there, do you know, have anyone out there know?
59:34 I don't know.
59:35 I suppose it works.
59:36 I think WebAssembly is now a tier two or tier three supported platform, right?
59:41 So he has been making a lot of improvements to the build process, which, you know, is not easy.
59:45 So kudos to Christian Heimes.
59:47 If you're listening, you're great.
59:49 I suppose that PyScript can, through Pyodide, this is how many layers is this?
59:55 So Pyodide through this can leverage all these improvements, because I don't know how the whole
01:00:02 layering thing is working, but Pyodide has a bunch of patches because, you know, you
01:00:07 need to modify CPython so it builds nicely on WebAssembly platforms.
01:00:10 I don't know the details on that.
01:00:12 I just know that some of them are okayish.
01:00:14 Some of them are not okay and quite difficult to maintain.
01:00:16 And Christian Heimes has been making a lot of great effort to, you know, change here and
01:00:22 there and, like, put a lot of macros and ifdefs and things like that.
01:00:25 So CPython kind of builds easier.
01:00:28 This probably translates that Pyodide, I hope, kind of, you know, can consume this build in
01:00:34 an easier way with less patches.
01:00:35 And I suppose that translates into PyScript, like just using the Pyodide thing easier.
01:00:41 But yeah, I don't think that there is a huge amount of improvements more than, you know,
01:00:46 we are working towards official support, as Brandt was mentioning.
01:00:50 We have this new tier system.
01:00:52 It's super cool.
01:00:52 And as like an unrelated fun fact, Mike Droettboom, one of the early developers of Pyodide,
01:00:57 is actually managing our team at Microsoft.
01:01:00 Oh, it's funny how the circle comes back around indeed.
01:01:04 How the darn tables.
01:01:05 That's right.
01:01:07 All right.
01:01:08 We are out of time, but super exciting.
01:01:11 I wish we had some champagne.
01:01:13 And Pablo, we didn't even bring hats to celebrate Python 3.11.
01:01:16 But I know everyone out there is extremely excited.
01:01:19 People cannot see it, but they have a Python 3.11 logo.
01:01:21 Yes.
01:01:22 Yeah.
01:01:22 It's a great new logo for 3.11 and stuff.
01:01:25 Not for in general, but just for the release.
01:01:27 It's awesome.
01:01:27 All right.
01:01:28 Before we get out of here, let me just ask you one final question and then we'll call it
01:01:31 a show.
01:01:32 Notable PyPI package.
01:01:33 Something you want to give a shout out to.
01:01:34 We'll go top to bottom in the picture here.
01:01:37 Pablo?
01:01:37 Notable PyPI package.
01:01:39 And I'm going to say Memray.
01:01:42 Use Memray.
01:01:43 The one and only Python memory profiler.
01:01:46 Solve your problems in production today with Memray.
01:01:49 That and the underlined errors, you'll be all good.
01:01:52 Exactly.
01:01:52 Yeah.
01:01:52 A combination.
01:01:53 Irit, how about you?
01:01:56 Well, I've had some interaction with the author of bytecode, Ration King, because I was
01:02:00 looking at things to do in the testing and in the interpreter that are kind of like that.
01:02:05 So this is a library where you can kind of, from Python, write bytecode, and it's pretty neat.
01:02:10 And it's struggling with zero cost exceptions, but that's what it is.
01:02:15 It's like inline assembly, but for Python.
01:02:17 Yeah.
01:02:18 It's like from a Python script, you can kind of write a bit of bytecode and get it to,
01:02:23 I don't know, do a lot of interesting stuff.
01:02:26 That's awesome.
01:02:26 Brent, how about you?
01:02:27 Well, I'm partial towards Specialist, but if I had to choose something else, speaking
01:02:31 of speed, I really like the Scalene profiler.
01:02:34 I've been using it on a lot of my own projects and it's awesome.
01:02:38 I don't know how its memory profiling compares to Memray.
01:02:40 I'm sure Memray is better, but Scalene is really nice for measuring the performance across
01:02:46 both Python and C code, which is cool.
01:02:47 Excellent.
01:02:48 Mark?
01:02:48 Well, it's not actually a PyPI package.
01:02:50 I was going to say the sys module, which is pretty much the most fundamental module going.
01:02:55 Come on.
01:02:55 There's all sorts of fun things in there.
01:02:57 You can change the recursion limit and see.
01:02:59 You can muddle with it.
01:03:00 If you're interested in how Python works, it's actually quite a sort of fun thing to play with.
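For instance, a couple of the knobs Mark alludes to — a rough sketch, nothing 3.11-specific:

```python
import sys

# The interpreter's recursion limit is just a settable number.
before = sys.getrecursionlimit()   # 1000 by default on CPython
sys.setrecursionlimit(before * 2)
print(sys.getrecursionlimit())

# Plenty of other introspection lives in sys too.
print(sys.implementation.name)     # "cpython" on CPython
print(sys.getsizeof(42))           # size of an int object, in bytes
```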
01:03:04 Thank you all for all the hard work.
01:03:06 And I know there are many people who did a ton of work as well who are not on the show here,
01:03:11 but you can represent them as well.
01:03:12 So thanks all for being here.
01:03:14 Final call to action.
01:03:15 People want to get started at 3.11.
01:03:16 What do you tell them?
01:03:17 Is it ready for them to get going?
01:03:19 What do you think?
01:03:20 It's awesome.
01:03:21 It's awesome.
01:03:21 And also now 3.11 comes with a bunch of wheels for all your packages, because there has been a lot of good work by the people behind third-party libraries.
01:03:31 And now that people are using cibuildwheel, 3.11 was released with wheels for NumPy and pandas and a bunch of other things that previously failed massively because nobody could compile them on their crappy laptop.
01:03:43 But now you don't need that.
01:03:44 You can just download them and it works.
01:03:46 So just use 3.11.
01:03:47 There is no reason.
01:03:48 Yeah, that's excellent.
01:03:49 It's just boring.
01:03:49 That would be a reason.
01:03:50 If you're boring and you don't want to use 3.11, then don't use it.
01:03:53 You didn't break anything.
01:03:54 Not even a package, much less GitHub.
01:03:56 Right.
01:03:57 And we need more benchmarks.
01:03:59 Well, that's true.
01:04:01 Yeah, absolutely.
01:04:02 That's how people can help us make things faster:
01:04:04 more benchmarks.
01:04:05 So, yeah, we have a, there's a sort of standard Python performance suite, but it's kind of a bunch of toy programs and so on.
01:04:11 So if you've got something that might make a nice sort of benchmark, you know, sort of self-contained but sort of realistic program, then, yeah, let us know.
01:04:19 All right.
01:04:20 Cool.
01:04:20 Well, thanks again.
01:04:21 Great work on it.
01:04:22 Cam Gearlock out in the audience says, yay, cibuildwheel.
01:04:24 Yeah, absolutely.
01:04:25 Great stuff.
01:04:26 So thanks again, everyone.
01:04:28 I'm super excited to start using 3.11 myself.
01:04:31 Thank you, Michael, for inviting us.
01:04:32 Yeah, it's great to have you here.
01:04:34 Thank you.
01:04:34 Bye, all.
01:04:35 This has been another episode of Talk Python to Me.
01:04:38 Thank you to our sponsors.
01:04:40 Be sure to check out what they're offering.
01:04:42 It really helps support the show.
01:04:43 Take some stress out of your life.
01:04:45 Get notified immediately about errors and performance issues in your web or mobile applications with Sentry.
01:04:52 Just visit talkpython.fm/sentry and get started for free.
01:04:56 And be sure to use the promo code talkpython, all one word.
01:05:00 Command Line Heroes is an original podcast from Red Hat that chronicles the history of the software industry.
01:05:06 From the origins of the first high-level programming languages to Python's tale, each episode is a fascinating look back at how we got to where we are.
01:05:14 Listen to an episode at talkpython.fm/heroes.
01:05:17 Want to level up your Python?
01:05:19 We have one of the largest catalogs of Python video courses over at Talk Python.
01:05:23 Our content ranges from true beginners to deeply advanced topics like memory and async.
01:05:28 And best of all, there's not a subscription in sight.
01:05:31 Check it out for yourself at training.talkpython.fm.
01:05:34 Be sure to subscribe to the show, open your favorite podcast app, and search for Python.
01:05:38 We should be right at the top.
01:05:40 You can also find the iTunes feed at /itunes, the Google Play feed at /play, and the direct RSS feed at /rss on talkpython.fm.
01:05:49 We're live streaming most of our recordings these days.
01:05:52 If you want to be part of the show and have your comments featured on the air, be sure to subscribe to our YouTube channel at talkpython.fm/youtube.
01:06:00 This is your host, Michael Kennedy.
01:06:02 Thanks so much for listening.
01:06:03 I really appreciate it.
01:06:05 Now get out there and write some Python code.
01:06:06 See you soon.