
#296: Python in F1 racing Transcript

Recorded on Monday, Nov 16, 2020.

00:00 Quick, name the three most advanced engineering organizations you can think of. Maybe an aerospace company such as SpaceX or Boeing came to mind, or maybe you thought about CERN and the LHC. But in terms of bespoke engineering capabilities, you should certainly put the F1 race teams on your list. These organizations appear as 20 or 30 people on race day shown on TV. But in fact, the number of people back at the home base doing the engineering work can be well over 500 employees. Almost every tiny part you see on these cars, as well as the tools to maintain them, is custom built. The engineering problems solved are immense. Would it surprise you to know that Python is playing a major role here? On this episode, you'll meet Joe Borg, who helped pioneer Python's adoption at several F1 teams. This is Talk Python to Me, Episode 296, recorded November 16, 2020.

00:51 Wait, wait, wait, wait. Normally I'd play a little guitar riff right here to kick off the episode. But this episode is about racing and race cars, so let's kick it off with a different kind of instrument and note. Let's jump into a 2017 Indy Pro Mazda for a lap around the storied Brands Hatch circuit and let the engine notes be the music.

01:42 Welcome to Talk Python to Me, a weekly podcast on Python, the language, the libraries, the ecosystem, and the personalities. This is your host, Michael Kennedy. Follow me on Twitter, where I'm @mkennedy, and keep up with the show and listen to past episodes at talkpython.fm, and follow the show on Twitter via @talkpython. This episode is brought to you by Linode and Talk Python Training. Please check out the offers during their segments. It really helps support the show.

02:08 At Talk Python, we run a bunch of web apps and web APIs. These power the training courses, as well as the mobile apps on iOS and Android. If I had to build these from scratch again today, there's no doubt which framework I would use: it's FastAPI. To me, FastAPI is the embodiment of modern Python and modern APIs. You have beautiful usage of type annotations, you have model binding and validation with Pydantic, and you have first-class async and await support. If you're building or rebuilding a web app, you owe it to yourself to check out our newest course, Modern APIs with FastAPI, over at Talk Python Training. This is the first course in a series we're building on FastAPI, and for just $39 it'll take you from interested to production with FastAPI. To learn more and get started today, just visit talkpython.fm/fastapi, or click the link in your podcast player show notes. Joe, welcome to Talk Python to Me. Thank you, nice to be here. Ah, man, it's exciting to have you here. Good to come back to one of my favorite topics, racing, which is always cool. And we're gonna hit the peak of racing engineering, I think, with F1 here. I think so. I probably shouldn't admit this as an F1 guy, but I've always kind of preferred WEC, though F1 is certainly up there, along with the high performance sailing. So the World Endurance Championship cars, yeah, like Ferraris and stuff like that, right? Exactly, all the Porsches, the Toyotas. I think Toyota is the biggest one still racing in it. Yeah. Well, now that you've stepped away, you've got a little distance between you and F1. But you worked for two different F1 teams doing some really neat stuff with Python, and we're gonna dig into that, and I'm super excited about it. Before we get to that, though, let's just start with your story. How did you get into programming, and what brought you to Python? I've always been kind of a techie guy since I was a young kid. My dad was doing a university course when I was like five or six, just about young enough to remember, and he got a computer as part of the course. So I was introduced to that at quite a young age. It wasn't anything special at that point, it was Windows 95 or whatever. Showing my age, though, I guess. I don't know, Windows 95 was kind of special. I think Windows 95 was the first operating system that really felt welcoming and good. I mean, I think you could debate about some of the early Mac ones, but the early Mac ones were super bizarre in some ways. And Windows 3.1, that was a neat time, actually. Yeah, oddly, I was introduced to Windows 3.1 after 95 because of school, which seems like a weird way round to do it.

04:41 I sort of vaguely remember my dad choosing the themes, because I think it came with Plus!, or whatever it was called, the big theme pack on Windows 95, and we spent more time doing that than just about anything else. But then he started showing me how Excel works, and how you can do not just the kind of conditions on each cell, but then get into VBA and

05:00 stuff like that. So that's nice, the seed kind of got planted. I never went to university, but I did an apprenticeship with British Telecom, quite big in the UK for that kind of thing. And I actually did network engineering for a few years there and got my qualification, which I think would be an associate's degree in the US. Yeah. At the time there's probably not a lot of programming in exactly the network engineering, but there's a lot of scripting and automation, so it's like on the cusp of programming, right? Exactly. And I much preferred that to sitting there manually subnetting stuff and tests, etc. It was much nicer; mostly it was just scripts for provisioning routers and things like that. And we had a few systems internally that we wrote, because the commercial stuff just wasn't very suited to us. And then I got quite a lucky break with the first team I worked for: I dropped in for an interview, which took several hours, which I kind of assumed was a good thing.

05:54 And they haven't kicked me out yet, so maybe I'm in. Exactly. It wasn't really a technical interview at all; it was more, how would you approach this on a high level. And I think at the time there wasn't really a developer focus, especially on the aerodynamics side in Formula One. So literally just saying, you know, I'd have a web front end for all of the users, and I'd store stuff in a database, and I'd save things on a network file share, etc., that was enough to pique an interest. Oh, and did you learn Python there? Yeah. I think I had a few months in between interviewing there and then actually starting the job, so I started learning at home. I was already a Linux user. In my high school, we had a Linux User Group, which, especially in the UK, I think was a very rare thing. It just so happened that one of the IT guys was into it, and a couple of the sixth formers were into it. It was Fedora Core 1, I think, which had just come out at that time. Yeah. And that's what we were all running; you know, we'd found some old computers that had maybe been dropped or whatever, and we were allowed to reprovision them with Fedora Core 1 and give them a new lease of life, as it were. So I kind of got... Yeah, they were probably blazing fast; they'd probably been thrown away because they were too slow. Exactly.

07:10 Right, yeah. We did a couple of LAN parties, I remember, with games, probably some terrible free game that no one actually played in real life, but just to prove that we could get them running on the network. So I had a bit of a background in sort of scripting and provisioning, especially at that point. The biggest thing was learning Python, and so I literally sat down with Python for Dummies at that point and just started, basically. Yeah, cool. What a place to start this new venture and learn this new language, and kind of grow into this whole tech world that you were jumping into, right? Like, we'll start with almost nothing and go farther. Exactly. I'd always sort of wanted to be a software developer; I just needed that kind of prod, I think, to focus on that rather than networking. Yeah, I had a similar experience. I feel like, in the abstract, before you get there, being a software developer is this big, daunting thing. There's so much to know, and you don't really know where to go with it. But then someone comes and says, could you do this one thing? Could you make this happen with programming? And you think, I can do that one thing. Sure, I can do that. I'm not a programmer, but that I can do. Let me work on that for a while. And eventually you're like, wait a minute, I can do a lot of stuff. I'm a programmer. How did I get here? Exactly, especially since the first job was mostly converting, to start with anyway. The first six months was converting existing stuff into Python, mostly bash at the time. So at that point you're not focused massively on the sort of architectural rewrite side of things, more just getting comfortable with Python, trying to do everything in a Pythonic way. Obviously, with bash, there's going to be a lot of repeated code, and you're trying to modularize that. And yeah, that was a really nice way to kind of settle in, I think. Yeah, cool. So I mentioned you had stepped a little bit away from F1. These days, what are you doing now? So now I work for Canonical. If people don't know the company Canonical, they've probably heard of Ubuntu; Canonical are the kind of corporation that backs Ubuntu. And yeah, I'm working for the Kubernetes team. We work on a couple of projects. One is what we call Charmed Kubernetes, which is a very modular Kubernetes distribution where you can sort of pick and choose how you want your cluster to look, pick and choose all the different components and how you want to pull them together. That's all done in Python. So the individual components in our distribution are called charms, and they're literally blocks of Python code that define what to do if it bumps into another component, how to react if one of those components goes away, etc. So imagine something like Terraform, but with a constant controller that's watching over the infrastructure. Yeah, that sounds super cool. It is nice. We're currently working on a new revision of the actual Python framework that it's written in, to basically make it more Pythonic; it was perhaps a bit more kind of scripted before.
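To make the "blocks of Python code" idea concrete, here is a minimal sketch of what a charm can look like, assuming the ops (operator) framework Joe's team works on; the class name, relation name, and event handlers are illustrative only, not taken from the episode, and a real charm would sit next to a metadata.yaml declaring the "database" relation.

# Illustrative only: a tiny charm that reacts to lifecycle and relation events.
from ops.charm import CharmBase
from ops.main import main
from ops.model import ActiveStatus, BlockedStatus


class DemoCharm(CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        # Wire events to handlers: install, plus a hypothetical "database" relation.
        self.framework.observe(self.on.install, self._on_install)
        self.framework.observe(self.on.database_relation_joined, self._on_db_joined)
        self.framework.observe(self.on.database_relation_broken, self._on_db_broken)

    def _on_install(self, event):
        # Install packages, write config, etc.
        self.unit.status = ActiveStatus("installed")

    def _on_db_joined(self, event):
        # Another component appeared: read its details and reconfigure.
        self.unit.status = ActiveStatus("connected to database")

    def _on_db_broken(self, event):
        # The component went away: react to that.
        self.unit.status = BlockedStatus("waiting for database")


if __name__ == "__main__":
    main(DemoCharm)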

10:00 So that's something we're quite excited about; I think that's going to be released in a few months. So what's the relationship of that and OpenStack? Is there any? I used to work for the OpenStack team as a field software engineer, so at that time I was going to customers and helping them decide whether to go down the OpenStack route or not. OpenStack, as well, through Canonical is delivered in the same way, through what we call charms. So all the individual components of OpenStack are basically defined in Python and then connected together with what we call interfaces, which again are sort of Python-defined functions that are executed at the time that your infrastructure comes up and then mutates. Nice. So, a lot of people listening out there, I would imagine everyone has heard of Kubernetes, and probably Docker as well, maybe to a lesser degree OpenStack. You know, I've had some guys from OpenStack on before, Flavio. Mm hmm. But maybe just give us the elevator pitch of Kubernetes, Docker, and then this thing that you guys are building; like, why is that better than just running Kubernetes or your Docker Compose or something like that? Sure. So on a single-machine level, you have Docker, and Docker is basically just there to run containers. You can have either a single executable in that container or multiple, but obviously there's an amount of isolation that goes on with that container, namely network and resources. If you want to orchestrate something a bit bigger, and especially across several nodes, perhaps even several networks, you would use Kubernetes. Kubernetes is basically just there to decide when and where to place your containers (Docker in this case; it can be others, but we'll keep it Docker at this point), and then also what to do if there's any issue in either provisioning those or once they've started running. So Kubernetes is out there to basically ensure that what you've asked for remains the case. So if you've asked for, say, five nginx servers inside Docker, it keeps them running, and if one of them fails, it replaces it with a new one. Yeah, and Kubernetes is pretty good for rolling out new versions, right? It can do basically zero-downtime deployments. Yeah, exactly. So for example, if I have a web app, let's say in this case a Django app, and I release a new version of it, I can actually use Kubernetes to start rolling out the new version of the Django app, and then get, for example, the nginx load balancer to start firing 10% of the traffic at the new instance of our web app. And if there's a problem, it can just remove that part of our Django application and continue how it used to be; or indeed, if it's successful, we can keep changing that 10% up to 100 over time. So it's very flexible like that. You kind of come back to the software side of where you started with network engineering, right? Yes. There have definitely been quite a few instances where I've had to sit and remember how to work out subnets by hand again, which is something I wasn't expecting to use again, but I certainly appreciated having done it before. Yeah, absolutely. All right.
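Before the conversation moves on to F1, here is a minimal sketch of the rolling-update idea Joe just described, assuming the official kubernetes Python client and an already-existing Deployment; the deployment name, namespace, container name, and image tag are all made up for illustration, and this is a sketch of the general mechanism rather than anything the teams actually ran.

# Illustrative only: trigger a rolling update by patching a Deployment's image.
# Kubernetes then swaps pods gradually, honouring the rollout strategy, instead
# of taking the whole app down at once.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster
apps = client.AppsV1Api()

patch = {
    "spec": {
        # Only replace a little of the fleet at a time (the "10%" idea).
        "strategy": {
            "type": "RollingUpdate",
            "rollingUpdate": {"maxUnavailable": "10%", "maxSurge": "10%"},
        },
        "template": {
            "spec": {
                "containers": [
                    # Hypothetical names: a Django app container getting a new tag.
                    {"name": "django-app", "image": "example.com/django-app:2.0"}
                ]
            }
        },
    }
}

apps.patch_namespaced_deployment(name="django-app", namespace="default", body=patch)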
So let's talk about this, your Python journey through a couple of F1 teams. But I do want to set the stage first, by giving people a sense of the scale. Because when I learned about F1, it seemed to me like, okay, here's a race organization, and you watch it on TV, or you watch it somehow, and you see there's a team of 20 people in the little pit garage and on the pit wall, and you think, okay, these are the people that do the racing and the race car, and they talk about engineering. But then as you pay more attention, you realize, oh, and then this is where they actually went back and redesigned the carbon fiber tub so that they could change some setting, and then they redid it this year. You start to realize there's a huge, super advanced engineering organization that looks almost like an aerospace company, maybe. Yeah, these days aerodynamics is very much the biggest factor in Formula One. And as you say, you watch Formula One on the television, and you maybe see, as you say, 20 engineers and mechanics sat either on the pit wall itself or in the garage, ready to service the car. But in reality, there's another 400 or 500 people back at base who have been working on the tiniest details on that car to make sure that every bit of performance possible is being extracted. These days it's not uncommon to see aerodynamics departments alone with 100 people working in them, which, you know, is a large group in its own right.

14:32 You know, that alone would be a fairly modest software company, I'd say. Yeah, yeah. It's quite an incredible thing to realize just how many people are working on, like you said, just the aerodynamics, which was the part that you were associated with most, right? That's right. So I was an aerodynamics software engineer slash aerodynamic systems engineer, depending on who was giving me the title at the time. So effectively, I was the main person writing software and sort of ensuring

15:00 the quality of it, for these aerodynamics departments. And it did bleed, especially at the second team, into the vehicle dynamics department as well. Because obviously, what you're simulating, it's like shocks, springs, that kind of stuff, is that where that would be? Exactly, although usually nowadays all of that is driven mostly by aero. So even the suspension on a Formula One car, it never was about having the smoothest ride; it's mostly about keeping the car in the correct attitude to exploit the aerodynamics. So you'll see in F1... Yeah, and I guess it's probably worth also pointing out, like you said, aerodynamics are important, it's an important part... it's an insanely important part. You watch those cars go, and it looks like slot cars, those toy cars that had literally a little peg that would stick them to the track, right? They just zoom around, and it seems impossible. And that's because of all the aerodynamic force, and you listen to them and it's like, multiple times the weight of the car in aerodynamic force, right? Yeah, I would need to double-check this, but I'm pretty sure around the time that I was there, you're talking 4,000 kilos, so four metric tons, at the kind of 200 mile an hour mark. I know, mixing metric and imperial both.

16:10 At that kind of speed, it's in that ballpark of downforce. So it's multiple times the weight of the car, maybe not quite an order of magnitude, but not far off, in downforce. Yeah. And obviously it's not too difficult to just get that much downforce, but you need to do it without adding a huge amount of drag to the car as well. So the real research is downforce with no extra drag, no extra weight, not much extra weight, all those kinds of things. Yeah, exactly; otherwise you could just put a huge wing on it. Yes, like a 747 wing on it, right? It'd be fine. Exactly.

16:39 Massive angle of attack. It sounds worse, it sounds worse. And then it could only do... yeah, I guess. Yeah, there's also some interesting history in Formula One; the cars didn't always have these aerodynamics, and in the early days one of the first attempts was called the fan car. It was this bizarre car where they put a rubber skirt around the car and literally put a fan on it that just sucked the air out, like a vacuum sticking it to the surface. And that was around for a little while until it was, yeah, banned, although they actually withdrew it before it was officially banned; there was some odd politics going on. Interestingly, the guy that designed that has just come out with a road car with the same system in place. So if you want to experience that, you can buy it.

17:21 Amazing.

17:23 This portion of Talk Python to Me is sponsored by Linode. Simplify your infrastructure and cut your cloud bills in half with Linode's Linux virtual machines. Develop, deploy, and scale your modern applications faster and easier. Whether you're developing a personal project or managing large workloads, you deserve simple, affordable, and accessible cloud computing solutions. As listeners of Talk Python to Me, you'll get a $100 free credit. You can find all the details at talkpython.fm/linode. Linode has data centers around the world with the same simple and consistent pricing regardless of location. Just choose the data center that's nearest to your users. You also receive 24/7/365 human support with no tiers or handoffs, regardless of your plan size. You can choose shared and dedicated compute instances, or you can use your $100 in credit on S3-compatible object storage, managed Kubernetes clusters, and more. If it runs on Linux, it runs on Linode. Visit talkpython.fm/linode or click the link in your show notes, then click that create free account button to get started.

18:26 Alright, so in this context, with these really intense aerodynamic requirements that the cars have today, the first team that you worked for, at the time you were working for it, was called Force India. That's right. And now it's called Racing Point. Yep. And if people listen to this episode next year, it's going to be called Aston Martin, I believe. That's right, yeah. Okay. So that group, at least so far, has always been the pink car, so that makes it pretty obvious, right? Alright, so yeah, you said you started there, and you had that interview. Tell us about it: you showed up, there's a bunch of bash scripts. Yeah. Then what? So then basically the first job I had was to sit down and convert as much of that into Python as possible. There was an aerodynamicist, who has since moved on, who was also interested in Python; I'm not sure how he got interested in Python. He'd done a small amount of work, a sort of proof of concept, as it were. Yeah, before we get into too much of the detail, take a step back and just say what kind of problems you were solving, because I know a lot of people probably don't know the standard workflow of an aerodynamicist and so on, right? Sure. So effectively, the name of the game is to allow aerodynamicists to draw parts in CAD, and then within a certain amount of time be told whether that was a good thing or a bad thing. Obviously, we can do that with a physical wind tunnel, which is how Formula One teams have been doing it for decades. And then more recently, as technology has caught up, we can do that with simulations, which are called computational fluid dynamics simulations. So when I think back to my math experience,

20:00 it's like basically the hardest math seemed to be around fluid dynamics. So it sounds like there's a lot of computation, a lot of things going on there. It's not easy to do that kind of stuff, right? Sure. So laminar flow, very basic flow, is fairly easy to model. It's when you start talking about the turbulence that goes around the car, which is what all the teams exploit to really get a lot of that downforce, that you sort of get away from real mathematics into more sort of guesswork. I guess it's come from lots of years of getting as close as possible to having something that's real, but it's still a bit of guesswork. Right, right. Okay. So in this context, you showed up, and there's a bunch of bash scripts that piece together the CFD, computational fluid dynamics, tools, and maybe data coming out of the wind tunnels, and you're like, bash scripts? Why? Exactly. Which was very common; most of the guys that set this stuff up came from university, and that's how they did their projects at university. Right, they probably didn't come in as developers, but from other engineering disciplines. Exactly, yep. So they had the same attitude from university, which was: we just need to get this done, and we know bash. Which is perfectly fair enough, but we wanted something that was a bit more reliable, because obviously you change a line in bash and you've got no real accountability for that change, and if one part of the pipeline fails, then the whole thing would fail. So it needed to be made more robust. And we wanted to start sharing code, because there were these mammoth bash files for each discrete part of the process. I can imagine trying to debug those things, or if you want to change something, you're like, oh, we really don't want to touch that part, that part's too scary. There were days where we'd sit there, just staring at thousands of lines of bash, trying to find very, very small problems. Is there a debugger for bash? None that I've ever heard of. We were using Eclipse, the IDE, at the time, which had a bash theme that helped a lot, just even for counting brackets and checking it was syntactically correct. But I don't think there was specifically a debugger. Yeah, I'm sure someone will send us a message in the show notes, like, here's the debugger. But yeah, it's not easy to work through those things. Yeah, you could run stuff with -x, obviously, to get a step-by-step printout of what was going on, which we had to resort to quite often. But the level of verbosity that you'd get back would be far too much for what you really needed; you know, it would have just been a variable that hadn't been set correctly. And yeah, that'll take a day to debug. Okay, so you converted these over into Python, and you also wanted to share more code, start using databases, things like that. Yep. There was already some use of MySQL, and in the first instance a database that was being written to so that an existing web app could display mostly headline numbers, which could then be drilled down into. So once you'd changed a part on the car, it would tell you the kind of effects that that had. And so yeah, it was about really getting more into the database than just, you know, a summary, as it were.
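To illustrate the kind of conversion being described, here is a minimal sketch of one pipeline step wrapped in Python instead of bash, written so that a failure stops the run loudly instead of silently carrying on; the meshing command name and file paths are hypothetical, not from the episode.

# Illustrative only: one CFD pipeline step, converted from a bash script.
# The point is modularity and failing loudly, not the specific tool.
import logging
import subprocess
from pathlib import Path

log = logging.getLogger("pipeline")


def run_meshing(case_dir: Path) -> Path:
    """Run a (hypothetical) meshing tool for one simulation case."""
    mesh_file = case_dir / "car.mesh"
    cmd = ["mesh-tool", "--input", str(case_dir / "car.stl"), "--output", str(mesh_file)]
    log.info("Meshing %s", case_dir)
    # check=True raises CalledProcessError, so a failed step stops the whole
    # run instead of quietly continuing the way a long bash script might.
    subprocess.run(cmd, check=True, capture_output=True, text=True)
    if not mesh_file.exists():
        raise FileNotFoundError(f"mesh step produced no output for {case_dir}")
    return mesh_file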
Things like: we would need to produce images, and that was all filename-based to begin with, which is fine until you suddenly have a new variable, for example, on the image, and then suddenly your whole naming convention falls apart. So it was basically lots of little things, bringing it up to some grade of, I guess, enterprise grade, rather than the kind of university thesis grade that it was at. Sure. How did people interact with it? Was it basically a website or some kind of GUI app, or what was it? The biggest kind of front page, as it were, was a web app. At the first company it was all written in PHP, and there was an existing site when I started working there, and I sort of ended up doing quite a lot of modifications to that. It was all static pages when I first started, so I was trying to make them more interactive, so that people didn't have to keep hitting refresh every two minutes. And there's a lot of image manipulation, because you're talking about an entire car. So we would make thousands of images per car, per simulation, and to be able to browse those statically was not nice either, a full image loading each time you click next, or up or down, or however you're trying to navigate. So yeah, that was the kind of main entry point for someone. If they wanted to drill down at that point, they would usually open some specialized application where you can basically open the 3D model of the car plus all of the simulation data. So, you know, I'm sure people have seen a picture of a 3D model of a car in CAD. Now imagine the actual aerodynamic simulation data on top of that, which you can turn on and off and choose how it's being depicted, etc. Yeah, and that just required quite a lot of hardware to do, so that was something that was saved for when you really needed to drill down. Yeah, it still sounds like a really empowering thing, right? I mean, compared to doing physical testing and building physical parts, plugging them

25:00 into a wind tunnel. Exactly. Right, you can piece these tools together, dream parts up, see what the math says. Exactly. And look at them, right? It really streamlines it. You know, if you think about it, for the wind tunnel the part would have to be designed in CAD, it would then have to be effectively 3D printed (not in the way that your home 3D printer would do it, but a similar concept), and then it would have to be finished. Because if it were the home printer, the 150 mile an hour wind would just blow it apart, right? Yeah, and all the kind of small defects you get would render anything you've done useless as well, sadly. Okay. And then these model parts have to be finished by hand as well, so there was a team of probably 10 or 20 people literally just finishing small plastic-ish parts. And then the aerodynamicist would have to convince the team leader that this warranted some time in the wind tunnel. So then that part physically has to be affixed to the model in the wind tunnel, that run is then performed, and then you get the data from that. Whereas with the CFD approach, the compute simulation approach, at that point it's just making a change in CAD and then getting the result back. It would still take a few hours to run, but compared to the time and labor that goes into the physical parts, it's very empowering, as you say. It's much quicker. It's way better, yeah. What kind of compute did you guys have? Just machines you could send it off to, or did you have grid computing or high-end clusters? So the Formula One teams tend to be quite, I'd say paranoid is probably the word that comes to mind, when it comes to compute. So there's no cloud computing done, which I think is a shame, because we could really exploit that. So the teams have physical clusters on site. Due to a quirk in the rules, teams had very specific clusters at the time as well, because there were limits on how many floating point operations we could perform within a time. No way. So, I mean, I'm sure a lot of people don't know, there are actual limits on how much of all sorts of testing you're allowed to do: how much wind tunnel time you're allowed per year, how much time on a test track. That's right. All these different things are highly, highly regulated, but down to the floating point operation, literally. And as I'm sure some people are aware, modern CPUs tend to do multiple floating point operations per clock cycle. But the simulation code that we were using, which most teams use, because it was written quite a while ago, would only exploit a single one (some, I think, would exploit two), but it meant that the number you were giving to the regulating body had to be pegged to the CPU: if your CPUs were capable of performing eight floating point operations per cycle, that's what you had to tell the governing body. So the most important thing wasn't necessarily the speed; your resource was the number of CPU operations, in a sense. Yeah, exactly. And how do you minimize that? Okay. And literally, AMD sponsored one of the teams in the early days of this rule, and they pulled an old, I can't remember which, I think it was the Bulldozer spec at the time, if I recall an old Opteron, one of the old server CPUs from AMD, so nothing special at all, off the shelf. It would have been almost free because they were trying to get rid of them.
And they basically, sort of through hardware, killed off all but one of those floating point operations per cycle, and then went, oh, we've got a perfect Formula One chip here, and then sold it for obscene amounts of money to the teams. So most teams were running with those for a while. That's since been rectified, which is good to see. But yeah, I'm trying to recall how many we had; we probably had several thousand cores. Each job would run on a few hundred cores spread across several machines, several hundred gigabytes of memory, if not getting into the terabytes of memory, per simulation. So yeah, they were quite hefty jobs, and they would take anything between four and eight hours to process, depending on the size of the model that you were using. I'm still blown away that that's the metric you've got to worry about, the number of CPU operations. Yes. It really must drive a lot of non-intuitive or non-obvious decisions or choices or trade-offs, right? Like, oh, we could probably do this part in Python; oh, but there's a lot of operations there, we're going to write that in assembly or something weird like that, right? So we were quite lucky in that sense, because the actual simulation software itself we bought in, and so at that point we just said, this is what it says on the box, so we'll assume this is correct. Although things like loading the data in were a big cost, because that was on the clock, effectively; as soon as the job started, the clock's running. So the actual software itself, how many clock cycles it was consuming, wasn't a big issue. It was just about getting stuff done as quickly as possible, anything that isn't simulation time. Right. You know, if you have a bit of Python sat there

30:00 loading some artifact, and that's taking five minutes, that's a big problem. So anything that was blocking, we had to pay real special attention to. How interesting; that's such an interesting constraint. Maybe give us a sense of some of the libraries and stuff that you were using there. I mean, before you answer, one of the things that never ends... I'm always blown away, I guess I should put it that way: on one hand, things feel so different, this high-end custom engineering company that is an F1 racing team, compared to a grocery store, or a software team that optimizes which offers to send to people. But you look at the tools, you look at some of the programming, and it sounds really similar a lot of the time, right? Yeah, even though the special sauce is absolutely different, internally it looks a lot the same. All we were really doing was gluing bits of existing software together and then trying to optimize around that. So we used NumPy, for example, for handling all the numerical data that comes in after a simulation, and just trying to, for example, average over it. So this is one thing that I was really wondering about when I saw you making a note of it: if you had terabytes of data, how do you load that up, right? Like, where does that go? Were you using some sort of distributed computing, or processing it in little parts? I mean, Dask might make it work right across different machines. But what was the story there? So we had dedicated infrastructure for loading in many cases. These things would have around a terabyte of RAM. The jobs that come off the distributed, like, actual compute side of the cluster would write out, let's say, a terabyte file; these were then loaded onto another node, which isn't on the regulation clock as it were, because we'd finished the actual compute part, the actual simulation part. And then we can start actually decimating this data and getting what we want out of it. So this bit can take a bit longer; we obviously still want it to be performant, because we don't want stuff waiting in the queue. So these were huge nodes, with really fancy Nvidia graphics cards in them, pretty big CPUs as well, but the RAM was the big thing. I mean, especially back in the early 2010s, a terabyte of RAM was pretty serious. And yeah, far bigger than the hard disk space. And the power supply for all these computers, right? Like, there must have been a huge power system. Yeah, even the cooling in what was a fairly small server room was huge, because these things were just lit up all the time. Okay, so you had machines that were heavy enough that they could basically just load it all up anyway, yeah, all onto one node. And then we would use a mixture of some open source applications that are designed to load in these sort of big files. ParaView is the main one; we replaced some commercial software with ParaView, because ParaView is free and open source, so we could actually develop against it and not have to pay for the commercial side of it. And we did actually pay them for support, which ended up being really nice, actually. And so yeah, we would use software like that, as well as some Python libraries like NumPy and matplotlib, and use these in combination to make lots of images of the car, lots of plots, because, of course, we need to see as many plots as possible.
Yeah. And sort of try and average out a lot of the data into something meaningful, because no one can sit back and look at a terabyte of data. We need to pull out the kind of headlines from that, and they'd be averaged over a lot of that data, just to get meaningful numbers. Oh, well, it sounds like a really interesting thing that you guys put together there. So you did that for a couple of years? Yep, four or five years. And then you moved on to another team, which also decided to change its name? Yes, because that's your history, apparently.
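As a rough sketch of that "average it down and plot it" step, assuming the raw pressure data has already been extracted to an array on disk; the array shape, file names, and axis labels are illustrative, not from the team's actual tooling.

# Illustrative only: boil a big block of simulation output down to a
# headline number and a saved image, with NumPy and matplotlib.
import numpy as np
import matplotlib

matplotlib.use("Agg")  # render to files, no display needed on a cluster node
import matplotlib.pyplot as plt

# Hypothetical data: pressure samples over (timesteps, points on a slice plane).
pressure = np.load("slice_pressure.npy")

# Average over time to get something a human can actually look at.
mean_pressure = pressure.mean(axis=0)
headline = float(mean_pressure.mean())  # one number for the web app's front page

fig, ax = plt.subplots(figsize=(8, 3))
ax.plot(mean_pressure)
ax.set_xlabel("point index along slice")
ax.set_ylabel("mean pressure")
ax.set_title(f"Time-averaged pressure (headline: {headline:.1f})")
fig.savefig("slice_pressure.png", dpi=150)
plt.close(fig)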

33:54 So, Scuderia Toro Rosso, which is now AlphaTauri. Yes, that's right. All right, so that's the Red Bull sister team. Exactly. Toro Rosso is just Red Bull in Italian, which is a joke that gets made quite a lot. So yeah, I spent quite a few years at Force India, and we did some great work; there were some really, really good people. And we spent a long time converting what was there into a much more streamlined and, I keep using that word, enterprise process. Did you look back at that time with a lot of pride, like, look at that transition? Yes. We really helped move things on on the technical side of engineering there, definitely. And the fact that there's still a great team there carrying that on, you know, it wasn't a case of me leaving and it all stopping; they're very much continuing and still making big strides with stuff there, which is really nice to see. In fact, I think they've done a few of the rewrites that I was hoping to do there, which is partly why I ended up moving on. I'd rewritten a lot of the bash into Python, and I really wanted to rewrite that PHP web app, because I'd done as much as I could by

35:00 slapping a kind of single page application on top of an existing PHP site, and I really wanted to actually use Django. I'd made the proposal there to replace it with Django, because we were already very invested in Python at that point. So yeah, then I got a phone call one night from someone up there. It's amazing how this works in Formula One; there's a lot of churn between the teams. So I got a call saying, oh, we heard that you've translated a lot of stuff from bash to Python, we're looking to replace our processes as well, would you be interested? And I sort of said, well, yeah, I've been waiting a while to get the go-ahead to do this here, so fine, I'll come and I'll try it there instead. Yeah, that's really cool. What I think is really interesting about that transition is that you'd bumped up against all the challenges, and, you know, you made a lot of progress at the first company, but it was still stuck in kind of the way it was before. And here's a chance to say, all right, if I could reinvent that world and do it the way I now know it should be done... that was your chance here, right? Which most people don't really get, a chance to kind of put those two things side by side. So what was that like? Yeah, I agree. I mean, I fully appreciate why Force India didn't just want to start everything from scratch, because everything was much better than it had been four years before, so why keep going? So yeah, it was really a good opportunity. I really had been planning how I might do it from scratch in my head, especially using Django. At the time, Django was becoming quite popular, and I'd kind of got fed up with writing inline SQL in Python and PHP at that point. It's not really great in either place.

36:41 No, luckily, everything's fairly secure and tied down in that environment. It's very much air-gapped; there's no internet connectivity. So no little Bobby Tables. Exactly, yeah. You'd have to be an aerodynamicist doing it; it'd be interesting to see if they could put SQL injection into a CAD model. It would be quite a feat. Indeed. So tell us about when you had this perspective, the chance to kind of redo things; tell us about that journey. So yeah, my main goal really was to bring together both the web app side and the side that was running all of the simulations. Because, as we'd set it up at Force India, it was very much: you had a bunch of code that was running these discrete steps through the cluster and then spat out some data at the end, which was then picked up by the web front end. What I really wanted to do was make that much more of a single application, a bit of a behemoth probably, and to leverage a lot of the stuff that Django gives you. The ORM, for example, is a real win from my perspective, you know, having your database written out in code. Yeah, ORMs sometimes get a bad rap, but I feel like 80% of the time it's absolutely no contest, it should just obviously be the thing used, and every now and then maybe it's not the right answer. But that's not the main case; it's so nice to work with those things. I agree, especially when you have a lot of relational data like we would have. Of course, we have an overarching object, which is the simulation you've just run, and that's connected to hundreds of discrete fields and rows over different tables. Trying to manage that in your mind, in something that's not in code, as it were, is not nice; having it all nicely laid out in Python, where your Python linter can check that what you're doing is sane as you pass objects around, makes it not just easier but, in my mind, a lot safer as well. Yeah, there was some, I'm not sure where they got the data, whether it was some internal research or just internally keeping a record of how many bugs there were, but when I talked to Łukasz Langa about his time at Facebook and Instagram, they converted a whole bunch of stuff to have type hints, basically so the system could know what's going on, and they said they dropped the number of bugs they ran into quite a bit, just by having that kind of how-does-this-hang-together analysis in there, like you're talking about. Yeah, I can definitely see that. The moment that type hinting became a first-class citizen in Python, I was straight on it. Because it does catch, just as you say, the silly things that people do. I mean, not all of them, but it gets rid of a lot of the silly mistakes you make, a lot more than you'd first think, as well. Yeah, many of them. Yeah, for sure, I'm with you, it's fantastic. Yeah. And the ORM is doing that for you on a database level, and it just means you don't have to keep creating DB connections in code all over the place. If you've got part of your application that needs to access the database, it's just an object from Python's view. To me, it simplifies so much of that environment that I was very happy to use it. Nice, to rebuild a lot of that workflow over that PHP sort of static-site thing, exactly, in Django. And that sounds like it was a really good experience, right?
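To picture the relational layout being described, here is a minimal sketch using the Django ORM: one overarching simulation object with many related result rows. The model names, fields, and the "Cz" example are invented for illustration and are not the team's actual schema.

# Illustrative only: an overarching Simulation with related result rows,
# expressed as Django ORM models instead of hand-written SQL.
from django.db import models


class Simulation(models.Model):
    name = models.CharField(max_length=200)
    submitted_at = models.DateTimeField(auto_now_add=True)
    status = models.CharField(max_length=32, default="queued")


class ResultField(models.Model):
    # Hundreds of discrete values hang off one simulation run.
    simulation = models.ForeignKey(
        Simulation, on_delete=models.CASCADE, related_name="results"
    )
    name = models.CharField(max_length=100)   # e.g. a downforce coefficient
    value = models.FloatField()


# Usage sketch: no inline SQL, and linters/type checkers can see the objects.
# latest = Simulation.objects.latest("submitted_at")
# downforce = latest.results.get(name="Cz").value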
Like, what was the contrast between a PHP site and Django? I think PHP has moved on quite a lot now, but at the time, when I was using it,

40:00 it was mostly just a templating language, or at least we were using it as such. Right, and 2010 PHP was probably already not brand new at that point; it had probably been around for a little while. So we're talking like 2005 PHP or something along those lines, right? Probably, yes. So yeah, okay, it was a massive difference. All of the front end was defined effectively in PHP; I then slapped on a bunch of JavaScript to try and make it a bit more of an SPA, but obviously there's a limit to that. I wanted to make this new sort of management site a completely first-class SPA, because when you're dealing with so many different objects on your web page being updated constantly (the simulation is spitting out new data every few seconds), and you want to track that, you don't want to be sat there refreshing the page every couple of minutes, right? Yeah, it sounds super cool. What JavaScript framework did you pick? In the end, Durandal, which isn't that well known. The guy who wrote it now works at Microsoft. I've heard of it, but I've never used it. Yeah, so it's now been sort of superseded by Aurelia. Eisenberg, I think, is the guy's name. Mm hmm, just look up Aurelia. Rob Eisenberg, is that right? That's the one, yeah, exactly. And the reason why I liked it is that it's not very intrusive. If you know JavaScript, you'll get along very well with Aurelia or Durandal, because it just looks like JavaScript; you're just making observable variables, effectively. Whereas we played around with Angular for a bit, and it just felt like I was writing something completely new, and I didn't really want to spend time learning a completely new world, effectively. Yeah, for sure, I had the same sort of feeling about Angular. Yeah. So especially for what we were doing, I found it a great front-end toolkit, honestly. All the stuff I'd done at Force India I'd written pretty much from scratch, a lot of jQuery to polyfill, because we had a mix of Internet Explorer on the Windows machines and Firefox on all the Linux machines. But mostly it was all from scratch, which I didn't want to do again, because that hits limitations pretty quickly, of course. So we didn't use much of the Django templating itself; we weren't creating that many static pages with Django. We were mostly using the views as an API server, and we would just serve up the SPA first, right? Just basically teach the views to return JSON, and you're good. Exactly. I like that. Yeah. Okay, so one thing that you put in the notes here that I think is super interesting is that some of the views were performance critical, and they were backed by a C extension, a PyObject implementation. What does this... tell me about this. So I'm trying to think precisely what it was doing; it was basically just collecting a bunch of numbers and then either multiplying them together or doing some function on a bunch of numbers. And yeah, that was taking too long for Python to do on its own. So we ended up basically writing that in C, using the Python C API and PyObject, and then obviously you can just import that, right, exactly, into your Django view, which is really, really nice. And it sped up the function massively. And that was a function that was called multiple times in a page load, which isn't the nicest thing to do, but it had to be done. So the page load speed was improved dramatically.
I think that's a really interesting escape hatch, because Python is so nice for so many things, but there are certain things where it's just kind of slow, right? And a lot of the libraries that get used in those places, like NumPy and whatnot, they actually just fall back to C internally; you just don't have to think about it, right? But the ability to say, oh, I'm going to rewrite this in C or in Rust or something, just for this little tiny bit; you don't rewrite the whole thing, just those 10 or 20 lines, or whatever it is. Yeah. And that's all it was. I think we wrote it once, compiled it, shipped it, and then there was one alteration done a month later, and that was it. It just sat there for years and we never touched it again. Beautiful. Yeah, the reason it's so interesting to me is that I often think about this in terms of data science computation, around those tools, sure. But in terms of Django, it's just interesting to see it there. It's cool. Yeah, because we were using Django for pretty much everything except the actual driving of the simulations themselves. We used Celery quite extensively for queuing and running micro tasks. So the simulations, at this point, were actually kicking off a bunch of Celery tasks to go and extract data, etc., etc. So we had two very big nodes, not quite as big as the nodes that I mentioned before, running this Django app across the two of them. And just having Django manage all of that, with the fact that we can just import the ORM to dump data into the database, was really, really nice. Sounds super cool. Another thing it sounds like you got to displace this time was a little bit of MATLAB. Yes. So that's the sort of smaller work I did with the vehicle science side of things. Basically, I think any software developer's job in F1 is replacing either bash, Excel, or MATLAB, because... Yeah, we're gonna replace one of the three, pick one, right? Yeah, I did all three, actually.
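Here is a minimal sketch of the Celery pattern mentioned above, where a finished simulation fans out small extraction jobs to workers; the broker URL, task names, and arguments are placeholders, not the team's actual code.

# Illustrative only: queue small "extract this bit of data" jobs with Celery
# instead of doing everything inline when a simulation finishes.
from celery import Celery

app = Celery("aero_tasks", broker="redis://localhost:6379/0")  # placeholder broker


@app.task
def extract_slice(simulation_id: int, plane: str) -> str:
    """Pull one slice of data out of a finished run (details omitted)."""
    # ...load the result file, average it, write an image or a DB row...
    return f"simulation {simulation_id}: extracted {plane}"


# When a run completes, fan out lots of little tasks to the workers:
# for plane in ["y=0", "z=50", "z=100"]:
#     extract_slice.delay(simulation_id=42, plane=plane)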

45:00 Believe it or not, quite a few teams, as of recently, were running wind tunnels out of Excel, which is still... whoever wrote the VB to make that work is insane and a genius at the same time. A mad genius; it's on the line between the two, for sure. Yeah, I interviewed Dane from Richard Childress Racing, the NASCAR team, and he talked about the stuff he was doing with wind tunnels there as well, and yeah, it was an insane amount of Excel they were doing over there in NASCAR too. I mean, if you have the tool, and you don't really know programming that well, but if we put it here, we can piece it together... I can see how you kind of put yourself into that corner: it works over here, but gosh, is it slow and hard, right? Exactly. Again, it was when you had people who had only ever used Excel, who were starting the teams, or at least coming into the teams when the technology was picking up; this is what they picked, that's what they knew. Yeah. And you soon outpace that. I mean, obviously, Excel gets quite flaky when you put this level of data in, but then also you have the problem that you've got one instance that can be open, and that's it. So you had the situation where someone in the office next door would open your Excel sheet, and then the wind tunnel would stop adding data to the sheet, because someone else had opened it. Because Excel took a lock on that file and it couldn't be written to, or something like that, right? Yeah, exactly. Well, it worked, and then we were very happy to replace it. I mean, as much fun as it is to make fun of Excel, and there are all these examples of minor Excel errors leading to really catastrophic decisions, like large investments that were very bad and all sorts of stuff, I think it's also worth just admiring. Like you said, these people came in and they just knew Excel, but they built this simulation thing when they didn't really know programming or a programming language, and they still did it. That's pretty awesome. Yeah, no, I agree. But, you know, that applies to the bash stuff as well. I don't think I've met many developers that could sit down and write 10,000 lines of bash. You know, I don't know if I could do it; I might quit before I got there.

47:05 And it worked pretty well, for five or ten years. It's certainly impressive in its own right. Yeah. So a lot of these things are both, yeah, a lot of these are both triumphs, but also it's like, time to move on, right? Exactly, yeah. So, sorry, I sort of derailed you from this MATLAB thing; tell me about that. So on the MATLAB side, unlike the other two (we wanted to replace bash and Excel because of the obvious limitations), the reason we wanted to replace MATLAB was purely the cost, and the cost for what you gain from it. You know, MATLAB can be very good when you're doing very complex things; it can even simulate discrete bits of electronics if you want it to. So we have people who will make a model for how a tire deforms, and then that becomes a component in MATLAB, and it's nice that you can then share that with other people in the company, right? But a lot of the stuff they do is just analyzing how many times a tire heats up and cools down during a lap, and what window of temperature it sits in, and you just don't need to be paying several thousand dollars a year to work that out. So... Well, and it also gets harder to run in situations like, say, with Docker or other stuff, right? If you want to put it on a server or scale it out, all of a sudden you're like, well, we've got to get approval to run it on 10 machines instead of one. It's not even necessarily the cost; it's just, why is this friction here? Yes, when I can just as easily do it somewhere else. Yeah, exactly. And you'd have times where a race engineer might drop their laptop and get a new one set up for the race weekend, but their MATLAB license was tied to the old laptop, so they couldn't get it immediately. So that was a problem. So the job was basically just to work with the vehicle science department to make a few Python scripts, effectively (they weren't much more than just scripts), to replace as much of MATLAB as we could. And really it was just me teaching them the basics of NumPy and matplotlib, and then they just took it and ran with it. Yeah, beautiful. There's so much of NumPy that replaces MATLAB. And as you say, not only is it something that they can do on any laptop, you can then start actually scheduling it as well. So rather than having to manually run it every lap or whatever, they can just schedule it. Yeah. So you replace MATLAB with NumPy and Excel with pandas, and you're kind of good, right? Exactly. You're off to the races, quite literally. Beautiful.
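A rough sketch of the sort of analysis that moved from MATLAB to Python, assuming a logged channel of tire temperature over a lap; the file name, column names, and temperature window are made up for illustration.

# Illustrative only: how often a tyre cycles through its temperature window
# over a lap, done with pandas/NumPy instead of a MATLAB licence.
import numpy as np
import pandas as pd

# Hypothetical logged data: one row per sample, temperature in Celsius.
lap = pd.read_csv("lap_telemetry.csv")  # columns: time_s, tyre_temp_c
window_low, window_high = 85.0, 110.0   # made-up working window

in_window = lap["tyre_temp_c"].between(window_low, window_high)
fraction_in_window = in_window.sum() / len(lap)

# Count warm-up cycles: transitions from below the window to inside/above it.
below = lap["tyre_temp_c"] < window_low
cycles = int(np.sum(below.shift(1, fill_value=False) & ~below))

print(f"{fraction_in_window:.0%} of samples in window, {cycles} warm-up cycles")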

50:00 Yeah, so that sounds like a really fun project you did over three or four years: just go, okay, what if we started over? What if we really built what we wanted instead of what we were able to do? How did people in your department at the team react to seeing that transformation? So it was a good mix. One of the things I really enjoyed about working in F1 is that you're sat next to your customers, which I know might sound off-putting, but honestly it was great. No, it's super cool. Somebody comes over and says, this is beautiful, what you've built, but if you could just do it slightly like this... You have, you know... well, next release? And we're like, okay, give me five minutes, let me make that tweak. And then they're like, yeah, this is fantastic. Exactly, right. That's a cool experience; I've had that too. And just seeing someone's face light up when you've gone from a very static web page, where you're having to load literally an axis-by-axis grid of plots, to being able to, at the click of a button, change the variable on one axis, change what position in the car that plot is looking at, then save that, add another one to the page, compare the two, overlay them, etc. It's great to sit and see that. You know, everyone in Formula One's under a lot of pressure, so anything you can do to help take that pressure off people, they're going to appreciate very quickly. Yeah, and anything that helps them find performance, because then they look good. Sadly, it's not me getting a pat on the back; that goes to someone designing a great new bit of aero that's adding performance to the car. But it is nice to know that the processes I've put in place enable them to do it. Yeah, super cool. Are you still a fan of F1? Do you watch it these days? Not as much. I just found out that it's on ESPN, I think, in the US; I moved to the US recently, so I'm still getting my bearings on how to find the various TV shows and things I want to watch. But I watched Imola and the Portuguese Grand Prix, mainly because they're two very exciting tracks for me. The Toro Rosso factory is very close to Imola, so I've walked the track quite a few times, I've been for coffee outside the track many times, and I've always wanted to do a track day there. Sadly the stars never aligned. It would have been fun, though I probably would have got a bit too excited and ended up crashing. But it's a particularly nice track, so one that I was very keen to get some time on. Let's still hope; I still have friends who live near there, so maybe I'll pay up and do a proper track day. You might make it back there. Yeah, I've been back since leaving, a couple of times actually, for sort of mini holidays and things, and you can go and rent budget race cars there to do a few laps in, which I might do, hopefully after the pandemic eases. Yeah, that sounds fantastic; I'd love to do that too. Alright, so one final question. I kind of want to put all these pieces together and get your thoughts on it. This racing world, my sense is, it's pretty high pressure, it's high stress, it demands a lot of your time. And you talked about these events, right? The software has to be ready to deliver on a date, and it's not like, well, we'll push the release a week, right? It's got to be there for that thing. And now you're working for a tech company building almost like an operating system layer for deployment. What does that comparison look like? If people are in one and they want to think about the other, what do your two worlds look like? So certainly my work-life balance has shifted towards life quite a lot, which is nice. I mean, it was a great seven or eight years that I spent; I learned a lot, and the pressure was a good thing, it was the right amount to help you learn. The way I can think to describe it is: move fast, but don't break things, basically. Anything that went wrong, we had to fix quickly. Towards the latter part of my time at Toro Rosso, we tried to get some real sprint planning going, rather than just picking the stuff that was being shouted for the loudest. So we tried to get some more stability, but ultimately, if something's broken... Unfortunately, the things that would break wouldn't usually be our tools; it would be maybe the company we'd paid for the simulation software. Something would change in that, and we had to rewrite the function that read data from it. So that stuff obviously had to happen very quickly when it does go wrong, because you just can't waste a minute. Or if, you know, a node goes down because there's a problem in the data center or something, we have to mitigate that really, really fast, as well as delivering all of these fancy new toys that we've promised, and we can't break those either. So it is nice. Oddly, we did barely any integration testing; it was never really on our agenda. It was more sort of get as many features out as possible. And I have to admit, I do much prefer now being somewhere where testing is mandatory. You know, it's just a little more of a safety net. Exactly, it's a lot more reassuring to hit the release button when it's been through at least some agreed-upon tests, rather than just being like, well, I've tested this on my laptop, you know, my work laptop, hopefully it will work on everyone else's. Ha, it works on my machine, we're good. Exactly, ship it. Nice, cool. Yeah, if anyone's thinking about going into that industry, then it's going to be intense, for sure. If you can thrive off that environment, then you're gonna love it. I think a lot of people are in the mindset that they can thrive on it for a while, and then they just need to have a bit of a break. Some people, yeah, have put up with it for 20 or 30
Anything that went wrong, we had to fix quickly, right? Towards the latter part of my time at Toro Rosso, we tried to get some sort of real sprint planning going, rather than just picking whatever was being shouted for the loudest. So we tried to get some more calm, some stability, but ultimately, if something's broken... Unfortunately, the things that would break wouldn't usually be our tools; it would be maybe the company that we'd paid to provide the simulation software. Something would change in that, and we'd have to rewrite the function that read data from it. So that stuff obviously had to happen very quickly when things do go wrong, because you just can't waste a minute. Or if a node goes down because there's a problem in the data center or something, we have to mitigate that really, really fast, as well as delivering all of these fancy new toys that we've promised, and we can't break those either. Oddly, we did barely any integration testing. It was never really on our agenda; it was more sort of get as many features out as possible. And I have to admit, I do much prefer now being somewhere where testing is mandatory, you know, just a little more of a safety net. Exactly. It's a lot more reassuring to hit the release button when it's been through at least some agreed-upon tests, rather than just being like, well, I've tested this on my work laptop, hopefully it will work on everyone else's. Ah, it works on my machine, we're good. Exactly, ship it. Nice, cool. Yeah, if anyone's thinking about going into that industry, then it's going to be intense, for sure. If you can thrive off that environment, then you're going to love it. I think a lot of people are in the kind of mindset that they can thrive on it for a while and then they just need a bit of a break. Some people have stuck with it for 20 or 30 years.

55:00 You know, I've worked with some people who've been in F1 for that kind of length of time and are still loving it, which is great to see. And yeah, good for them, right? I'm a big believer that careers go in sort of seasons. There's a part of your life where you might be young and energetic, you don't have a large family time commitment, and something like this would just be perfect. But that same person ten years later might actually hate it, right? They don't want to be away from the family, they don't want to be away from home. Exactly, it depends where you are, not just who you are; where you are in your career. Yeah, exactly. It's just been nice working on different things. We never really exploited Kubernetes in either of the teams that I worked for. It was something we were starting to look at, but most of our provisioning was bare metal provisioning. So to have that software layer on top of it, and we were talking earlier about the operations-per-cycle limits, having a bunch of virtual networks that Kubernetes puts together on top of that is just going to suck cycles away. Right. So we were thinking about doing that with the Django app, actually, to put that in a Kubernetes cluster across those two nodes. But, you know, in the end, Docker Swarm was plenty for that; there's no need to complicate it further. Right, there's always the opportunity to, though. There is. Well, that's a super interesting look inside your time in F1, and it sounds like Python played a hugely significant role there. Yeah, definitely. I'm fairly confident that the majority of teams are pretty deep into Python now, purely because it works great. It's the kind of glue code that you need to be driving these commercial software packages that are used for each stage of simulation, and then, with the same bit of code, you can start making a bunch of images and plots, and you can have your web server running in it. It's so diverse in that sense, and fairly easy to pick up. I should mention that, although I was the kind of primary software engineer at both teams, for the aerodynamics department at least, I had a lot of input from people who are trained aerodynamicists but wanted to learn Python, and for the most part they found it quite easy. What is this magic that you're wielding over here?
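Joe's "glue code" point, Python driving a commercial solver and then producing images and plots from the same script, looks roughly like the sketch below. The solver command, its flags, and the output file are purely hypothetical stand-ins, not any team's real toolchain.

```python
# Hedged sketch of Python as glue: run an external (hypothetical) simulation tool,
# read its results back, and plot them, all from one script.
import subprocess
import pandas as pd
import matplotlib.pyplot as plt

def run_stage(case_dir: str) -> pd.DataFrame:
    # Invoke a made-up command-line solver for one stage of the simulation chain.
    subprocess.run(["cfd_solver", "--case", case_dir], check=True)
    # Assume the solver writes a forces.csv we can read straight back into pandas.
    return pd.read_csv(f"{case_dir}/forces.csv")

if __name__ == "__main__":
    results = run_stage("run_001")
    plt.plot(results["iteration"], results["downforce"])
    plt.xlabel("Iteration")
    plt.ylabel("Downforce (N)")
    plt.savefig("run_001/downforce.png")
```

The same pattern extends to serving those images from a small web app, which is the web-server side Joe mentions.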

57:07 Exactly. But if, for example, you wanted a certain picture to be made that we weren't currently making, an aerodynamicist could just sit down, change a few lines of code, and push it into a git branch, and we could have that running in a couple of hours. It's really powerful to give someone the ability to say, right, I want this added to this big chain of steps, and have it just work. Yeah, super cool. All right, well, I think we're out of time, but it's definitely a neat look inside. Now, before I let you out of here, though, I've got to ask you the two questions that go at the end. Of course, sure. You're going to write some Python code, what editor do you use? So now I use VS Code. I'll admit that I used PyCharm through my whole F1 career, but I've been pulled over to the VS Code side. Interesting. Yeah, and I think it's down to those two these days; those are the most popular answers. It used to be a lot more variety when I started the show, but now it seems to be settling into those two camps. And then a notable PyPI package, something cool you've run across that maybe people should know about? Yeah, so I've been writing some tests recently, and I had a function that would run a part of this code, and basically it would take a while for the application running behind it to settle. I didn't really want to put a bunch of sleeps in there, that just seemed bad, and I didn't want to write my own back-off loop. So I found there's a package called backoff, and you can just decorate the methods or functions that you want to be backed off, and then provide a bunch of parameters, like how many times you want it to try and how long you want it to wait before the next attempt. It was really nice to literally put the one decorator around your function, and it's done. Yeah, that's really cool. And you can do things like back off on exception for these types of exceptions, then just slow down and try again, and so on. Yeah, it's really nice, especially for writing tests. Something like too many requests, is that one of the 400-level errors? There's one that says Too Many Requests, a 429. That one, let's just slow it down a little. Yeah, so if anyone's dealt with the AWS API and has hit the rate limiter there, this is a great library for that. Okay, cool. Yeah, it's a great one, and I hadn't heard of it before. Very neat. All right, final call to action. I'll give you two angles here. One, if people want to get into some kind of racing career with software development skills, what should they do? And then maybe getting started with your Kubernetes charms, the Charmed Kubernetes stuff? Sure. So if someone wants to get into the Formula One software development side, there are really two approaches. I think most of the aerodynamics work, both the CFD and the wind tunnel side, will be Python these days. A lot of C# is being used in Formula One now as well, so if there's anyone who's got a taste for C#, anything that's done on Windows will be written in C# now, for example the driver-in-the-loop simulators, you know, the actual simulator the driver sits in.
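Picking up the backoff package Joe mentioned just above: here is a minimal sketch of decorating a flaky call so it retries with exponential back-off instead of scattering sleeps through a test. The URL and retry limits are placeholders, not anything from the show.

```python
# Minimal sketch of the backoff package: retry on exceptions with exponential waits.
import backoff
import requests

@backoff.on_exception(backoff.expo,                         # exponential wait between tries
                      requests.exceptions.RequestException, # retry on any requests error
                      max_tries=5,                          # give up after five attempts
                      max_time=30)                          # ...or after 30 seconds total
def fetch_health():
    # Placeholder endpoint for an application that takes a while to settle.
    resp = requests.get("http://localhost:8080/health", timeout=2)
    resp.raise_for_status()   # a 429 or 5xx raises HTTPError, which triggers a retry
    return resp.json()
```

The decorator is the whole story: its parameters control how many attempts to make and how long to wait, which is exactly the "one decorator and it's done" behaviour described above.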

01:00:00 So yeah, you don't even really have to be that big into F1. I happen to be, because I've been racing since I was a kid, but as long as you've got some sort of basis in Python and/or C#, just apply, just try. I really wasn't that experienced; I got a lucky break and exploited it as much as I could. And then Charmed Kubernetes, for people who are interested in that? Yeah, sure. So we at Canonical currently have two products that we're working on at the moment. One is called MicroK8s, which is a very opinionated distribution of Kubernetes. It started off as a single-node distribution, so if you just had your dev laptop and you wanted to test some Kubernetes application that you were writing, you could literally install MicroK8s in one command. Now, with a lot of demand coming from IoT and edge devices, you can run it as a proper Kubernetes cluster. So it's literally one command to install Kubernetes, and then another one to add a node to your cluster, and you've got it. If anyone's used kubeadm, it's similar, except you've got everything in your MicroK8s package: the container runtime, the API server, and everything else. Oh yeah, that's neat. Normally I think of Kubernetes as being a server-side, big-server type of infrastructure. Right, exactly. So one of the things that also put me off using Kubernetes in Formula One was that testing on it was quite tricky then. With stuff like MicroK8s, it's really easy: you can just define the infrastructure you want to test and run it in MicroK8s, whether on your local dev machine or in CI/CD. And now, if you've got a bunch of Raspberry Pis running on some edge platform, you can do the same; it's just a single command to install it, you have everything you need in that single package, and you add them to the cluster with a single command as well. So, two commands to install it and enroll it. And then if you want a less opinionated Kubernetes distribution, there's Charmed Kubernetes, where all of the actual code behind it is written in Python. It runs off a product called Juju, which is basically the program that deploys and then monitors your Kubernetes cluster and enforces that what you've asked for stays that way. You can choose between the different runtimes, so Docker or containerd, and you can swap out your CNI, so Flannel, Calico, etc., all with single-line commands, and you can do it on a running cluster. Because of this sort of Python infrastructure code running in the background, it's handling all of that for you: you're not having to either reapply it or gracefully bring down your cluster first, it will do all of this on a running cluster. Wow, that sounds neat. Awesome. So yeah, two very different paths for people to check out, depending on what they're interested in. Let me just ask you real quick, because you mentioned it: what kind of racing did you do? I did endurance karting. There was a series called EPEC, the European Pro Kart Endurance Championship, and we did 24-hour races across Europe. No way, 24-hour kart races? Oh my goodness. Yeah, so there were three or four of us per team; we'd do two or three hours each and just keep rotating like that. People might think of karts as the things you go ride at little adventure parks.
But the racing karts are beasts; those things are really fast and intense. So yeah, it's an interesting world. That's cool, yeah, they're not the kind of bachelor party things you'd imagine; they're capable of doing 100 miles an hour. So you do have to be alert, it's not good to be sleepy, and they're barely off the ground. Oh, how far off the ground is that, with no seat belt? Right. Yeah, even two millimeters, I think.

01:03:37 Oh my goodness. Well, Joe, this is a super cool look at what you've been up to. Thanks for sharing another cool Python story. Thanks very much for having me. Yeah, you bet. Bye. This has been another episode of Talk Python To Me. Our guest in this episode was Joe Borg, and it's been brought to you by us over at Talk Python Training and our friends at Linode. Simplify your infrastructure and cut your cloud bills in half with Linode's Linux virtual machines: develop, deploy, and scale your modern applications faster and easier. Visit talkpython.fm/linode and click the Create free account button to get started. Want to level up your Python? If you're just getting started, try my Python Jumpstart by Building 10 Apps course. Or if you're looking for something more advanced, check out our new async course that digs into all the different types of async programming you can do in Python. And of course, if you're interested in more than one of these, be sure to check out our Everything Bundle; it's like a subscription that never expires. Be sure to subscribe to the show: open your favorite podcatcher and search for Python; we should be right at the top. You can also find the iTunes feed at /itunes, the Google Play feed at /play, and the direct RSS feed at /rss on talkpython.fm. This is your host, Michael Kennedy. Thanks so much for listening, I really appreciate it. Now get out there and write some Python code.
