
#324: Gatorade-powered Python APIs Transcript

Recorded on Thursday, Jul 8, 2021.

00:00 Python is used to solve a large and varied set of problems. One of its core pillars is web APIs. Another is ML and data science. Those two important pieces were brought together in an unexpected and yet magically futuristic way by Rod Senra's team working with the Gatorade Sports Science Institute. They created a patch that you wear while working out once or twice. It analyzes your perspiration, combines it with other factors like running distance and sleep quality, and then provides recommendations, using Python, about how to get more effective fitness. This is Talk Python to Me, Episode 324, recorded July 8, 2021.

00:52 Welcome to Talk Python to Me, a weekly podcast on Python, the language, the libraries, the ecosystem, and the personalities. This is your host, Michael Kennedy. Follow me on Twitter, where I'm @mkennedy, and keep up with the show and listen to past episodes at 'talkpython.fm'. And follow the show on Twitter via '@talkpython'. This episode is brought to you by SENTRY and LINODE, and the transcripts are brought to you by AssemblyAI. Please check out what they're offering during their segments; it really helps support the show. Rod, welcome to Talk Python to Me. Thank you, it's super to have you here. There are a lot of things you've been building for different companies through your work, and that's going to be really fun to explore. I think one of the really neat things here is that you're not just working for one company on one team; the way it works, you're interacting with a bunch of different projects and types of technologies. So you'll have a good broad perspective on what went well, what didn't, what you would change, and so on. That's true. By the way, that is the one thing that has stuck with me since 1997, across, let's say, five different companies and two different continents, but Python was the one thing that remained the same. That's fantastic. Good choice. Indeed. Well, let's start this conversation by talking about your story, how you got into programming in Python. It sounds like you were one of the early adopters. What version of Python was that, probably a version one type of thing? It was 1.5.2, it was 1997. At that time, I was almost finishing my undergraduate course in Computer Engineering, and I was already engaged in my Master's postgraduate course. Where were you studying? What was your degree? Well, my undergraduate degree was Computer Engineering, but for the master's thesis, I was studying computational reflection. In the end, it's the line of research that led to AspectJ and other things. But it was all about object-oriented protocols: how you organize object orientation, programming languages, those kinds of things. Okay, cool. AspectJ, that's like Aspect-Oriented Programming. Exactly. Aspect-Oriented Programming was like a byproduct of the research in computational reflection. At the time, academically speaking, we were trying to figure out what you can do with reflection. I think Java was one of the languages that made it popular to do introspection and reflection in programming languages. Of course, we had that in Lisp, going back to the 70s, but Java made it known to the wider world. So then we started researching it more formally, and I started probing support for reflective programming in programming languages, and that meant not only Java, but at the time, Perl, Python, Tcl. And once I saw Python, it felt like a glove on my hands, and then I stuck with it. Yeah. Oh, that's really neat. Maybe you could talk just a little tiny bit about what reflection is for people listening who maybe haven't done a lot of, you know, Java or .NET, which have it as well. In Python, we have it in the sense that you can go in and explore the types. You're given an object, whatever its class is, and what is its metaclass, and you can dig into it and even change it, but I don't typically hear it referred to as reflection in the Python ecosystem, right? Yeah, introspection became the more common term, but at the time, reflection was about creating programs that reason about programs.
So it was about this meta-level reasoning. And it was very popular for non-functional aspects. So you write your code, and the code has the functional aspects, how it transforms things, and then there are the non-functional aspects like logging, persistence, monitoring, those kinds of things. People were exploring how to detach, to decouple those things from your actual program, and do that on a separate layer. That was what reflection was all about. It was about having your own code as input and, as output, changing the behavior of your program. The way we attacked that at the time was with an interception mechanism. Python always had something like that, a hook where you could intercept anything that was happening. We do have the debugging hook in the sys module that allows us to pause the computation and see what's happening. Other programming languages, and at the time, we're talking about the 90s, the popular ones were statically compiled programming languages, right? C, C++, they generally don't have this behavior. Yeah, exactly. Sometimes they have runtime type information when you compile your source code enabling that

05:48 That's right, I remember there was RTTI in C++, but it was off by default. Exactly. But in the interpreted world, you had everything, right? You had the interpreter during runtime, and sometimes you had the compiler as well, which was true for Java, which was true for Python and Perl, all the code being interpreted dynamically. Then that became more of a thing. Of course, in time, people realized there were other ways to achieve the same goals which were more effective, because one of the main things with introspection and interception is that everything happening in your program can be reified, delivered to this meta level, and then handled as data. So there is a major hit to speed, right? It was not the answer for everything, especially when you wanted runtime performance. But for things that were offline, batch, or analysis, it was not a problem. And one exercise that I did at the time, which is kind of interesting, was to introduce a debugger during runtime. Because we had this reflection mechanism, which works like this: you could take possession of an object, and then any interaction with this object goes to its guardian angel, which I called the meta-object of this object. And then the meta-object decides what to do: it could lie about what happened, just delegate back to that object down there, do something else, become a proxy, solve the computation itself; you could do anything. And in that case, we could install a debugger just for a given class, or a given instance of your program, and then propagate it like a virus to anything that class or instance touched. So that was an interesting new dimension of programming, but it remained more of an academic exercise. The only thing that I remember that kind of succeeded in industry was Aspect-Oriented Programming of sorts. Well, that's very interesting. It seems to me a little bit like decorators. Yes, decorators are wrappers. But the thing is, they are not transparent. Whenever you create a decorator or a proxy, what happens is that the references you have to the original object now need to point to the wrapper, to the decorator, right? So the key insight for computational reflection that we were exploring at the time was: how can we make this proxy totally transparent? If you have a pointer, or a reference, to an object before it became reflective, it will still be valid. You can turn those introspection mechanisms, the interception mechanism, on and off, and it's totally transparent. To achieve that, we had to change the interpreter. That's how we did it for Java, and that's how I was exploring it at the time for Python itself and Perl: change the interpreter to add the hook in there and do transparent wrapping. Yeah, it seems like some of the new PEPs might make it possible to plug in now, rather than actually changing the interpreter. I know there's that JIT plugin hook they're working on where you can intercept the parsing and compilation bits and stuff; I don't remember the number. Yeah, I haven't explored doing it in modern Python. At the time, I did some exercises using the debug hook. It kind of works, but it was kind of messy and super slow, so I kind of stopped. But it'll be interesting to revisit that again, I'm sure. I do remember how interesting all the research was around that time.
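To make the idea concrete, here is a small, hedged sketch in modern Python. It is not the interpreter-level mechanism Rod describes, which required patching the interpreter, just introspection with the standard inspect module and interception with the sys.settrace debugging hook mentioned above; the Athlete class and tracer are purely illustrative stand-ins.

```python
import inspect
import sys

class Athlete:
    def __init__(self, name):
        self.name = name

# Introspection: the program reasoning about its own objects at runtime.
a = Athlete("Rod")
print(type(a))                          # <class '__main__.Athlete'>
print(type(a).__mro__)                  # class hierarchy, discovered dynamically
print([name for name, _ in inspect.getmembers(a) if not name.startswith("_")])

# Interception: sys.settrace installs a hook that sees every Python-level call,
# roughly the spirit of attaching a "guardian angel" debugger to selected code.
def tracer(frame, event, arg):
    if event == "call" and frame.f_code.co_name == "__init__":
        print("intercepted a call to", frame.f_code.co_name)
    return tracer

sys.settrace(tracer)
Athlete("Michael")
sys.settrace(None)
```

Unlike the transparent meta-object approach from the research, this hook is global and slow, which matches the speed trade-off discussed above.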
And I feel like a lot of what's happening now is getting a little more structured, a little less dynamic, with things like decorators and other types of wrappers and inner functions and stuff. But yeah, quite an interesting time. So how about now, maybe we can talk about what you're doing these days. Nowadays, I'm a Technology Director at Work & Co. Work & Co is a digital products company where we not only do design and strategy, but we also do implementation and quality assurance for the projects that we take on. That's what makes us different from just an agency, a design agency, because the founders of Work & Co realized that it's much better for the clients to hire a single entity to do the full digital product, turnkey. Then we can make sure that whatever we design, it's possible to build. So that is the vision for Work & Co. We were very successful, I must say; in a few years, Work & Co won many prizes. It became famous for the Virgin America app for booking flights with Virgin America, that was like five or six years ago,

10:48 but we're doing things for Apple, Google, Chase, you name it. I know that involves a lot of Python, but it probably involves other technologies as well, right? Somebody comes and says, well, we want the thing designed, and we're a Java shop, so make it for Java? What happens is, usually there are projects that are just design projects, projects that are design plus front-end work, and projects that go all the way through to the back end. Sometimes the back end is just a mediator for some infrastructure the client already has, but there are other times where we have to do everything, like the full digital platform for the client. In those cases, Python becomes a key technology for us, because what Python gives you most, in my experience, is optimizing for development time, right? People are very much concerned about Python runtime efficiency, but the key problem is development efficiency, right? Most of the time, even if it's a little bit slower: compare a Formula One car and a regular vehicle, one is much faster than the other, but we don't need Formula One cars most of the time, right? It's fast enough, and that is the point. So for many things, Python is fast enough. But in terms of development time, Python is the Formula One vehicle. It's super fast, it's super easy to throw things away, it's super easy to explore, it can touch every niche in computer science. Yeah, it has exactly that, this is PyPI, right? For example, you can install this and go. Exactly. So it was a key technology for us in those cases where we have an extremely agile cycle of development, lots of changes, because design was sometimes moving along the way, and we needed to rebuild the back-end infrastructure like overnight. Python was great for those scenarios. Interesting. I think the conversation around performance in Python is super interesting, because there are just so many layers and variations. Yeah. And what are you trying to do? Well,

12:53 if you're trying to do a tight loop that does math, guess what? Python's bad at that. But maybe you shouldn't be doing that. Maybe you should be using NumPy and the SciPy stuff, and then all of a sudden it's C speed again. Yes. Maybe you're doing some data-driven web API, and SQL might be slow. Well, actually, what you're doing is orchestrating an exchange of JSON and talking to a database server or cluster, and it's almost exactly the same speed as if it were written in C, because you're mostly waiting on the database and waiting on the network. There are all of those things. And then on top of it is this thing that you talk about that often gets ignored. There's a really interesting story recounted in Mike Driscoll's Python Interviews book about the competition between Google Video and YouTube, and how Google Video had like 100 C++ engineers, and YouTube, then a little startup, had 20 Python developers, and YouTube was just blowing away all the Google engineers, because they could add features faster. And if Google would do something, they could copy it and re-implement it really quickly. So Google fixed the problem by buying YouTube, and it's still Python to some degree there. So I think that's a really good point. Yeah.
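A tiny, illustrative benchmark of the point being made here: the tight numeric loop is where the interpreter pays its price, and pushing that loop into NumPy moves the work into compiled code. Numbers will vary by machine.

```python
import timeit
import numpy as np

values = list(range(1_000_000))
arr = np.asarray(values, dtype=np.float64)

def pure_python():
    # Interpreted loop: every multiply and add goes through the bytecode loop.
    return sum(v * v for v in values)

def with_numpy():
    # Same sum of squares, but the loop runs inside compiled NumPy code.
    return float(np.dot(arr, arr))

print("pure Python:", timeit.timeit(pure_python, number=10))
print("NumPy:      ", timeit.timeit(with_numpy, number=10))
```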

14:01 In my experience, there are two factors. One factor, which you just mentioned, is keeping it under control when it grows, right? Software is like a living thing. You start a project, it gets born, it starts to grow, right? Does it become a sequoia, or does it become blackberry bushes? Exactly. This portion of Talk Python to Me is brought to you by SENTRY. How would you like to remove a little stress from your life? Do you worry that users might be having difficulties or encountering errors in your app right now, but you don't even know it until they send that support email? How much better would it be to have the error and performance details immediately sent to you, including the call stack and values of local variables and the active user recorded in that report? With SENTRY, this is not only possible, it's simple. In fact, we use Sentry on all the Talk Python web properties. We've actually fixed a bug triggered by a user and had the upgrade ready to roll out as we got their support email. That was a great email to write back: we saw your error and have already rolled out the fix. Imagine their surprise. Surprise and delight your users today, create your Sentry account at 'talkpython.fm/sentry'. And if you sign up with the code 'talkpython2021', it's good for two months of Sentry's team plan, which will give you up to 20 times as many monthly events as well as other features. Just use that code 'talkpython2021' as your promo code when you sign up. So software has this life cycle, right? And at some point, depending on its history, it gets out of control. If you didn't have the proper team, the proper guidance, it can become the problem; not the problem you're supposed to solve, but the software itself becomes a problem. And then people tend to say, let's throw this away and get something else, because this is unmanageable. I don't think it's an intrinsic problem of any programming language or infrastructure. It's more about the matching: did the team that was doing this really master the technology at the time? If they did, it could be Fortran, COBOL, C++ or Python, and you have good results. If they didn't, they might get onto one of these one-way roads in the opposite direction, and they are screwed, right? So this is one thing. The other thing is all about the flexibility and performance that you mentioned. For example, in this latest project that we are tackling, the GX project, we had both CPU-bound problems and IO-bound problems to tackle, and in both cases, Python was behaving just fine. We have like 16,000 users right now, we have three API instances, and when there's a performance problem, we change the algorithm, we introduce caching. We're using MongoDB for scalability, and we have zero performance issues so far. Because we had the software under control, we know what's happening there, the technology is not a mystery, and it's easy to pinpoint what's wrong and then replace it with a new version fast. And that's it, it's bringing results. That's a fantastic point. You've got the architectural considerations as well as just the raw technology, right? Oh, yeah, for sure. It's a single thing. That's why I use the metaphor of a living organism, because it's in an ecosystem. Your software is not an island anymore. In the past, we had this release-to-the-desktop install, and it's on your machine. But today, it's interacting with the operating system, with the cloud, with the user. So it's alive, right? It's a dynamic ecosystem.
So it's all about those interactions, and understanding the dynamics at each of these interfaces, and then optimizing for the scenario, for the flow. Pretty cool. And I want to dive into this first API, because I think it's super interesting. People are going to be really surprised by it, I'm sure. Well, let's go.

18:04 Yeah, that'll be fun. But before I do, I want to ask you one more sort of big-picture question. You're working with Work & Co, you're working with all these clients as this digital agency doing this end-to-end work, which I think makes a lot of sense. What are the trends for Python that you've seen across the last five years or so? What has stood out to you from these conversations, maybe ones you had five or ten years ago versus ones you're having now, or thinking about the future?

18:30 There are two things, in my reality at least, and that doesn't mean much; it's a broad world we have out there. In any case, I'll speak just from my experience in the last few years. There are two things that come to mind. One of them is this mediator thing. Because everything is in the cloud, because you're not writing software from scratch anymore, you have to do integration, gluing. Python, back in the early days, became famous as a glue language, and it was kind of gluing things on the desktop; now Python is gluing things in the cloud. Because Python is such a versatile language, with a huge ecosystem of libraries, and because the language itself allows you to compute just about anything, it became an excellent glue language. For us, it's critical, because in any project we're dealing with, okay, we have to talk to a database, for sure, but we also have to talk to a weather forecasting service, we have to talk to a push notification service, we have to talk to an analytics platform. You need to send signals there. It's this ecosystem thing. Python is great for that because you find the connectors, you find the drivers, you find examples, and all of that speeds up the cycle of development and gives you confidence that you will achieve whatever you need. Success, right? And still, we're not talking about just the computer right now. We did a project for Google, the new Google Store, and there are some embedded devices there. That's another niche, embedded systems, IoT. We did things for the television, we did things for Marriott, where it was embedded on a set-top box. So this is one thing I think we're going to see more and more often, now that we see CPUs in your lamps, and people programming lamps, right? The other thing is the data-driven aspect. One of the flags that I carry at Work & Co is data-driven design. Data-driven design is like, okay, we're not doing design just based on the inspiration of designers, their creativity and their capacity for innovation, but anchored in data from the real world. Sometimes that approach goes all the way from conceptual design into the end products, which we are seeing with these machine learning and deep learning based products. An ML model in production behind Flask. Yeah, exactly. So that's another trend where Python is great, right? Not only for the business analytical aspects, but also for the pipelines to build

21:22 models in production. The way you described Python as a glue language is super interesting compared to the way it's traditionally been described. Traditionally, glue language meant something like, well, I've got a C library, and I'm talking to Linux, and so I could write this stuff in Python that will do some shell stuff with Linux, and it'll also pull in the C API and just move that data from here to there. It's come across a little bit as a second-class thing. It's like the very best scripting language you could imagine, and it's kind of not perfect for apps, but well, we'll do this with it. But what you described was: we're going to take the database, we're going to take these APIs, we're going to take the web requests, and we're going to glue that together. I mean, that is the application. It's thinking of, well, really, what is a modern API app, other than a thing that takes a little data from an inbound request, maybe pushes something over to Celery, grabs something out of the database, calls these APIs and bundles it back up as a response and sends it back with a status code? That's the entire application. And yet, this idea of gluing these pieces together is a really interesting way to think of it.
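For listeners who want to see that shape on the page, here is a minimal sketch of the kind of "glue" endpoint being described. It is purely illustrative: the fake database, the notify_task stub, and the weather URL are made-up stand-ins, not anything from the actual project.

```python
from flask import Flask, jsonify, request
import requests

app = Flask(__name__)

# --- hypothetical stand-ins so the sketch is self-contained -----------------
class _FakeWorkouts:
    def insert_one(self, doc):
        print("would store in MongoDB:", doc)

class _FakeDB:
    workouts = _FakeWorkouts()

def get_db():
    return _FakeDB()

class _FakeTask:
    @staticmethod
    def delay(*args):
        print("would queue Celery task:", args)

notify_task = _FakeTask()
# -----------------------------------------------------------------------------

@app.route("/workouts/<user_id>", methods=["POST"])
def record_workout(user_id):
    payload = request.get_json()                                # 1. take a little inbound data
    get_db().workouts.insert_one({"user": user_id, **payload})  # 2. store something in the database
    weather = requests.get(                                     # 3. call an external API (placeholder URL)
        "https://weather.example.com/current",
        params={"lat": payload.get("lat"), "lon": payload.get("lon")},
        timeout=5,
    ).json()
    notify_task.delay(user_id, payload.get("sport"))            # 4. push slow work over to Celery
    return jsonify({"saved": True, "weather": weather}), 201    # 5. bundle a response and status code
```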

22:35 That's what I learned. It's about gluing systems and gluing data. And of course, these things are connected. But sometimes you're gluing data in an asynchronous way, in batch processing, or local processing, or for analysis or human interaction, and it's still gluing data from several sources. And there's this other approach, which is gluing live systems. Of course, data is flowing across them, but the connectivity aspect is also key, right? Can I authenticate and talk to all of these systems in the way they expect to interchange information? The real power is how well it performs that combination, not whether the actual web view method takes one millisecond versus two milliseconds at runtime, right?

23:23 Yes. It's super interesting to see it that way. All right, well, let's jump into the first API that we're going to talk about. Yeah. So this has to do with Gatorade, which is the sports drink company, and they have created this really interesting idea of the quantified self for fitness with this thing called the Gx Sweat Patch. Exactly. This is something that blew my mind when I saw it first.

23:49 Yes, and this is really just the beginning. We launched this in production on March 1 this year; it has been in development since 2019, and the research inside Gatorade goes back even further. So, what they are trying to do: Gatorade has a sub-unit called GSSI, the Gatorade Sports Science Institute. I had the opportunity to visit one of their physical sites in Sarasota, Florida, in 2020, and they do amazing work. They're bridging sports science studies and actual experiments, and they are helping to develop this concept of: how do you bring some of the metrics used by high-performance athletes to a more general, broad audience, right? We've

24:40 probably all seen pictures of Olympic athletes, or, you know, pick your favorite sport's athletes, with a couple of doctors or researchers around with a clipboard and a big breathing apparatus, and they're on a workout bike or whatever, and they're studying all these different things, right? And they get really interesting feedback on, well, under this situation, this is where you're hitting your limits, in cardio, or in breathing at that altitude, or something. And for the rest of us, we just put on shoes and run, or get on a bike or whatever it is, and we just have no idea.

25:16 Exactly. In the specific case of what you're showing on the screen, the Sweat Patch, this is a thing that you can put on your forearm, and it's going to capture your sweat from the micropores in your skin. Then you lead that sweat into two channels. One channel, the orange channel that fills up in a zigzag form, is the volume of sweat you're producing; the other channel is a slight bar on the lower side that will react in color to reflect the amount of sodium that is in your sweat. Those two things are super critical to understand how much fluid you're losing from your body while you're performing some sports activity, and the concentration of sodium you're losing as well, because sodium, as everybody knows, is critical for the performance of muscles, the potassium-sodium pump that controls muscles. So learning about those things can help you improve your performance. And of course, this is more impactful if you're a professional athlete, but I think everybody can benefit from it. The biggest challenge was to take that out of the lab, where, as you mentioned, Michael, you have an apparatus that makes it easier to capture all the signals, and bring it out into the field, where anybody can use it under any conditions. That was the challenge we took on to help Gatorade achieve that. There were a lot of obstacles for us to overcome, and Python was super helpful, because it not only had the tools to build APIs, but there's also a ton of formulas that we needed to compute to achieve that, and Python had NumPy, pandas and stats, and all the tooling that we needed to make it happen. Let me just describe this really quickly for people who are watching. It's like a band-aid, maybe two and a half inches, yeah, like three or four fingers, something like that, maybe about the size of the palm of most people's hands. You stick it onto your arm, it changes these colors and reads out, and then you scan it with this iOS app, and it gives you the analysis that you're talking about here. Yeah, exactly. The app, of course, serves other purposes as well. You can track your workouts; it integrates multiple sources of information: Garmin, Strava, HealthKit. Then it creates this timeline of events in your day that understands if you're doing multiple workouts a day, very early in the morning, very late in the evening, and how they interfere with each other. Because if they are too close, maybe you shouldn't drink anything between the two of them; if they are too far apart, maybe you need supplemental hydration. For all of those effects, we perform that analysis in Python at the back end, and the app becomes the avatar that conveys that information to the athlete in real time. Yeah, when I first heard about this, I thought it specifically would just be like, well, Gatorade recommends this, here's how much Gatorade you should buy, you should get the lime flavor, not the cool one or whatever. But it takes in data from, like you said, Apple HealthKit and Garmin and these other things as well. How do you get that data out of it? That's a great question. The plan was to go as broad as possible; we even investigated Fitbit and other providers. But for the MVP, we went for three sources, two of them external: the Strava API and the Garmin API. So if you're a Strava user or a Garmin user, you can, through a web view, connect to Garmin and Strava through our app.
And then we created these data feeds from Garmin and Strava into our system. With some OAuth back end, sort of API type of thing? Yes, exactly. And then it starts popping up in the app; the app becomes a read-only thing. You just look into it, and you perform your exercise, but the information comes in via the back end. Those are the main channels, but you also talk to Apple HealthKit, the platform in iOS that consolidates information from multiple apps, health-related, of course. And in that case, if you have a sleep tracker or another sports tracker, for example, I use Runkeeper, I go to that application, I allow that application to export information to HealthKit, and then of course the Apple Health app will see it, but so will any other app registered as a reader for the HealthKit platform. That's how we grab and consolidate all your activity, sleeping and workouts, into this single timeline, and then we provide recommendations on top of it. Right, so you might be able to correlate workouts with sleep, how well you slept, or something like that. Yes, that's one of the things we do.
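As a rough illustration of what "some OAuth back end" involves, here is a hedged sketch of exchanging a one-time authorization code for tokens. The endpoint and field names follow Strava's documented OAuth flow as I understand it, but treat the whole thing as illustrative rather than the project's actual integration code.

```python
import requests

def exchange_strava_code(client_id: str, client_secret: str, code: str) -> dict:
    """Trade the authorization code from the web view for access/refresh tokens."""
    resp = requests.post(
        "https://www.strava.com/oauth/token",
        data={
            "client_id": client_id,
            "client_secret": client_secret,
            "code": code,
            "grant_type": "authorization_code",
        },
        timeout=10,
    )
    resp.raise_for_status()
    # The response typically carries an access token, refresh token and expiry;
    # the back end stores these so the data feed can keep pulling activities later.
    return resp.json()
```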

30:11 That was one of the challenges, dealing with this events timeline. We needed a time series database to do time series analysis, because time is super relevant. We have to handle conflicts, because you may have many trackers reporting the same thing, so the same physical event comes in as multiple digital events. Sometimes there are applications that break up a single event into multiple events; sleep trackers typically do that. For us, you have a single night of sleep: you go to bed, and you wake up, right? But actually, what happens is multiple cycles of sleep, and you may wake up in the middle of the night. We don't handle those logically as multiple sleeping periods; for us, logically, it's a single sleeping period. So this is another thing we have to handle in the system. Yeah, it seems really useful, actually, the more I hear about it. So let's talk about some of the tech behind the scenes for this one. There's probably some data science side, there's obviously the API and database side, what have you got going on here? Okay. In the beginning, we went down to GSSI, the Gatorade Sports Science Institute, and together with their scientists, we started to create a model for the way the sweat patch works. The sweat patch was developed by a Boston company named Epicore: the physical patch, and also the driver that captures the basic information from the patch, which is embedded in the mobile app. Then, when you take a picture of the patch, we do some image processing, and we extract the two bits of information we really want. Reading the volume channel is probably something like Swift or something on the app, right? It's actually Xamarin and C#, but that was more a restriction from PepsiCo; they were already using that technology, and it was supposed to be a cross-platform framework, so later, when we decided to go to Android, it would be possible to reuse that framework. So there was a constraint that we had to accommodate. But there are modules in Objective-C and Swift, for sure; they are native. Yeah. And so what we do after we have that: we have to translate whatever you read, which is your local sweat rate, how much a single micropore in your forearm is reading, and we have a statistical model developed by GSSI to translate that into your whole-body sweat rate. That is the first machine learning, statistical learning bit embedded in the system. We need to take into consideration what's the weather like, what's your weight, what type of sport you're doing, what's the humidity. Yeah, humidity we're not using right now because it's hard to capture, so we had to create a less accurate model. For sure, humidity is critical, but it was a product decision to leave that one out because of the difficulty of capturing that information. Right, right. And once we have your whole-body sweat rate, that acts as a crystal ball for future workouts. When you're performing a new workout, you don't need to use the sweat patch again. You can use the sweat profile, the crystal ball, to predict, given the conditions of this new workout, how much you're going to sweat, and then base recommendations on that. Of course, there are some constraints, and if they do not match, you have to do a new sweat test and create a new profile for those new conditions.
For example, if I created a profile for running and now I'm doing biking, it's a different activity, so it would be best if I create a new profile. Yeah, nice.
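Purely as an illustration of the shape of that pipeline: the real GSSI model is proprietary, so the functional form and every coefficient below are invented stand-ins. The flow is that one local patch reading plus context produces a whole-body sweat rate, which is stored as a profile and reused as the "crystal ball" for later workouts.

```python
from dataclasses import dataclass

@dataclass
class SweatProfile:
    whole_body_rate_l_per_hr: float          # the stored "crystal ball" for this athlete

def estimate_whole_body_rate(local_rate_l_per_hr: float, temperature_c: float,
                             body_mass_kg: float, sport_factor: float) -> float:
    """Map a single forearm micropore reading to a whole-body estimate.
    The coefficients here are fake; the real mapping is GSSI's statistical model."""
    return local_rate_l_per_hr * (0.8 + 0.01 * temperature_c) * (body_mass_kg / 70.0) * sport_factor

def predict_fluid_loss(profile: SweatProfile, duration_hr: float) -> float:
    """For a later workout, reuse the stored profile instead of scanning a new patch."""
    return profile.whole_body_rate_l_per_hr * duration_hr

profile = SweatProfile(estimate_whole_body_rate(1.2, 28.0, 75.0, 1.1))
print(round(predict_fluid_loss(profile, 1.5), 2), "litres over a 90-minute session")
```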

33:56 This portion of Talk Python to Me is sponsored by Linode. Visit 'talkpython.fm/linode' to see why Linode has been voted the top infrastructure-as-a-service provider by both G2 and TrustRadius. From their award-winning support, which is offered 24/7/365 to every level of user, to the ease of use and setup, it's clear why developers have been trusting Linode for projects both big and small since 2003. Deploy your entire application stack with Linode's one-click app marketplace, or build it all from scratch and manage everything yourself with supported centralized tools like Terraform. Linode offers the best price-to-performance value for all compute instances, including GPUs, as well as block storage, Kubernetes, and their upcoming Bare Metal release. Linode makes cloud computing fast, simple and affordable, allowing you to focus on your projects, not your infrastructure. Visit 'talkpython.fm/linode' and sign up with your Google account, your GitHub account or your email address, and you'll get $100 in credit. That's 'talkpython.fm/linode', or just click the link in your podcast player show notes, and thank them for supporting Talk Python.

35:06 How does the app know what you're doing? Do you tell it, right now I'm doing this, so take a scan? Yes, you have two options. One of them: the app asks the user when you're manually scheduling your workout, so you can say, I'm planning to use a sweat patch on this particular workout, so be prepared. That's one option. The other option is, after you finish your workout, the app asks, did you use a sweat patch? If you did, let's go with the flow of scanning. If you didn't, there is a fallback mechanism called weigh-in, weigh-out, where we can compare your weights and subtract them and see how much fluid you lost. The downside of this is that we do not capture sodium loss, so you cannot take that into consideration. So when it gets to the server side, what's the API framework? The app is talking to the back end all the time; we kind of use the back-end-for-front-end metaphor, of sorts. So the API not only does the reasoning for the whole system, but sometimes it even helps the app a little bit with layout. The app talks to the back end to get the user profile, to get the timeline of events, and then it renders that timeline of events, captures additional information, for example, your motivation and your fatigue, and reports that information to the back end, along with the sweat scans and manually scheduled workouts. Everything else happens at the back end. By everything else, what do you mean? Grabbing weather information, so translating your latitude and longitude into a temperature if your workout was outdoors. Another thing is sending and canceling push notifications, to remind you that you have a recommendation, or that there's an upcoming workout, things like that. It also computes the local-to-whole-body transformation, it manages your sweat profiles, and it triggers some product recommendation engines that will suggest not only Gatorade products, but also general foods that could be suitable for your nutritional needs. We can say, for example, if you need this amount of carbs, this amount of protein, maybe you should take a little bit of caffeine, or casein, and then we give a list of generic foods like rice or coffee, and you can plan accordingly to fulfill those recommendations. All of that comes from the back end, and the back end is Flask. We were in doubt in the beginning between FastAPI and Flask. I wanted to fight the problem domain, not the technology, so I decided to go fully synchronous, because in that case it's super easy to debug, it's much less prone to problems, and I was not concerned with performance in the beginning. What we learned was that Flask was performant enough for all our needs. So that went well. We had a separate API just for the integration with Garmin and Strava. After we went to production in March, we realized it was more of a chore for the team to maintain it, and we really didn't need it, so we merged the two APIs into a single monolith. But of those two APIs, one of them was asynchronous. We did that asynchronous API with Quart, and that was the one talking to Garmin and Strava, because that one was purely IO-bound, not CPU-bound. Right, you're entirely waiting on Strava and Garmin and the internet, and so you should be able to scale that out many, many times, because all you're doing is waiting on their APIs, and you're completely at the mercy of their performance. That's the true picture as well, right. So async makes a lot of sense. But it turns out it wasn't needed. Hmm.
Yeah, the thing that we realized was that, of course, there are trade-offs. If you go to a job interview and you present that as a conceptual problem, I think the answer is to do it asynchronously, do it as a separate API, because then you can scale independently, you have better IO throughput, lower latency, etc. But in the real world, you have to balance all those things with the size of your team, the resources that you have, and other external conditions. So in the end, we decided to consolidate everything into a single technology, a single stack, because it was simpler. If we need to onboard new people and train them, instead of knowing Flask and Quart, now they only need to know a single framework. And because scalability was not a problem, we were using Kubernetes in production, and it's horizontally scalable, we decided to reverse that and build a monolith, single stack, the classical way. And that's what we have today. Oh, interesting. So you decided you can just solve it by running more worker processes per container, and then just running more containers if you need to? Yeah, exactly. That was a better solution for the conditions we had in the project.
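To make the sync-versus-async trade-off concrete, here is a small sketch, with a placeholder partner URL rather than the project's real Garmin/Strava integration, of the same "pull activities from a partner API" endpoint written synchronously with Flask and asynchronously with Quart:

```python
import httpx
from flask import Flask
from quart import Quart

flask_app = Flask(__name__)
quart_app = Quart(__name__)
PARTNER_URL = "https://partner.example.com/activities"   # placeholder endpoint

@flask_app.route("/activities/<user_id>")
def pull_sync(user_id):
    # Simple to write and debug; each worker handles one request at a time.
    return httpx.get(PARTNER_URL, params={"user": user_id}, timeout=10).json()

@quart_app.route("/activities/<user_id>")
async def pull_async(user_id):
    # The handler spends nearly all its time waiting on the network, so a single
    # worker can keep many of these requests in flight concurrently.
    async with httpx.AsyncClient(timeout=10) as client:
        resp = await client.get(PARTNER_URL, params={"user": user_id})
    return resp.json()
```

As discussed above, whether the extra operational surface of the async stack is worth it depends far more on team size and load than on the code itself.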

40:00 I think that makes a lot of sense. You know, there are so often these recommendations of using microservices, breaking stuff into a bunch of pieces, having just the right technology for just this slice of what you're doing, and then you've got your app talking to the different services. Now you're trying to coordinate it all, and it just gives me chills to even think about it: releasing this through the Apple App approval process, and coordinating that with the versions of multiple APIs. That sounds so bad.

40:31 It becomes a nightmare really fast. In my mind, my experience is that microservices are an answer for a given team size. If you have like 100 developers, or, I don't know, 1,000 developers like Netflix or Shopify, then it makes every sense in the world to break it up into individual components, because you have individual teams. Conway's Law, right? The software you build reflects the structure of the people in the company and how they build the software. But when you have a very tiny little team, the monolith is great. It simplifies everything. So that's a lesson relearned: we were kind of over-engineering in the beginning, trying to go with two APIs, and we took the route of bringing it back to the monolith.

41:19 I think that makes a lot of sense for small teams. And here's the thing: if you run into performance problems that really need the async stuff... you know, when you considered using Flask and not FastAPI, FastAPI was brand new, and who knows if it would survive another six months, or if it would go the way of other really promising projects like Japronto or something, which, as far as I know, and I could be wrong, hasn't gotten a ton of traction; it was really exciting for a while and then it just kind of fizzled out, right? You don't want to build on that. So I think Flask is a totally reasonable choice. But I guess what I was going to say is, you know, it's not that different. If you needed to translate that, if you were going to convert that to FastAPI, that's the kind of thing you, as a team of a couple of people, could do in a few days, and it would be fine.

42:03 There were other circumstances as well. For example, as a big fan of Talk Python to Me, I watched the episode on FastAPI, I watched the episode on Pydantic, and I knew of the symbiosis between the two of them. But for us, we started with Cerberus as a schema validation technology, which comes out of the Eve project, which I follow, by the way. Yeah. And we evolved into using Marshmallow, replacing Cerberus. We compared it with Pydantic, and at the time, we didn't want to extract Pydantic from FastAPI, or use FastAPI with Marshmallow. So we went with Flask, which was not opinionated about what the schema should look like, and we went with Flask and Marshmallow. Nice. Some other interesting building blocks that you highlighted: Pint, p-i-n-t, for units. Pint is super cool. Tell people a bit about Pint. Exactly. So one of the things about this particular project is that it's supposed to be international, and it's heavy on the physics. We're dealing with rates and concentrations, there's a lot of chemistry going on, and we're talking about the metric system, US customary and Imperial systems. Converting between units was going to be a big part of the system. And then we found out about Pint. I've been in the Python community for over 20 years, I shook hands with Guido van Rossum in 2005, when I translated his tutorial to Portuguese, but I had never heard of Pint in all those years. When I had the need, I did a quick search, and I found exactly the solution that we wanted. So Pint uses the excellent object orientation that we have in Python to transparently create these new integers and floats that are not just numbers, but also carry units together with them. It's so fantastic. I mean, let me describe this little example on the Pint homepage. If I wanted to have three meters plus four centimeters, instead of saying three times 100 plus four, or vice versa, you know, divided by 100, you have three times meter plus four times cm, and then what you get back is a quantity, which is 3.04 meters. It's fantastic. Exactly. You could even do something horrible, like three meters plus seven inches, if you had to. It becomes even more powerful when you're talking about different kinds of dimensions. For example, if I want a concentration, I'm going to divide mass by volume, and then I need to make sure that my calculation makes sense. Pint allows you to do that. You do those conversions, and the units are preserved, and because the units are preserved, it's easier to test, easier to compute. So it's really a lifesaver that saved us a lot of time while coding this API. This is not something that I do anything with in my world these days, really, but if I did, I'd be all over Pint; that thing's cool. Also, there is unyt. Yes, I've also just recently heard about that one, and I don't really know how you say it, unyt, but this is also something similar in that regard as well. Yeah, at the time, and I'm talking about early 2020, we evaluated a couple. I don't remember if we checked unyt, but there were three others besides Pint, and in the end Pint seemed the most robust one. But I'll check unyt again. Yeah, I guess, what's your assessment now that you've been actually using Pint? Good, we're super happy. We've had zero issues with Pint, and it has saved our lives.
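Here's the example from the Pint homepage described above, plus a couple of extra lines in the spirit of the rates and concentrations the project deals with (the specific quantities are made up):

```python
import pint

ureg = pint.UnitRegistry()

# The homepage-style example: units ride along with the numbers.
length = 3 * ureg.meter + 4 * ureg.centimeter
print(length)                                   # 3.04 meter

# A concentration: mass divided by volume, converted without manual factors.
sodium = (900 * ureg.milligram) / (1 * ureg.liter)
print(sodium.to(ureg.gram / ureg.liter))        # 0.9 gram / liter

# A sweat rate, switching units in one call.
rate = (1.2 * ureg.liter) / (1 * ureg.hour)
print(rate.to(ureg.milliliter / ureg.minute))   # 20.0 milliliter / minute
```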
Yeah, another one that you'll run into is time. So we used Pendulum for that. I was even a little bit resistant in the beginning, because I had been using datetime forever, and of course, members of my team were suggesting, hey, why don't we use Pendulum, it has a really nice interface. The deciding factor for me was that the Pendulum object inherits from the regular datetime object, so they are completely interchangeable. The one thing I didn't want to happen was to have two kinds of libraries dealing with time. datetime is in the standard Python library; something else, external, would be a cause of concern for me, in my experience. But in that case, it was seamless, so I can use either Pendulum or datetime, it doesn't matter, because they both derive from the same root. So that was key. And to be honest, it was convenient, especially for time zones. But remember that this was before, I think it was in Python 3.9, that we got better support for timezones,

46:58 at least in the standard Python library, and at that time, I think Pendulum had a better way to handle timezone conversion. That was key for us in terms of the solution: we never know where our athletes are going to be in the world. So that was the key reason, and we're happy with Pendulum. Yeah, this

47:16 is super neat. You know, I didn't really put it together when I was looking at this before, but the timezone stuff is quite interesting. Something that's always a challenge that I deal with, when I'm working with some of the web apps that I have or other things, is just: where is the server versus where is the person accessing the server? It turns out to be way more annoying than you think, right? Another thing we're going to talk about is using MongoDB for the back end on this, and so am I; I'm just a super fan of MongoDB. It's been such a nice way to make fast, easy-to-maintain apps, but it stores stuff in UTC. Where's the server? I think it's in the Eastern Time Zone, and I'm in the Pacific Time Zone. So if I want to pull something up, like this event is going to happen then, or it happened at what time, it's not easy to say, oh, that was an hour ago. Something so simple as, here's a list of activities, and this one was an hour ago, is challenging. And this is really cool: instead of datetime.now, I can say 'pendulum.now' and then pass in the timezone, and say, well, what is now in, you know, a given timezone, or something like that. That would be really, really nice.

48:22 The strategy we took is kind of the traditional strategy: at all the edges of the system, we convert to UTC, and inside, we're just reasoning in UTC. Only when we export information through this membrane of the system, back close to the user, does it translate back to their time. However, it gets tricky when we're talking about not real time, but nominal time. What is nominal time? For example, I wake up at 7am, so 7am for me is a nominal time; it's not anchored to any place on Earth, it's just the time that I'm supposed to wake up. Nominal time becomes trickier because it's not anchored to a particular place, so we have to deal with those kinds of things a little bit differently. But other than that, Pendulum was very ergonomic in terms of its API, it was super easy, you can keep it in your mind, and it's compatible with datetime, so that was a plus. That's a really neat point, because if you've got some other API that takes a datetime, you can just pass in the Pendulum time and it just works; it is a datetime, right? So you don't end up with, oh, I forgot to convert here, so it's broken, so the lander crashed into the ground because, you know, whatever. Exactly, all those units and conversion types of weirdness. Yeah, and the arithmetic, right? I think that is the other aspect that Pendulum makes a little bit easier, the arithmetic, like adding and subtracting two times, this duality between a point in time and a delta, which we do have with datetime and timedelta in the Python API. But Pendulum also helps you in that sense, to do those conversions in terms of scale and arithmetic. So it was really interesting. Yeah.
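A small sketch of the "UTC inside, local at the edges" pattern with Pendulum, using arbitrary example timezones. Because Pendulum's DateTime subclasses the standard datetime, it can be handed to any API that expects a plain datetime, which is the interchangeability point made above.

```python
import datetime
import pendulum

# Inside the system: store and reason in UTC.
recorded = pendulum.now("UTC")

# At the edge: translate to wherever the athlete happens to be.
print(recorded.in_timezone("America/Los_Angeles").to_datetime_string())
print(recorded.in_timezone("America/Sao_Paulo").to_datetime_string())

# Interchangeable with the standard library type.
assert isinstance(recorded, datetime.datetime)
```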

50:13 One quick thought I'll throw out there, because this was such a surprise to me. It's so undiscoverable, but it's good to know once you know it, let's put it that way. When you're dealing with, forget Pendulum, for many of us doing raw datetimes and timedeltas, timedeltas always come back in seconds, right? So I create a timedelta as the difference between two times, right? Take a time and add a timedelta and you get a new time. But if you have a timedelta, and it's in seconds, which is basically the only option you get, and you want to know, well, I need this in hours, I would always just go total_seconds divided by 60, divided by 60 again. What you can do instead is create another timedelta and say timedelta of hours equals one, and then divide one by the other, and you get a number of hours. Or if you want weeks, a timedelta of seven days, and you divide your timedelta by that, and it'll give you the weeks. And that's really handy. But boy, is it hard to discover, to know that that's possible.
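In code, the trick being described looks like this:

```python
from datetime import timedelta

elapsed = timedelta(hours=5, minutes=30)

# Dividing one timedelta by another converts units directly.
hours = elapsed / timedelta(hours=1)              # 5.5
weeks = timedelta(days=21) / timedelta(weeks=1)   # 3.0

# The manual route it replaces:
also_hours = elapsed.total_seconds() / 60 / 60    # 5.5

print(hours, weeks, also_hours)
```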

51:09 Exactly. And that's where I think Python shines as well. Since the beginning, one of the arguments in favor of Python, when people said Python is pseudocode that runs, was that Python brings you closer to the domain of the problem, right? That matters. Syntactic sugar matters, because the way you express software means you will make fewer mistakes, and when you read it, you fully understand it. Yeah, it's easy to read, to do a review, it's easy to do maintenance, it's easy to throw away and write again. That matters. So it's not just about whether you know the logic, whether you know the theory; the way you express the logic and the theory matters. Absolutely, totally agree. All right, let's talk about one more element here on this one, and that's the database side of things. So why did you choose MongoDB? What's your experience working with it? That's a good question. While I was at the university, I was working with the database group, and I had a lot of experience with databases. I played with an object-oriented database in 1993, the French object database called O2. As time went by, we had Zope and ZODB; I played a lot with ZODB and even made a tiny little contribution to it. So I had been exposed to NoSQL before NoSQL became a thing. But of course, SQL was still there. What we wanted was, again, development time. As a director at Work & Co, my main consideration is: can I make this project fit the budget and the schedule? Before I exactly know what we have to do, because details will only be captured at the end of the conceptual design phase, but at that point in time, the money is defined and the schedule is set. So it's like, okay, we'll build the pyramids, but I don't know what it's going to be like bringing those blocks of rock across the Nile. Because of that, I wanted to go for flexibility first, and going for a NoSQL database gives you speed in terms of development time, right? You can put anything in there. Mongo stores information in the same format as the applications consume it, so you dodge that object-relational impedance mismatch; it's in Python, now it's in Mongo, now it's back in Python. And the ORMs of the world, I was also always let down by ORMs, because in the beginning, when you know nothing, they are great, because you don't need to learn the actual language of the database. After you evolve, they become a barrier, because they are never as sophisticated as the data manipulation language of the database, right? And they don't support this

54:07 and that. My challenge has been more operational with ORMs. They're pretty good, and a lot of times, if you have the right indexes, the speed can be okay; if you make sure you do the joins instead of the lazy N+1 type of queries, you'll be okay. You build up your classes and everything's easy. But then you want to add a field to a class, you want to create a relationship, and now you try to run your app and it doesn't just not work well, it fully crashes until you do a database migration. And are you doing that migration in production and staging and dev? And then what's the downtime story as you roll out the changes to your multiple servers that all have to talk to that database? It's like, ah. For me, something like Mongo is just so much more able to adapt. I haven't run a database migration or anything like that for years. You know, people talk about Mongo for its web scale, that there's all this data in there, and that's great. I mean, my database is probably seven gigs of data for Talk Python Training and the podcasts, and that's a non-trivial amount of data. But it's not so much about the data, it's about the flexibility. It's easy to make it fast, and it doesn't require DevOps in the extreme to do it well, you know?
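A small hedged sketch of the flexibility being described, using pymongo with invented collection and field names: a new field is just a richer document, and older documents stay readable with no migration step.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
workouts = client.demo_db.workouts            # database/collection names are made up

# Early documents only track the basics.
workouts.insert_one({"user": "u1", "sport": "run", "minutes": 45})

# Later the app starts recording sodium loss too; no schema migration needed.
workouts.insert_one({"user": "u1", "sport": "bike", "minutes": 60, "sodium_mg": 820})

for doc in workouts.find({"user": "u1"}):
    print(doc["sport"], doc.get("sodium_mg", "sodium not recorded"))
```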

55:19 That was a great point about managing the database; you mentioned DevOps. Mongo also provides the Atlas service, where you have Database as a Service. That was key for us, because then we can pretty much forget about it. We simply connect to Atlas, they manage our cluster, they have the dashboards, everything in place. It was one click of a button to migrate from one version of Mongo to another version of Mongo. Those things are priceless when you're doing quick-paced development. There was another reason, though. In the beginning of this project, PepsiCo asked us to use Azure. That meant we were running our Python services within Azure, and we had to use a database within Azure, and we went for Cosmos DB, because Cosmos DB was compatible with Mongo. If at any point in time we needed to move out of Cosmos DB, we would have Mongo as a fallback. And that's exactly what happened. After a couple of months, there were some political changes within PepsiCo, they decided to go to AWS, out of Azure, and then boom, we have MongoDB in Atlas, and we have the same cluster running alongside the Python services in AWS. Fantastic. So I know we were going to talk about a bunch of APIs, and we spent all our time talking about this one, but I think this is really a super interesting thing to dive into. We can touch on some of the other ones maybe a little bit, but I guess, before we move off of this, what's one of the big takeaways you took from this project? Oh, that's such a big question. There are many things. The biggest, perhaps, is at the core of the agile mindset: after you implement stuff, that's really when you start to understand the problem. It's really, really hard to just design on paper and have it work. So what we did for this project was many, many cycles of software rewrite. When the project started back in 2019, over the weekend when they presented me the problem, I was able, Friday afternoon, Saturday and Sunday, to work on a tiny little prototype solution. I had a timeline, I had a recommendation system, I had recommendations. It was mocked up a little bit, but I was able to cook that up in three days, show that to the client and say, this is more or less what we're going to do, and they really understood what was in there. And then we threw all that away and did the new version, like Troy, many cities built one on top of the other. That's what happened on this API. We were doing microservices, let's do a monolith. We were doing Cosmos, let's do Mongo. We were doing Quart, let's just do Flask. We were doing Cerberus, let's do Marshmallow. Being able to throw things away, and then refactor, the power of refactoring, let us have a system that is always performing, that's manageable, that has minimal technical debt. I think that was the key learning that I was able to fully apply in this long-term project, like two years. It is not common for Work & Co to have these long projects; usually our projects are like three months, six months, maybe a year. But in the case of Gatorade, it's a retainer project, where we're in for the long run with them, and in that case, you really need to evolve the platform. It's like maintenance-driven development, and that actually gives you a different mindset when you're creating: I am going to have to live with this, we're going to have to live with it.
I think that is the biggest difference between what I consider seasoned developers and new developers. We have junior developers that are very competent, very capable, who master new technologies really quickly. But the difference is fully understanding the impact of decisions, not just based on theory, but based on events that may or may not happen, based on what the future may bring, and experience counts a lot in that sense. And the maintenance-driven development mindset, where you know you're the one who is going to do the maintenance, changes things a lot. You think about stability, robustness, test coverage in a completely different way. Totally, it seems like it absolutely would, if you're the one who's got to live with it. Because you go to some of these consulting projects, and they're like, oh, we really want to use this specific database, we want to use this framework, but in this odd way, and you're like, well, they're the customer, they're always right. But if it's going to be you, you're like, yeah, we'd better build this the way we're going to be happy with in a year. You know, that reminds me of a funny quote: always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live, right?

01:00:15 that's good advice.

01:00:17 Yeah, a little bit morbid, but do think about the fact that you'll have to live with it. I guess that speaks to that trend they call boring code, right? There are a bunch of speeches and talks on the internet, people saying, no, write boring code, code that is easy to understand, easy to maintain. There is this mindset that when you're learning things and you're not concerned about production stuff, you are much more bold to try new things, right? Because when the project is done, you move away. So there's no cost, there's no risk, there's no penalty for trying new stuff that may prove inadequate. But when your success measurement considers the maintenance phase, the long-term success of the project, the mindset is a different one. It's: what is stable,

01:01:06 what's guaranteed. If you're chasing the shiny new thing, you could end up saying, well, I'm going to build this six-month project, I'll do that in React, and this one is going to be in Vue, and this one will be in Angular, and this is going to have this back end and that back end, and then all of a sudden you're maintaining every JavaScript front-end framework in the world, and multiple back ends. And yeah, you don't want to be like that. I think the other thing you touched on is really interesting. One of the things that I try to preach a lot is that it often doesn't matter so much how you get started as that you get started on a project. I see a lot of people who get frozen; they're like, well, I just can't decide between Django and Flask or FastAPI, or Postgres or Mongo. And they think, and they think, and they think, and they don't get anywhere. Like you highlighted with your "I did it over the weekend" prototype, a lot of times those people who are stuck, if they just fully internalize that you can just go down one path and then evolve it with refactoring, change it, throw it away, and rewrite it with more knowledge... In the time you spent worrying about what to do, you would probably already have a working example. And then you have so much more information to build from and decide from. I think that's a really important takeaway.

01:02:15 God is in the details. Nobody stumbles over mountains; it's the little things that make you fall. That's a good quote. I

01:02:21 like it. Nice. All right. Well, rather than diving into the other APIs that we were going to cover, I think maybe just this one deep dive into the Gx sweat patch is probably a little bit more constructive and useful, so maybe we'll leave it here. The API is really, really neat work; a lot of moving parts, right? You've got your mobile apps in Xamarin, you've got your ML models, you've got the APIs, the external API integrations, the other app integrations. A lot of stuff going on here, right? So I think it's a pretty good case study.

01:02:52 There is one bit that we didn't mention that I'd like to really quickly mention: we have another engine for product recommendations that uses constraint problem solving. And for this we were reusing a module. A colleague of mine called Gustavo Niemeyer, he is a core developer for Python, now working for Canonical, and he was with me at EuroPython in 2005. At EuroPython 2005 he presented a Python module that we're now using to solve those constraint-based optimization problems. And that was really cool, because it's a really powerful mechanism. And the problem you were solving was that you have all of these needs, right, needs for carbs, needs for protein, needs for hydration, needs for electrolytes. And then we have all of these products that have different combinations of those elements. So what is the optimal mix of products and types of products, and what is the minimum amount that you need to fulfill all your nutritional and hydration needs? And for that we use python-constraint. There are other engines out there, but we wanted a fully Python solution. And it is another gem I want to share with people: it solves a big problem and it's really easy to use. This is python-constraint on PyPI. Yes, exactly, this one you have on screen; the magic squares example looks really cool. It reminds me a little bit of linear programming; what's the difference? Well, you have linear programming, like simplex, where you're kind of going into the plane and trying to optimize numerically; that is one kind of thing. This other one is more about exploring a discrete space of solutions. So you define your variables, the domain of values for each of these variables, and the constraints that validate: is this combination of values a solution to the problem or not? If not, it's refused. So then it searches the space. Yeah, so there's like the chess problem, the rook example here, which clearly is not a continuous problem, it's very discrete; the rook can be in one of, you know, 100 places or whatever on a chessboard, and that's it. Very cool. One final thing on this project here: now that you've created it, how many people are keeping it going and working on it? What was the team size at the beginning and at the end? We talked about the monolith versus microservices, but not really the details that made you decide on the monolith side. That's a great question. So first of all, all this time we were talking about the Gx consumer app, so for people like you and me. We also have another app that talks to the same API, called Gx Teams. And Gx Teams is for practitioners, coaches, personnel, that are managing a group of athletes. This is another thing that Gatorade launched, I guess, this month. Both apps talk to the same API that we call the Gx rec engine. So just for the Gx rec engine, our development team, it's me, John Gomez Nichols, Maya Rodriguez, and Sally Morin. So it's me plus four; that is the whole back end team. For front end, for strategy, business, and product, then it's kind of spread out. But for the things that we talked about, the Python world, we're talking about five people. You know, that's super common. If we look at the Python developer survey from the PSF and go down and look at team size, the average team size 75% of the time was two to seven people.
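To make the constraint-solving idea above concrete, here is a small, hedged sketch using python-constraint. The products, nutrient values, targets, and serving ranges are all invented for illustration; it only shows the general shape of defining variables with discrete domains, adding a validity constraint, and searching the solution space, not the real Gx recommendation model.

```python
# Toy sketch with python-constraint (pip install python-constraint).
# Every number and product name here is made up for illustration.
from constraint import Problem

# Hypothetical products: (carbs in g, sodium in mg) per serving.
products = {
    "gel":   (20, 100),
    "chews": (15, 80),
    "drink": (35, 270),
}
carb_target, sodium_target = 60, 400  # made-up needs for one session

problem = Problem()
for name in products:
    # Each variable is "how many servings of this product", 0 through 4.
    problem.addVariable(name, range(5))

def meets_needs(gel, chews, drink):
    counts = {"gel": gel, "chews": chews, "drink": drink}
    carbs = sum(products[p][0] * n for p, n in counts.items())
    sodium = sum(products[p][1] * n for p, n in counts.items())
    return carbs >= carb_target and sodium >= sodium_target

# Values are passed to the function in the same order as the variable names here.
problem.addConstraint(meets_needs, ("gel", "chews", "drink"))

solutions = problem.getSolutions()
# Among all valid combinations, prefer the one with the fewest total servings.
best = min(solutions, key=lambda s: sum(s.values()))
print(best)
```

The minimizing step happens outside the solver here because python-constraint enumerates valid assignments rather than optimizing; the actual project may handle the "minimum amount of products" goal differently.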

01:06:29 It's within the rule, right? Yeah, and if you get out to what I would call the microservice side, you know, that's maybe 20 people or so, and 20 people or more on a team is only 4% of all software developers doing Python. So keep that in balance when you hear about, you know, how Netflix or Google or Instagram is doing some amazing thing, right? That context doesn't necessarily apply to your context.

01:06:55 And in the breakdown, I worked for Globo.com, which is the biggest news company in Brazil. We had an audience of 4 million people daily going through our websites, etc. And the teams internally were like four or five people, never above seven, because it becomes unmanageable; the communication overhead becomes so high

01:07:20 that it's not worth it. Super interesting. I think we're going to leave it there, but I'm going to ask you the two final questions. Before I do that, let me just call out a couple of comments from the livestream. Black and White said: this video is so great, to be frank, thank you. And Vincent: hello, hello, love your show, keep up the good work please, all your guests are on point. Very interesting. Yeah, thank you all for being here. That's great. All right, Rod, working on these projects and others, if you're going to write some Python code, what editor would you fire up? Well,

01:07:45 that's a great question. So I'm a big fan of PyCharm; I've been using PyCharm forever. It has some features that I love, like the class hierarchy, like usages of functions, the debugging; it's super well polished. So I use PyCharm for professional work. But having said that, I use everything else. I dabble with VS Code and Visual Studio, I use Vim a lot when I'm on the shell doing stuff, and of course I use Jupyter notebooks as well, and sometimes I code stuff straight in notebooks. It's a mix of all of those. Fantastic, I agree with all that. And a notable PyPI package? Well, we mentioned a couple there, right? Like Pint and Pendulum and python-constraint. There was one that I heard on your show and explored a little bit, called Rich. I think it was Brett Cannon that mentioned Rich, but I'm not sure right now. That was really awesome as well, really interesting. So these are my picks.

01:08:52 Yeah, Rich is coming along as quite an interesting project. You know, Will McGugan is doing such interesting work, and it has just super opened up what you can do in the terminal, I think, in a much more approachable way. Have you seen Textual? Textual is like a layout engine for Rich. Oh, that is new to me. So what you can do is break up your terminal to have like a toolbar, a left docking thing, a footer, and then a main area, and then you render into each of those with Rich, and you can even use the arrow keys and, say, only move the main window section. Yeah, there's a lot of cool stuff going on there. Super cool. Yeah, it's awesome. All right, well, final call to action: what's your final advice for people building APIs and thinking about these decisions for their own projects?
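For readers curious about the terminal layout idea Michael describes, here is a minimal sketch using Rich's built-in Layout class, which can already carve the screen into header, sidebar, main, and footer regions; the interactive parts, like docking widgets and reacting to arrow keys, are what Textual layers on top. The region names and contents are purely illustrative.

```python
# Minimal sketch of a header/sidebar/main/footer split with Rich's Layout.
# Names and contents are illustrative; this renders once and exits (no interactivity).
from rich import print
from rich.layout import Layout
from rich.panel import Panel

layout = Layout()
layout.split_column(
    Layout(name="header", size=3),
    Layout(name="body"),
    Layout(name="footer", size=3),
)
layout["body"].split_row(
    Layout(name="sidebar", ratio=1),
    Layout(name="main", ratio=3),
)

layout["header"].update(Panel("Toolbar"))
layout["sidebar"].update(Panel("Left dock"))
layout["main"].update(Panel("Main area"))
layout["footer"].update(Panel("Footer"))

print(layout)  # draws the whole split-up terminal in one shot
```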

01:09:42 Well, one I already have given: it's the hands-on one. Try it out, see how it feels, get your hands dirty; this is crucial in my mind. The other one is to start thinking about maintenance-driven development when you start, because even if it's not you, somebody will have to maintain it. So at least balance the costs and benefits of exploration against the advantages and downsides of using well-established technology. Always look for the zen, the yin and yang, the balance of these forces. Good advice.

01:10:16 Definitely, I second that, Rod. Thanks for being on the show. Thank you so much, Michael. Yeah, you bet. This has been another episode of Talk Python to Me. Our guest on this episode was Rod Senra. It's been brought to you by SENTRY and LINODE, and the transcripts are brought to you by AssemblyAI. Take some stress out of your life: get notified immediately about errors in your web applications with Sentry. Just visit talkpython.fm/sentry and get started for free, and use the promo code talkpython2021 when you sign up. Simplify your infrastructure and cut your cloud bills in half with Linode. Linux virtual machines: develop, deploy and scale your modern applications faster and easier. Visit talkpython.fm/linode and click the Create free account button to get started. Transcripts for this and all of our episodes are brought to you by AssemblyAI. Do you need a great automatic speech-to-text API? Get human-level accuracy in just a few lines of code. Visit talkpython.fm/assemblyai. And to level up your Python, we have one of the largest catalogs of Python video courses over at Talk Python. Our content ranges from true beginners to deeply advanced topics like memory and async. And best of all, there's not a subscription in sight. Check it out for yourself at

01:10:16 training.talkpython.fm. Be sure to subscribe to the show: open your favorite podcast app and search for Python; we should be right at the top. You can also find the iTunes feed at /itunes, the Google Play feed at /play and the direct RSS feed at /rss on talkpython.fm. We're live streaming most of our recordings these days. If you want to be part of the show and have your comments featured on the air, be sure to subscribe to our YouTube channel at talkpython.fm/youtube. This is your host, Michael Kennedy. Thanks so much for listening. I really appreciate it. Now get out there and write some Python code.
