#373: Reinventing Azure's Python CLI Transcript
00:00 Deploying and managing your application after you create it can be a big challenge. Cloud platforms such as Azure have literally hundreds of services. Which ones should you choose? How do you link them together? In this episode, Anthony Shaw and Shane Boyer share a new CLI tool and template they've created for jumpstarting your use of modern Python apps and deploying them to Azure. We're talking FastAPI, Beanie and MongoDB, async and await, Bicep, DevOps, automated CI/CD pipelines and more. Plus, we also get to catch up on other Python work happening that Anthony is involved with. If you're interested in deploying or structuring modern Python applications, you'll find some interesting takeaways from our conversation. This is Talk Python to Me episode 373, recorded May 12, 2022. Welcome to Talk Python to Me, a weekly podcast on Python. This is your host, Michael Kennedy. Follow me on Twitter, where I'm @mkennedy, and keep up with the show and listen to past episodes at 'Talkpython.FM', and follow the show on Twitter via @talkpython. We've started streaming most of our episodes live on YouTube. Subscribe to our YouTube channel over at 'Talkpython.fm/YouTube' to get notified about upcoming shows and be part of that episode.
01:27 This episode is brought to you by Sentry and their awesome error monitoring product, as well as NordVPN. What you do on the Internet belongs to you, not ad companies. Keep your connection private and safe with Nord.
01:41 Transcripts for this and all of our episodes are brought to you by AssemblyAI. Do you need a great automatic speech-to-text API? Get human-level accuracy in just a few lines of code. Visit 'talkpython.fm/assemblyai'. Anthony, Shane, welcome to Talk Python to Me.
01:56 Hey, how are you?
01:57 I'm doing well. It's great to have you here. Yeah, I'm excited to talk about this re-envisioning of how Python works on Azure as a developer story that you all are working on. And from the really quick preview I've seen, it looks really exciting. You must be excited to share it.
02:13 Yeah, it's been an adventure to figure out what's the fastest way to get a developer kind of up and running on Azure and in the cloud without having to learn a whole set of new things. That's kind of the goal here.
02:25 Sure. And Shane, you and I, when we first started talking, we were reminiscing back to Azure in the early days when there were only a couple of services.
02:34 Right.
02:38 Silverlight? Silverlight oh, my goodness. Right? Those were the days. What was that? 2008 ish? Maybe a little earlier.
02:45 Even earlier, yeah, that sounds about right.
02:48 Silverlight. I think we had three actual products, and this was long before I started at Microsoft. And SQL Server was its own portal, so it had its own little space. And now we're at well over 100 different things that we can do in the portal.
03:03 Yeah, I don't know how many folks who are listening have actually gone to the portal and pulled it up and sort of just browsed it. But between Azure and AWS, it's just like a paradox of choice. There are layers of, oh my gosh, the screen is full of icons. Oh, I opened up one and that was a subsection, and now we have, like, deploy features that fill the screen. It's quite the challenge to get up and going, right?
03:27 Yeah. I think if you ask any web developer, hey, you've got code, how do you run this on the cloud? And 'it depends' truly takes full meaning when it comes to that. And before, maybe it was just a slider bar for scaling, right? Like, oh, I want maybe two or three. And now it's like, oh, well, it's based on CPU and this and how the moon is moving. And there's just so many different ways that we can scale a web app or any part of our architecture now.
03:56 In so many areas in which the pressure might be exerted that it needs to scale rather than just CPU.
04:02 Before we get to it though, just like quick introduction for you, I guess. Anthony, people know you, you've been on the show so many times, it's fantastic to have you back.
04:10 Yeah, it's great to be here again.
04:11 Yeah, for sure. It's always good to have you on the show. And when you're not on the show, we're often talking about you, about some project that you're doing, some pet that lives in your IDE or some Comic Sans, all sorts of fun things. Yes.
04:23 Rumors of a golden jacket somewhere.
04:25 Yeah.
04:26 I didn't realize how close you were to the golden jacket. That's amazing. We have to work on that. So just give us a quick catch up and then Shane can introduce yourself. What have you been up to, Anthony? Yeah, Anthony, have we spoken since you moved to Microsoft? I think we have, I'm not 100% sure, but let's assume we haven't. Tell people about what you're up to these days.
04:44 Yeah, so I'm kind of working on advocating for Python within Microsoft and then advocating for Python outside of Microsoft as well. So I'm still doing a lot of open source work, but then within Microsoft, I guess trying to integrate Python more into our products and stuff like that, and also get the Python community and things like that more into how we work, and find out more about how Python is being used across the company and how we can do better as well. So I've been focused on performance and security. They're kind of two things I'm always interested in, but also modern Python applications and how they kind of come into play as well. So yeah, so many things, I couldn't possibly list them all. Over the first year I've been at Microsoft, I've been there over a year now, I made a list and it was like 50 things I think I'd done in the first year. But yeah, it's been a whirlwind, but really fun.
05:37 It has. And you just came back from PyCon. I want to give us a quick report from being on the scene. Did I see you on the big screen, big stage, giving a talk there?
05:45 Yeah, it's my first time doing that. That was fun.
05:48 Congratulations.
05:49 It's terrifying.
05:51 Yeah. So I gave a short update on behalf of Microsoft. I'm also on a diversity and inclusion workgroup at the PSF, and we had like a panel discussion on the stage, and then I also gave a talk, like a full talk at PyCon.
06:06 What was your talk on?
06:07 It was on performance anti-patterns.
06:09 Yeah. That's it. From your perflint project.
06:12 Yeah. So basically I gave the background to the performance linter project that I've been working on and what things I'm looking for in code and why they slow it down. And then just trying to demonstrate what the difference is for people on 3.9 or 3.10. So like a simple one-line code change can make a 60% difference in terms of how quickly the code runs.
06:32 And then you can get into the debate again with people saying, no, list comprehensions are just loops, they don't make it any faster. Like, okay, can we run the benchmarks again? And let's have another... yes, that's great. So that's good work.
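(For readers who want a concrete picture, here is a minimal, hypothetical sketch of the kind of one-line rewrite being described: a loop-and-append turned into a list comprehension. It is not taken from Anthony's talk or from perflint itself, and the actual speedup will vary by Python version and workload.)

```python
# Illustrative only: the kind of micro-level rewrite a performance linter
# like perflint might point out. Real speedups depend on Python version
# and workload.
import timeit


def squares_loop(n):
    # Appending in a loop pays for the attribute lookup and method call
    # on result.append every single iteration.
    result = []
    for i in range(n):
        result.append(i * i)
    return result


def squares_comprehension(n):
    # Same work as a list comprehension, which runs as specialized
    # bytecode without the repeated method-call overhead.
    return [i * i for i in range(n)]


if __name__ == "__main__":
    n = 10_000
    print("loop:         ", timeit.timeit(lambda: squares_loop(n), number=500))
    print("comprehension:", timeit.timeit(lambda: squares_comprehension(n), number=500))
```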
06:44 Last thing, what's the status of perflint? Is it a thing people are using already? Is it still under development?
06:50 Yeah, it's definitely very early beta. It raises a lot of false positives at the moment, but it's raised some really interesting things on production code bases that I run it against. So, for example, our serverless platform is Azure Functions. The Python serverless piece is all written in Python and it uses gRPC for communication. So I'm actually running the performance linter against that code base to look at ways that we can make it faster. And there's a list of stuff that I'm working through with the engineering team. So yeah, trying to put that into practice instead of just keeping it theoretical.
07:24 That's a really good test case actually. And performance, if you can improve performance of the fabric of the cloud, then you've made it better for everyone, right?
07:32 Yeah. So there's like a couple of loops I was looking at which probably get executed hundreds of millions of times a day. So I'm like, okay, what if I can improve that by 10%? Then that's going to make a big difference.
07:42 Yeah, absolutely. Awesome. Well, it seems like you're having a good time there. I'm happy to see you found a new home. Indeed. Shane, welcome to the show. Tell people about yourself. Thanks.
07:51 It's a hard act to follow.
07:53 I know.
07:53 I did perf in my last job, so I appreciate any perf improvements. So yeah, I've been at Microsoft now six years. It'll be six years in the summer, so in about a month or so, and some days it feels like six days, some days it feels like 60 days, others like 60 years. So, yeah. What do I do at Microsoft? I run an end-to-end developer experiences team for Azure and dev. We do work closely with Anthony and other folks on his team too, around just finding out what is hard about running Python and the other languages on Azure, in our tools, VS Code, Visual Studio, and how to get your code on the cloud and all the things that come along with it. Everything from docs to the actual components, the services, and what's that full story and kind of where are those pain points? And then working with those service teams to find out what makes sense to you, what feels like a Python developer should feel. One of the early things that Anthony brought forward was, it's great that we have logs, but it's not how I want to see logs. Right. So I think that makes sense. And again, like you talked about earlier, that's the fabric for a developer. When something goes wrong, we want to fix what it should look like so I can find that problem fast. And those are things that we dig into and report up and help solve on our team.
09:16 Yeah, there's this story of when Scott Guthrie was put in charge of Azure, Scott Guthrie being the guy at Microsoft who was really responsible for a lot of the developer experience, and he took a bunch of the people on the team and had them all sit down and say, okay, get an app on Azure. And it was apparently a real struggle. A lot of people didn't succeed. And he was like, this is the problem. We need to fix this. And I think that made it a lot better in some ways. But it sounds to me like you're kind of doing a microcosm of that with Python, with the two of you.
09:45 Yeah, and we do it for every type of developer, every language stack. And Python is important to us for the very reasons that, like Anthony mentioned, some of our core components are actually written in Python.
09:58 We appreciate that part of what we're doing and how those applications are written. And again, the perf story, it's a very classic story that you bring up, because it's often referenced and probably a core reason why my team exists now, because we have to solve those problems.
10:15 Yeah, sure. Well, like I said, it's fabric, right? And if the fabric is scratchy or itchy, you don't want to wear it, and that's a really big problem. Right. You want to make this as smooth and seamless for people to get it right without bouncing off the walls too badly. For sure. Now, we're going to talk about this project that you all are working on, which is super exciting, about structuring Python applications and deploying them to Azure. Before we do, though, there's some other interesting folks working with you at Microsoft these days. A lot of core developers. Microsoft's doing a lot of stuff with Python these days in terms of the number of core developers. Obviously acquiring GitHub was like a big step into the whole open source direction that you're all taking. But the direct contribution to Python is super interesting. And the most significant one, I guess, that we could talk about is, when was this? A little while ago. We had this big announcement that back in 2018, Guido van Rossum retires as BDFL and that was it. The steering council was created. Governance was up in the air, but then figured out and seems to be really nailed. And then he hung out at home for a while. COVID hit, you couldn't really travel, do too much. Like, you know what, I kind of want to go back and do some interesting stuff. So Python's creator Guido van Rossum joins Microsoft. I talked to him about that some and whatnot, but still very interesting. You guys are working with him. Most recently, I spoke to him and Mark Shannon about the Shannon plan and making CPython five times faster. So Anthony, you want to give us sort of an update on the stuff you see going on? I know you might not be directly involved.
11:54 Yeah, we were actually at PyCon, at the Microsoft booth, with the team that we're talking about. So Guido, Mark Shannon, there are now seven people on that team, all core developers, working full time.
12:09 Guido is part time, but all the others are working full time on the Shannon plan and a whole bunch of other concepts. And what they're doing is basically making changes to CPython core to make it faster, targeting Python 3.11, which will be out in October this year, then 3.12 and 3.13. Some of the ideas are actually penciled in for 3.13.
12:31 Right. This is like a five-year plan that Mark had laid out of, if we could make it 1.5x faster each year, compounding, we'll get fast.
12:40 Yeah, exactly. So some of the fruits are actually coming out in 3.11. So we were actually doing some live benchmarks and stuff at PyCon on different workloads and things like that. I'm seeing a 25% performance gain on most workloads, which is awesome, and it reaches up to 60%. So it depends very much on what your workload is. But yeah, that's 3.11. But I think some of the bigger changes are coming in 3.12. So basically there's a core team of people working full time now on CPython itself, and not a fork of CPython, they're working directly on CPython with the core development team.
13:16 As amazing as the stuff that was done over at, say, Cinder at Instagram, right? Really interesting stuff, but it was kind of like, we forked it, here's a sort of interesting thing we built, take it or leave it, take some ideas, off it goes. It's really different to say, the next time you just apt upgrade or brew upgrade your Python, it just gets better. Chocolatey upgrade, however you do it, right?
13:37 Yeah, definitely. So, yeah, the 3.11 changes already, I think, are going to benefit everybody. And getting people onto the newest version of Python is definitely going to help everyone in the long term anyway, because it's got a lot of other cool features.
13:49 It's pretty remarkable that after 30 years, you can make one of these big step changes of that significant of a performance improvement.
13:57 Yeah, definitely. It's great to have a dedicated team working on this, but they're not the only people working on it. There are engineers from all over and other core developers working on contributions and stuff like that. But it's good that we can sponsor a full-time team to work just on a specific area. So Guido is kind of coordinating that. And then a lot of the ideas come from Mark Shannon's plan that was on the podcast last year.
14:20 Do you know any of the story around, like, the no-GIL type of stuff? There was Eric Snow's subinterpreters. There was, like, Sam Gross's actual no-GIL stuff.
14:30 Yeah, there was actually an open space at PyCon on that specific topic and performance in general. And Sam Gross was there, as well as the Cinder team, the team that works on Pyodide, and a lot of other core developers. And that was discussed in detail. From what I've heard, I think Sam Gross is still working on his no-GIL branch and trying to break it down into smaller chunks that can be merged, like smaller pieces that can be merged individually, because there's quite a number of changes in order to get that whole thing done. But it's still carrying on, even though it was targeted against what is now an older version of Python, or something like that.
15:15 They're trying to get it to 3.9, but as Python continues to march forward, it gets harder and harder to upstream those things.
15:22 Yeah, still very exciting. So the reason I ask is, the work that Mark and team are doing is sort of orthogonal to that no-GIL work. Right. A lot of the stuff he's working on is just make it run faster, single core, and then if you could unlock it for multicore on each core, it's a really nice multiplicative thing. You could easily see Python 20, 40 times faster if you could say, well, you can scale it across ten cores and it got four times faster.
15:48 Yeah. So Eric is still working on his subinterpreters. Mark is conceptually working through the specification, and they're working through specialized compilation as well at the moment, which is partially coming out in 3.11, but then more of that coming out in 3.12.
16:02 Amazing.
16:03 So, yeah, it's going to be just leaps and bounds, I think, in terms of performance difference.
16:07 So exciting. Last thing to ask on this topic and then we'll get to the main topic: Pyjion. Pyjion is somewhere involved in this performance thing, your JIT thing that we've had you on the show before to speak about. And so is that involved in any way or is it sort of a parallel story?
16:21 Yeah, I'm sharing some of that with the team. So things that I learned in Pyjion that worked, what made a difference, and especially in the JIT, like where there were gains to be made. My desire, really, is that the learnings from Pyjion can be part of the future of CPython, and then Pyjion isn't required. So if CPython gets its own JIT, and if some of the other stuff that Pyjion could do was part of CPython, then I think that's a win-win, because you don't have to install something separately if you just get the performance gains out of the box. And that's a win for everyone.
16:52 Yeah, absolutely. Cool. That's really encouraging to hear all those improvements coming. Awesome, thanks. Well, let's start off our conversation here by just talking about deploying to the cloud, right? Your goal really is to make deploying to Azure awesome, but let's just take a step back and talk about deploying to the cloud. When people talk about deployment, let's just say they have a FastAPI, Flask, Django, whatever app, it has a database, they've developed it, and usually it's a huge gap to go from, well, I got it to work on my machine using SQLite and the tutorial, to now I need it to run, and all of a sudden you need to learn about SSL and servers and Nginx and all these things. Like, whoa, whoa, whoa, I don't even know Linux. This is like a big step to take.
17:42 This portion of Talk Python to Me is brought to you by Sentry. How would you like to remove a little stress from your life? Do you worry that users may be encountering errors, slowdowns or crashes with your app right now? Would you even know it until they sent you that support email? How much better would it be to have the error or performance details immediately sent to you, including the call stack and values of local variables and the active user recorded in the report? With Sentry, this is not only possible, it's simple. In fact, we use Sentry on all the Talk Python web properties. We've actually fixed a bug triggered by a user and had the upgrade ready to roll out as we got the support email. That was a great email to write back: hey, we already saw your error and have already rolled out the fix. Imagine their surprise. Surprise and delight your users. Create your Sentry account at 'talkpython.fm/sentry'. And if you sign up with the code talkpython, all one word, it's good for two free months of Sentry's business plan, which will give you up to 20 times as many monthly events, as well as other features. Create better software, delight your users, and support the podcast. Visit 'talkpython.fm/sentry' and use the coupon code talkpython.
18:57 How do you guys... Shane, how do you think about sort of the spectrum of options and how people are doing it?
19:04 It's interesting, because when you are creating that on your own machine, that's the environment that you have to worry about. And even when we were deploying to on-premise machines, at least we could walk over and touch the 4U rack that was there. We knew kind of what it was running on.
19:21 Right. A lot of times you plugged into... somebody had a database for you, you asked them to create the database.
19:28 You ask, you plead for a database, and they give you a connection string, and then that's that, right?
19:34 Here's your connection string and you're on your way. I think the thought now is, how do I set this up? How do I provision it? How do I deploy my code to the stuff that I've now provisioned there? How do I make all the connections between my front end and my middle tier and my back end stuff? How do I secure that with all of my environment variables and connection strings, and is monitoring there? As a developer, for me, I go, that's great, I want to do that one time. And then really I just want to change my code and check in my code. I just kind of want it to run for me. I want to get to that point. And I think even if I have very few components, I've had meetings at companies that lasted three or four weeks just talking about how are we going to set all this stuff up? And the promise of the cloud is, hey, we can do this super fast. And sometimes that's not so much true. It's still very challenging.
20:34 Yes. Previously it was, I need a server in our data center to be provisioned and we've got to order it from Dell or wherever, and wait for it to arrive. And now it's really easy to go to the cloud and get it. But there's a lot of decisions to make.
20:47 Are we getting VMs? And then it's my job to run shell scripts to set up NGINX and other things.
20:55 What's the topography of that? How do I set that up for possibly what if we need to scale the web and or whatever? Maybe we use docker, maybe we use a platform as a service.
21:05 That could be a long conversation, because ultimately it's somebody's responsibility. If it doesn't work out right, you're going to have to come in on the weekend and fix it. Yeah. Or be at least responsible to make sure it keeps running. Right? Yeah.
21:18 Who do I call when it breaks?
21:19 Yeah, exactly. Okay, so where are you seeing people who you're talking to a lot going? We've got, I think, on one far end, like at the very far end, if you turn it to either negative one or eleven, depending on which side you consider this to be on: bare metal. That's very rare these days. But then VMs, and then Docker, Kubernetes, platform as a service, and maybe functions live in there somewhere. Yeah, VMs usually, 100%?
21:45 I think VMs are still very popular with some companies who are just trying to get to the cloud. Right. It's very easy to kind of park your car in somebody else's garage. Right? I think that's okay.
21:58 Yeah, it solves the biggest problem, which is how do I get a reliable Internet connection that's fast, and a server and network infrastructure that I don't have to take care of.
22:05 Yes, I think those are still very viable options for some folks. The PaaS option, again, most companies can still run a very sophisticated system on PaaS. The one thing that I'm seeing right now is that companies, even small companies, they hear Kubernetes, they hear the promise of it. It's scalability, it's responsive, it's self-repairing, scale to zero, like all the buzziness that comes with it. And there are the memes that go around with the tiny box on a tractor trailer, like, I put my blog on Kubernetes. You don't need it, but everybody wants it, and I'm not sure why. And then it's just cost prohibitive in both manpower and management and cognitive load and all of those things. So there's that aspect of it. We want to find a place that is somewhere in between. Like, what if I could have all the promise of Kubernetes but not have to learn Kubernetes? And that's another thing that we're talking about with things like Azure Container Apps and being able to have kind of the best of both worlds.
23:13 Right. And looking forward, I don't want to get into it yet, but just to give people a preview: you guys have built a CLI tool for Python and some templates that kind of help people realize that goal much more quickly than just, all right, well, I guess I'm going to set up a Kubernetes cluster and nodes and all that kind of stuff. Yeah. Okay. Well, Docker is an interesting one. To do Kubernetes, you've got to do Docker.
23:37 You've got to do containers. At least there's the, look how easy it is to run Docker. I just get the image, docker run, off it goes. Unless you've got multiple tiers like many apps do. I've got a database layer, maybe a background worker service for emails and other long-running jobs, and then all of a sudden coordinating the app becomes really hard.
24:01 Yeah, I'd say that where people kind of start off with containerization is the Python app itself. So the Python code, whether that's, like, a WSGI application or using ASGI or something. So that's like Django, Flask, FastAPI. So running that in a container is a great place to start, but hardly ever is the whole application just Flask or Django alone. You need some sort of web server on the front end, like an HTTP server like NGINX or something. And then you need to dispatch to WSGI. So you need Gunicorn, Uvicorn or Hypercorn or one of the other corns to connect between the HTTP front end and the back. And then once you've got that in place, you're like, okay, I need to configure my SSL certificates and my DNS and stuff. So you can do that. But I think people start to jam everything into one container, and that's where it kind of gets to be...
24:58 So it absolutely explodes. Like, all right, it just won't take it anymore. Right. But you preserve that 'just call run on it' as long as you can. Right. People, I imagine, are trying. Yes.
25:07 And they're not supposed to be persistent. Like, containers are supposed to be immutable, but you can attach storage to them, which is where it gets tricky with databases, because really, running something like Postgres in Docker, you can, but it's not going to be particularly fast. And you've got all these extra challenges of, if the image stops, then what did you just lose? So, yeah, I think containerization is great for some of the Python environment complexities. Like, you've got a virtual environment to configure. How was that installed? What version of Python? So it's like all the bits of Python that are specific to getting Python up and running consistently in one place and another. So Docker is great for that, containerizing is great for that. But you often find yourself needing more than one container, which is where things start to get complicated, because then it's like, okay, I've got Redis in there.
26:00 I want to run NGINX in one container, I want to run my app in another. So then how do you, like, coordinate all that stuff, right?
26:10 And just, how do I keep them connected? Right? Because in the regular non-Docker world, you just say my Redis connection string is this, my database connection string is that, NGINX says I route traffic over either this Unix socket or through this TCP socket. But those are not stable as these Docker images come and go separately. Right. It gets tricky to keep them connected still.
26:35 Yeah, there's the connection and the coordination of it, and things like Docker Compose, I think, help with that. Anthony mentioned a very valid point around databases and containers. I think when container development started to kind of hockey stick a little bit, I can't tell you how many times I answered the question, should I run my database in a container? And I was like, well, no.
26:58 And then it was like, well, why not? I was like, okay, here are the 15 reasons why you should never do that.
27:04 I go to my framework and right in the tutorial it shows me how to run Postgres.
27:07 Yeah.
27:08 So I get started, what happens when it dies?
27:12 And they go, oh, no, I lose my data. Yes, don't do that. But they serve very well for emulating those big cloud managed services like Redis and Postgres and stuff like that, which would typically run as a managed service, instead of trying to have your entire world, if you will, running on your local machine. And then the other part of that is, how many is too many?
27:36 The micro services type of scenario of, are you going to run 200 individual containers on your local machine? There is a cap where it's just too much.
27:48 Yeah, for sure. And we're even seeing some sort of swinging of the pendulum, I guess you would call it, back to articles like, give me my monolith back, life just got too hard. And now my personal philosophy, and I'm not suggesting anyone else has to adopt it, but when I think about these things, like microservices versus monoliths and Docker and Kubernetes versus more simple things, is I try to keep the complex parts in the areas that I'm really good at and not push them to areas that I have little experience with. Like, I don't have a great DevOps background, so I don't want to push the complexity down into DevOps. I'd rather keep the DevOps simple, because I can handle complex code, but I can't handle complex DevOps. Not right now, anyway. So for me, I kind of try to think of the balance of what works for me.
28:34 I literally saw an example where somebody was saying, I manage all of my configuration in its own repo and then that gets sucked into my DevOps pipeline. I was like, what is happening?
28:47 I'm not even going to talk about that. I'm sure that works for you, but like I said, unless you really understand that level of complexity, and if you
28:58 specialize in that area, then maybe that's exactly your secret sauce. Yeah, but if you don't, don't just see someone else doing that and go, I should do that because it's working for them. Maybe, but it's not a clear, I should just go that way. I think that's the story. Yes.
29:13 And I think, just like in the coding world, we can use things like interfaces and polymorphism to the nth degree, and for simplistic examples, programmers are going, why are you doing that? Because I can just do it in a single file, thanks.
29:28 Exactly. Why do we have, like, a dependency injection registry 50 lines long? Like, I really just don't... it's just
29:35 not... it's hello world, man.
29:36 Yeah, exactly.
29:38 It's just a management script. Right. All right, so maybe that probably sets the stage a little bit for the work that you two have been doing, this project we're going to talk about. But one more predecessor bit of history. So one of the notable things about the Azure CLI, that is the CLI that everyone uses when they're not working in the crazy, bladed, very full management portal, is that it's built in Python, right?
30:02 Yes, it is.
30:03 Yes. But it's not that that actually makes any difference for Python people. It's just an interesting detail. But that one is not focused as much on helping developers get their code out as maybe helping the IT and DevOps side of the world. DevOps on Azure, right?
30:18 I would say its primary goal is what's referred to as kind of management or ops plane functionality. There are some capabilities in there for pushing up simplistic web applications. There's a web app command where I can kind of get a simple page up, and there are some static web apps capabilities within that command. But when you get into a full kind of job-to-be-done for a developer-focused type of activity, it does not serve that type of persona, sure.
30:50 All right. Well, that brings us to your project. Does your project have a name? Just so people know, at the time of us talking about this, this is not yet released, but at the time people are going to be listening to it, it will be released.
31:03 Yeah, sure.
31:04 I'm kind of behind the scenes. Maybe I can pull up your screen here and start from there.
31:09 We'll call it, like, lowercase Azure Developer CLI, okay? Because if it's uppercase, I think that means it has a name. So we'll say it's the lowercase Azure Developer CLI. It's a standalone install, and the command is azd, 'A-Z-D' or 'A-Zed-D', depending on where in the world you're from.
31:27 Anthony.
31:28 Much of the rest of the English-speaking world?
31:31 Yeah, everywhere but the US.
31:33 And its primary goal is to make it easy for developers to get up and running with both infrastructure and code in Azure, based on, at least initially, some out-of-the-box templates to help establish kind of a getting-started to-do app. In this particular example, we have a to-do application that's got a Python FastAPI middle tier with a ReactJS front end, and then the back end is supported with Azure Cosmos DB with the Mongo API.
32:12 Right. So the way it's going to go now is the Mongo API can be pointed at Cosmos DB, your document database in Azure, right? Correct, correct.
32:24 Let me ask another really quick question on that. What's the interaction with the Mongo API? Is there an ODM they're using? Is it just PyMongo, or rather Motor, or something like that?
32:34 Yeah. This app was built with an ODM. It was built with Beanie.
32:37 I love Beanie. I converted the Talk Python and Python Bytes sites over to it. The really big one left for me is the training site, which is massive, but it's getting some Beanie on it as well.
32:48 Yeah. So for a fully async app on FastAPI, Beanie is a great option, because it's like async from end to end, and it uses the async Motor client for talking to Mongo. Super fast. So that's what we built for the To Do app, which is like the demo application.
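(As a rough illustration of the stack being described, FastAPI with Beanie over the async Motor client, here is a minimal sketch. The model, database name, and connection string are placeholders, not the template's actual code.)

```python
# Hypothetical sketch: FastAPI + Beanie (async ODM) over Motor.
# Model, database name, and connection string are placeholders.
from beanie import Document, init_beanie
from fastapi import FastAPI
from motor.motor_asyncio import AsyncIOMotorClient


class TodoItem(Document):
    # Beanie documents are Pydantic models, so FastAPI can use them
    # directly for request and response bodies.
    name: str
    completed: bool = False


app = FastAPI()


@app.on_event("startup")
async def init_db() -> None:
    # Works against local MongoDB or Cosmos DB's Mongo API in Azure;
    # only the connection string would change.
    client = AsyncIOMotorClient("mongodb://localhost:27017")
    await init_beanie(database=client.todo_db, document_models=[TodoItem])


@app.get("/todos")
async def list_todos() -> list[TodoItem]:
    return await TodoItem.find_all().to_list()


@app.post("/todos")
async def create_todo(item: TodoItem) -> TodoItem:
    await item.insert()
    return item
```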
33:08 This portion of Talk Python to Me is sponsored by NordVPN. I've been a paying and happy NordVPN customer for over a year now. So when they approached us to become a sponsor of the podcast, I was excited, because it's a product I've already been recommending. I use NordVPN almost universally throughout the day on all my devices, whether it's my Mac, my iPhone, or my iPad. I enable the Auto Connect feature and Nord keeps my connection protected and ad free. I'm sure you've heard that VPNs can keep your traffic private on public networks, and that's true. But let me tell you why I use NordVPN: privacy and malware protection. First, privacy. Ad companies are slowly eroding our privacy. Shadow profiles are being built for you and being built for me by combining tracking scripts, ISP data, and data from data brokers. If these were just being used for commercial ads, it'd be one thing, but we've all heard stories about how groups have been targeted to affect negative social outcomes. Think Cambridge Analytica. With Nord's built-in, network-wide ad blocking and IP hiding, you'll limit the data that all of these players get to collect on you. What's so sweet about using Nord for this is it works across all of your apps. Not just a browser plug-in, but even native apps on your phone can't contact or load most ads. These same ad networks have been hijacked to deliver malware. Nord also includes network-level malware protection as an added layer of safety. And Nord has a great offer for you. Use 'talkpython.fm/nordvpn' to get a massive discount on a two-year plan that includes a free month. Nord is also risk-free. There's literally no risk to you with their 30-day money-back guarantee. Give it a try, and if, like me, you love it, great. If you don't, they'll issue a refund and you can pretend the entire situation never happened. Say no to being manipulated by ad companies and enjoy the free and open Internet on all of your devices. Visit 'talkpython.fm/nordvpn' to get your subscription started today.
35:11 Yeah. So let me see if I can summarize this for folks before we dive into more detail. Basically, you guys have built this full stack, I guess full stack fits, full stack FastAPI, document database, JavaScript front end app that sort of natively integrates into Azure in the ways that you would expect. Not just you can get it up there and get it to run, but it's got different sections. It uses a hosted database, it integrates with CI/CD, it has tests that plug into all those kinds of things and so on. And so you can take that and sort of publish that to Azure. But then, of course, you can just use it as a prototype to say, well, we don't need this, we need this other thing, so we'll swap out whatever. Yeah, right. Something like that. Yeah.
35:54 I would say there's a couple of key components. We do all of our commands, again, command line based. We focused on a CLI-first approach to this for a couple of reasons. It feels natural for a lot of developers who kind of are on the terminal constantly, but it also allows, if VS Code wants to build an experience on top of it, if PyCharm wanted to build an experience on top of it, they can, because they just call into those same hooks.
36:21 But also, can we consume the CLI as a Python library?
36:26 Well, that's a good question, because we are also looking at making this an extension inside of the core Azure CLI. So we have actually wrapped this as a Python extension for them.
36:39 Interesting.
36:40 Yeah.
36:40 You can always subprocess it all day long if you want.
36:42 Yeah, but it's written in Go.
36:44 Okay, got it. Yeah.
36:45 So it's super lightweight. It's like five and a half megs. It's really small.
36:50 And once you have the binary, you have it. That is one of the true beauties of Go.
36:56 Yeah. The other part of it, like you mentioned, if there are pieces of the app... if I back up one section here, a lot of the samples that we come across, they are a hello world. They're a very simplistic app. And once you kind of go through the process, when you're all done with it, you're like, okay, this is great. I built my hello world
37:17 app.
37:17 Now what?
37:18 Right?
37:18 This is an opinionated structure that allows you to swap out components, build upon it. Like, I can take out the FastAPI if I want to use Flask or Django or whatever, I can swap that out, and I could swap in Postgres if I'd like. We have infrastructure as code. Right now, we're using Bicep to do that, and in the future we'll support things like Terraform and other IaC providers. And that's just how we would swap out any of the infrastructure.
37:45 Right.
37:45 This particular sample, we are targeting Azure Container Apps as our target host. But we do support PaaS, and in the future, also things like Kubernetes.
37:56 Yes. And also in terms of how cloud native it is... if you don't scroll away just yet, come back.
38:03 Yeah, no worries. In terms of just how cloud native it is, how much does it reach into all those things? Basically four areas that are interesting. Azure Container Apps, right? So you've got, Anthony, let me know, but it sounds like you've got maybe an NGINX type of container, and then you've got one that runs Uvicorn FastAPI workers.
38:25 Yes. There are two containers in this example, but yeah, Azure Container Apps is more where you've got a collection of containers that form an application, like if you'd put that in a Docker Compose or something, and then we kind of spin those up for you and manage that for you. So you don't have to think about or plan things like Kubernetes, and it does SSL certificates and DNS and everything else for you.
38:48 Nice. You don't have to worry about Let's Encrypt and stuff.
38:50 Yeah, it does all that for you.
38:52 Nice. And then hosted Cosmos DB.
38:55 Yes. The Cosmos DB is the document database on Azure. And when you deploy, you can choose which API you wanted to have. You can pick the Cosmos API or you can pick a Mongo API. So if you pick the Mongo API, then you can use your existing Mongo tools and clients with it.
39:12 Like Beanie and so on. Yeah, exactly.
39:15 That would just work.
39:16 Okay. And then monitoring, Azure Monitor. This is like Sentry type stuff, right? Like, is it up? Is it running into errors? Does it also do performance, or just sort of errors?
39:26 Yeah, it will do all of your calls, basically. It does tracing between all of the different containers or different components of the app. You can look at telemetry between those calls. How long is the call taking to the database?
39:39 You can look at the individual calls, see where the errors are, trace those down to, like, it was a GET call on the to-do collection, and actually look at those and then introspect those inside of Azure Monitor. It's pretty detailed.
39:52 Yeah, that's really nice. I use that stuff all the time for my sites. If I run into a problem, probably the first place I go is the actual log. But if it's not super clear right away, I'm like, all right, let's go to the monitoring and see the local variables and for sure and see what was going on for real. And then the last one is Secrets. It is nice to just check in your API keys into GitHub. I don't understand why I heard you're not supposed to.
40:19 No, I understand why you're not supposed to.
40:21 Yeah. Key Vault is really great in the sense of, yeah, it is the vault, if you will, where we keep the connection string for the Mongo database. And then within the actual FastAPI app, we can then connect to the Key Vault to pull that out securely. And then really, the nice thing about Key Vault is if we need to change it, we can just change that one key and not have to kind of redeploy all the other apps, which is great. And then from a local development story there, we use environment variables to have that locally, as opposed to passing it around or keeping it in a GitHub repo, of course.
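(A small, hypothetical sketch of the pattern Shane describes, environment variables locally and Key Vault in the cloud, using the azure-identity and azure-keyvault-secrets packages. The variable names and secret name are made up for illustration.)

```python
# Hypothetical sketch of the secrets pattern described above: an
# environment variable for local development, Azure Key Vault when
# deployed. Names and the secret identifier are placeholders.
import os

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient


def get_mongo_connection_string() -> str:
    # Local development: a plain environment variable, never checked in.
    conn = os.environ.get("MONGO_CONNECTION_STRING")
    if conn:
        return conn

    # Deployed: pull the secret from Key Vault. DefaultAzureCredential
    # picks up the app's managed identity automatically in Azure.
    vault_url = os.environ["AZURE_KEY_VAULT_ENDPOINT"]
    client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())
    return client.get_secret("mongo-connection-string").value
```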
40:59 Got it.
41:00 The app is kind of built in a way where we said, if we were building a production app, this is how we'd do it. So, like Shane said, the example app is opinionated, because we've picked how we've configured Python versions and environments and how we've done the testing and how the ASGI configuration works and stuff like that. But it's done in a way that says, okay, this is a production-style web app that we put together, and here's how you would deploy it using this new azd CLI, the new Azure Developer CLI. And the other important thing is that you don't have to learn all these new concepts. So it's not like we've said, okay, we've got our own configuration language that we're going to throw at you, we've got our own YAML files you need to write or stuff like that. It tries to keep it as native as possible. So in the Python application, in the web app, there's a Dockerfile and there's a pyproject.toml. And if you want to run the Dockerfile locally, you can do that.
41:59 One of the opinions that you're choosing is, like, using Poetry, for example, right?
42:02 Yeah. So using Poetry to manage the dependencies and, I guess, pinning dependencies and creating things like lock files. But also, if you open the demo repo in VS Code, you can run and debug the app locally as well, so you don't have to figure out all the extra complexity. So, yeah, we kind of really thought, okay, let's write a production-type application using all the normal tools we would use, which is like Dockerfiles and pyproject.toml and requirements files. And then the front end app is in React. So we've got our normal project and NodeJS configuration and stuff, right?
42:41 All the NPM stuff.
42:42 Yeah. And then what would the developer need to describe that in a way that they can be deployed up to the cloud and trying to make that as simple as possible?
42:51 Sure. Yeah. So Shane, you spoke about Bicep as a way to get your things up, and I think it might be worth touching a little bit on the Bicep story. Sure. That's usually arm wrestling for me, but I'm thinking Bicep is like Ansible or Terraform, but it's kind of an Azure-native thing, right, for DevOps.
43:13 Most Azure DevOps folks would understand if we said, hey, what does your ARM template look like?
43:18 Azure Resource Management?
43:19 Manager.
43:20 Templates, yeah.
43:21 Lots of JSON, lots of JSON, thousands of lines of JSON. Not easy to write, read, or kind of understand. Bicep is a simpler format and kind of self-describing, almost. So we use that right now to describe the resources that we're going to provision and deploy our app to. And in this particular case, in this template, and we have a number of templates, but in this template, we're putting together a container registry, we're provisioning the Container Apps environments, the web apps, the Mongo database, a lot of things that if you did those individually, it would take a lot of time to do. So we're doing that all as part of the one single-line command. And we're looking at implementing other IaC providers like Terraform and Pulumi as well, if that's what makes you happy in your place. We're not hiding anything in what we're doing. We're more of an orchestrator of the tools. Instead of hiding some secret commands to make all this happen, we like folks to kind of see what the steps are to do it. We're just going to do the steps for you. Simple. Press the button. Press the easy button.
44:33 Yeah, absolutely right. That's great. So maybe, Anthony, now would be a good time for you to sort of talk us through some of the code and the projects, because I think that will give people a sense of what they're getting in terms of what this app looks like.
44:47 Yeah, so the demo app that we put together has got two main containers: the front end, which is the React JS web app, which is running under Node 16, and then a FastAPI API, which is basically the middle tier between the front end and the database in the back end. So the React JS one is an app that we wrote to demonstrate a lot of functionality, a to-do management app, basically.
45:15 But in terms of the FastAPI app...
45:16 The canonical example that people may try.
45:19 Yeah, exactly. And the FastAPI one is the one I worked on with the team. And that's really kind of looking at, okay, if we did a modern Python application, how would we write it and how would we deploy it? And like I said, using Poetry for requirements management and stuff, but you could use whatever. This is an example. You don't have to use Poetry, but I'm just showing the latest approach and the latest design with the application. And then if you want to swap out or change bits of it, obviously you can do that. The project itself has got a pyproject.toml. We're using FastAPI, Uvicorn, and then Beanie is the ODM, and then a nice package structure.
45:58 Real quick, just tell people what Beanie is. I've had Roman on the show before, but maybe not everyone knows.
46:06 Yeah. So if you're working with FastAPI, often you would describe the models that the API reads or writes or reflects using something like Pydantic. So these are kind of your data classes. So Beanie basically allows you to write Pydantic-style models, data classes, and then read and write those from a Mongo database. So this app is basically written in a way that the to-do lists, the tasks and stuff like that, are all reflected in a models file. And then Beanie does the work of actually putting those in the database. So we have a to-do list, we can also do things like to-do items, and each of those is a document, but they're written in a way that's very similar, basically identical, to how you'd write a Pydantic model. Beanie also allows you to write just...
46:55 A slightly different base class. Yeah, yeah.
46:57 Beanie also allows you to lazily reflect Beanie models into Pydantic models. So when you're working with FastAPI, you can get all that nice functionality of using Pydantic, but you get a lot of the performance of basically trying to keep it as close to the actual document in Mongo as possible. So, yeah, that's one big challenge people have to overcome when they use stuff like Pydantic, which is, when do you put stuff into Pydantic models? Like, if you're reading 1,000 rows from the database and you're just going to give that straight to the user, there's no point in reflecting all that into Pydantic and then sending it back out again, right?
47:34 Doing all the conversions or whatever craziness, yeah, it just slows it down. Okay, cool. And this is a really good choice because it matches the native Mongo API and it matches FastAPI on at least two levels. Pydantic models are all about driving the data exchange and the OpenAPI specification, which is fantastic. But then also Beanie is an async ODM, so it allows you to fully leverage the scalability of FastAPI.
48:00 I think it's a great choice.
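(To make the projection idea concrete, here is a hedged sketch using Beanie's projection support to return lightweight Pydantic models instead of full documents. The model names are invented, and this is not the demo app's code.)

```python
# Hypothetical illustration of projecting Beanie documents into slimmer
# Pydantic models, so large result sets aren't fully hydrated. Names are
# invented for this example.
from typing import List, Optional

from beanie import Document
from pydantic import BaseModel


class TodoItem(Document):
    name: str
    completed: bool = False
    notes: Optional[str] = None


class TodoSummary(BaseModel):
    # Only the fields the API actually returns.
    name: str
    completed: bool


async def list_summaries() -> List[TodoSummary]:
    # MongoDB only sends back `name` and `completed`, and Pydantic only
    # validates those two fields per document.
    return await TodoItem.find_all().project(TodoSummary).to_list()
```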
48:01 It's a nice configuration and it's nice to run as well. It's pretty responsive. And then what we did on the app itself in FastAPI, there are a couple of things that you have to do: configuring CORS, which is always fun. And then we've put tracing in the app as well.
48:16 I just ran into a CORS error on just an HTML file I opened up. Like, there's no server, I can't do CORS. Please don't do this.
48:25 Yeah, it becomes a bit of a challenge.
48:27 It does.
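(For reference, a minimal sketch of what CORS configuration looks like in FastAPI using its CORSMiddleware; the allowed origin is a placeholder, and the demo app's exact settings may differ.)

```python
# Minimal sketch of FastAPI CORS configuration; the allowed origin is a
# placeholder for wherever the React front end is served from.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://my-frontend.example.com"],  # placeholder origin
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```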
48:28 So on FastAPI, we've been doing a lot of work over the last year on a project called OpenTelemetry. It's a cross-company, open source collaboration to create basically a tracing and eventing framework across multiple languages. So you can use OpenTelemetry in Go, Python, and so on, and basically install it.
48:50 Does it connect into the thing that Shane was talking about, with the Azure monitoring?
48:54 Yes, it does. And it also connects into a whole bunch of other monitoring tools. It's not the Azure monitoring library for Python. It is an agnostic library.
49:03 Nice.
49:03 It's got support for FastAPI. It also has support for lots of other Python components. So when you get the actual logging data, for example, if your app crashed or somebody made a request which gave a 500 error, in Azure Monitor you'd get the full stack trace and you'd get all the events that led up to that as well. So it's not just a log file. Basically, we're actually putting stuff in the Python app to get all the tracing information. You can also use it to see, like, performance regressions and slow pages or slow requests. So in Azure Monitor, you can actually go and see what are the slowest requests I've had to the application and what was the cause of that. Yeah, and none of that stuff is proprietary. It's all basically using OpenTelemetry, which is open source, but the special sauce we have is the exporter. So we export OpenTelemetry events to Azure Monitor.
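(A rough sketch of the wiring Anthony describes: instrumenting a FastAPI app with OpenTelemetry so each request produces trace spans. The exporter here just prints spans to the console; in the template, an Azure Monitor exporter would be plugged in at the same spot. This is illustrative, not the template's actual code.)

```python
# Rough, illustrative OpenTelemetry setup for a FastAPI app. Spans are
# printed to the console here; an Azure Monitor exporter (the "special
# sauce" mentioned above) would be swapped in at the BatchSpanProcessor
# line in a real deployment.
from fastapi import FastAPI
from opentelemetry import trace
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

app = FastAPI()

# Configure a tracer provider with an exporter.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# Auto-instrument every route so incoming requests become trace spans.
FastAPIInstrumentor.instrument_app(app)


@app.get("/ping")
async def ping() -> dict:
    return {"ok": True}
```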
49:53 Okay. Yeah. This all looks super nice. And the reason I wanted you to talk through this is the project looks really nice. It looks like an app that I would like to use as a starting place for my final destination, rather than just, oh, cool, there's a main.py, an app.py, it's all just jammed in there. It feels like a good starting point.
50:14 Yeah. And then, like I mentioned, debugging is set up already. So yeah, in VS Code, you can either debug the React app or the API, the FastAPI app, and that will run the whole application locally.
50:27 Does that run just on your local machine or does that fire up the containers?
50:32 It just runs on your local machine.
50:34 Okay. Yeah.
50:35 It would run FastAPI locally.
50:36 I give that a thumbs up. Yeah. So if you wanted to, for example, debug the front end, you just go start the back end and then go debug the front end. Something like that, right?
50:45 Yeah. Just trying to keep it super simple.
50:46 Yeah, no, that's good.
50:47 And then we also wrote tests for both components. So yeah, the To Do app comes with its own unit tests for FastAPI and then for the front end as well. And all of that is set up in VS Code; they're all pytest tests. So if you just want to run pytest over it, you can. But yeah, asynchronous FastAPI tests are a bit fiddly to set up the first time. So we've done all that as a demo as well.
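(One hedged example of the "fiddly" async test setup being referred to, using pytest-asyncio and httpx's ASGI transport; the app import path is a placeholder.)

```python
# Hedged example of an async FastAPI test with pytest-asyncio and httpx;
# the application import path is a placeholder.
import pytest
from httpx import ASGITransport, AsyncClient

from my_app.main import app  # placeholder import for the FastAPI app


@pytest.mark.asyncio
async def test_list_todos_returns_ok() -> None:
    # Drive the ASGI app in-process; no real server or network needed.
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url="http://test") as client:
        response = await client.get("/todos")
    assert response.status_code == 200
```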
51:12 Yeah. This is great. And one area that we haven't talked about yet, Shane: when you deploy this, and we can talk about how to do that in just a second, it automatically sets up, at least with one of those CLI commands, CI/CD, continuous integration and continuous delivery or deployment. And these tests that Anthony is talking about, these automatically just start running on check-ins for you. Right. That whole lifecycle is connected here.
51:39 Yeah. Anthony, I don't know if you could maybe scroll up and touch on the GitHub Actions that are included there. So with every template that we're providing out of the box, we include the GitHub Actions in order to run those. So the builds will actually provision, deploy, and we would include the test run as well as part of the container build if it's targeting containers. Or if it's PaaS, we would have the test command, which is not in this particular one, but it would be like azd test would be the command that would run. It would run through all the tests that are in there, depending on the platform.
52:18 It gets us to that point. Like I said, as a developer, I just want to check in code and know that my tests are going to run. If they pass, it deploys to the environment that's specified and it gets me to a happy place as a developer.
52:32 Right.
52:34 You don't have to know about this stuff. And to some degree that might not be 100% true. Right. Like, if your code is running somewhere, you need to have some level of understanding, even if you don't have to directly touch it. Yeah, but I think one of the big benefits is, for a lot of people, you can start running there and you can kind of grow into a better, deeper understanding. You don't have to swallow the whole, I learned all of the Linux configuration all in one shot, just to get it to even start.
52:59 Yeah, I think it's important to mention a couple of times that even though we have a command... in order to get this whole architecture that Anthony just walked through, if I wanted to get this into Azure, I would just run azd up and then pass in the name of the template repo, and it would then deploy all of that and run it for me.
53:22 Yeah, let's talk about that. We've got the app, and we talked about running and developing it locally. Right. Now what? I actually want to get it up and running. I want CI/CD, I want all the things. Yeah.
53:33 So if I started from nothing, if I just opened up VS Code or my command line or whatever, I'm in a terminal, I could just run azd up and then pass in a template. And in this particular case, it would be like the to-do Python ACA Mongo one, and that would clone that repo. It would then start to provision those resources on Azure based on your login to Azure, and then use the Bicep infrastructure definitions to create that target host, whether it's PaaS or Azure Container Apps, and then build and deploy the API and the front end, and then make all those connections and so on, as we walked through how that's all put together.
54:16 Right, so it's worth thinking about those Bicep DevOps commands and configuration. If you want a slight variation on what this gives you, you change the Bicep and then azd up just uses your slight variation, right?
54:29 Well, yeah, exactly. One of the services that is very common to use in our apps nowadays is Redis. If I want to add Redis and make a couple of changes, I could just put that definition in my Bicep code, add in the environment variables that are necessary to expose to my app, and call up, and then we would push them into Key Vault, provision the service, redeploy the code, and hopefully, if we typed it all right, it would happen. Right. So that would be the way to do that. Or, if I run the azd pipeline command and have that establish my GitHub repo and kick off those workflows and the GitHub Actions, at that point, I could just make those changes to the Bicep files and check those in, and then the workflow would kick off that process for me.
55:16 That's cool. Can I start from code and then do this? Or do I do the template to create the code and the GitHub repo, if I already have a GitHub repo, for example?
55:24 Yeah, that's a good question. We have some documentation and walkthroughs on how to, what we call, DevOps your project. And basically it will walk you through how to set up that infra folder. That infra folder will contain the Bicep definitions. And we've got an Azure YAML file which will hold a couple of the kind of naming structures that we have as an opinionated way to name things, and then also set up that target host. Again, is that PaaS, is it App Service, or is it Container Apps or AKS? So a little bit of setup, and then you can start using azd up or azd deploy. That one just deploys the app, to take your code and push it onto the platform.
56:08 Okay, that sounds really good. What about the talk about the continuous delivery part? So I've got this created. I've got a GitHub repo. It's up and running. How do I associate a domain name, by the way? First?
56:20 Well, the domain name, we would push it onto Azure and then create...
56:25 ...that maps your app to some GUID dot Azure...
56:29 Or something like that. Blah, blah, blah dot azurewebsites.net.
56:32 Then that would be part of that configuration inside of the Azure Portal, or through the management plane, where you'd actually go through associating your domain name with whatever your entry point is. In this case, it's going to be the React front end.
56:48 Right.
56:48 So I would go into that particular app service and set that up with your DNS and such there.
56:53 Yeah, you probably want the API if you want to surface an API out of FastAPI, and then you want the React front end, obviously, for most people. Yeah.
57:02 And if you want to get into things like that, one of the pieces you could add is Azure Front Door or API Management or something like that in front of those components, too.
57:14 Okay. But that step is like a separate step. You go in there and you can do it because how often do you really want to have a thing messing with your DNS? As little as possible.
57:23 The one time shot. That's all I want to do.
57:25 Yeah. Please wait 24 to 48 hours for this to propagate. Like, no. It's...
57:30 ...funny, I haven't had a DNS, knock wood, I haven't had a DNS change take longer than a few minutes nowadays.
57:37 But yeah, it is a lot better than it used to be.
57:40 I just changed all of our email and stuff around and there's been a lot of MX records and the verification keys.
57:48 No matter how many times you do it, you're sure you did it wrong.
57:51 Yeah, that's for sure. So back to my original train of thought: let's just sort of wrap this up with the continuous delivery. I've got the app up. Now we know how to get the domain associated with it and whatnot. Presumably go and buy a domain wherever we buy domains, point it at it, let it map over. But then I make some changes and I git push a thing. What happens now?
58:12 Yeah, if you set up your CI/CD pipeline, it would then run through that same process we showed in the GitHub Actions here. It would run your tests, do the deployments. We do support multiple environments, so we can help set up like a dev or a QA environment as well, other...
58:31 ...than just a single one, like a staging sort of thing that people can use. Yeah, and then you could set up some...
58:36 ...processes within Azure, like, hey, this passes, let me do an IP switch, or however you manage that in the platform based on your scenario. But yeah, we get to that point where we're just checking in code and...
58:48 ...having it processed. What's the branching structure look like? If I just push to main, is that going to go live, and do I have to work on a dev branch to avoid that? Or is there like a prod branch?
58:58 Yeah, you can set it up in your GitHub Action. Right now, the template is just going to work.
59:03 If I don't do anything, what happens?
59:05 It's main branch.
59:06 Main branch goes straight to production. I love it. You all are just carefree, doing it live. The users are the testers. Let's go do it live.
59:16 Okay, got it. But you would just tweak your GitHub action YAML file and change your branch name or something.
59:22 Yeah, you could set some conditionals in the GitHub action based on the environments that are coming in.
59:26 Cool. Anthony, what were you going to say?
59:28 Yeah, it assumes a single branch strategy.
59:31 Or you can tell it to generate the template for you, and you can put that template wherever. It's pretty easy nowadays to say with GitHub which branch and stuff this should apply to, or that this pipeline should only run on pull requests.
59:46 My recommendation to people is that you keep main highly protected. You don't let people push directly to main; it can only be merged into, and then it has to be reviewed and so on. I think keeping a clean main branch is a good strategy anyway.
01:00:01 You can have a feature branch or release branch separate from that. So probably the main branch would be your sort of live dev environment, and then maybe you want a feature branch or a release branch separate from that, but using the same templates. So all you're really changing is the targeted environment names.
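As a sketch of the kind of conditional being described here, a GitHub Actions workflow can be restricted to particular branches in its trigger block; the branch names below are illustrative, not taken from the template:

```yaml
# Deploy only on pushes to main; run checks (but not deploys) on pull requests targeting main
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
```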
01:00:19 Yeah. Okay. That sounds like good advice. All right, guys, well, we're getting a little short on time now. This looks like a really interesting project. I love the technical choices on the back end that you made to sort of create building blocks for people. I guess we could wrap it up real quickly with: we've got this more DevOpsy, management-like CLI than what people have used previously. If they're doing Python stuff and they kind of want this container-hosted world, this is probably the recommended way, at least from you all. Yes?
01:00:48 If you want to get started, yes. Fantastic. Anything else you want to show us?
01:00:54 No, nothing from my side. It's a template that...
01:00:57 ...you can kind of build upon, whatever you want.
01:01:00 Oh, no.
01:01:02 This is a great way to get started.
01:01:03 I thought he was just getting tired because it's late where he is, but no. All right, well, Anthony, I'm sure it's not going to come as a big surprise given all of your current work and stuff, but I'll ask you the final two questions first, and then we'll hopefully get Shane back shortly. If you're going to write some Python code, what editor are you using these days? Still definitely VS Code, but tell people about the font.
01:01:24 Comic Sans Mono. So it's a Comic Sans font, but in monospace.
01:01:29 Awesome. Is it a nerd font?
01:01:30 I think there's a Nerd Font flavor of it. I haven't configured my terminal to use Comic Mono yet, because I think that'd be going a bit far.
01:01:38 It looks like madness.
01:01:39 It looks better than you think.
01:01:40 It looks good. It looks way better than you would think Comic Sans could look. And it's really weird to be...
01:01:46 I had to make a DNS joke.
01:01:48 You did. You took yourself offline, but you're back and just in time to answer the question. If you write some Python code, what editor are you using these days?
01:01:56 I use VS Code.
01:01:57 Right?
01:01:57 I don't know. I use VS Code for taking notes.
01:02:01 In Markdown? Everything's in Markdown or whatever, in VS Code?
01:02:06 Yeah, all my notes these days are in Markdown. If it's not like a Google Doc or Zoho doc, something like that, it's definitely in Markdown.
01:02:12 I think it was like three years ago, I was in a meeting with Chris Dias, who is kind of the owner of VS Code, and he pulled up his screen and started taking notes in VS Code. And I was like, I'm an idiot. I should be doing that.
01:02:25 Yes. Just go to the bottom right and change that little language mode.
01:02:31 Notepad is dead to me. Yeah.
01:02:33 The problem I have at the moment is I've probably got too many extensions. I just realized this morning I have 99 now, nearly three figures.
01:02:44 There might not be room in the UI to just display that number. It might stop at two digits. Just kidding. Yeah. That's awesome.
01:02:51 I'll tell you, here's some irony. The first time I ever wrote Python, and it feels like 100 years ago, was actually to write a Sublime add-in to enable .NET IntelliSense for .NET Core. So I was on the OmniSharp team, writing the add-ins for that. So completely not Python related, but I was using Python to enable .NET in Sublime back in the day.
01:03:18 That's cool. Yeah. So very meta. Using the editor to write the editor. All right. And then notable PyPI package. Anything you want to give a shout out to. I mean, we definitely mentioned a bunch.
01:03:28 Of fun ones, but yeah, I'd say Beanie and Perflint, which is one of mine. Check out Perflint if you want to, and check out Beanie as well. It's a really nice approach to document databases in asynchronous front ends.
01:03:42 Yeah, especially if you're doing FastAPI.
01:03:44 I was going to say I used to struggle with document databases and Pydantic, and Beanie made my life a whole lot better.
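For anyone who hasn't tried it, here's a minimal sketch of Beanie alongside FastAPI; the model, collection name, routes, and connection string are made up for illustration and are not from the template:

```python
from beanie import Document, init_beanie
from fastapi import FastAPI
from motor.motor_asyncio import AsyncIOMotorClient

app = FastAPI()


class TodoItem(Document):
    # Beanie documents are Pydantic models, so FastAPI validation and serialization come for free
    title: str
    done: bool = False

    class Settings:
        name = "todo_items"  # MongoDB collection name


@app.on_event("startup")
async def init_db() -> None:
    # Local placeholder connection string; on Azure this would come from configuration or Key Vault
    client = AsyncIOMotorClient("mongodb://localhost:27017")
    await init_beanie(database=client.todo_db, document_models=[TodoItem])


@app.post("/items", response_model=TodoItem)
async def create_item(item: TodoItem):
    await item.insert()
    return item


@app.get("/items", response_model=list[TodoItem])
async def list_items():
    return await TodoItem.find_all().to_list()
```

The appeal, as the hosts note, is that the same Pydantic models drive both the API schema and the MongoDB documents.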
01:03:52 Yeah, I think we all concur: Beanie, definitely a good call. Yeah, absolutely. All right, guys, final call to action. People want to get started with this. Once it's out, what do they do? When they're listening to this, it will be out, so not to confuse folks.
01:04:07 I've got a link that folks can go and check this out with. It'll obviously be in the show notes. It's a short link, it's an aka.ms link, and it's try-aca-python, and they can see the project template, sign up for our preview, check out the repos, et cetera.
01:04:23 Cool.
01:04:23 Yeah, it looks like a neat project, and definitely, if people are doing Azure, it supercharges you into a ton of best practices.
01:04:31 Yeah, for sure.
01:04:32 Well, nice work, and thanks for joining me to talk about it.
01:04:34 I appreciate the time. Thanks.
01:04:35 Yeah, you bet. Bye. Bye.
01:04:38 This has been another episode of Talk Python to me. Thank you to our sponsors. Be sure to check out what they're offering. It really helps support the show.
01:04:46 Take some stress out of your life. Get notified immediately about errors and performance issues in your web or mobile applications with Sentry. Just visit 'talkpython.fm/sentry' and get started for free. And be sure to use the promo code talkpython, all one word. Say no to being manipulated by ad companies, and enjoy the free and open Internet. Get NordVPN on all your devices, set auto-connect, and relax. Visit 'talkpython.fm/nordvpn' to get your risk-free subscription started today. Want to level up your Python? We have one of the largest catalogs of Python video courses over at Talk Python. Our content ranges from true beginners to deeply advanced topics like memory and async. And best of all, there's not a subscription in sight. Check it out for yourself at 'training.talkpython.fm'. Be sure to subscribe to the show: open your favorite podcast app and search for Python. We should be right at the top. You can also find the iTunes feed at /itunes, the Google Play feed at /play, and the direct RSS feed at /rss on talkpython.fm.
01:05:52 We're live streaming most of our recordings these days. If you want to be part of the show and have your comments featured on the air, be sure to subscribe to our YouTube channel at talkpython.fm/youtube. This is your host, Michael Kennedy. Thanks so much for listening. I really appreciate it. Now get out there and write some Python code.