00:00 What is the status of serverless computing and Python in 2024?
00:04 What are some of the new tools and best practices?
00:06 Well, we're lucky to have Tony Sherman, who has a lot of practical experience
00:11 with serverless programming on the show.
00:14 This is "Talk Python to Me," episode 458, recorded January 25th, 2024.
00:20 (upbeat music)
00:25 Welcome to "Talk Python to Me," a weekly podcast on Python.
00:38 This is your host, Michael Kennedy.
00:39 Follow me on Mastodon, where I'm @mkennedy, and follow the podcast using @talkpython,
00:44 both on fosstodon.org.
00:47 Keep up with the show and listen to over seven years of past episodes at talkpython.fm.
00:52 We've started streaming most of our episodes live on YouTube.
00:56 Subscribe to our YouTube channel over at talkpython.fm/youtube to get notified about upcoming shows
01:02 and be part of that episode.
01:04 This episode is brought to you by Sentry.
01:06 Don't let those errors go unnoticed.
01:07 Use Sentry like we do here at Talk Python.
01:09 Sign up at talkpython.fm/sentry.
01:13 And it's brought to you by Mailtrap, an email delivery platform that developers love.
01:18 Try for free at mailtrap.io.
01:22 Tony, welcome to Talk Python to me.
01:24 - Thank you.
01:25 Thanks for having me.
01:26 - Fantastic to have you here.
01:27 Gonna be really fun to talk about serverless.
01:30 You know, the joke with the cloud is, well, I know you call it the cloud,
01:34 but it's really just somebody else's computer.
01:36 But we're not even talking about computers, we're just talking about functions.
01:39 Maybe it's someone else's function.
01:40 I don't know, we're gonna find out.
01:41 - Yeah, I actually, I saw a recent article about server-free.
01:45 Somebody recently trying to, yeah, move away from servers completely.
01:48 Yes, yeah, because as you might know, serverless doesn't mean actually no servers.
01:53 - Of course, of course.
01:55 Server-free, all right.
01:56 So we could just get the thing to run on the BitTorrent network.
02:01 Got it, okay.
02:01 - Yeah.
02:02 - I don't know, I don't know, we'll figure it out.
02:05 But it's gonna be super fun.
02:06 We're gonna talk about your experience working with serverless.
02:10 We'll talk about some of the choices people have out there and also some of the tools that we can use
02:15 to do things like observe and test our serverless code.
02:19 Before that though, tell us a bit about yourself.
02:22 - Sure, so I'm actually a career changer.
02:26 So I worked in the cable industry for about 10 years, doing a lot of different things,
02:34 from installing, the knock-at-the-door cable guy, to working on more of the outside plant.
02:40 But at some point, I was seeing the limits of the career path there.
02:46 And so my brother-in-law is a software engineer and I had already started going back to school,
02:54 finishing my degree and I was like, okay, well maybe I should look into this.
02:57 And so I took an intro to programming class.
03:00 It was in Python and that just led me down this path.
03:05 So now for the past four years or so, been working professionally in the software world,
03:10 started out in a QA role at an IOT company.
03:14 And now, yeah, doing a lot of serverless programming in Python these days.
03:20 Second company now, one that does some school bus safety products.
03:24 - Interesting, very cool.
03:26 - Yeah, yep.
03:27 But a lot of Python and a lot of serverless.
03:29 - Well, serverless and IOT, feel like they go pretty hand in hand.
03:34 - Yes, yep.
03:36 Yeah, another thing is with serverless is when you have very like spiky traffic,
03:42 like if you think about school buses that you have a lot coming on twice a day.
03:47 - Exactly, like the 8 a.m. shift and then the 2:30 to 3:00 shift.
03:52 - So yeah, that's a really good use case for serverless is something like that.
03:57 - Okay, are you enjoying doing the programming stuff?
04:01 Versus the cable stuff?
04:02 - Absolutely.
04:03 Sometimes I live in Michigan, so I look outside and look at the snow coming down
04:08 or these storms and yeah, I just, yeah, I really, some people are like, you don't miss being outside?
04:14 I'm like, maybe every once in a while, but I can go walk outside on a nice day.
04:19 - You can choose to go outside.
04:21 You're not required to go outside in the sleet or rain.
04:24 - Yeah.
04:25 - Yeah, absolutely.
04:26 We just had a mega storm here and just the huge tall trees here in Oregon
04:31 just fell left and right.
04:33 And there's in every direction that I look, there's a large tree on top of one of the houses
04:39 of my neighbors, maybe a house or two over.
04:42 But it just took out all the, everything that was a cable in the air was taken out.
04:46 So it's just been a swarm of people who are out in 13 degree Fahrenheit,
04:50 negative nine Celsius weather.
04:52 And I'm thinking, not really choosing to be out there today probably.
04:56 Excellent.
04:57 Well, thanks for that introduction.
04:59 I guess maybe we could, a lot of people probably know what serverless is,
05:02 but I'm sure there's a lot who are not even really aware of what serverless programming is, right?
05:08 - Yes.
05:09 - Let's talk about what's the idea, what's the zen of this?
05:13 - Yeah.
05:14 So yeah, I kind of made the joke that serverless doesn't mean there are no servers,
05:18 but, and there's, hopefully I don't butcher it too much, but it's more like functions as a service.
05:25 There's other things that can be serverless too.
05:28 Like there's serverless databases or a lot of different services that can be serverless,
05:35 meaning you don't have to think about like how to operate them, how to think about scaling them up.
05:40 You don't have to spin up VMs or Kubernetes clusters or anything.
05:46 You don't have to think about that part.
05:48 It's just your code that goes into it.
05:51 And so yeah, serverless functions are probably what people are most familiar with.
05:55 And that's, I'm sure what we'll talk about most today.
05:58 But yeah, that's really the idea.
06:01 You don't have to manage the server.
06:05 - Sure.
06:06 And that's a huge barrier.
06:07 I remember when I first started getting into web apps and programming and then another level
06:14 when I got into Python, because I had not done that much Linux work, getting stuff up running, it was really tricky.
06:21 And then having the concern of, is it secure?
06:24 How do I patch it?
06:25 How do I back it up?
06:26 How do I keep it going?
06:28 All of those things, they're non-trivial, right?
06:31 - Right.
06:32 Yeah, yeah.
06:32 There's a lot to think about.
06:33 And if you like work at an organization, it's probably different everywhere you go too,
06:39 that how they manage their servers and things.
06:42 So putting in some stuff in the cloud kind of brings some commonality to it too.
06:46 Like you can learn how the Azure cloud or Google cloud or AWS, how those things work
06:53 and kind of have some common ground too.
06:55 - Yeah, for sure.
06:58 Like having, also feels more accessible to the developers in a larger group,
07:04 in the sense that it's not a DevOps team that kind of takes care of the servers
07:08 or production engineers where you hand them your code.
07:11 It's a little closer to just, I have a function and then I get it up there
07:15 and it continues to be the function, you know?
07:17 - Yeah, and that is a different mindset too.
07:19 You kind of see it all the way through from writing your code to deploying it.
07:24 Yeah, without maybe an entire DevOps team that you just kind of say, here you go, go deploy this.
07:32 - Yeah.
07:33 In my world, I mostly have virtual machines.
07:37 I've moved over to kind of a Docker cluster.
07:40 I think I've got 17 different things running in the Docker cluster at the moment,
07:45 but both of those are really different than serverless, right?
07:48 - Yeah.
07:49 - Yeah, so it's been working well for me, but when I think about serverless,
07:53 let me know if this is true.
07:55 It feels like you don't need to have as much of a Linux or server or sort of an ops experience
08:03 to create these things.
08:05 - Yeah, I would say like you could probably get away with like almost none, right?
08:09 Like at the simplest form, like with like AWS, for instance, their Lambda functions,
08:16 you can, and that's the one I'm most familiar with.
08:19 So forgive me for using them as an example for everything.
08:22 There's a lot of different serverless options, but you could go into the AWS console
08:29 and you could actually write your Python code right in the console, deploy that.
08:36 They have function URLs now.
08:38 So you could actually have like, I mean, within a matter of minutes, you can have a serverless function set up.
08:44 And so, yeah.
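To make that concrete, here's a minimal sketch of the kind of handler you could paste into the Lambda console. This is illustrative only; the function URL itself gets wired up in the console, and the `name` query parameter is just an example:

```python
import json

# A minimal AWS Lambda handler. Lambda calls this with the request event
# and a runtime context object; for a function URL, query parameters show
# up under "queryStringParameters" in the event.
def lambda_handler(event, context):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```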
08:45 - AWS Lambda, right?
08:47 That's the one. - Yes, yep.
08:48 - Lambda being, I guess, a simple function, right?
08:50 We have Lambdas in Python.
08:51 They can only be one line.
08:53 I'm sure you can have more than one line in the AWS Lambda.
08:56 - Yeah, there are limitations though with Lambda that are definitely some pain points
09:02 that I ran into, so.
09:04 - Oh, really?
09:04 Okay, what are some of the limitations?
09:05 - Yeah, so package size is one.
09:09 So if you start thinking about all these like amazing packages on PyPI, you do have to start thinking about
09:17 how many you're gonna bring in.
09:19 So, and I don't know the exact limits off the top of my head, but it's, yeah, pretty quick Google search
09:26 on their package size.
09:28 It might be like 50 megabytes zipped, but 250 when you decompress it, if you do a zip-based deploy,
09:35 and then they do have containerized Lambda functions that go up to like a 10 gig limit.
09:41 So that helps, but.
09:42 - Interesting, okay.
09:43 - Yeah, yeah, those ones used to be less performant, but they're kind of catching up to where they're,
09:49 that was really on something called cold starts, but they're getting, I think, pretty close to it,
09:56 not being a very big difference whether you dockerize or zip these functions,
10:02 but yeah, so when you start just like pip install and everything, you've got to think about
10:07 how to get that code into your function and how much it's gonna bring in.
10:13 So yeah, that definitely was a limitation that I had to quickly learn.
10:19 - Yeah, I guess it's probably trying to do pip install -r effectively.
10:24 - Yeah.
10:25 - And it's like, you can't go overboard with this, right?
10:28 - Right, yeah, yeah.
10:29 When you start bringing in packages, like maybe like some of the scientific packages,
10:35 you're definitely gonna be hitting some size limits.
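As a rough illustration of that constraint, here's a sketch of a pre-flight check of a deployment artifact against the limits mentioned above (the exact numbers are worth verifying in the AWS docs; this helper is made up for illustration):

```python
import io
import zipfile

# Zip-based deploy limits as discussed above; double-check current AWS docs.
ZIPPED_LIMIT_MB = 50
UNZIPPED_LIMIT_MB = 250

def within_lambda_limits(zip_bytes: bytes) -> bool:
    """Rough pre-flight check of a zipped deployment package."""
    zipped_mb = len(zip_bytes) / 1024 / 1024
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        # Sum the uncompressed sizes of every file in the archive.
        unzipped_mb = sum(info.file_size for info in zf.infolist()) / 1024 / 1024
    return zipped_mb <= ZIPPED_LIMIT_MB and unzipped_mb <= UNZIPPED_LIMIT_MB
```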
10:38 - Okay, and with the containerized ones, basically you probably give it a Docker file
10:42 and a command to run in it, and it can build those images before and then just execute and just do a Docker run.
10:49 - Yeah, I think how those ones work is you store an image on like their container registry,
10:55 Amazon's, is it ECR, I think.
10:58 And so then you kind of point it at that and yeah, it'll execute your like handler function
11:06 when the Lambda gets called, so.
11:08 - This portion of Talk Python to Me is brought to you by Multi-Platform Error Monitoring at Sentry.
11:15 Code breaks, it's a fact of life.
11:17 With Sentry, you can fix it faster.
11:20 Does your team or company work on multiple platforms that collaborate?
11:24 Chances are extremely high that they do.
11:27 It might be a React or Vue front-end JavaScript app that talks to your FastAPI backend services.
11:32 Could be a Go microservice talking to your Python microservice, or even native mobile apps
11:38 talking to your Python backend APIs.
11:41 Now let me ask you a question.
11:43 Whatever combination of these that applies to you, how tightly do these teams work together?
11:47 Especially if there are errors originating at one layer, but becoming visible at the other.
11:53 It can be super hard to track these errors across platforms and devices,
11:57 but Sentry has you covered.
11:58 They support many JavaScript front-end frameworks, obviously Python backend, such as FastAPI and Django,
12:04 and they even support native mobile apps.
12:07 For example, at Talk Python, we have Sentry integrated into our mobile apps
12:11 for our courses.
12:12 Those apps are built and compiled in native code with Flutter.
12:15 With Sentry, it's literally a few lines of code to start tracking those errors.
12:19 Don't fly blind.
12:20 Fix code faster with Sentry.
12:22 Create your Sentry account at talkpython.fm/sentry.
12:25 And if you sign up with the code TALKPYTHON, one word, all caps, it's good for two free months of Sentry's business plan,
12:32 which will give you up to 20 times as many monthly events, as well as some other cool features.
12:36 My thanks to Sentry for supporting Talk Python to me.
12:39 - Yeah, so out in the audience, Kim says, "AWS does make a few packages available directly
12:47 just by default in Lambda." That's kind of nice.
12:49 - Yeah, yep.
12:50 So yeah, Boto3, which if you're dealing with AWS and Python, you're using the Boto3 package.
12:57 And yeah, that's included for you.
12:59 So that's definitely helpful, and any of its transitive dependencies would be there.
13:04 I think botocore used to even include like requests, but then I think they eventually dropped that
13:11 with some like SSL stuff.
13:12 But yeah, you definitely, you can't just like pip install anything and not think of it,
13:19 unless depending on how you package these up.
13:21 So. - Sure.
13:22 Sure, that makes sense.
13:23 Of course they would include their own Python libraries.
13:25 Right?
13:26 - Yeah, and it's not a, yeah, it's not exactly small.
13:29 I think like botocore used to be like 60 megabytes, but I think they've done some work to really get that down.
13:37 So.
13:37 - Yeah, yeah, that's, yeah, that's not too bad.
13:40 I feel like botocore, Boto3, those are constantly changing, like constantly.
13:45 - Yeah, yeah, well, as fast as AWS adds services, they'll probably keep changing quickly.
13:52 - Yeah, I feel like those are auto-generated maybe, just from looking at the way the APIs look,
13:58 you know, the way they're written.
14:00 And so. - Yeah, yeah.
14:01 That's probably the case, yeah.
14:04 I know they do that with like their, their infrastructure-as-code CDK, it's all like TypeScript originally,
14:11 and then you have your Python bindings for it and so.
14:14 - Right, right, right, right.
14:15 I mean, it makes sense, but at the same time, when you see a change, it doesn't necessarily mean,
14:19 oh, there's a new important aspect added. It's probably just, I don't know if
14:23 people have actually pulled up the console for AWS, but just the amount of services that are there.
14:30 And then each one of those has its own full API, and a little bit of one of those changed,
14:34 so they regenerated it, but it might be for some part that you never, never call, right?
14:38 Like you might only work with S3 and it's only changed, I don't know, EC2 stuff, right?
14:44 - Right, yep, exactly.
14:46 - Yeah, indeed.
14:47 All right, well, let's talk real quickly about some of the places where we could do serverless, right?
14:53 You've mentioned AWS Lambda.
14:55 - Yep.
14:56 - And I also maybe touch on just 1 million requests free per month.
15:00 That's pretty cool. - Yeah, yeah.
15:02 So yeah, getting like jumping into AWS sometimes sounds scary, but they have a pretty generous free tier.
15:08 Definitely do your research on some of the security of this, but yeah, you can, a million requests free per month.
15:16 You probably have to look into that a little bit because it's, you have your memory configurations too.
15:22 So there's probably, I don't know exactly how that works within their free tier, but you're charged like,
15:28 with Lambda at least it's your like invocation time and memory and also amount of requests.
15:34 So yeah.
15:37 - I'm always confused when I look at that and go, okay, with all of those variables,
15:41 is that a lot or a little, I know it's a lot, but it's hard for me to conceptualize like,
15:45 well, I use a little more memory than I thought.
15:47 So it costs like, wait a minute, how do I know how much memory I use?
15:50 You know, like. - Yeah.
15:51 - What does this mean in practice?
15:52 - And it's actually, yeah, it's billed by how you configure it too.
15:55 So if you say I need a Lambda with 10 gigs of memory, you're being billed at that like 10 gigabyte price threshold.
16:04 So there is a really, a really cool tool called AWS Lambda Power Tuning.
16:13 So yeah, what that'll do is it creates a Step Functions state machine in AWS.
16:19 Yeah, I think I did send you a link to that one.
16:21 So the Power Tuning tool will create a state machine that invokes your Lambda
16:29 with several different memory configurations.
16:31 And you can say, I want either the best cost optimized version or the best performance optimized version.
16:37 So, and that'll tell you, like, it'll say, okay, yeah, you're best with a Lambda configured at,
16:44 you know, 256 megabytes, you know, for memory.
16:48 So, sorry, yeah, for the link, it's, this is Power Tools.
16:54 This is a different amazing package.
16:56 Maybe I didn't send you the Power Tuning one.
16:58 I should, okay, sorry.
16:59 - It's news to me.
17:01 I'll look and see. - Okay, sorry, yeah.
17:03 And they have similar names.
17:04 - Yeah, there's only so many ways to describe stuff.
17:07 - Right, yeah, okay.
17:08 They have it right in their AWS docs, yep.
17:10 - And it is an open source package.
17:11 So there's probably a GitHub link in there, but yeah.
17:14 And this will tell you like the best way to optimize your Lambda function,
17:19 at least as far as memory is concerned.
17:22 So, yeah, really good tool.
17:24 It gives you a visualization, gives you a graph that will say like, okay, here's kind of where cost and performance meet.
17:31 And so, yeah, it's really excellent for figuring that out.
17:36 Yeah, at least in AWS land.
17:39 I don't know if some of the other cloud providers have something similar to this,
17:43 but yeah, it's definitely a really helpful tool.
17:48 - Sure, yeah.
17:49 Like I said, I'm confused and I've been doing cloud stuff for a long time
17:52 when I look at it.
17:53 - Yeah, so, well, there's some interesting things here.
17:55 So like you can actually have a Lambda invocation that costs less with a higher memory configuration
18:04 because it'll run faster.
18:05 So you're, I think Lambda bills like by the millisecond now.
18:09 So you can actually, because it runs faster, it can be cheaper to run.
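A back-of-the-envelope sketch of that tradeoff. The per-GB-second price below is hypothetical, just to show the shape of the math; check current AWS pricing for real numbers:

```python
# Rough Lambda cost model: you pay for allocated memory times duration.
PRICE_PER_GB_SECOND = 0.0000167  # hypothetical rate, for illustration only

def invocation_cost(memory_mb: int, duration_ms: int) -> float:
    # GB-seconds consumed by a single invocation.
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND

# If doubling memory more than halves the runtime, the bigger configuration
# can actually cost less per invocation:
slow = invocation_cost(512, 800)    # 512 MB running 800 ms
fast = invocation_cost(1024, 350)   # 1024 MB running 350 ms
```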
18:14 So. - Well, that explains all the rust that's been getting written.
18:17 - Yeah, yeah.
18:18 - There's a real number behind this.
18:21 I mean, we need to go faster, right?
18:24 Okay, so, I think maybe AWS Lambda is one of the very first ones as well
18:29 to come on with this concept of serverless.
18:32 - Yeah, I don't know for sure, but it probably is.
18:36 And then, yeah, your other big cloud providers have them.
18:38 And now you're actually even seeing them come up with a lot of like Vercel has some type
18:46 of serverless function.
18:47 I don't know what they're using behind it, but it's almost like they just put a nicer UI
18:53 around AWS Lambda or whichever cloud provider that's potentially backing this up.
18:58 But yeah.
18:59 - They're just reselling their flavor of somebody else's cloud, yeah.
19:04 - Yeah, it could be because, yeah, Vercel obviously they have a really nice suite
19:09 of products with a good UI, very usable.
19:12 So, yeah.
19:13 - Okay, so Vercel, some of them people can try.
19:16 And then we've got the two other hyperscale clouds, I guess you call them.
19:20 Google Cloud has serverless, right?
19:22 - Yep.
19:23 - Okay, so.
19:24 - I'm not sure which ones, they might just be called Cloud Functions.
19:27 And yeah, Azure also has.
19:31 - They got Cloud Run and Cloud Functions.
19:33 I have no idea what the difference is though.
19:35 - Yep, and yeah, Azure also has a serverless product.
19:39 And I'd imagine there's probably like even more that we're not aware of, but yeah,
19:45 it's kind of nice to not think about setting up servers for something, so.
19:52 - I think maybe, is it FaaS?
19:53 Yeah, Function as a Service, let's see.
19:55 - Yeah.
19:56 - But if we search for FaaS instead of PaaS or IaaS, right?
20:01 There's, oh, we've got Almeda, Intel.
20:04 I saw that IBM had some.
20:06 Oh, there's also, we've got Digital Ocean.
20:10 I'm a big fan of Digital Ocean because I feel like their pricing is really fair
20:14 and they've got good documentation and stuff.
20:16 So they've got functionless, sorry, serverless functions that you can use. I don't use these.
20:24 - Yeah, I haven't used these either, but yeah.
20:27 And yeah, as far as costs, especially for small personal projects and things
20:32 where you don't need to have a server on all the time, they're, yeah, pretty nice if you have a website
20:39 that you need something server side where you gotta have some Python, but you don't need a server going all the time.
20:44 Yeah, it's-
20:45 - Okay, like maybe I have a static site, but then I want this one thing to happen
20:49 if somebody clicks a button, something like that.
20:51 - Yeah, yeah, absolutely.
20:53 Yep, you could be completely static, but have something that is, yeah, yeah, that one function call that you do need, yeah.
21:00 - Exactly.
21:01 And then you also pointed out that Cloudflare has some form of serverless.
21:05 - Yeah, and I haven't used these either, but yeah, I do know that they have some type of,
21:11 functions as a service as well, so.
21:15 - I don't know what frameworks for languages, they let you write them in there.
21:19 I use bunny.net for my CDN, just absolutely awesome platform.
21:25 I really, really love it.
21:26 And one of the things that they've started offering, I can get this stupid, completely useless cookie banner
21:30 to go away, is they've offered what they call edge compute.
21:35 - Oh, yeah, okay.
21:37 - What you would do, I don't know where to find it, somewhere maybe, but basically the CDN has 115,
21:44 120 points of presence all over the world where, this one's close to Brazil,
21:49 this one's close to Australia, whatever.
21:52 But you can actually run serverless functions on those things, like, so you deploy them,
21:57 so the code actually executes in 115 locations.
22:01 - Yes, yeah.
22:02 - Probably Cloudflare or something like that as well, but I don't know.
22:05 - Yeah, AWS has, they have like Lambda at edge, at the edge, so that's kind of goes hand in hand
22:13 with their like CDN CloudFront, I believe, yeah.
22:17 So they have something similar like that, where you have a Lambda that's gonna be,
22:22 perform it because it's distributed across their CDN.
22:26 - Yeah, CDNs, that's a whole nother world.
22:28 They're getting really advanced.
22:30 - Yeah, yeah.
22:31 - Yeah, so we won't, maybe that's a different show, it's not a show today, but it's just the idea of like,
22:38 you distribute the compute on the CDN, it's pretty nice.
22:42 The drawback is it's just JavaScript, which is okay, but it's not the same as--
22:47 - Right, yes, yeah.
22:51 - Wonder if you could do PyScript.
22:51 - Oh, yeah, that's an interesting thought, yeah.
22:54 - Yeah, we're getting closer and closer to Python in the browser, so.
22:57 - Yeah, my JavaScript includes this little bit of WebAssembly, and I don't like semicolons, but go ahead and run it anyway.
23:04 - Yeah.
23:05 - Out in the audience, it looks like CloudFlare probably does support Python, which is awesome.
23:10 - Yeah, yeah, there's so many options out there for serverless functions that are, yeah,
23:16 especially if you're already in, if you're maybe deploying some static stuff
23:21 over Cloudflare or Vercel, yeah, it's sometimes nice just to be all in on one service.
23:29 - Yeah, yeah, it really is.
23:30 Let's talk about choosing serverless over other things, right, you've actually laid out two really good examples,
23:37 or maybe three even with the static site example, but I've got bursts of activity.
23:43 - Yeah, that's definitely--
23:44 - Right, and really, really low, incredibly, incredibly low usage other times, right?
23:51 - Yeah, yeah, you think of like, yeah, your Black Friday traffic, right?
23:54 Like you, to not have to think of like how many servers to be provisioned
24:00 for something like that, or if you don't know, I think there's probably some like,
24:06 well, I actually know there's been like some pretty popular articles about people
24:09 like leaving the cloud, and yeah, like if you know your scale and you know,
24:16 you know exactly what you need, yeah, you probably can save money by just having
24:22 your own infrastructure set up, or, but yeah, if you don't know, or it's very like spiky, you don't need to have a server
24:31 that's consuming a lot of power running, you know, 24 hours a day, you can just invoke a function as you need, so.
24:39 - This portion of Talk Python to Me is brought to you by Mailtrap, an email delivery platform that developers love.
24:48 An email sending solution with industry best analytics, SMTP and email API SDKs for major programming languages
24:56 and 24/7 human support.
24:59 Try for free at mailtrap.io.
25:02 - Yeah, there's a super interesting series by David Heinemeier Hansson of Ruby on Rails fame
25:09 and from Basecamp about how Basecamp has left the cloud and how they're saving $7 million
25:16 and getting better performance over five years.
25:18 - Yeah, yeah.
25:19 - But that's a big investment, right?
25:21 They bought, they paid $600,000 for hardware, right?
25:26 - Yeah, yeah.
25:27 - Only so many people can do that.
25:28 - Right, and you know, you gotta have that running somewhere that, you know, with backup power and, yeah.
25:36 - Yeah, so what they ended up doing for this one is they went with a service called Deft,
25:42 cloud hosting, which is like white glove, white glove is the word I'm looking for,
25:49 where it just looks like it's your hardware, but they put it into a mega data center.
25:54 And there's, you know, they'll have the hardware shipped to them and somebody will just come out
25:58 and install it into racks and go, here's your IP.
26:00 - Right, yeah.
26:01 - Like a virtual VM or a VM in a cloud, but it takes three weeks to boot.
26:09 - Right, yeah, yeah.
26:12 - Which is kind of the opposite, it's almost, I'm kind of diving into it because it's almost
26:16 the exact opposite of the serverless benefits, right?
26:20 This is insane stability.
26:22 I have this thing for five years.
26:25 We have 4,000 CPUs we've installed and we're using them for the next five years
26:30 rather than how many milliseconds am I gonna run this code for?
26:33 - Right, exactly, yeah, yeah, yeah.
26:35 It's definitely the far opposite.
26:37 And so, yeah, you kind of, you know, maybe serverless isn't for every use case,
26:42 but it's definitely a nice like tool to have in the toolbox and yeah, you definitely,
26:47 even working in serverless, like if you're, yeah, eventually you're gonna need like maybe
26:52 to interact with the database that's gotta be on all the time, you know, it's, yeah, there's a lot of,
26:57 it's a good tool, but it's definitely not the one size fits all solution, so.
27:02 - Yeah, let's talk databases in a second, but for, you know, when does it make sense to say,
27:07 we're gonna put this, like if let's suppose I have an API, right, that's a pretty,
27:11 an API is a real similar equivalent to what a serverless thing is, like,
27:16 I'm gonna call this API, things gonna happen, I'm gonna call this function, the thing's gonna happen.
27:19 Let's suppose I have an API and it has eight endpoints, it's written in FastAPI or whatever it is.
27:24 It might make sense to have that as serverless, right?
27:27 You don't wanna run a server and all that kind of thing.
27:29 But what if I have an API with 200 endpoints?
27:32 Like, where is the point where like, there are so many little serverless things,
27:35 I don't even know where to look, they're everywhere, which version is this one?
27:38 You know what I mean?
27:38 Like, where's that trade off and how do, you know, you and the people you work with
27:42 think about that?
27:43 - Yeah, I guess that's a good question.
27:47 I mean, as you start like, you know, getting into these like micro services,
27:52 how small do you wanna break these up?
27:54 And so there is some different thoughts on that.
27:58 Even like a Lambda function, for instance, if you put this behind an API,
28:03 you can use a single Lambda function for your entire REST API, even if it is,
28:12 you know, 200 endpoints.
28:13 So- - Okay.
28:15 - Yeah. - So you put the whole app there and then when a request comes in,
28:18 it routes to whatever part of your app?
28:20 - Theoretically, yeah.
28:21 Yeah, so there's a package called Powertools for AWS.
28:28 AWS Lambda Powertools for Python.
28:30 Yeah, I know, yes.
28:31 Yeah, I know the similar name.
28:32 Yeah, so they have a really good like event resolver.
28:36 So you can actually, it almost looks like, you know, Flask or some of the other Python web frameworks.
28:44 And so you can have this resolver, whether it's, you know, API gateway and in AWS
28:49 or different, they have a few different options for the API itself.
28:54 But yeah, in theory, you could have your entire API behind a single Lambda function,
29:02 but then that's probably not optimal, right?
29:04 So you're, that's where you have to figure out how to break that up.
29:09 And so, yeah, they do like that same, the decorators, you know, app.post or, yeah.
29:17 Yeah, and your endpoints and you can do the, with the, have them have variables in there
29:23 where maybe you have like ID as your lookup and it can, you know, slash user slash ID
29:29 is going to find your, find, you know, a single user.
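To make the single-function routing idea concrete, here's a toy resolver in plain Python. This is not the actual Power Tools API, just an illustration of dispatching many routes from one Lambda handler; the class and route names here are made up:

```python
import json
import re

class MiniResolver:
    """Toy stand-in for a Power Tools-style event resolver (illustration only)."""

    def __init__(self):
        self.routes = []

    def get(self, pattern):
        def decorator(fn):
            # Turn "/user/<user_id>" into a regex with a named group.
            regex = re.sub(r"<(\w+)>", r"(?P<\1>[^/]+)", pattern)
            self.routes.append(("GET", re.compile(f"^{regex}$"), fn))
            return fn
        return decorator

    def resolve(self, event):
        # Dispatch an API Gateway-shaped event to the first matching route.
        for method, rx, fn in self.routes:
            match = rx.match(event["path"])
            if method == event["httpMethod"] and match:
                return {"statusCode": 200, "body": json.dumps(fn(**match.groupdict()))}
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}

app = MiniResolver()

@app.get("/user/<user_id>")
def get_user(user_id):
    return {"id": user_id, "name": "example"}

def lambda_handler(event, context):
    # One Lambda fronts every endpoint; the resolver picks the right function.
    return app.resolve(event)
```

The real Power Tools resolvers work along these lines but handle methods, headers, serialization, and error responses for you.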
29:33 So, in their documentation, they actually address this a little bit.
29:37 Like, do you want to do, they call it either like a micro function pattern
29:43 where maybe every single endpoint has its own Lambda function.
29:48 But yeah, that's a lot of overhead to maintain.
29:50 If you had, like you said, 200 endpoints, you have 200 Lambdas.
29:54 - You gotta upgrade them all at the same time so they have the right data models and all that.
30:00 Yeah, that's really.
30:01 - So yeah, so there's definitely some, even conflicting views on this.
30:07 How micro do you want to go?
30:09 And so I was able to go to AWS reInvent in November and they actually kind of pitched this hybrid.
30:19 Maybe like if you take your like CRUD operations, right?
30:21 And maybe you have your create, update and delete all on one Lambda that's with its configuration for those,
30:30 but your read is on another Lambda.
30:33 So maybe your CRUD operations, they all interact with a relational database,
30:37 but your reader just does like reads from a Dynamo database where you kind of sync that data up.
30:45 And so you could have your permissions kind of separated for each of those Lambda functions.
30:50 And people reading from an API don't always need the same permissions as updating, deleting.
30:57 And so, yeah, there's a lot of different ways to break that up and how micro do you go with this?
31:04 - Definitely. - How micro can you go?
31:05 - Yeah. - Yeah, 'cause it sounds to me like if you had many, many of them,
31:09 then all of a sudden you're back to like, wait, I did this because I didn't want to be in DevOps
31:14 and now I'm different kind of DevOps.
31:17 - Yeah, yeah.
31:18 So yeah, that package, the Power Tools, does a lot of like heavy lifting for you.
31:27 At PyCon, there was a talk on serverless that the way they described the Power Tools package
31:34 was it, they said it like codified your serverless best practices.
31:39 And it's really true.
31:40 They give a lot, there's like so many different tools in there.
31:43 There's a logger, like a structured logger that works really well with Lambda.
31:48 And you don't even have to use like the AWS logging services.
31:53 If you want to use like, you know, Datadog or Splunk or something else, it's just a structured logger and how you aggregate them
32:01 is like up to you and you can even customize how you format them.
32:04 But it's, works really well with Lambda.
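The idea behind a structured logger is simple enough to sketch with the stdlib `logging` module alone. This is not the Powertools Logger API, just the one-JSON-object-per-line shape it produces, with per-call extra keys as a stand-in for its append-keys feature:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object per line, so any log
    aggregator (CloudWatch, Datadog, Splunk, ...) can parse it."""
    def format(self, record):
        entry = {
            "level": record.levelname,
            "message": record.getMessage(),
            # per-call extra keys, a stand-in for Powertools' append_keys
            **getattr(record, "extra_keys", {}),
        }
        return json.dumps(entry)

logger = logging.getLogger("lambda")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Emits one JSON line, e.g. {"level": "INFO", "message": ..., "user_id": ...}
logger.info("user updated", extra={"extra_keys": {"user_id": "42"}})
```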
32:08 - Yeah, you probably could actually capture exceptions and stuff with something like Sentry even, right?
32:14 - Oh yeah.
32:14 - Python code, there's no reason you couldn't.
32:16 - Right, exactly.
32:17 Yeah.
32:18 Yeah, some of that comes into, you know, packaging up those libraries for that.
32:23 You do have to think of some of that stuff, but like Datadog. - Log this log.
32:27 - Yeah.
32:28 Yeah, Datadog, for instance, they provide something called like a Lambda layer
32:32 or a Lambda extension, which is another way to package code up that just makes it a little bit easier.
32:38 So yeah, there's a lot of different ways to attack some of these problems.
32:43 - A lot of that stuff, even though they have nice libraries for them, it's really just calling a HTTP endpoint
32:48 and you could go, okay, we need something really light.
32:51 I don't know if requests is already included, but there's gotta be some kind of HTTP thing
32:54 already included.
32:55 We're just gonna directly call it, not.
32:57 - Sure.
32:58 - And then we'll just do all these packages.
32:59 Yeah.
33:00 - Yep.
33:00 - Yeah.
33:01 - Yeah.
33:02 This code looks nice.
33:03 This Power Tools code, it looks like well-written Python code.
33:07 - They do some really amazing stuff and they bring in a Pydantic too.
33:13 So yeah, like being mostly in serverless, I've never really gotten to use like FastAPI, right?
33:20 And leverage Pydantic as much, but with Power Tools, you really can.
33:24 So they'll package up Pydantic for you.
33:28 And so you can actually, yeah, you can have Pydantic models for validation on these.
33:36 It's like a Lambda function, for instance, it always receives an event.
33:41 There's always like two arguments to the handler function, it's event and context.
33:45 And like event is always a, it's a dictionary in Python.
33:50 And so they can always look different.
33:53 And so, yeah.
33:56 So, 'cause the event, yeah.
33:58 So if you look in the Power Tools, GitHub, their tests, they have like, okay, here's what an event from-
34:07 - API gateway proxy event.json or whatever, right?
34:11 - Yes, yeah.
34:12 So they have, yeah, examples.
34:14 Yes, yeah.
34:15 So like, you don't wanna parse that out by yourself.
34:19 - No.
34:20 - So they have Pydantic models or they might actually just be Python data classes,
34:26 but that you can say like, okay, yeah, this function is going to be for, yeah,
34:32 an API gateway proxy event, or it's going to be an S3 event or whatever it is.
34:37 You know, there's so many different ways to receive events from different AWS services.
34:42 So, yeah, Power Tools kind of gives you some nice validation.
34:47 And yeah, you might just say like, okay, yeah, the body of this event, even though I don't care about all this other stuff
34:53 that they include, the path, headers, query string parameters, but I just need like the body of this.
35:00 So you just say, okay, event.body, and you can even use, you can validate that further.
35:06 The event body is going to be a Pydantic model that you created, so.
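Pulling just the body out of a proxy-style event and validating it into a typed object might look like this. A dataclass stands in for a Pydantic model (which, as Tony notes, saves packaging Pydantic); the event fields mimic the API Gateway proxy format, where the body arrives as a JSON string:

```python
import json
from dataclasses import dataclass

@dataclass
class UserPayload:
    # The fields we actually care about; everything else in the event
    # (path, headers, query string parameters) is ignored.
    name: str
    email: str

def parse_body(event):
    """Pull the JSON body out of an API Gateway proxy-style event
    and validate it into a typed object."""
    raw = json.loads(event["body"])  # proxy events carry the body as a JSON string
    return UserPayload(name=raw["name"], email=raw["email"])

def handler(event, context):
    payload = parse_body(event)
    return {"statusCode": 200, "body": f"hello {payload.name}"}
```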
35:10 - Yeah, there's a lot of different pieces in here.
35:12 If I was working on this and it didn't already have Pydantic models, I would take this and go to JSON to Pydantic.
35:19 - Oh, I didn't even know this existed.
35:21 That's weird, okay.
35:22 - Boom, put that right in there and boom, there you go.
35:25 It parses it onto a nested tree, object tree of the model.
35:30 - Very nice, yeah.
35:31 - But if they already give it to you, they already give it to you, then just take what they give you, but.
35:34 - Yeah, those specific events might be data classes instead of Pydantic, just because you don't,
35:40 that way you don't have to package Pydantic up in your Lambda.
35:43 But yeah, if you're already figuring out a way to package Power Tools, you're close enough that you probably
35:49 just include Pydantic too, but.
35:51 - Yeah.
35:52 - Yeah, and they also, I think they just added this feature where it'll actually generate OpenAPI schema for you.
36:02 I think, yeah, FastAPI does that as well, right?
36:04 So, yeah, so that's something you can leverage Power Tools to do now as well.
36:10 - So, excellent, and then you can actually take the OpenAPI schema and generate a Python.
36:14 - Client built on top of that, I think.
36:16 - Yeah, yeah.
36:17 - So you just, it's robots all the way down.
36:19 - Right, yeah.
36:20 - All the way down.
36:21 - Yeah, yeah, yeah.
36:24 Yeah, I haven't used those OpenAPI generated clients very much.
36:30 I was always skeptical of them, but yeah, in theory.
36:34 - They just feel heartless, or soulless, I guess, is the word, like, boring.
36:37 It's just like, okay, here's another star args, star star kwargs thing, where it's like,
36:42 couldn't you just write, make some reasonable defaults and give me some keyword argument, you know,
36:46 just like, it's all top field.
36:47 But if it's better than nothing, you know, it's better than nothing.
36:50 - Right, yeah, yeah.
36:51 So, but yeah, you can see like Power Tools, they took a lot of influence from FastAPI and--
36:58 - It does seem like it, yeah, for sure.
36:59 - Yeah, yeah.
37:00 So it's definitely really powerful and you get some of those same benefits.
37:05 - Yeah, this is new to me, it looks quite nice.
37:07 So another comment by Kim is, tended to use serverless functions for either things
37:12 that run briefly, like once a month on a schedule, or the code that processes stuff coming in on an AWS SQS,
37:19 simple queuing service, queue of unknown schedule.
37:23 So maybe that's an interesting segue into how do you call your serverless code?
37:28 - Yeah, yeah.
37:29 So as we kind of touched on, there's a lot of different ways from like, you know,
37:34 AWS, for instance, to do it.
37:36 So yeah, like AWS Lambda has like Lambda function URLs, but I haven't used those as much.
37:43 But if you just look at like the different options and like power tools, for instance,
37:47 you can have a load balancer that's gonna, where you set the endpoint to invoke a Lambda,
37:54 you can have API gateway, which is another service they have.
37:59 So there's a lot of different ways, yeah, SQS.
38:03 So that's kind of almost getting into like a way of like streaming or an asynchronous way of processing data.
38:11 So yeah, maybe in AWS, you're using a queue, right?
38:16 That's filling up and you say like, okay, yeah, every time this queue is at this size or this timeframe,
38:23 invoke this Lambda and process all these messages.
38:27 So there's a lot of different ways to invoke a Lambda function.
38:33 So if it's, I mean, really as simple as you can invoke them like from the AWS CLI or,
38:41 but yeah, most people are probably have some kind of API around it.
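An SQS-triggered handler follows the same event-dict pattern; real SQS events arrive with a top-level `Records` list whose entries carry a string `body`. The `order_id` field is hypothetical:

```python
import json

def handler(event, context):
    """Process a batch of SQS messages. An SQS-triggered Lambda receives
    them under a top-level "Records" key, each with a string "body"."""
    processed = []
    for record in event["Records"]:
        message = json.loads(record["body"])
        # ... real work would happen here ...
        processed.append(message["order_id"])  # "order_id" is a made-up field
    return {"batchSize": len(processed), "orders": processed}
```

Whether the batch holds 1 message or 10 depends on the queue's trigger configuration, the size/timeframe thresholds Tony mentions.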
38:44 - Yeah, yeah, almost make them look like just HTTP endpoints.
38:47 - Right, yeah.
38:48 - Yeah, Mark out there says, not heard talk of ECS, I don't think, but I've been running web services
38:55 using Fargate serverless tasks on ECS for years now.
38:59 Are you familiar with this?
39:00 I haven't done it.
39:02 - Yeah, I'm like vaguely familiar with it, but yeah, this is like a serverless,
39:08 yeah, serverless compute for containers.
39:10 So I haven't used this personally, but yeah, very like similar concept where it kind of scales up for you.
39:19 And yeah, you don't have to have things running all the time, but yeah, it can be Dockerized applications.
39:25 Now, in fact, the company I work for now, they do this with their Ruby on Rails applications.
39:29 They Dockerize them and run with Fargate.
39:34 So.
39:35 - Creating Docker containers of these things, the less familiar you are with running that tech stack,
39:42 the better it is in Docker, you know what I mean?
39:44 - Yeah, yeah.
39:45 - Like I could run straight Python, but if it's Ruby on Rails or PHP, maybe it's going into a container.
39:51 That would make me feel a little bit better about it.
39:53 - Yeah, especially if you're in that workflow of like handing something over to a DevOps team, right?
39:57 Like you can say like, here's an image or a container or a Docker file that will work for you.
40:04 That's maybe a little bit easier than trying to explain how to set up an environment or something, so.
40:11 - Yeah.
40:11 - Yeah, Fargate's a really good serverless option too.
40:15 - Excellent.
40:16 What about performance?
40:17 You know, you talked about having like a whole API apps, like FastAPI, Flask or whatever.
40:23 - Yeah.
40:24 - The startup of those apps can be somewhat, can be non-trivial basically.
40:27 And so then on the other side, we've got databases and stuff.
40:31 And one of the bits of magic of databases is the connection pooling that happens, right?
40:36 So the first connection might take 500 milliseconds, but the next one takes one.
40:40 As it's already open effectively, right?
40:42 - Yeah, yeah.
40:43 That's definitely something you really have to take into consideration is like how much you can do.
40:48 That's where some of that like observability, some of like the tracing that you can do
40:53 and profiling is really powerful.
40:55 Yeah, AWS Lambda, for instance, they have something called cold starts.
41:03 So like, yeah.
41:05 So the first time like a Lambda gets invoked or maybe you have 10 Lambdas that get called
41:12 at the same time, that's gonna, you know, invoke 10 separate Lambda functions.
41:17 So that's like great for the scale, right?
41:19 That's really nice.
41:22 But on a cold start, it's usually a little bit slower invocation because it has to initialize.
41:27 Like I think what's happening, you know, behind the scenes is they're like,
41:32 they're moving your code over that's gonna get executed.
41:35 And anything that happens like outside of your handler function, so importing libraries,
41:43 sometimes you're establishing a database connection.
41:46 Maybe you're, you know, loading some environment variables or some, you know, secrets.
41:52 And so, yeah, there's definitely, performance is something to consider.
41:57 Yeah, that's probably, you mentioned Rust.
42:01 Yeah, there's probably some more performant, like runtimes for some of these serverless functions.
42:06 So I've even heard some people say, okay, for like client facing things,
42:13 we're not gonna use serverless.
42:15 Like we just want that performance.
42:17 So that cold start definitely can, that can have an impact on you.
42:21 - Yeah, on both ends that I've pointed out.
42:25 The app start, but also the service, the database stuff with like the connection.
42:29 - Right, yeah, so yeah, relational databases too.
42:32 That's an interesting thing.
42:34 - Yeah, what do you guys do?
42:35 You mentioned Dynamo already.
42:36 - Yeah, so Dynamo really performant for a lot of connections, right?
42:41 But a, so Dynamo is a, you know, serverless database that can scale, you can query it over and over
42:48 and that's not going to, it doesn't reuse a connection in the same way that like a SQL database would.
42:55 So that's an excellent option, but if you do have to connect to a relational database
43:02 and you have a lot of invocations, you can use a, like a proxy, if you're all in on AWS.
43:11 And so again, sorry for this is really AWS heavy, but if you're using their like
43:15 relational database service, RDS, you can use RDS proxy, which will use like a pool of connections
43:22 for your Lambda function.
43:24 - Oh, interesting.
43:24 - So that can, yeah, that can give you a lot of performance or at least you won't be, you know,
43:32 running out of connections to your database.
43:34 So another thing too, is just how you structure that connection.
43:39 So I mentioned cold Lambdas, you obviously have warm Lambdas too.
43:43 So a Lambda has its handler function.
43:47 And so anything outside of the handler function can get reused on a warm Lambda.
43:52 So you can establish the connection to a database and it'll get reused on every invocation that it can.
43:58 - That's cool.
43:59 Do you have to do anything explicit to make it do that?
44:01 Or is that just a...
44:03 - It just has to be outside of that handler function.
44:06 So, you know, kind of at your top level of your file.
44:10 So, yeah.
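The warm-start trick Tony describes is just module-level code. Here `sqlite3` stands in for a real database client; the connection is created once per cold start and then reused by every invocation on that warm instance:

```python
import sqlite3

# Anything at module level runs once, on the cold start, and survives
# across invocations while the instance stays warm. sqlite3 is a
# stand-in here for a real client like pymysql or a boto3 resource.
connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE hits (n INTEGER)")

def handler(event, context):
    # The handler body runs on every invocation, but `connection`
    # above was only created once.
    connection.execute("INSERT INTO hits VALUES (1)")
    count = connection.execute("SELECT COUNT(*) FROM hits").fetchone()[0]
    return {"invocations_on_this_instance": count}
```

Calling the handler twice in the same process shows the state persisting, which is exactly what a warm Lambda instance does.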
44:11 - Excellent, yeah.
44:12 It makes me think almost one thing you would consider is like profiling the import statement almost, right?
44:18 - Yeah.
44:19 - That's what we normally do, but there's a library called import profiler
44:24 that actually lets you time how long different things take to import.
44:27 It could take a while, especially if you come from, not from a native Python way of thinking
44:33 in like C# or C++ or something.
44:36 You say hash include or using such and such, like that's a compiler type thing that really has no cost.
44:44 - Yeah.
44:45 - But there's code execution when you import something in Python and some of these can take a while, right?
44:49 - Yes, yeah.
44:50 So there's a lot of tools for that.
44:52 There's some, I think even maybe specific for Lambda.
44:55 I know like Datadog has a profiler that gives you like this, I forget what the graphic is called.
45:02 Like a flame graph. - Flame graph?
45:03 - A flame graph, yeah.
45:04 That'll give you like a flame graph and show like, okay, yeah, it took this long
45:07 to make your database connection, this long to import Pydantic.
45:12 And it took this long to make a call to DynamoDB, you know, so you can actually kind of like break that up.
45:21 AWS has X-Ray, I think, which does something similar too.
45:24 So yeah, it's definitely something to consider.
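A minimal way to time individual imports with just the stdlib (the interpreter can also do this wholesale via `python -X importtime your_handler.py`):

```python
import importlib
import time

def timed_import(module_name):
    """Measure how long a single import takes. Note that already-cached
    modules come back almost instantly, so measure on a fresh process."""
    start = time.perf_counter()
    module = importlib.import_module(module_name)
    elapsed = time.perf_counter() - start
    return module, elapsed

module, seconds = timed_import("decimal")
print(f"importing decimal took {seconds * 1000:.2f} ms")
```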
45:28 Another, just what you're packaging is definitely something to watch for.
45:34 And so I mentioned, yeah, I mentioned using Pants to package Lambdas and they do, hopefully I don't butcher
45:45 how this works behind the scenes, but they're using Rust and they'll actually kind of like infer
45:51 your dependencies for you.
45:52 And so they have an integration with AWS Lambda.
45:57 They also have it for Google Cloud Functions.
46:00 So yeah, it'll go through, you say, here's like my AWS Lambda function.
46:05 This is the file for it and the function that needs to be called.
46:09 And it's gonna create a zip file for you that has your Lambda code in it.
46:15 And it's gonna find all those dependencies you need.
46:17 So it'll actually, by default, it's gonna include like Boto3 that you need.
46:23 If you're using Boto3, if you're gonna use PyMySQL or whatever library, it's gonna pull all those in and zip that up for you.
46:34 And so if you just like open up that zip and you see, especially if you're sharing code across your code base,
46:41 maybe you have a shared function to make some of these database connections or calls.
46:46 Like you see everything that's gonna go in there.
46:50 And so, yeah.
46:52 And so how like Pants does it is it's file-based.
47:02 So sometimes just for like ease of imports, you might throw a lot of stuff in like your __init__.py file
47:02 and say like, okay, yeah, from, you know, you add all, kind of bubble up all your things
47:07 that you want to import in there.
47:09 Well, if one of those imports is also using OpenCV, and you don't need that,
47:18 then Pants is gonna say like, oh, he's importing this.
47:21 And because it's file-based, now this Lambda needs OpenCV, which is a massive package that's going to,
47:29 it's going to impact your performance, especially in those cold starts.
47:33 'Cause that code has to be moved over.
47:36 So. - Yeah.
47:37 That's pretty interesting.
47:38 So kind of an alternative to saying, here's my requirements or my pyproject.toml.
47:44 - A lock file or whatever. - Yeah.
47:46 - That just lists everything the entire program might use.
47:48 This could say, you're gonna import this function.
47:51 And to do that, it imports these things, which import those things.
47:53 And then it just says, okay, that means here's what you need, right?
47:57 - Right, yeah.
47:58 Yeah, it's definitely one of like the best ways that I've found to package up Lambda functions.
48:04 I think some of the other tooling might do some of this too, but yeah, a lot of times it would require
48:10 like requirements.txt.
48:12 But if you have like a large code base too, where maybe you do have this shared module for that,
48:19 maybe you have 30 different Lambda functions that are all going to use some kind of helper function.
48:24 It's just gonna go and like grab that.
48:26 And it doesn't have to be like pip installable.
48:28 Pants is smart enough to just be like, okay, it needs this code.
48:31 And so, but yeah, you just have to be careful.
48:34 Yeah, yeah.
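The artifact Pants produces is ultimately just a zip containing your handler plus whatever shared modules it inferred. A stdlib sketch of that shape, with hypothetical file names, no dependency inference here:

```python
import pathlib
import tempfile
import zipfile

def build_lambda_zip(source_files, zip_path):
    """Bundle handler code (and shared modules it imports) into the zip
    artifact a Lambda deployment expects. Pants does this, plus the
    dependency inference, automatically; this only shows the shape."""
    with zipfile.ZipFile(zip_path, "w") as archive:
        for path in source_files:
            archive.write(path, arcname=pathlib.Path(path).name)
    return zip_path

# Hypothetical layout: one handler file plus a shared helper module.
workdir = pathlib.Path(tempfile.mkdtemp())
(workdir / "handler.py").write_text("def handler(event, context):\n    return 'ok'\n")
(workdir / "db_helpers.py").write_text("def connect():\n    pass\n")
artifact = build_lambda_zip(
    [workdir / "handler.py", workdir / "db_helpers.py"],
    workdir / "function.zip",
)
```

Opening that zip and eyeballing what landed in it, as Tony suggests, is a cheap way to catch an accidental OpenCV-sized dependency.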
48:35 And there's so many other cool things that Pants is doing that they have some really nice stuff for testing
48:41 and linting and formatting.
48:43 And it's, yeah, there's a lot of really good stuff that they're doing.
48:48 - Yeah, I had Benji on the show to talk about Pants.
48:51 That was fun.
48:52 - Yeah.
48:53 - So let me go back to this picture.
48:55 Is this the picture?
48:56 I have a lot of things open on my screen now.
48:59 There.
49:00 So on my server setup that I described, which is a bunch of Docker containers
49:04 running on one big machine, I can go in there and I can say, tail this log and see all the traffic
49:10 to all the different containers.
49:11 I can tail another log and just see like the logging, Logbook, Loguru, whatever output of that,
49:17 or just web traffic.
49:18 Like there's different ways to just go.
49:20 I'm just gonna sit back and look at it for a minute.
49:22 Make sure it's chilling, right?
49:24 If everything's so transient, not so easy in the same way.
49:28 So what do you do?
49:29 - Yeah.
49:30 So yeah, Power Tools does, they have their structured logger that helps a lot.
49:36 But yeah, you have to kind of like aggregate these logs somewhere, right?
49:39 Because yeah, you can't, you know, a Lambda function you can't like SSH into, right?
49:44 So yeah.
49:45 - You can't, it's gonna take too long.
49:47 - Yeah, yeah.
49:48 So yeah, you need to have some way to aggregate these.
49:53 So like AWS has CloudWatch where that will like by default kind of log all of your standard out.
50:00 So even like a print statement would go to CloudWatch just by default.
50:07 But you probably wanna like structure these better with most likely and, you know, JSON format,
50:13 just most tooling around those is going to help you.
50:16 So yeah, the Power Tools structured logger is really good.
50:20 And you can even like, you can have like a single log statement, but you can append different keys to it.
50:27 And it's pretty powerful, especially 'cause, if you just printed something
50:36 in a Lambda function, for instance, that's gonna be like a different row,
50:41 and by default in CloudWatch, how it breaks it up is really odd
50:48 unless you have some kind of structure to them.
50:50 - Okay. - And so, yeah.
50:52 So definitely something to consider.
50:55 Something else you can do is, yeah, there's metrics you can do.
51:00 So like how it works with like CloudWatch, they have a specific format.
51:04 And if you use that format, you can, it'll automatically pull that in as a metric.
51:11 And like Datadog has something similar where you can actually kind of like go in there.
51:15 You can look at your logs and say like, find a value and be like, I want this to be a metric now.
51:20 And so that's really powerful.
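A sketch of the CloudWatch format Tony mentions: a JSON log line that CloudWatch promotes to a metric when it sees the `_aws` metadata key. Treat the exact field layout as an approximation of the embedded metric format, and the namespace and dimension names as made up:

```python
import json
import time

def emf_metric(namespace, name, value, unit="Count", **dimensions):
    """Build a log line roughly in CloudWatch's embedded metric format:
    plain JSON that CloudWatch ingests as a metric because of the
    "_aws" metadata key. Namespace/dimension names are hypothetical."""
    return {
        "_aws": {
            "Timestamp": int(time.time() * 1000),
            "CloudWatchMetrics": [{
                "Namespace": namespace,
                "Dimensions": [list(dimensions)],
                "Metrics": [{"Name": name, "Unit": unit}],
            }],
        },
        name: value,
        **dimensions,
    }

# Printing is enough: Lambda's stdout lands in CloudWatch Logs by default.
print(json.dumps(emf_metric("MyApp", "OrdersProcessed", 3, Service="checkout")))
```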
51:23 - Oh, the metric sounds cool.
51:24 So I see logging and tracing.
51:27 What's the difference between those things?
51:29 Like to me, tracing is a level, just a high level of logging.
51:33 - Yeah, tracing, and hopefully I do justice differentiating the two.
51:41 I feel like tracing does have a lot more to do with your performance or maybe even closer to like tracking
51:48 some of these metrics, right?
51:49 I've used the Datadog tracer a lot and I've used the AWS like X-ray, their tracing utility a little bit too.
52:01 And so like those will show you.
52:04 So like maybe you are reaching out to a database, writing to S3. - Almost like an APM
52:08 application performance monitoring where it says you spent this much time in a SQL query
52:14 and this much time in Pydantic serialization.
52:18 Whereas the logging would say, a user has been sent a message.
52:22 - Right, exactly.
52:23 Yeah, yeah.
52:24 Tracing definitely is probably more around your performance and yeah, things like that.
52:29 - It's kind of insane that they can do that.
52:31 You see it in the Django debug tool or in the pyramid debug tool, but they'll be like, here's your code
52:37 and here's all your SQL queries and here's how long they took.
52:39 And you're just like, wow, that thing is reaching deep down in there.
52:42 - The Datadog one is very interesting because like it just knows like that this is a SQL connection
52:49 and it tells you like, oh, okay, this SQL connection took this long.
52:52 And it was like, I didn't tell it to even trace that.
52:55 Like it just like, it knows really well.
52:58 Yeah, so like the expectation.
52:59 - It's one thing to know a SQL connection is open, it's another to say, and here's what it sent over SSL by the way.
53:04 Like how'd you get in there?
53:05 - Yeah, yeah.
53:06 So especially.
53:07 - It's in process so it can do a lot.
53:10 It is impressive to see those things that work.
53:12 All right, so that's probably what the tracing is about, right?
53:14 - Yes, yeah, yeah.
53:15 Definitely probably more around performance.
53:17 You can put some different things in tracing too.
53:20 Like I've used it to say like, we talked about those like database connections to say like,
53:25 oh yeah, this is reusing a connection here.
53:29 'Cause I was trying to like debug some stuff on, am I creating a connection too many times
53:33 so I don't wanna be?
53:34 So yeah, you can put some other useful things in tracing as well.
53:38 - Yeah, and Pat out in the audience.
53:40 Oops, I'm moving around.
53:41 When using many microservices, like single execution involves many services basically,
53:46 it's hard to follow the logs between the services and tracing helps tie that together.
53:51 - Yeah, yeah, that's for sure.
53:53 - All right, let's close this out, Tony, with one more thing that I'm not sure
53:57 how constructive it can be.
53:59 There probably is some ways, but testing, right?
54:02 - Yeah, yeah, that's definitely.
54:05 - If you could set up your own Lambda cluster, you might just run that for yourself, right?
54:10 So how are you gonna do this, right?
54:12 - Yeah, to some extent you can, right?
54:14 Like there's a Lambda Docker image that you could run locally and you can do that.
54:19 But if your Lambda is reaching out to DynamoDB, I guess there's technically a DynamoDB container as well.
54:27 Like you could, it's a lot of overhead to set this up, but rather than just doing like, you know, flask start
54:35 or, you know, whatever the command is to like spin up a flask server. - I pressed the go button
54:38 in my IDE and now it's.
54:41 - Yeah, so that's definitely, and there's more and more tooling coming out,
54:46 you know, that's coming out for this kind of stuff.
54:49 But if you can like unit test, there's no reason you can't just like, you know,
54:55 run unit tests locally.
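Because a handler is just a function taking `(event, context)`, those local unit tests really are plain function calls: build an event dict, invoke, assert. The handler and event shape here are hypothetical:

```python
# A handler is just a function, so the fast path is ordinary unit tests.

def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello {name}"}

def test_handler_greets_by_name():
    response = handler({"name": "ada"}, None)
    assert response == {"statusCode": 200, "body": "hello ada"}

def test_handler_defaults():
    assert handler({}, None)["body"] == "hello world"

# No runner strictly needed, though pytest would pick these up automatically.
test_handler_greets_by_name()
test_handler_defaults()
```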
54:58 But when you start getting into the integration test, you're probably getting to the point where
55:03 maybe you just deploy to actual services.
55:07 And, you know, it's always trade-offs, right?
55:11 Like there's costs associated with it.
55:13 There's the overhead of like, okay, how can I deploy to an isolated environment?
55:18 But maybe it interacts with another microservice.
55:20 So yeah, so there's definitely trade-offs, but testing is. - I can see that you might
55:26 come up with like a QA environment, almost like a mirror image that doesn't share any data.
55:33 - Yeah. - But it's sufficiently close, but then you're running, I mean, that's a pretty big commitment 'cause you're running
55:38 a whole replica of whatever you have.
55:41 - Right, yeah.
55:42 And so yeah, QA environments are great, but you might even want lower than QA.
55:48 You might want to have a dev or like a, one place I worked at, we would spin up an entire environment for every PR.
55:58 So you could actually, yeah, like when you created a PR, that environment got spun up
56:05 and it ran your integration tests and system tests against that environment, which, you know,
56:10 simulated your prod environment a little bit better than running locally on your machine.
56:15 So certainly a challenge to test this.
56:19 - Yeah, I can imagine that it is.
56:21 - Yeah, and there's always these like one-off things too, right, like you can't really simulate
56:27 like that memory limitation of a Lambda locally, you know, as much as when you deploy it
56:32 and things like that, so.
56:33 - Yeah, yeah.
56:34 That would be much, much harder.
56:37 Maybe you could run a Docker container and put a memory limit on it, you know, that might work.
56:41 - Yeah, yeah, maybe.
56:43 - You're back into like more and more DevOps to avoid DevOps.
56:46 - Right, yeah, yeah.
56:48 - So there it goes, but interesting.
56:50 All right, well, anything else you wanna add to this conversation before we wrap it up?
56:54 About out of time here.
56:55 - Yeah, I guess, I don't know if I have it, hopefully we covered enough.
57:00 There's just a lot of like good, yeah, there's a lot of good resources.
57:03 The tooling that I've mentioned, like Power Tools and Pants, just amazing communities.
57:09 Like Power Tools has a Discord, and you can go on there and ask for help,
57:12 and they're super helpful.
57:14 Pants has a Slack channel, you can join their Slack and ask, you know, about things.
57:19 And so those two communities have been really good and really helpful in this.
57:24 A lot of good talks that are available on YouTube too.
57:27 So just, yeah, there's definitely resources out there and a lot of people have, you know,
57:31 fought this for a while, so.
57:33 - Yeah, excellent.
57:34 And you don't have to start from just create a function and start typing.
57:38 - Yeah, yeah.
57:39 - Cool, all right, well, before you get out of here though, let's get your recommendation for a PyPI package.
57:45 Something notable, something fun.
57:48 - I probably, you know, we've talked a lot about it, but Power Tools is definitely one
57:54 that is like everyday getting used for me.
57:56 So the, yeah, Power Tools for Lambda and Python, they actually support other languages too.
58:03 So they have like the same functionality for like, you know, Node.js, you know, for like TypeScript and .NET.
58:08 And so, yeah, but this one definitely leveraging Power Tools and Pydantic together,
58:17 just really made like a serverless, a lot of fun to write.
58:21 So yeah, definitely doing great things there.
58:25 - Excellent, well, I'll put all those things in the show notes and it's been great to talk to you.
58:29 Thanks for sharing your journey down the serverless path.
58:34 - Yep, thanks for having me.
58:35 - You bet.
58:36 - Yeah, enjoy chatting.
58:37 - Same, bye.
58:38 This has been another episode of Talk Python to Me.
58:42 Thank you to our sponsors.
58:44 Be sure to check out what they're offering.
58:45 It really helps support the show.
58:47 Take some stress out of your life.
58:49 Get notified immediately about errors and performance issues in your web or mobile applications with Sentry.
58:55 Just visit talkpython.fm/sentry and get started for free.
59:00 And be sure to use the promo code, Talk Python, all one word.
59:03 Mailtrap, an email delivery platform that developers love.
59:07 Try for free at mailtrap.io.
59:11 Want to level up your Python?
59:12 We have one of the largest catalogs of Python video courses over at Talk Python.
59:17 Our content ranges from true beginners to deeply advanced topics like memory and async.
59:22 And best of all, there's not a subscription in sight.
59:24 Check it out for yourself at training.talkpython.fm.
59:28 Be sure to subscribe to the show.
59:29 Open your favorite podcast app and search for Python.
59:32 We should be right at the top.
59:34 You can also find the iTunes feed at /itunes, the Google Play feed at /play,
59:39 and the direct RSS feed at /rss on talkpython.fm.
59:43 We're live streaming most of our recordings these days.
59:46 If you want to be part of the show and have your comments featured on the air,
59:49 be sure to subscribe to our YouTube channel at talkpython.fm/youtube.
59:54 This is your host, Michael Kennedy.
59:56 Thanks so much for listening.
59:57 I really appreciate it.
59:58 Now get out there and write some Python code.
01:00:01 (upbeat music)