00:00 How do we host and run server software? Let's consider the progression we've been on over the
00:04 past 15 years or so. We've gone from software and operating systems that we manage running on
00:10 hardware that we own and babysit, to virtual machines, to infrastructure as a service on the
00:15 cloud, or even platform as a service on the cloud. And then on from there, we've moved to containers,
00:21 usually Docker, maybe running on someone else's cloud. After that, maybe we want to put these into
00:28 microservices, which are conglomerates of these containers working together managed by something
00:33 like Kubernetes. Well, where do we go from there? I can't tell you what the final destination of this
00:39 whole progression is going to be, but I believe we have reached a leaf node in this hierarchy with
00:44 our topic today. On this episode 118 of Talk Python to Me with Ryan Scott Brown, we're going to explore
00:50 serverless computing. It's an interesting paradigm shift, and I hope you enjoy the conversation.
00:55 It was recorded on May 24th, 2017.
01:23 Welcome to Talk Python to Me, a weekly podcast on Python, the language, the libraries, the ecosystem,
01:28 and the personalities. This is your host, Michael Kennedy. Follow me on Twitter, where I'm at
01:33 mkennedy. Keep up with the show and listen to past episodes at talkpython.fm, and follow the show on
01:39 Twitter via at Talk Python. This episode is brought to you by Rollbar and us at Talk Python Training.
01:45 Be sure to check out what we're offering during our segments. It really helps support the show.
01:50 Ryan, welcome to Talk Python.
01:52 Hey, Michael. Glad to be on.
01:53 Yeah, it's going to be super fun. We're here to talk about running code out on the internet with
01:59 no servers, which seems kind of impossible and awesome all at the same time. That's going to
02:04 be exciting.
02:04 Yeah, I mean, there are still servers. We'll get around to that.
02:07 Of course. Of course. Yeah, of course. Awesome. Before we do, though, let's hear your story. How'd you
02:12 get into programming in Python?
02:13 So I had kind of a pretty vanilla path. I actually started with my interest in networking. So I was
02:20 majoring in systems administration and network admin and actually got brought into the programming
02:27 world by a Perl class that I took. And then there was an open source club at my college where everyone
02:34 was using Python to write programs for the One Laptop Per Child project. And that was my introduction to
02:40 Python 2.
02:40 Oh, that's really cool. What school is that?
02:43 Rochester Institute of Technology.
02:44 Oh, yeah. Okay. That's a great school. I know a couple of people that have gone there and it sounds
02:49 like a pretty neat program.
02:50 Yeah, they have actually an open source software minor now. They didn't when I went, but the program's
02:56 kind of slowly evolved and grown over time.
02:58 Oh, my gosh. I've never heard of an actual official university program that is focused on open source.
03:07 Tell us just a little bit about that. Do you know anything about it?
03:09 What does a person who takes that degree do or that minor do?
03:13 It's not a full degree. So usually it'll be in software engineering or in computer science.
03:18 In my case, I actually TA'd for some of the classes that then were added to the minor. So we had a bunch
03:24 of pilot courses first, and then you get a certification from the university that you're a real minor, and then
03:30 you can have real students enrolled.
03:31 Awesome.
03:32 All of the courses focused around either open source software or the idea of open communities
03:38 in different ways. Like we had some Wikipedia editors that would come in and talk and be
03:43 TAs. We had contributors to the BSD project, one of whom lives in Rochester and would teach
03:50 one of our classes on open source software. And for course credit, you could do things like contribute
03:55 to existing open source projects.
03:57 All right. So that sounds like a really cool program that you were in in the university.
04:00 Now, what do you do now?
04:02 Yeah. So now I actually work for Red Hat, who obviously makes all of their money on open source
04:07 software and support and consulting around that. And the team that I work on is the Ansible
04:13 team. So we make an engine and a kind of language that people write in so that they can orchestrate
04:20 all of their servers and cloud services and network devices and all that from a simple YAML
04:26 language. So that's not a full-fledged programming language, so you don't need the kind of experience
04:30 you'd need to write things like Chef recipes, which are Ruby. And so you'd have to know Ruby
04:35 to write Chef recipes. So we try and make that as accessible as possible.
04:39 Yeah. I've heard really good things about Ansible and that sounds fun. And Ansible is a thing that
04:45 recently came to Red Hat, right? Like recently acquired that project?
04:49 Yep. About a year ago. Well, a little more than a year ago now, Red Hat acquired Ansible
04:55 Works, which was the company that was behind Ansible, the open source project. And so now
05:00 Red Hat sells Ansible Tower, which is a web interface that previously Ansible Works sold on top of
05:06 Ansible. And then the Ansible core is still open source and just as kind of community focused as
05:11 ever.
05:12 Nice. Which side did you come from? Were you working on Ansible before or were you at Red Hat
05:15 and then it came there? I was using Ansible, but I was working on OpenStack, which is a
05:21 series of open source projects also all in Python.
05:24 Yeah. That's quite a big Python project actually.
05:26 Yeah. It's actually, I think 50 big Python projects.
05:30 Yeah, exactly. It's an understatement to call it large, right?
05:34 Huge contributors both to obviously the Python community in terms of bringing things back like
05:40 PyPI and other things that become things in the regular Python community, as well as to
05:47 the core language. There are a lot of core Python contributors that work on OpenStack.
05:50 And so split their time between making Python better and then making things on Python better.
05:55 Yeah, it's definitely a symbiotic relationship there. That's great.
05:58 So let's talk about this idea of serverless programming. I feel like we started out with just
06:05 having machines in data centers, maybe co-locating some of our machines in a data center to virtual
06:11 machines, to things like Docker and things getting sort of smaller and smaller. And now we've reached,
06:19 I'm not sure you can get much smaller than single functions, almost, running on the internet,
06:24 right?
06:24 Yeah. I mean, that's sort of the direction that we're moving. And it's been a direction in programming
06:30 for a very long time from when you had a system that just ran one job at a time, like on punch cards. So
06:36 you toss in a stack of punch cards and it would churn through them and then churn out results also on
06:42 punch cards. You didn't have multitasking at all. So one program was exactly what was running. So you
06:48 were, you had the whole CPU to yourself and then you have multitasking operating systems. So you had
06:53 multiple people providing code that would run in different processes that would be isolated in a certain
06:57 way. And the operating system would handle scheduling those. And you can sort of think of
07:03 function as a service, which is things like AWS Lambda and is mostly called serverless, quote unquote. You can
07:11 think of those as, like, multi-user scheduling, but where you have untrusted entities sharing
07:18 compute resources. So you're getting smaller and smaller slices.
07:21 Right. Oh, so that's really interesting to track that trend over time, right? From just
07:26 sort of OSs to, to running these functions on other people's computers. So obviously there
07:33 really are servers out there, like you said, but when it's not your responsibility, it's not
07:39 your problem to like babysit these servers. So for example, yesterday, I think I logged into
07:46 some of my servers with SSH and I had to update, like, the unattended-upgrades software package on Linux.
07:54 So even though of course your code runs on servers, when you don't have to think about it or manage it
07:59 or balance it, it really makes a difference, right? Yeah, exactly. It's serverless in the same way that
08:04 wireless doesn't have any wires involved. There are still wires. They just aren't directly to your device.
08:10 Right. Well, and you know, in wireless, you never care about the wires. Somebody has to care about the
08:16 wires, but it's opaque to you. Right. And so as far as you're concerned, wireless is wireless.
08:21 Same analogy here. As far as you're concerned, you get a particular promise from your provider. Like
08:27 we will always have such and such version of such and such underlying dependencies like libc and things
08:34 like that, that you would statically link to, and such and such version of a Python interpreter
08:39 available and such and such version of whatever else you need. And you would write your code such that
08:45 you can rely on that such and such version being there, but you don't install that. You don't deal
08:52 with the sort of provisioning that you would do to install. Let's say you need a specific Python 3.
08:58 So if you only can run on 3.5 or up, you don't have to worry about making sure that's there. You just tell
09:03 your provider, give me this runtime, please. Right. And it just works and you don't have to worry about it. That's great.
09:08 So can you compare this to, like, say, working with event-driven programming, like hooking up
09:15 to cloud events, versus, you know, something like Erlang? Yeah. And this is kind of the other side of
09:20 serverless. The one side is run this code in the cloud for me. And the other side is hook my code
09:26 up to these other things. So real world or virtual world, I guess, events that you want your code to
09:33 act on. So let's say you're in your podcast world. You have something that you want to happen
09:39 every time a sponsor contacts you. You could hook up a way that when the sponsor fills in the contact
09:45 form, it invokes a Lambda function that then maybe calls you if it's super important to get back to
09:51 them immediately or any number of other things. But the idea is the event source half and then the run
09:58 my code somewhere that I don't have to manage half is what kind of makes up serverless.
10:02 Right, right. It's easy to focus on, here's a function, run it, but really the diverse triggers
10:07 that trigger those functions are super important. Yeah, exactly. And it maps to a lot of event driven
10:13 concepts that we might be familiar with in Erlang where you have processes that are managed by a
10:18 supervisor. So in serverless, that would be individual functions that are managed by your cloud provider,
10:25 which under the hood deals with scheduling those across diverse hardware and all that stuff.
10:31 And then in Erlang, you also have an event routing system where every process has input and output
10:37 kind of event hooks. And you can get that similar thing in Lambda by using APIs to send data out.
10:45 So you might send data out over like a MailChimp or a Mandrill, or you might send data out into a
10:50 database. That would be your kind of output. And then your input can be triggers from things like
10:56 DynamoDB. It can be HTTP events, any number of things. I'm not going to list them all because
11:01 this would be a long podcast. Yeah, yeah, absolutely. And you know, to bring it back to like, what could I do?
11:05 You know, you're saying something happens. Like maybe I've, I'm going to upload my MP3 file that I want to
11:12 release for the week to S3. And that needs to be, you know, maybe have like the artwork and the little,
11:19 like the description embedded into the MP3 header. And then it needs to be moved over to the content
11:26 delivery. I got to like flip it to publish all these different things. Like I could possibly set that up
11:31 serverless, you think? Yeah. And individually, you would have something like the S3 event source. So you would tell
11:37 a Lambda function that when a new object appears in S3, and then S3 would notify your function when
11:43 that happened, it could download from S3 your file. It would put in the, I forget what the name of the
11:49 MP3 metadata format is. Yeah. It would dump in whatever author metadata, show notes. I don't know if that
11:55 goes in there. Yeah. And then it could save it to another S3 bucket that would either kick off yet
12:01 another Lambda and you'd have this chain, or that would be the S3 bucket you're serving public content from
12:06 to your CDN. And it doesn't have to be linear either. You can have kind of forking things. So
12:13 you could have this Lambda see, oh, a new show shows up here. I'll put the metadata in and I'll also kick off
12:19 the job that will add it to the list on the front page. And I will kick off the job that will add it
12:25 to the RSS feed. And those can be a bunch of Lambda functions that each have a small focused purpose.
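A handler for that S3-triggered step might look roughly like this sketch. The event shape matches S3's notification format, but the bucket names are hypothetical and the actual download/tagging/upload steps (boto3 plus an ID3 library like mutagen) are only hinted at in comments:

```python
def handler(event, context):
    """Sketch of an S3-triggered Lambda handler.

    Walks the records in an S3 notification event and collects the
    bucket/key of each new object; real processing is stubbed out.
    """
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real code would fetch and process the object, e.g.:
        #   boto3.client("s3").download_file(bucket, key, "/tmp/episode.mp3")
        #   ... embed artwork/description metadata, upload to the public bucket ...
        processed.append((bucket, key))
    return processed
```

Each downstream step (front-page list, RSS feed) would be its own small function wired to the bucket the previous one writes to.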
12:30 It sounds really amazing to think of how that kind of opens it up. And to me, it feels like almost like
12:35 the event sources and the workflow bit is more important than, Hey, run this function.
12:42 Yeah. Because I mean, when you think about it, we've been really good at running small bits of
12:46 code for a very long time. And a lot of the value comes in when you can hook those small bits of code
12:51 up to something that's a useful thing in the real world. Right. Cause I could make a little program
12:58 that simulates a vending machine that will, you know, take pretend quarters and things, and I can type in on
13:03 the terminal, but until I'm hooking that up to hardware, that's actually giving someone a soda
13:07 or whatever, it's much less useful. Right. Right. Absolutely. Absolutely. So you're talking about
13:15 S3 and I'm assuming that that probably ties right back into AWS Lambda, which is probably the most
13:21 popular one of these, but it's not the only one. Like I think there's Azure functions, for example,
13:27 what are some of the implementations and places where we might find this type of
13:32 programming model? There's a load of different providers. There's obviously the big three cloud
13:36 providers all have their offerings. GCP, or Google Cloud Platform, has what they call Cloud Functions.
13:41 Microsoft Azure has what they call Azure Functions. AWS has Lambda, which is kind of a pun on function,
13:49 right? Nothing to do with Python lambdas, right? No. And, yeah, that's been a problem
13:56 sometimes cause you'll pull something into Python to automate. So if you want to automate the deployment
14:01 of a Lambda function and you pull that into Python and then you name a variable Lambda, you can have
14:06 problems if you're not careful. Like, why is this keyword gone? I don't know if that actually happens.
14:11 There we go. It does. Well, lambda itself is a reserved keyword, so Python stops that one, but you can shadow built-ins. In Python 2 you could rebind True, False,
14:17 all kinds of things that you shouldn't. Yeah. These are not good. And recently, I can't speak to the
14:21 other cloud providers. I just don't track them that closely, but AWS Lambda recently switched to
14:27 Python. Well, not switched, made available Python 3.6, which is pretty cool. Yes, that is
14:33 absolutely right. And it has been great. Yes. Cause surprisingly until like a month ago,
14:38 this was a Python 2-only option, right? Yeah, exactly. And I mean, if you think about when it came out,
14:44 Lambda came out, I believe, towards the end of 2014, when, you know, Python 3 had been out for
14:50 six years at that point. Yeah. That sounds about right. And I would love to be able to go run a
14:56 second experiment and release it with Python 3 only and see what that does for adoption. Yeah. Yeah.
15:01 Yeah. That would have been awesome. These would just be really interesting experiments to run,
15:04 but for these other providers, there are some smaller providers that only do JavaScript. So,
15:09 Auth0 is an example of one that has a function as a service, but it's JavaScript, Node.js only.
15:16 Mm-hmm.
15:16 If you look at IBM's OpenWhisk, you can actually provide an arbitrary Docker container as the runtime
15:22 for your function. And so it manages the scheduling and it will invoke as many as needed to handle events.
15:27 So it's still got the function as a service going for it, but you get a lot more control over what
15:33 runtimes you give it. And so you can not only do Python 3, but you can do entire Ruby installs.
15:39 You can do Apple Swift, any language that you can fit in a Docker container, which is any language.
15:45 Right. You could probably do something like Fortran if you really wanted to torture yourself,
15:48 right? Like you could do basically if it can accept a request, you can do it, right?
15:52 Yeah. I mean, as long as it accepts, I believe the rules for OpenWhisk is it has to accept data on
15:57 standard in and provide a result on standard out when the container is invoked as a process.
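That contract, as described here (data in on standard in, result out on standard out when the container is invoked), is easy to satisfy in any language. A toy Python version, where the greeting logic is purely illustrative:

```python
import json
import sys

def run(payload):
    """Toy action logic; purely illustrative."""
    name = payload.get("name", "world")
    return {"greeting": f"Hello, {name}!"}

if __name__ == "__main__":
    # The contract described above: JSON arrives on standard in,
    # the JSON result goes out on standard out.
    result = run(json.loads(sys.stdin.read() or "{}"))
    sys.stdout.write(json.dumps(result))
```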
16:02 And so as long as you can do those things, you're good to go. So COBOL in the cloud.
16:07 How interesting. Yay. Let's get more COBOL in the cloud. So one of the things that immediately comes
16:15 to mind when I think of this, and the Docker variant obviously would be one way to basically get an escape
16:23 hatch here, is: how much control do you have over dependencies and requirements? So for example, what if I want to
16:31 use, like, NumPy or requests or something that doesn't ship with Python? Yeah, sure. But I want to use that.
16:37 Like, can I go to AWS Lambda and go, Oh yeah. And like, here's my requirements.txt make that happen.
16:43 Unfortunately, you can't just upload a requirements.txt, as much as I would love to. You can with Google
16:49 Cloud Functions; you can give them your code and a requirements.txt and they'll do it for you.
16:54 That's pretty cool. In Lambda, you provide a zip file, which can contain any dependencies
16:59 you want, whether that's C object libraries, if you're using something that links directly to C like
17:05 NumPy, or it can just be Python files like requests. So you would use pip on either your local machine or a
17:12 build server that would build all of your requirements and pack them into a zip file. And I've
17:18 had to do some backflips to get SciPy and scikit-learn working in Lambda. Wow. But you've, you've done
17:24 it? Oh, it absolutely works. And the nice thing is that AWS now provides a Docker image that duplicates
17:30 the Lambda environment. So you can download that Docker image, build stuff in there. So it's all going to be
17:35 built just right. And then dump it to a zip file on your machine and then upload it to Lambda. So you can
17:40 have exactly the same build environment locally for Lambda. And then you build all the scikit-learn
17:46 and optimize it for that hardware, or for that environment. And then you can strip out
17:51 everything that isn't needed. So you can do things like strip the .so files. You can strip out a bunch
17:59 of .txt and .md and .rst documentation files to get your size down because that relates to how fast your
18:05 function can run. Yeah. Cause it's got to somehow take that thing apart and work with it. Right.
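The trimming step described above can be scripted. This sketch just walks a built package directory and deletes documentation and cache files; the extension list is illustrative, not exhaustive:

```python
import os

# Extensions that are usually safe to drop from a deployment package;
# illustrative only, adjust for what your dependencies actually need.
PRUNE_EXTS = {".txt", ".md", ".rst", ".pyc"}

def prune_package(root):
    """Delete documentation/cache files under `root`; return how many were removed."""
    removed = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in PRUNE_EXTS:
                os.remove(os.path.join(dirpath, name))
                removed += 1
    return removed
```

You would run this over the directory you are about to zip, before uploading.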
18:10 I mean, it's got to unzip it, but it's also got to download it to whatever machines running your
18:14 code. Cause they don't all have it. Right. Right. Absolutely. Does it get, like, warmed up as it
18:20 runs? Like, how many machines might be involved in executing your code? And does it stay warm if it's
18:26 running a while? Yeah. It depends on your number of invocations. So the way that it works under the hood
18:30 is they spin up basically a container that will handle events and each container will handle one event at a
18:36 time. So let's say that you have five events per second and they all take half a second.
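The arithmetic being set up here is essentially Little's law: concurrent containers ≈ events per second × average duration in seconds, rounded up to whole containers. As a sketch:

```python
import math

def estimated_containers(events_per_second, avg_duration_seconds):
    """Little's law estimate: concurrency = arrival rate x average duration,
    rounded up to whole containers."""
    return math.ceil(events_per_second * avg_duration_seconds)

# Five events per second at half a second each:
print(estimated_containers(5, 0.5))  # → 3
```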
18:41 Okay. Then on average, you're going to have about three hosts running or three containers handling
18:47 requests because if they each take half a second, you've got two that are fully utilized and one
18:51 that's half utilized. If that makes sense. Yeah. Yeah. That makes sense. And so if you have a bunch
18:55 of events that come in in a very spiky kind of way, so you have 500 requests this second and then
19:01 nothing for 10 minutes, it's going to spin up on a bunch of machines to handle these events and then
19:06 they'll all spin down basically. And eventually they get evicted if they're not used. What's the
19:11 payment model for this kind of stuff? How much difference does it make from say like, I'll just
19:16 fire up a VM and just run my stuff there relatively. Yeah. That depends a lot on what your utilization
19:22 level can be. So if you get, let's say you have a VM and you get one request a day, you're paying for
19:28 24 hours of compute to handle 50 milliseconds or a hundred milliseconds or something. Yeah. And whereas
19:35 in Lambda you pay by the unit, which I believe is the megabyte-second of memory. And that also scales your CPU.
19:43 So they have tiers of anywhere from 128 megabytes of RAM up to, I believe a gig and a half. And each of those
19:52 is billed per hundred-millisecond slice by the tier. Yeah. Let's suppose I've got a function that doesn't
19:59 really use much memory, right? It's pretty basic and it's run five times a day for 10 milliseconds.
20:06 Is this like a penny, a dollar, $5? Oh, that's in the hundredths of pennies. I had a blog post where
20:14 I calculated out the cost in a quote-unquote more sensible unit, the picodollar per byte-second.
20:19 Okay. And if you run, I believe it's, you get a million invocations for something like 14 cents, or a million
20:30 milliseconds for something like 1.4 cents. Wow. And so there's a lot of, I mean, you get,
20:37 it's more expensive if you were to run a 24 hour EC2 instance and then run a 24 hour Lambda function,
20:44 which you can't because there's a timeout, but if you ran them one after another for a total of 24 hours.
20:49 Like if it was under a super heavy load, that was basically equivalent to a continuous, right?
20:53 Yeah. If you were to run 24 Lambda hours, if you will. So if you run, you know, 24 of them all for one hour solid,
21:00 then that would be a little more expensive than the equivalent EC2.
21:06 But that would be assuming that you can fully utilize your EC2 instance, which is not all that
21:12 common for most workloads because you've got kind of extra capacity to handle random spikes and
21:17 variations in what users are doing. So if you're going to get a little bit more traffic, you don't
21:22 want to spin up a brand new server every time you get just slightly over the threshold. Right.
21:26 Yeah. Yeah. That makes a lot of sense. Whereas Lambda can match very precisely,
21:29 you know, one-to-one with the events that you've got.
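As a rough sketch of the billing model discussed above: round each invocation's duration up to the next 100 ms slice, then multiply by a GB-second rate scaled to the memory tier. The rate constant is illustrative (roughly Lambda's published compute price around the time of this episode), not current pricing:

```python
import math

# Illustrative rate only; check current AWS pricing. Around the time of
# this episode, Lambda compute was roughly $0.00001667 per GB-second,
# billed in 100 ms increments per invocation.
PRICE_PER_GB_SECOND = 0.00001667

def invocation_cost(duration_ms, memory_mb):
    """Cost of one invocation: duration rounds up to a 100 ms slice,
    scaled by the memory tier."""
    billed_seconds = math.ceil(duration_ms / 100) * 0.1
    return (memory_mb / 1024) * billed_seconds * PRICE_PER_GB_SECOND
```

Five 10 ms invocations a day at the 128 MB tier comes out to around a millionth of a dollar per day, consistent with the "hundredths of pennies" ballpark above.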
21:35 Hey everyone, Michael here. Let me take just a moment and thank one of our sponsors who makes
21:38 this show possible. This portion of Talk Python in May has been brought to you by Rollbar.
21:42 One of the frustrating things about being a developer is dealing with errors,
21:46 relying on users to report errors, digging through log files, trying to debug them,
21:50 or a million alerts just flooding your inbox and ruining your day.
21:53 With Rollbar's full-stack error monitoring, you get the context, insight, and control you need to find
21:59 and fix bugs faster. Adding the Rollbar Python SDK is just as easy as pip install Rollbar. You can start
22:06 tracking production errors and deployments in eight minutes or less. Rollbar works with all the major
22:11 languages and frameworks, including the Python ones like Django, Flask, Pyramid, as well as Ruby,
22:16 .NET, Node, iOS, and Android. You can integrate Rollbar into your existing workflow, send error alerts to Slack
22:23 or HipChat, or automatically create new JIRA issues, Pivotal Tracker issues, and a lot more. They have a special
22:28 offer for Talk Python to me listeners. Visit talkpython.fm/Rollbar, sign up and get the bootstrap plan free
22:34 for 90 days. That's 100,000 errors tracked for free. But you know, just between you and me, I hope you don't encounter
22:41 that many errors. Give Rollbar a try today. Just go to talkpython.fm/Rollbar.
22:47 There's all these different event sources. I understand how AWS can invoke our function.
22:52 If like say an S3 thing changes, it's all like inside AWS, but you could wire up even like an API to this thing,
23:01 right? Like if somebody hits this URL with a post with this JSON body, run this function, right?
23:05 Yep. Just like you would put a handler in your Pyramid or Django application, you can attach a Lambda function
23:13 to a, what's called in AWS an API Gateway. In other serverless platforms, they call it, I believe, an HTTP event.
23:22 But it's all the same. Basically, they take, they have something that's running all the time that's waiting for HTTP requests.
23:28 And then when someone hits it, it will invoke your Lambda function, get the output, and then send it back.
23:34 And you only pay when people are actually using it. So you can run an HTTP API that costs you pretty much
23:41 nothing in base cost. You just pay per request.
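A minimal HTTP-triggered handler might look like this sketch. The event and response shapes assume AWS's Lambda proxy integration (query parameters in, a statusCode/headers/body dict out), and the greeting logic is just a placeholder:

```python
import json

def handler(event, context):
    """HTTP-triggered handler sketch, assuming API Gateway's
    Lambda proxy integration event/response format."""
    # Query string parameters arrive as a dict (or None if absent).
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```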
23:44 Yeah, that's pretty awesome. And can you do things like map, like SSL certificates and custom domains and stuff to those URLs?
23:52 Do you know?
23:52 Yep. Yep. You can do custom SSL certificates. You can do custom domain names.
23:56 And in your function, you can figure out what domain name it was sent to. So you can do special things in your template, for example.
24:03 Let's say you have a short domain and a long domain, but they both show the same site.
24:08 If your Lambda function is rendering things or rendering links, it can give the correct URL back.
24:14 Yeah, I see.
24:15 By looking in the event and seeing all the information about where it was sent and all that stuff.
24:19 Okay. That sounds great.
24:20 And it has access to all the data sent to it, right?
24:25 Like the URL, the query string, the post body, the headers. Is that possible?
24:29 Yeah, absolutely. But there's kind of two ways you can do it in Amazon's API gateway.
24:34 You can use what's called the Lambda proxy integration, which gives you this giant JSON event that has everything in it.
24:40 So it'll have the X-Forwarded-For and any other HTTP headers like auth, and the query string and the path and everything.
24:51 Or you can use a subset language that AWS calls the Velocity Template Language.
24:56 And you can select very specific little bits.
25:00 You shape the event that you get in your function to only the stuff that you need.
25:04 And this means that your function runs faster because it's decoding less data.
25:08 And there's less kind of a security attack surface area because you're only letting through one or two little things.
25:14 And then that's happening before it gets to your code at all.
25:18 Right. You don't want to.
25:19 There were those mass assignment injection attacks, for example.
25:23 Yeah. Or you could just have someone who sends a massive regex or something else that's particularly expensive to decode.
25:32 And make your Lambda function slower and, you know, cost you more money in the end, right?
25:36 Right. Yeah. It's interesting to think that as we use these cloud resources on a consumption-based model,
25:42 distributed denial of service takes on a direct monetary component.
25:48 Yeah. I mean, it's more like a billing denial of service.
25:51 Yes, exactly.
25:52 So if I have, like, my VM running at, say, DigitalOcean or something, and somebody decides to attack it and pound on it,
25:58 well, it may degrade or even kill my service.
26:01 But I'm still going to pay the $10 a month or whatever I pay, you know what I mean?
26:05 Whereas this, it could vary, right?
26:07 It can vary, but there are limits.
26:09 AWS puts in place two kinds of limits.
26:12 They call them, the first ones they call safety limits, which are relatively low just so that
26:18 you don't bill yourself out, right?
26:20 Right.
26:20 And then they have what are called soft limits, which are limits that they say,
26:24 okay, most of our users never hit this limit, but the ones that do can just call us and we'll raise it right up for you.
26:30 And then they have hard limits on services where there's some technical limitation they just can't go above, you know, 40-gig Ethernet, for example.
26:37 Sure.
26:38 Okay.
26:39 That makes sense.
26:39 And, of course, in AWS, I'm pretty sure the others have this as well, but you have, like, billing alerts.
26:44 Yep.
26:45 Right.
26:45 You have billing alerts.
26:47 You can also monitor specific things about Lambda.
26:50 And that's another thing that's really nice in serverless is the provider needs to monitor really well to bill you correctly.
26:56 And so you also happen to get really good monitoring because they need good monitoring to bill you.
27:02 Right.
27:03 And they just surface that for you.
27:04 So that lines up real nice.
27:05 Oh, yeah.
27:06 That's cool.
27:06 So maybe that's a good place to look, maybe compare and contrast with traditional web frameworks like Pyramid, Flask, Django compared to, like, this programming model.
27:16 I mean, obviously, the way you set up the server is different.
27:18 Like, you don't deal with Nginx and whatnot.
27:21 But, you know, sort of the paradigms, what do you think?
27:24 There's a lot of things that you don't get.
27:26 So, for example, you can't just have some super long-running API request because, for example, Lambda has a five-minute timeout maximum.
27:35 But usually you'll set that lower so that you don't bill yourself out or so you don't cost yourself too much money.
27:40 Because, you know, if your average web request doesn't terminate in five seconds, the user's gone anyways.
27:46 So you want to just stop that.
27:47 And you also get a lot more control over what's shared and what's not.
27:52 So in Django, Flask, or Pyramid, you have sort of a shared state that's internal to the server that isn't persisted out to a database, for example.
28:01 Right.
28:01 Maybe some in-memory static caching and stuff you pre-computed at start, and you can just reference that, right?
28:06 Yep.
28:07 So you can build up pretty expensive caches locally that in Lambda don't make sense to do quite so much.
28:14 And so what you would use for that is some other really fast storage system like DynamoDB or a Redis or even Elasticsearch.
28:22 Sure.
28:23 All of these things give you a really low latency way to just get data back.
28:27 And then that would be accessed by your Lambda functions.
28:29 And then you would be able to get stuff really quick.
28:32 And so that's different in that you don't have the persistence.
28:34 And then the other thing is that you don't have the same limitations on language boundary.
28:40 And I know this is a Python podcast.
28:42 So we'll flip around this example: imagine you're writing something in Express, which is a Node.js framework, and then you want to write a function in Python because you like Python more.
28:52 You're kind of SOL unless you make a new microservice.
28:55 Right.
28:56 Whereas in Lambda, you can go as granular as you want with languages.
29:00 So I can say, oh, well, the user create endpoint is in Python 3.6, but our profile image generator only runs in 2.7 right now because I haven't gotten around to it.
29:11 And so you can actually make a more granular migration between languages because you're doing one feature at a time.
29:16 I see.
29:17 You upgrade a function at a time and execute a function more or less in its own isolated environment, right?
29:24 Yep, exactly.
29:25 And then if you compare that to a Django project or something, then you've got one Python interpreter for your whole app.
29:31 So everything either has to be Python 3 or nothing can use Python 3.
29:34 Right.
29:35 It's all or nothing.
29:35 Yeah.
29:36 Okay.
29:36 That's pretty interesting.
29:37 So the granularity is really cool.
29:39 And then you also get the ability to make your dependencies separate.
29:43 So if you have certain dependencies that you only want to run against very trusted data, you can make those only in the functions that are invokable by very trusted things.
29:52 So you get a lot more security firewalling, not literal firewalls, but a lot more compartmentalization between functions.
29:59 And you can even do AWS resource permission distinctions between functions.
30:05 So you can do things like, say, this function's allowed to write to S3, but it's the only one.
30:09 Everyone else is denied.
30:10 Right.
30:11 Okay.
30:11 That's actually pretty awesome.
30:13 Because, again, in like a traditional WSGI app, you have to put those walls up yourself.
30:19 And it can be tricky because it's still the same memory in the end anyway, right?
30:24 Yeah.
30:24 I mean, regardless of how tricky it is, it's just, it's easy to make a mistake.
30:28 Sure.
30:28 Or to accidentally add an endpoint that probably shouldn't get write access to S3 and then so on and so forth.
30:35 Sounds like serverless code might be a little bit more safe by default.
30:40 It's as safe as you make it.
30:42 Yeah.
30:42 You can just say, oh, I'm just going to give all my functions full admin and they're just going to execute arbitrary Python that comes in off the internet.
30:49 You can just eval every request.
30:52 Here, give me that pickled object.
30:54 I'll work on that.
30:54 No problem.
30:55 Yeah.
30:55 I also take arbitrary pickled objects and just, yeah, let's go.
30:58 That'll be fun.
30:59 Let's try that.
31:00 But you can do a lot.
31:01 There's a couple of really good talks.
31:03 One is "Gone in 60 Milliseconds," an example of how, even in a serverless context, if you over-permission your functions, attackers can still get things that they shouldn't from your Lambda functions.
31:15 Of course.
31:15 Yeah.
31:16 It just sounds like it might be a little easier to exercise some least privilege.
31:20 Yeah, definitely.
31:21 Type of stuff here.
31:22 Okay.
31:22 So talking about the dependencies and persistence and caching and things like that, it sounds to me like to really, if we're going to have kind of complicated programs that are
31:33 running in this serverless architecture, you kind of need to go a little more all in on the cloud providers.
31:38 Like, let's just stick to AWS because we've been talking about Lambda, but, right, this applies generally.
31:43 So AWS has DynamoDB and some kind of caching.
31:48 I'm guessing Redis.
31:49 I haven't played with their Redis option.
31:51 Yeah, they offer Redis or Memcached as a service.
31:54 Yeah, exactly.
31:55 So there's that.
31:56 There's RDS.
31:57 Storage would go to S3 instead of the file system.
32:00 Right.
32:01 So do you feel like to be effective with this stuff, you kind of have to go a little more into the various APIs?
32:07 Whereas I could use EC2 and, like, basically forget I'm on AWS.
32:11 Yeah, yeah, you could.
32:12 The downside of doing EC2 and forgetting that you're on AWS is that you're giving up Amazon's zillion developer-years that have gone into creating all these higher-level services that are basically commodities.
32:28 So things like S3, oh, store this blob for me and then let me get it later.
32:32 When you're running just on EC2 and you're storing it to disk, you need a backup strategy.
32:37 You need to make sure that if that server goes down, it's still available.
32:40 So you have to do the replication.
32:41 And S3 is just one example, but you have all these services that make your life easier.
32:47 And so you do have a trade going on.
32:49 So you can choose to use as few provider-specific services as possible, but then you don't get the benefits of using those.
32:58 So the example that I like to use is there's an online training company called A Cloud Guru who built their prototype in about a week on Lambda and Firebase.
33:09 Okay.
33:10 And the downside is that they would be locked into that forever.
33:13 The upside is that if they couldn't have done that, they couldn't have started.
33:16 And so they wouldn't even exist, right?
33:18 Right.
33:19 And so every time, you're making a trade between going out and growing your own food versus getting it from someone else, where you're locked into that provider somewhat.
33:30 So you're making a trade-off between what you're able to do in a short amount of time versus how easy it would be to switch to another provider, basically.
33:37 Right.
33:38 Of course.
33:38 And it doesn't necessarily mean you have to stick with serverless, right?
33:41 Like you could go and use RDS and Redis and S3 and then switch to EC2 and still use those, right?
33:48 You're just kind of just stuck to AWS at that point, but not to Lambda.
33:52 Yeah, because a lot of the services that you'll use alongside Lambda, you would use from a traditional application.
33:57 So you'll see people that write web applications and then some of the functionality is in Lambda because they didn't want to deal with something like a resource overrun on that particular item.
34:08 Or they wanted a special event source that wasn't an HTTP type event.
34:12 Or they just liked that context better because it's a language they don't normally work with.
34:17 There's all kinds of reasons that you would have kind of a hybrid.
34:20 Right.
34:21 Of course.
34:21 I have a lot of faith in these cloud providers.
34:24 Like they very rarely go down.
34:27 And, you know, when they do, it's usually really, really short.
34:30 But what if you wanted to have some flexibility to say move?
34:34 Like maybe could you speak to lock in a little bit?
34:37 Everywhere you're going to go, you're going to have some lock in, whether that's just the time it would take to move your data or the time it would take to write your app to make changes.
34:45 An example of not-very-locked-in lock-in would be something like file systems on Linux, because you can switch between ext4 and ZFS.
34:54 And that's, as far as your code is concerned, the same because they both provide the same interface.
34:59 Whereas when you migrate from, like you've talked about on previous podcasts, from Postgres to MongoDB, you've got to do a change to your code to deal with the different modeling that MongoDB does of your data and queries versus what Postgres does.
35:15 Right, exactly.
35:15 Yeah, that took a couple days of work and a few bugs you had to hunt down, right?
35:20 Yeah, that was different, of course.
35:21 Yeah, but you made it.
35:22 So, and serverless is the same way.
35:24 You're going to be embedded in whatever cloud provider you're in because in most contexts, you kind of want to be embedded like that because then you take advantage of their work so that you don't have to.
35:35 Right.
35:36 But if you're worried about transitioning cloud providers, there are a few things that you can do, like never handling an event directly.
35:43 That is, you always transcode it into a sensible format.
35:46 So instead of relying on the Lambda proxy event format, you have something that transcodes that into just the stuff that you need in a format that makes sense to you.
35:54 So that way, at least, you just have to rewrite these shims and then your internal code makes sense.
36:00 Sure, that makes a lot of sense.
36:01 Or your internal code still works.
36:02 Right, right.
36:03 When you rewrite the shim to handle, say, the Google format of the event or the Amazon format of the event.
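The shim idea above can be sketched like this. The keys on `raw_event` follow the rough shape of an API Gateway proxy event, but treat the exact field names as illustrative; a Google Cloud shim would map its own format into the same internal `Request`:

```python
# Sketch: transcode a provider-specific event into your own internal
# format so only this shim changes if you switch cloud providers.
from collections import namedtuple

Request = namedtuple("Request", ["method", "path", "body"])

def from_aws(raw_event):
    # Map the provider's event shape onto the internal Request.
    return Request(
        method=raw_event.get("httpMethod", "GET"),
        path=raw_event.get("path", "/"),
        body=raw_event.get("body", ""),
    )

def handle(request):
    # Internal code only ever sees Request, never the raw provider event.
    return "%s %s" % (request.method, request.path)
```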
36:08 Yeah, I guess, you know, probably some proper architecture makes a lot of sense here.
36:12 Like, for example, your primary code could directly write to S3 or it could call some other code that says save this file to my thing, wherever that is.
36:24 And then that could be implemented to do it to S3, it could be implemented to Azure Blob Storage, whatever, right?
36:30 Azure actually has a really cool implementation of this that I like a lot that I hope that more providers will copy.
36:35 You make a trigger, they call it a trigger, an event source.
36:38 And you can have Azure Blob Store be the trigger, or you can have something like Dropbox be the trigger.
36:44 And what it'll do is it will pull down that file, put it in a temporary directory where your function is running, and then invoke your function and tell it about the local file path.
36:54 Oh, nice.
36:54 So you don't actually deal with, like, the Dropbox or Azure Blob Storage APIs to get files.
36:59 It puts them in a local directory for you, and then you use your language's regular file support.
37:04 So events like that would be amazing for more providers to implement.
37:08 And I think they've done a great job there.
37:10 Yeah, yeah, that actually sounds really quite interesting.
37:13 Another thing that I guess I see is a bit of a challenge for serverless is what if I want to work on this locally, right?
37:23 Like, I want to just fire up my local CPython and run this code and see what happens, you know?
37:30 Like, maybe I'm on a plane, and I want to, you know, do a little bit of work before I get to this conference for something I'm doing or some customer demo or something, right?
37:42 And it involves this app.
37:44 Like, what's the story on local or offline or any of these things?
37:48 You've got some options.
37:49 For local, there's the kind of what I would call the first-degree local, which is you can develop locally and deploy to a dev environment that just takes your code directly.
37:59 So that's kind of first-degree local, but the code's actually not running on your machine at all.
38:02 If you want more than that so that you can have a faster feedback loop, because deploys do take time.
38:06 Not a lot of time, but time.
38:08 You can use something like LocalStack, a tool from Atlassian, that will fake out API Gateway, Kinesis, Dynamo, S3, and a bunch of other services on your local machine.
38:18 So they'll be on local ports.
38:19 And then you can run code that would use AWS services and point it at your local LocalStack.
38:26 Nice.
38:26 Yeah, maybe just change, like have a different, like a dev local config that has different endpoints.
38:31 It would be a config change that would point your code at those local services.
38:36 So you can then develop locally, like on a plane.
38:38 So once you have LocalStack downloaded, you would be able to do that.
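The config change mentioned above might look something like this sketch. The environment variable name and the endpoint URL are placeholders (check the LocalStack docs for its real per-service ports); with boto3 you would pass the resulting kwargs straight into `boto3.client(...)`:

```python
import os

# Sketch: pick service endpoints from config so the same code can talk
# to real AWS or to a LocalStack instance on your machine.
LOCAL_ENDPOINTS = {"s3": "http://localhost:4572"}  # placeholder port

def client_kwargs(service, use_local=None):
    # With boto3 this would be used as:
    #   boto3.client(service, **client_kwargs(service))
    if use_local is None:
        use_local = bool(os.environ.get("USE_LOCALSTACK"))
    if use_local:
        return {"endpoint_url": LOCAL_ENDPOINTS[service]}
    return {}  # empty kwargs -> default (real) AWS endpoints
```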
38:42 The other thing is a plugin for the Serverless Framework (I'm a contributor to the Serverless Framework, but not to this plugin).
38:48 It runs Docker locally, uses that container image that I talked about, and simulates different event invocations to your function.
38:58 And so you can either use that with their built-in support for DynamoDB Local, which is a little JAR that you run that puts up a fake DynamoDB.
39:08 Or you could use that in conjunction with local stack to get the runtime and the service simulation together.
39:15 Yeah, I see.
39:15 Yeah, I wonder how much Atlassian came up with LocalStack to solve their own local cloud testing problems and just open sourced it.
39:24 Do you know?
39:25 I don't actually know the people that made LocalStack, but it definitely seems like the kind of thing that a company Atlassian's size would say.
39:31 We have so and so many developers and it would save us so and so much time to not deploy every time they want to test something.
39:38 So let's take that time that we're going to save and invest in this tool that's going to, you know, make our developers a lot happier down the line.
39:46 Yeah, that actually made me think of continuous integration and testing and verification and stuff like that.
39:53 Like, how would I basically verify my stuff in a CI style?
39:58 Yeah, so there's some things that are hard, like simulating actual chains of lambdas.
40:03 So let's say you have that podcast lambda that we used as an example earlier where it takes an MP3 file, it does stuff, it puts it back in S3.
40:10 And then the next function picks up the new file and does something else and so on.
40:15 It's hard to simulate that chaining, but because each function is pretty focused, using a regular testing framework to test, given this input, what happens?
40:24 And there are several libraries that will pretend to be AWS for you in Python unit tests.
40:30 I really like placebo, which you can run it and it will record your interactions with AWS, the interactions of your code, and save them all in order.
40:40 And then you can rerun it in playback mode.
40:43 And so it will insert itself before your calls go to AWS and just send back the recorded response.
40:49 So you can make sure that things are called a certain number of times and that your code handles these responses from AWS the right way.
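placebo itself attaches to a boto3 session and records or replays real AWS responses; here's a stdlib-only sketch of that record/playback idea with a hand-rolled fake client, so the mechanics are visible without AWS or the library (all names here are illustrative):

```python
# Sketch of the record/playback idea behind placebo. A previous "record"
# run would have saved (operation, response) pairs; playback replays them
# in order and lets the test check how the code called "AWS".
class PlaybackClient:
    def __init__(self, recorded):
        self.recorded = list(recorded)
        self.calls = []  # every call the code under test makes

    def call(self, operation, **params):
        self.calls.append((operation, params))
        saved_op, response = self.recorded.pop(0)
        assert saved_op == operation, "unexpected call order"
        return response

def upload_report(s3, data):
    # Code under test: talks to "S3" through whatever client it is given.
    return s3.call("put_object", Bucket="reports", Body=data)

fake_s3 = PlaybackClient([("put_object", {"ETag": "abc123"})])
result = upload_report(fake_s3, b"hello")
```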
40:57 Okay, that sounds cool.
40:57 You'd use unit level testing for that and then you'd have an integration level test where you deploy to a staging environment and then you have scripts that exercise kind of the full life cycle with all of those AWS services that you can't simulate locally or want to make sure that your local simulation isn't different from the current AWS behavior.
41:15 Right.
41:15 It sounds like mocking might be an important part as well for certain parts.
41:20 Actually, with placebo, you don't have to do the mocking because it injects itself into the AWS client library and sort of does that mocking under the hood for you, which is pretty cool.
41:28 I see.
41:29 Yeah.
41:30 It's involved.
41:31 You just don't have to write it.
41:32 Yeah, it's involved, but you don't write it.
41:33 Nice.
41:33 I guess that's mockless now.
41:34 Yeah, it's mockless.
41:36 Mockless unit testing.
41:38 This portion of Talk Python to Me is brought to you by us.
41:41 As many of you know, I have a growing set of courses to help you go from Python beginner to novice to Python expert.
41:47 And there are many more courses in the works.
41:49 So please consider Talk Python training for you and your team's training needs.
41:53 If you're just getting started, I've built a course to teach you Python the way professional developers learn by building applications.
42:00 Check out my Python Jumpstart by Building 10 Apps course at talkpython.fm/course.
42:05 Are you looking to start adding services to your app?
42:08 Try my brand new Consuming HTTP Services in Python course.
42:11 You'll learn to work with RESTful HTTP services as well as SOAP, JSON, and XML data formats.
42:16 Do you want to launch an online business?
42:18 Well, Matt Makai and I built an entrepreneur's playbook with Python for Entrepreneurs.
42:22 This 16-hour course will teach you everything you need to launch your web-based business with Python.
42:28 And finally, there's a couple of new course announcements coming really soon.
42:31 So if you don't already have an account, be sure to create one at training.talkpython.fm to get notified.
42:36 And for all of you who have bought my courses, thank you so much.
42:40 It really, really helps support the show.
42:42 We talked about developing locally, but there's also some other tools that just help with things like deployment
42:48 and something called Zappa, which basically as soon as AWS Lambda switched to Python 3,
42:56 Zappa's like, hey, we're Python 3.
42:58 Because it's like running more or less on top of Lambda.
43:02 And there's some others as well.
43:03 Do you want to talk about those?
43:03 Yeah.
43:04 So Zappa is a project that will take your WSGI-ish app.
43:09 So it'll take things like API Star, now that it supports Python 3, Flask, Django, Pyramid,
43:15 and wrap them up in its own kind of fake WSGI that takes the Lambda API gateway events
43:23 and will put them into the request object for that web framework and then give it to your function that would work as a Django app,
43:30 but is now inside of Lambda, but it doesn't know.
43:32 Right.
43:33 So you write code as if it were Django or something.
43:36 So you pretend that it's Django and then Zappa handles packaging up each endpoint and associating it with API gateway
43:42 and then uploading that code to Lambda and hooking it up.
43:47 And then you get your Django-ish endpoint.
43:50 Or, well, it is a Django endpoint, but now running without Django actually serving the connection.
43:55 Interesting.
43:55 Yeah, because you basically, when you start these apps normally, you say, here's my WSGI app.
44:01 And that's the implementation of WSGI is like, you've received a request, right?
44:06 So they just have to adapt to, hey, you've received a request.
44:09 It just happened to have come not from a WSGI server, but from somewhere else, right?
44:13 Yeah, that's exactly right.
44:14 Because you're getting the same sort of data about the request in a different format from API gateway.
44:19 And so what Zappa does is it makes that format match the format that WSGI expects and then gives it off to your code.
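A minimal sketch of the adaptation Zappa performs: turn an API Gateway-style event into a WSGI environ and call the app directly, with no WSGI server in between. Real Zappa fills in the full environ that PEP 3333 requires; this covers only a few keys, and the event field names are illustrative:

```python
import io

def tiny_wsgi_app(environ, start_response):
    # A stand-in for a Django/Flask/Pyramid app's WSGI callable.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [("you asked for " + environ["PATH_INFO"]).encode()]

def invoke_via_fake_gateway(app, event):
    # Build a (partial) WSGI environ from the gateway-style event.
    body = event.get("body") or ""
    environ = {
        "REQUEST_METHOD": event.get("httpMethod", "GET"),
        "PATH_INFO": event.get("path", "/"),
        "wsgi.input": io.BytesIO(body.encode()),
    }
    captured = {}

    def start_response(status, headers):
        captured["status"] = status

    chunks = app(environ, start_response)
    return captured["status"], b"".join(chunks)

status, out = invoke_via_fake_gateway(tiny_wsgi_app, {"path": "/hi"})
```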
44:25 Okay, cool.
44:26 And that'll do deployment as well.
44:28 So you say, Zappa deploy, and it'll deploy it up for you.
44:31 And it will also let you do some local testing and things of that nature.
44:36 Yeah, I wonder if that actually makes it more locally testable because you could just run it as a Django app or...
44:44 I think you're still really bound to those services.
44:46 So I don't know how much that really helps you.
44:49 Yeah, I guess you're right.
44:50 Because it's really that you're trying to get to that S3 bucket.
44:52 You're trying to get to that RDS instance and so on, right?
44:55 Yeah.
44:55 And for example, you might have a thing that you're calling that's only available inside of your VPC.
45:01 which is a virtual private cloud, which is basically a private little subnet.
45:04 Yeah.
45:04 And if you're testing locally, you're just not going to have it.
45:07 Yeah.
45:07 But Zappa is really cool.
45:09 Chalice is another one that's made by Amazon as sort of a labs project.
45:13 I think it's still zero point something.
45:16 But it looks a lot like writing Flask, but it uses those decorators that you put on to auto-discover how it should connect your code to API Gateway.
45:25 And it has a thing that will try and auto-discover what IAM permissions you need, which is, in my experience, a little hit or miss.
45:33 Okay.
45:33 And then there's also something called Gordon.
45:35 Yeah.
45:35 Gordon is a Python framework.
45:37 It's both written in Python and you can deploy Python with it, but it's just for deployment.
45:43 So you'd write your code like Gordon doesn't exist and then you use Gordon to deploy your code.
45:48 Okay.
45:49 So things like Gordon, Ansible, the serverless framework you use to specify all these resources around your function, but you write your function sort of independent of them.
45:58 So they don't want kind of anything to do with your internal code.
46:01 They just want to deploy it.
46:02 Interesting.
46:03 Okay.
46:03 So let's talk about where people are using this.
46:06 What are some popular deployments?
46:09 We already talked about A Cloud Guru, which is that online training place for cloud stuff.
46:15 Yep.
46:16 And then you also probably know of iRobot.
46:18 They make the Roomba and they have a new series of Roomba.
46:21 I think it's the 900 series.
46:22 Right.
46:23 And that has an associated app and it will map your home over time as it sort of figures out where things are.
46:28 And that's actually all backed with Lambda and API Gateway.
46:32 Oh, that's crazy.
46:33 So is it like you hook it on your Wi-Fi?
46:35 Yeah.
46:36 Yeah.
46:37 It has network.
46:37 It just streams its location and info.
46:40 I don't think it streams the location.
46:42 You'd have to talk to them about that.
46:43 They're also huge Python users, so they might be a good guest.
46:46 Oh, yeah.
46:47 That sounds fun, actually.
46:48 The Roomba maps kind of locations in your home and eventually it'll figure out like, well, this is always here.
46:52 And this chair is only sometimes here, so it must be mobile.
46:55 And so it can figure out the most efficient way to vacuum your house.
46:58 But you can also do things like kick it off remotely.
47:01 So if you're at work and you're like, oh, no, I have guests that are going to be in for dinner and I'm at work, run the Roomba so I don't look like a slob.
47:08 And all of that's going through the iRobot API, both on the Roomba side, because they need to be able to tell the Roomba to go.
47:16 And then on your app side, because your app needs to tell them, I want this to happen.
47:20 And all of that's going through API Gateway with Lambda.
47:23 Yeah.
47:23 It seems like these IoT things would be a really good fit for serverless.
47:27 They just need to talk back.
47:29 Yeah.
47:30 Because you don't know what the usage profile is going to be.
47:32 And you're really cost sensitive because people pay you however many dollars for the device and then they expect it to just keep working.
47:39 Yeah, that's cool.
47:40 Another one is Nordstrom, right?
47:42 They've been speakers at ServerlessConf several times and they use Lambda, API Gateway, and Kinesis, which is an event streaming service.
47:50 So you can put as many events as you want on Kinesis and it'll sort of keep them in order to a certain extent and invoke Lambdas for batches of them.
47:58 And they have a project called Hello Retail that I'll make sure gets in the show notes.
48:03 But it's a really nice example if you want to see how the architecture of a real thing works.
48:09 So for them, it's a simple retail platform that's an example of all these services kind of together in one place.
48:14 Okay, great.
48:16 So I guess, you know, all this sounds really good.
48:20 There's a few cases where serverless makes things a little harder, but a lot of places where it makes it much easier.
48:26 When would you say that we should maybe, so we talked about a lot of places to use it.
48:31 When would you say you maybe should not use it?
48:33 Like if I'm trying to do this thing, like it's probably not a good use case for this model.
48:38 What is that?
48:39 There's a few.
48:39 There's things that you have right now, things that have a low latency sensitivity.
48:43 So if you absolutely positively need a response in X milliseconds.
48:48 Right.
48:49 So like high frequency trading might not make sense.
48:51 And this is like single digit milliseconds.
48:53 Yeah.
48:53 So you're not going to high frequency trade on Lambda.
48:56 But even if you have something like an ad marketplace, Lambda is probably not the best because usually your responses are sitting around.
49:05 I believe it's 150 milliseconds to go through API gateway and back and then however long your Lambda function wants to run.
49:12 So if you need 50 millisecond latency, then I think you're going to have problems.
49:16 I see.
49:16 So there's kind of an assumed like 150 millisecond latency just in the whole system.
49:22 Yeah.
49:22 Yeah.
49:22 Just because you're invoking a new container and it's starting and it's loading your code into memory and doing all this stuff.
49:27 And subsequent invocations are faster because the code's already in memory and it's just waiting on more events.
49:32 But you have kind of a base time that you're going to be spending just communicating over the network between API gateway and Lambda.
49:39 Whereas if you're going from client directly to your Django app, there's no like API gateway in between.
49:46 And so you are adding a hop even though it's a hop inside AWS.
49:50 So it's pretty quick.
49:51 The other thing is for WebSockets.
49:53 There's not a great Lambda WebSocket story yet, but you can go over to a place like Firebase and they have awesome WebSocket support and they hook into Google Cloud Functions.
50:02 So it depends on your provider too.
50:04 I see.
50:05 Okay.
50:05 Well, that sounds like good advice.
50:07 Firebase definitely worth checking out.
50:09 Yeah.
50:09 And they have really good Google Cloud Functions integrations now that were, I think some of them were just announced at Google Next a month or two ago.
50:17 Yeah.
50:18 Excellent.
50:18 So this is probably a pretty good place to wrap things up.
50:21 One thing I did want to give you a chance to talk about is you just wrote some video courses on serverless programming, right?
50:27 Yeah, that's right.
50:28 I have two, one from a little while ago that's just on AWS Lambda and function as a service and how to write code for kind of this new architecture and runtime and all that stuff.
50:39 And the most recent one I have is using the serverless framework, which is one of these deployment options, one of these tools to deploy GraphQL APIs, which is a query language.
50:49 It's developed at Facebook.
50:50 I think you've had a show on it unless it was a different Python podcast.
50:53 Yeah, I think it was on Podcast.__init__ actually, but that's, yeah.
50:56 So people can check that out.
50:57 That sounds interesting.
50:58 Yeah.
50:59 So that's the serverless framework to deploy a GraphQL API that you would consume from a web front end, like a single page app, maybe using React, or you could use it from a mobile app.
51:08 Anything that can speak GraphQL.
51:11 And the idea is that you can then write your own backend with a lot less experience than you needed before because now you're not managing like a load balancer and auto scaling groups and all that stuff.
51:20 You just deploy your function that talks to your data store that then gives you access to all your data from the client.
51:26 That sounds really cool.
51:27 So those are both at A Cloud Guru and we can link to that in the show notes so people can check that out.
51:33 All right, Ryan, this is really interesting stuff.
51:35 Let's close it out with the two questions.
51:36 If you're going to write some Python code, what editor do you open up?
51:39 That would be Vim, of course.
51:41 All right, cool.
51:43 And most notable but not really super popular PyPI package you want to draw people's attention to?
51:49 I don't know if it's super popular, but I really love StructLog, which is a library for structured logging.
51:56 So you import it and you put it into your standard library logging configuration and it intercepts stuff and it can take keyword arguments.
52:04 So instead of logging out a formatted line with a bunch of percent S or the curly braces, you log out the name of your event.
52:11 So that can just be a short message and then as many key values as you want as keyword args.
52:16 And then it'll log them as either JSON or a prettified kind of CLI thing.
52:20 And so you can inject a lot more data into your logs and then you can get a lot more out of them if you're parsing them through like the elastic stack for log parsing or using CloudWatch or anything like that.
52:31 So in Lambda, you can also use this and it will get parsed as JSON by CloudWatch, which is pretty cool.
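The structured-logging idea being described can be sketched with only the stdlib: log an event name plus arbitrary key-value pairs as one JSON line, instead of interpolating a format string. structlog's real API looks roughly like `log.info("user_created", user_id=42)`; this stand-in just shows why the JSON output is easy for CloudWatch or the Elastic Stack to parse:

```python
import json

def log_event(event, **fields):
    # One event name plus free-form key-value pairs, emitted as a single
    # JSON line rather than a "%s"-formatted message.
    record = {"event": event}
    record.update(fields)
    line = json.dumps(record, sort_keys=True)
    print(line)
    return line

line = log_event("user_created", user_id=42, source="signup_form")
```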
52:37 Yeah, yeah.
52:37 That sounds like a really cool addition.
52:39 All right.
52:40 Final call to action.
52:41 How do people get started with this stuff?
52:42 I have a blog about this that has some Python material as well as Node.js.
52:46 So whatever people are into at serverlesscode.com, they can just kind of go through there.
52:50 I have tutorials, projects, interviews, kind of a mix or hit up those video courses if you're the kind of person that learns from video.
52:57 All right.
52:58 Very cool.
52:59 Thanks for sharing all this serverless stuff with us, Ryan.
53:01 It was very interesting.
53:02 Yeah.
53:02 Thanks, Mike.
53:03 My pleasure.
53:03 Yep.
53:04 You bet.
53:04 Bye.
53:04 This has been another episode of Talk Python to Me.
53:09 Today's guest was Ryan Scott Brown, and this episode has been sponsored by Rollbar and Talk Python Training.
53:17 Rollbar takes the pain out of errors.
53:18 They give you the context and insight you need to quickly locate and fix errors that might have gone unnoticed until your users complain, of course.
53:26 As Talk Python to Me listeners, track a ridiculous number of errors for free at rollbar.com slash Talk Python to Me.
53:34 Are you or your colleagues trying to learn Python?
53:36 Well, be sure to visit training.talkpython.fm.
53:39 We now have year-long course bundles and a couple of new classes released just this week.
53:45 Have a look around.
53:46 I'm sure you'll find a class you'll enjoy.
53:48 Be sure to subscribe to the show.
53:50 Open your favorite podcatcher and search for Python.
53:52 We should be right at the top.
53:54 You can also find the iTunes feed at /itunes, Google Play feed at /play, and direct RSS feed at /rss on talkpython.fm.
54:03 Our theme music is Developers, Developers, Developers by Corey Smith, who goes by Smix.
54:08 Corey just recently started selling his tracks on iTunes, so I recommend you check it out at talkpython.fm/music.
54:15 You can browse his tracks he has for sale on iTunes and listen to the full-length version of the theme song.
54:21 This is your host, Michael Kennedy.
54:22 Thanks so much for listening.
54:23 I really appreciate it.
54:25 Smix, let's get out of here.
54:27 I'll see you next time.