#218: Serverless Python functions in Azure Transcript
00:00 Do you have stateless code that needs to run in the cloud?
00:02 The clear answer years ago was to create an HTTP service, or even, gasp, a SOAP service before that.
00:08 While HTTP services are still very important, some of this code can be moved entirely away
00:13 from the framework that runs it and over to serverless programming and hosted functions.
00:18 On this episode, I meet up with Asavari Tayal to discuss serverless programming in the cloud.
00:23 This is Talk Python to Me, episode 218, recorded live at Microsoft Build on May 8, 2019.
00:29 Welcome to Talk Python to Me, a weekly podcast on Python, the language, the libraries, the
00:47 ecosystem, and the personalities.
00:49 This is your host, Michael Kennedy.
00:51 Follow me on Twitter where I'm @mkennedy.
00:53 Keep up with the show and listen to past episodes at talkpython.fm and follow the show on Twitter
00:58 via @talkpython.
01:00 This episode is brought to you by Command Line Heroes from Red Hat and Datadog.
01:04 Please check out what they're offering during their segments.
01:06 It really helps support the show.
01:08 Asavari, welcome to Talk Python.
01:09 Hi, Michael.
01:10 Thank you.
01:11 It's super to have you here.
01:12 It's great to meet you live at Build.
01:15 So it's going to be a lot of fun to talk about serverless.
01:17 Yeah, excited to be here.
01:18 Yeah, it's cool.
01:19 So it feels like everyone I'm having on the show lately is doing this like sort of dual
01:23 life, right?
01:24 Like we're at PyCon and now we're at Microsoft Build.
01:28 Yep.
01:29 Same for you?
01:30 Same for me.
01:30 We were just in Cleveland at PyCon and it was great.
01:34 So now coming back to Seattle back home and enjoying Build.
01:38 Yeah.
01:39 Both of those are great conferences.
01:40 What were your thoughts on PyCon?
01:41 Did you enjoy it?
01:42 You had a good time there?
01:43 Yeah, this was my second PyCon ever.
01:45 Definitely a lot of fun.
01:46 I really like the sense of community that Python developers have.
01:50 It's different from all the marketing conferences we go to.
01:53 You know, Build being one of them.
01:55 So always a pleasant experience.
01:57 Always good.
01:58 Yeah.
01:58 Yeah, I would say one of the biggest differences I felt was when you walk onto the expo floor
02:04 at PyCon, it's like everybody there is kind of doing their own thing, their own separate
02:09 thing.
02:09 It's just a sampling of the community, whereas places like Build, they're great, but they're
02:14 like, it's more like, here's the team for this.
02:16 So here's the team for that, right?
02:17 It's much more sort of focused.
02:19 Great.
02:19 I like the fact that at PyCon, everybody was there to share their stories of, hey,
02:23 this is how we use Python.
02:25 This is what we build.
02:26 And maybe it's useful for you as well.
02:28 So I'm going to share my story with you.
02:29 That was definitely my favorite part of PyCon.
02:32 Yeah, that's awesome.
02:33 I had a great time there as well.
02:34 I always enjoy that conference.
02:35 So let's go ahead and get started with your story, though.
02:39 How did you get into programming in Python?
02:40 Yeah, I started web development when I was 15.
02:44 And definitely my first programming language was JavaScript.
02:49 But eventually, as I got exposed to Python, and I remember, I think my first web application
02:55 that I developed was in Django. I was developing like a portal of some kind for farmers to go
03:03 up and look up various grain imaging systems that were being developed by a computer vision
03:08 lab.
03:08 So anyway, that's how I started Python.
03:11 And eventually, when there was an opportunity to make Azure great for Python developers, it
03:17 just happened that I was there and I had some experience with Python and was basically asked
03:23 to take over Python for functions and app service.
03:25 Yeah, that's super cool.
03:26 Tell me more about this Django app.
03:27 Yeah, so the Django app I was building was back in college.
03:31 I was working with the University of Illinois computer vision laboratory.
03:34 And they were building an algorithm that could take as input the image of some grain and basically
03:43 identify the quality of that grain based on a machine learning model or a computer vision
03:48 model.
03:49 So I wasn't as involved with the data science part of things.
03:52 But at the end of the day, the algorithm needed to plug into a portal that the farmer could
03:56 go interact with, plug in their grain and get market value for it.
04:01 So that portal was something I built using Django.
04:04 Yeah, that sounds like a really fun first project, actually.
04:06 Yeah.
04:07 Nice.
04:07 Okay, so that was then.
04:09 And did you study computer science at University of Illinois?
04:11 Yes.
04:12 Yeah, cool.
04:13 And now you're at Microsoft working on the Azure team.
04:16 Yes.
04:17 Tell me what you do there.
04:18 Yeah, so I'm a product manager on Azure Functions.
04:21 And currently, I'm leading both the Python experience on Azure Functions and Java experience
04:27 as well.
04:27 So really, what that means is everything that needs to be built, either on the platform tooling
04:34 or the runtime, I go in and build.
04:36 So trying to understand the story of the journey of a Python developer on the Azure Functions
04:43 platform, how they come in, what their problems are.
04:46 So on a daily basis, speaking to lots and lots of customers about why they're trying to use
04:51 serverless, why they're trying to use functions, what are some problems they face today and how
04:55 we can solve them.
04:56 And really taking those requirements back to the engineering team, building POCs and prioritizing
05:01 features on the product.
05:02 Yeah, that's cool.
05:02 Did you have a lot of those conversations at PyCon?
05:05 At PyCon, yes.
05:06 Also at Build the last two days, it's great to see that the number of folks who are developing
05:11 with Python is increasing at Build every year.
05:14 Yeah, it's pretty cool.
05:15 Did you guys have like an expo stand area?
05:18 We do.
05:19 It's pretty busy; there's a lot of hustle and bustle around the Functions stand.
05:23 It's been great to see all the interest, yeah.
05:26 Yeah, I'm sure it's a fun area to work on.
05:28 Yep, for sure.
05:28 Cool.
05:29 So before we get into the details of Azure Functions and the Python story, let's focus
05:34 on just serverless.
05:35 We had servers.
05:36 And then we had the cloud, we had kind of virtual servers, and then we have serverless.
05:42 Walk us through that.
05:43 Right.
05:43 So we almost think of serverless as the evolution of application development on the cloud.
05:49 You know, we used to have on-prem or on-premise, and then we moved on to IaaS, then we moved
05:55 on to PaaS.
05:56 And now serverless is even the next level of PaaS.
05:59 What that means is, hey, PaaS already took away your problems of having to manage physical
06:05 hardware when it comes to servers.
06:07 Now you only need to worry about what's running on that hardware.
06:10 But you still needed to worry about what OS you're using, what packages or dependencies are installed
06:16 on that OS.
06:17 patching those OSes, upgrading those machines.
06:21 Serverless takes away all of that because, at the end of the day, what you need to do as a
06:32 developer is give us your application code, and we'll run it in the cloud for you.
06:37 Very important concept in serverless is that you also no longer need to estimate how many
06:43 app servers do you need to run your application.
06:46 So typically when you were building an app, you would have to think, hey, my app's going
06:49 to get 20,000 requests.
06:51 And so I estimate I'll need about maybe 100 servers to serve those requests coming in.
06:57 You no longer have to do that with serverless because we're going to automatically scale based
07:03 on events that your application is receiving.
07:05 Right.
07:05 You don't have to worry about memory, like your server running out of memory because it's
07:10 doing too much and things like that, right?
07:11 Exactly.
07:11 Just at the individual level, maybe.
07:13 Exactly.
07:13 And that's why when we talk about serverless, we actually also start to talk about event-driven
07:17 serverless, which is really important because if you write an application that's going to
07:22 be triggered off an event in the cloud, something like an HTTP request or an event in some source
07:27 system somewhere, could be storage, could be an eventing mechanism, we are constantly
07:31 watching for those events.
07:32 And when those events occur, we'll go ahead and spin the resources you need to run your
07:37 application.
07:37 And when more of those events happen, we go spin more resources.
07:41 So you're really never out of compute or memory, for that matter.
07:45 Yeah, that's pretty cool.
07:46 So maybe some scenario like that fits in my world.
07:48 Maybe I have some videos from an online course, but they're in a raw form.
07:52 I want to get them transcoded for the right kind of streaming.
07:55 I could upload them to like a storage that could trigger an event, a serverless function, which
08:00 grabs it, runs it through a transcoder service, and then drops it back maybe like in a file
08:04 in a location and something like this.
08:05 Yeah.
08:06 Exactly.
08:06 Both.
08:07 I think in the scenario that you described, there were two important factors.
08:10 One, your scenario was event-driven.
08:12 You were going to kick it off and then process something and drop it in a queue somewhere.
08:16 And the other one that it seemed like it was a lightweight job that you maybe didn't need
08:22 to stand dedicated servers up for.
08:24 Right.
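To make that scenario concrete, here is a minimal sketch of a blob-triggered function, assuming the function.json-based programming model with a blobTrigger binding named video (the binding name and container are illustrative):

```python
import logging

import azure.functions as func


def main(video: func.InputStream) -> None:
    # Fires each time a new file lands in the watched storage container.
    logging.info("Processing %s (%s bytes)", video.name, video.length)
    raw_bytes = video.read()
    # Hand raw_bytes off to a transcoder service, then write the result
    # back out, e.g. via an output binding.
    logging.info("Read %d bytes for transcoding", len(raw_bytes))
```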
08:24 Can you do like timing?
08:25 Like, could I say run this function every hour?
08:28 Yes, absolutely.
08:29 We've got the timer trigger.
08:30 So if you go ahead and set the time that you'd like your function to run on, we can always
08:34 do that.
08:34 Cool.
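As a sketch, a timer-triggered function pairs a Python script with an NCRONTAB schedule declared in its function.json; for example, "0 0 * * * *" fires at the top of every hour (names here are illustrative):

```python
import logging

import azure.functions as func


def main(timer: func.TimerRequest) -> None:
    # The schedule itself lives in function.json, e.g. "0 0 * * * *".
    if timer.past_due:
        logging.warning("Timer invocation is running late")
    logging.info("Hourly job executed")
```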
08:35 So I guess maybe I would put that into like an automation category.
08:39 Like what are some of the other main scenarios?
08:40 I guess this was automation.
08:42 The other two scenarios that we talk about with Python, the first one is web serving or APIs.
08:48 If you, for instance, had a Cosmos DB instance and you wanted to go ahead and stand up
08:53 a REST API in front of that Cosmos DB, you could do that using functions.
08:58 Okay.
08:58 So it might be like an easy way to build an API endpoint that serves up JSON or something
09:02 like that.
09:03 Exactly.
09:03 How would I, I guess, in the Python space, what technology would I use for that?
09:06 Like if I were building an API and I was hosting it in uWSGI, I would do that with Flask
09:12 or Pyramid or API star or something like that.
09:16 Like, do I work at that level in these functions or like what's the story there?
09:20 Yeah.
09:20 So when you develop a function, you would have to pick a trigger for your function, right?
09:25 That's the event that we were talking about earlier.
09:28 In this case, as long as you tell us, hey, it's going to be an HTTP trigger, you configure
09:33 the parameters you want for that request.
09:35 Things like, are we accepting a GET or POST request, or are we authenticating against an API key,
09:41 or AAD, or a no-auth mechanism?
09:43 As long as you give us that configuration and give us the Python script that you'd like to
09:47 run when that trigger occurs, we'd be able to run it as a function for you.
09:52 You no longer have to go in and define routes in a Flask app.
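The hello-world template she describes boils down to something like the following sketch; the allowed methods and auth level are declared in the accompanying function.json rather than in the code:

```python
import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    # Look for "name" in the query string, then fall back to a JSON body.
    name = req.params.get("name")
    if not name:
        try:
            name = (req.get_json() or {}).get("name")
        except ValueError:
            name = None
    if name:
        return func.HttpResponse(f"Hello, {name}!")
    return func.HttpResponse(
        "Please pass a name in the query string or request body.",
        status_code=400,
    )
```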
09:55 Right.
09:56 Do I return a string?
09:57 Do I return a dictionary?
09:58 Like, what do I return?
09:59 It's up to you.
10:00 We have this concept called bindings in Functions.
10:03 So you can actually return or you can bind your return value to a data source.
10:10 So for instance, if you were to say, hey, I'd like an output binding for my function app,
10:14 which was Azure Queue Storage, for instance, right?
10:17 And I'd like to bind that to the return value of the function.
10:20 Now, whatever you return, whether it's a string value or a byte array or a JSON string or a stream,
10:26 we'd go ahead and use that to create a message in the Azure Storage queue.
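A hedged sketch of what that looks like: the output binding is declared in function.json under the special name $return, and whatever the Python function returns becomes the queue message (queue and connection names are illustrative):

```python
# function.json would include an output binding roughly like:
#   { "type": "queue", "direction": "out", "name": "$return",
#     "queueName": "outqueue", "connection": "AzureWebJobsStorage" }
import azure.functions as func


def main(req: func.HttpRequest) -> str:
    # The returned string is written as a new message on the bound queue.
    return f"processed item {req.params.get('id', 'unknown')}"
```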
10:31 Okay.
10:31 That's pretty cool.
10:32 That's the web side.
10:34 And what's the third one?
10:35 The third one that we're actually seeing very often is ML inferencing.
10:39 So as I've mentioned, I've been talking to a lot of customers about serverless, specifically Python applications and serverless.
10:47 And really seeing two personas.
10:50 One, a lot of data scientists who are coming to us and telling us, hey, I trained a machine learning model.
10:56 Now I'd like to deploy it as a web service.
10:58 Serverless sounds like I no longer need to worry about all those things that typically I don't care about.
11:05 And would otherwise have to go spend time figuring out.
11:08 Right.
11:08 As a data scientist, you don't care about like configuring uWSGI for high performance or whatever.
11:14 I'm not a data scientist, but I imagine for a data scientist, those problems are not even fun.
11:18 You like thinking about the algorithm you want to use for training your model or the libraries you want to use.
11:22 Probably not about what packages to install or when to security patch your OS, right?
11:27 Yeah, yeah.
11:28 Serverless is a great scenario there because you give us your model, you give us your scoring script,
11:33 tell us what event to use, and we'll go ahead and provision the application for you in the cloud, right?
11:38 And then there's also a second category of people or customers that we speak to who are,
11:44 let's say, maybe not mainstream Python developers.
11:47 They might say, hey, we work at a large enterprise and we're using Java for our mainframe application.
11:52 But our data science team gave us this thing called a model, and we need to kind of run it in our application or use it in our application to make some predictions.
12:01 Doesn't look like I can do that with Java.
12:04 I'd like to do that with Python.
12:05 Serverless is a great way to create a microservice that you can invoke via an HTTP endpoint,
12:11 all written in Python, that you can call from your mainframe Java application if you want to do it.
12:17 That sounds pretty cool.
12:17 Do you find students using it a lot for like testing out their code or creating little projects?
12:22 That's a good one.
12:23 I still think serverless is very new to the student community.
12:27 We're trying to create more awareness in that space.
12:30 But I've definitely spoken to a couple of people who've taken an interest and started to use serverless more on the cognitive services side of things.
12:38 But I think we still have a long way to go there.
12:40 There's always kind of a lag between academic courses and stuff like that, right?
12:44 They don't seem to change technology super fast.
12:47 So I can imagine that.
12:48 So you're talking about Azure Functions, and obviously it supports Python, and that's great.
12:54 But just to kind of round out the story, like what other languages does it support?
12:58 Yeah, Functions supports JavaScript, TypeScript, .NET, Java, and we recently announced PowerShell as of a week ago.
13:07 That sounds pretty cool.
13:09 What does a workflow look like if I write regular functions in a Python file?
13:12 I know what that is, but do I edit this like on Azure or AWS or wherever I'm working with my functions?
13:20 Like what does the workflow kind of look like there?
13:22 Give me a sense.
13:23 To help you develop functions, we basically have two tooling options.
13:27 And they sort of walk you through the process of building a function.
13:31 The first one is we've got an extension with Visual Studio Code where you can go in and say,
13:37 hey, I'd like to create a project, which is a function project.
13:40 Pick a language for your project.
13:42 That would be Python.
13:43 Pick a trigger for your very first function.
13:45 It can be an HTTP trigger.
13:47 And then it'll populate a template for you that you can start modifying.
13:52 So when I say a template, it's basically a combination of two files.
13:56 One's a template Python script, which pretty much has a single function.
14:00 It's a hello world function.
14:01 Takes an HTTP request, returns hello name as the response.
14:04 Right.
14:05 So that's where you write your normal code, right?
14:06 Exactly.
14:07 You can do pretty much what you want between the beginning and end there, right?
14:11 Like long as it conforms to the shape.
14:13 Exactly.
14:14 You can do pretty much what you want.
14:15 You can also load up additional modules within that folder itself.
14:19 So you don't have to have all your Python code living in that file.
14:22 If you had dependencies or in our ML case, if you wanted to throw in a model as a dependency
14:27 in your function, you can go ahead and add all of the data within your function project.
14:30 And what about packages I'm using?
14:32 Like suppose I'm using requests or NumPy or whatever.
14:35 Right.
14:36 So we've got a requirements.txt format that we support.
14:39 You automatically get that when you generate a new function project.
14:42 Just go and type in your packages and the versions you need and we'll go ahead and install them
14:47 for you when you deploy.
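For reference, that contract is just the standard pip format, one package per line, with whatever version pins you need (the packages here are illustrative):

```
requests==2.21.0
numpy==1.16.3
```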
14:48 Yeah, that's cool.
14:48 So basically it creates a little virtual environment for the thing.
14:52 Right.
14:53 When you're developing locally, we actually leave it up to you for how you'd like to manage
14:57 your dependencies.
14:57 As long as you're running in some environment where the Python path does have the packages you
15:03 need, we'll be able to identify the packages on that path and run.
15:06 So we leave it up to you whether you're using venv, Pipenv, or the global Python installed on your machine.
15:12 It definitely feels like Python developers are passionate about how they manage their virtual
15:16 environments.
15:16 For sure.
15:17 In the cloud, the only contract is that you give us a requirements.txt file and we'll use
15:21 that to install the packages.
15:22 This portion of Talk Python to Me is brought to you by Command Line Heroes.
15:28 Command Line Heroes is an original podcast from Red Hat and it's back for its third
15:33 season.
15:33 This one is all about the epic history of programming languages.
15:37 The very first episode explores the origin and evolution of Python.
15:41 Let me tell you, this show is really good.
15:44 It has a great host.
15:46 It's highly produced and edited.
15:47 Imagine if Radiolab made a tech podcast.
15:50 Yeah, it'd be like that.
15:51 Even better, this particular episode has a bunch of cool Python personalities such as Guido
15:55 van Rossum and I even make an appearance in there.
15:58 Listen and subscribe wherever you get your podcasts or just visit talkpython.fm/heroes.
16:04 That's talkpython.fm/heroes.
16:08 Do you like pre-compute that little environment or something?
16:11 So, because you don't want to obviously reinstall it per request.
16:14 Yeah.
16:14 So, it basically gets installed during deployment time.
16:18 Yeah.
16:19 When you deploy your function, we take your code, install the dependencies that it needs, package
16:23 it up as a zip package, store it in Azure storage.
16:26 And when your applications are invoked or the first call comes in, that's when we go ahead
16:31 and run that zip package on the platform.
16:34 How's it run?
16:35 Does it run like in a Docker container?
16:36 Does it run in like a VM?
16:38 How is it shared?
16:39 Like, what's that story look like?
16:41 Good question.
16:42 You shouldn't worry about it because it's serverless.
16:44 But since you asked nicely, I'll share it with you.
16:47 We do run in a container underneath the covers.
16:50 Good thing is our Python offering is actually running on Linux, which is something that's brand
16:55 new to Azure Functions.
16:56 Traditionally, we only supported Windows hosting.
17:00 Just given the kinds of scenarios Python developers are bringing in, we decided to go ahead and
17:05 host Python on Linux.
17:07 So, underneath the covers, things are running in a Docker container on Linux.
17:12 We actually go ahead and even open source that image.
17:15 It's called the Azure Functions Python image on our Docker Hub profile.
17:19 And you can use that to build your own custom container.
17:22 Or if you're just giving us code, we'll go ahead and run that code in that container for you.
17:27 That's cool.
17:27 Could I use it to like test locally?
17:29 Could I, you know, Docker pull, Docker run that image locally and play with it?
17:33 Yeah, you can absolutely do that.
17:35 If you wanted to have the container approach, the cool thing is you don't need to.
17:39 Right.
17:39 We've got something called the Azure Functions Core Tools.
17:42 It's a CLI experience.
17:44 As long as you install the Azure Functions Core Tools on your local machine, and VS Code, in
17:49 fact, does that for you,
17:50 you can test your functions locally simply by hitting F5 in VS Code.
17:55 It automatically starts the function process.
17:58 In fact, it'll even identify that it's a function process and go ahead and attach the Python debugger to the functions process.
18:04 So you can start serving your function on the local host endpoint and start testing against real Azure events.
18:11 That sounds pretty cool.
18:12 I saw that it had a command line.
18:14 So you can do things like go in and say, like, it's the func command, right?
18:20 So you can go and say func new.
18:22 And then you can say, what is it?
18:24 func run or something like this?
18:26 Yep.
18:26 func host start.
18:27 Yeah.
18:27 func host start.
18:28 That's right.
18:29 That sounds funky.
18:29 Yeah.
18:30 So yeah.
18:31 So you can do that in the command line as well.
18:33 Like if you don't use VS Code.
18:35 That sounds pretty cool.
18:36 Absolutely.
18:36 So you can use the command line in conjunction with your favorite editor.
18:41 So if you were using PyCharm, that's what some of the users are using as well.
18:44 Yeah.
18:44 You can definitely go ahead and attach.
18:46 Yeah.
18:46 That's cool.
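Pulling those commands together, the Core Tools loop looks roughly like this sketch (flag names vary a bit across Core Tools versions, so treat the exact spelling as an assumption):

```bash
func init MyFunctionProject --worker-runtime python    # scaffold a project
cd MyFunctionProject
func new --name HttpExample --template "HTTP trigger"  # add a function
func host start                                        # serve on localhost:7071
```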
18:47 Like one of the things that drives me crazy about working with the cloud, any cloud, right?
18:51 Whatever is the sort of airplane scenario as I see it.
18:55 All right.
18:56 Like I'm working on this project and now I'm somewhere without internet or I'm in a hotel
19:02 with bad internet or whatever.
19:03 And it's just like, well, now you're done.
19:06 You know, it doesn't work anymore.
19:07 And so it's cool that you have this way to kind of like run it locally, like regardless
19:11 of that.
19:12 Yep.
19:12 And it's the same runtime that's running in the cloud as well.
19:15 So there's nothing different about it.
19:16 Okay.
19:17 Well, that seems pretty cool.
19:18 We talked about some of the events, right?
19:20 We talked about HTTP.
19:21 We talked about, I asked if I could do it on a timer, like a cron job equivalent, which
19:25 is pretty cool.
19:26 What are some of the other ones?
19:27 All the storage events.
19:28 So Azure Storage queue and blob, Cosmos DB events, also the eventing and messaging systems.
19:35 So Service Bus Queues, Service Bus Topics, Event Hubs, any IoT Hub messages coming to an
19:41 Event Hub, Event Grid.
19:43 And the cool thing is, as of two days ago, we also announced Kafka.
19:47 Kafka triggers are supported as well.
19:50 Going back to functions being open source, I feel like sometimes I can't say that enough.
19:54 We've got an extensibility model where we do support any trigger or binding that you want
19:59 to bring in.
20:00 An example would be somebody recently wrote a SignalR binding for Python.
20:04 And so pretty much any binding that you bring can plug into our extensibility model as well.
20:09 That's pretty cool.
20:10 Could I build a Slack one?
20:11 So anytime a certain Slack message comes in?
20:13 Yes.
20:14 I actually just built a demo for that yesterday.
20:16 Oh, really?
20:17 We should talk about that.
20:19 That's pretty awesome.
20:19 Like, what was the scenario you were like laying out there?
20:22 Yeah.
20:22 So we wanted to build a demo where, when a GitHub issue is filed,
20:26 I'd like to go ahead and process that issue, do some sentiment analysis on it, and go ahead
20:31 and post to our Slack channel.
20:33 So that was the simple scenario I was building.
20:35 But it definitely felt like just having those bindings to these data sources or just being
20:42 able to write my Python code and not focus on anything else made things really easy for
20:46 my demo.
20:47 Of course, that's super cool.
20:48 You mentioned the IoT stuff, and that seems like a really interesting angle, right?
20:53 People create these little IoT devices, and they've got to be sending back like tons of data in
20:59 some cases.
21:00 So I can just somehow subscribe to that in particular and not necessarily have to write an HTTP endpoint.
21:06 What was the story there?
21:07 Right.
21:08 So the IoT events are generally routed through IoT Hub to Event Hub.
21:13 So as long as these events are coming into Event Hub, we've got an Event Hub trigger.
21:17 So every time a message is received on Event Hub, we'll go ahead and trigger or invoke your
21:21 function script and pass it the message coming from or the data coming from your IoT device
21:26 for the function to process.
21:28 So that actually makes for a very popular scenario for serverless as well, because IoT scenarios are
21:35 generally bursty in nature.
21:37 So you'll suddenly get 1,000 events that you need to process and spin up functions for,
21:42 and then it might just go down to zero.
21:45 So really, it doesn't probably make sense for you to be paying for VMs that are running 24
21:50 hours to serve that one-time burst activity.
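A minimal sketch of the Event Hub trigger she describes, assuming a function.json eventHubTrigger binding named event:

```python
import logging

import azure.functions as func


def main(event: func.EventHubEvent) -> None:
    # Invoked once per message routed from IoT Hub into Event Hub.
    payload = event.get_body().decode("utf-8")
    logging.info("Device message: %s", payload)
```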
21:53 Yeah, sure.
21:53 Well, you talk about paying and pricing, right?
21:56 Like, if I go and spin up a virtual machine, Linode, or somewhere like that, I know that
22:02 it's $5 or $10 a month flat, right?
22:06 Like, there's no surprises.
22:07 If I leave it on, it's $10, let's say.
22:10 What's the story with functions?
22:12 I feel like with so many of the cloud resources, it's not always clear, but with functions, it's
22:18 like really not clear all the time, like what is this going to cost?
22:22 Right.
22:23 There's a lot of factors, right?
22:24 Not just how many times it's called, but how much work did each function do, things like
22:28 this.
22:28 How do I know, like, what this might cost?
22:31 Or is there a free tier, things like that?
22:32 Yeah.
22:32 With functions, we actually support a few different hosting plans, and the price is typically tied
22:37 to what hosting plan you pick.
22:39 So the first and the most popular one is the serverless, or the truly serverless, or what
22:44 we call the consumption plan.
22:46 What that means is that when a function execution occurs, there's a flat rate for that function
22:52 execution, and you only get charged for the compute that you use, in GB-seconds, to run that
22:59 function, right?
23:00 So your actual compute consumption determines the cost.
23:03 And of course, any other resources that you might be using during that consumption, memory,
23:07 et cetera.
23:08 That's the truly serverless plan.
23:10 On that, I believe we give you, and I do need to go back and double check this once, but
23:16 I know we give you a million executions free to get started.
23:19 I forget the other thing that we also give you free with the memory.
23:24 I should go back, double check that.
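For a rough feel for the consumption-plan math, here is a back-of-the-envelope sketch. The rates and free grants are assumptions based on Azure's published pricing around this time (about $0.20 per million executions and $0.000016 per GB-second, with one million executions and 400,000 GB-seconds free each month), so verify current numbers:

```python
executions = 3_000_000   # invocations per month
avg_duration_s = 0.5     # average execution time, seconds
memory_gb = 0.5          # memory consumed per execution, GB

gb_seconds = executions * avg_duration_s * memory_gb    # 750,000 GB-s
billable_gb_s = max(0, gb_seconds - 400_000)            # subtract free grant
billable_execs = max(0, executions - 1_000_000)

cost = billable_gb_s * 0.000016 + billable_execs / 1_000_000 * 0.20
print(f"~${cost:.2f}/month")  # ~$6.00 under these assumptions
```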
23:25 The other hosting plan that we have is called an app service plan.
23:29 And this is different from what a lot of the other serverless vendors actually provide, because
23:33 we had a bunch of customers who told us, hey, I love the functions programming model,
23:38 but you know, I already have the set of VMs that I'm using to run a web application.
23:43 Maybe can I just use the extra compute available to me on those VMs to run my function as well,
23:49 and scale within those set of VMs.
23:52 And so that's what we call the app service plan.
23:54 Hey, it's not truly serverless, but if you already have a set of N VMs, tell us about those VMs and what plan they're running on,
24:01 called the app service plan, and we'll go ahead and run the function on that for you.
24:05 It's not serverless scaling.
24:07 It's CPU-based scaling.
24:09 So it's scaling based on CPU metrics, how much CPU is being used or how much memory is being used.
24:14 But it's kind of a determined set.
24:17 It does give you advantages like now you can put your functions within a VNet.
24:21 So there's networking options.
24:23 You can connect to data in a VNet, you know, those kind of capabilities.
24:26 So we more or less had these two groups of people, the serverless plan people, really happy about their true serverless event-based scaling.
24:35 They just don't have to think about it.
24:36 It does its magic.
24:37 Exactly.
24:38 It's the cloud.
24:39 And then we had the folks who were super happy with the app service plan because the performance is great because things are already provisioned for you and all the networking capabilities.
24:47 So in the last few months, we actually went and did some work to enable a third hosting plan, which we're very, very excited about, actually.
24:55 It's called the Elastic Premium Plan.
24:58 And in the Elastic Premium Plan, what you can essentially do is you can tell us how many reserved VMs you want for your application.
25:07 So, for instance, I could say, hey, I'd like one reserved instance for my application.
25:20 I always want to have my application warm and running on that one instance.
25:24 But if I receive traffic that's more than that instance can handle, I'd like to scale out to meet that demand and scale back down to one when the demand's gone.
25:34 Which is cool because now you get the benefits of really good performance, because of the reserved instance; networking options, because you already have something that's provisioned there; and the serverless scale.
25:47 Yeah, that sounds pretty cool.
25:48 Like most of the time, probably just one VM may do it.
25:53 And one of the things that I think that I'm always a little suspicious of in the serverless world is there's how long your code takes to run,
26:02 and then how long it takes end to end for the request to come in, for the temporary Docker image to spin up and get created, and all that kind of stuff.
26:12 And it seems to me like you might be able to avoid a lot of it in this scenario you're talking about here.
26:17 If you have your own VM that's dedicated to it.
26:20 In fact, what you explained right now, we're already optimizing for that.
26:24 So, you know, the whole point of us running in a Docker container, where we know exactly which container it is, is the fact that it's already warm and running.
26:32 Even when a first call comes to your application, we're actually running a dummy Python site.
26:37 And when we invoke your application, we go ahead and specialize that site with your code and with your application settings in order to invoke your code.
26:45 A lot of that's already been done.
26:48 So we're optimizing for performance even on the serverless plan.
26:51 That said, though, even that doing that job takes a little bit of time because, again, it is serverless.
26:57 We're not provisioning anything for you.
26:59 So when you do specialize something, it takes a little bit of time.
27:02 If you absolutely had a workload that could not tolerate that kind of latency either, the premium plan is a great option.
27:10 Interesting.
27:11 There's a lot of talk about Docker and Docker images and stuff in here.
27:14 And it's really cool.
27:15 And that obviously gives you like a decent level of isolation in your environment and stuff like that.
27:20 But would I be able to like create my own Docker container?
27:24 Say instead of taking the one that we talked about that we could get Docker pull and mess with, could I base a custom image on that and like get it exactly the way I want?
27:34 And then run that?
27:35 Yeah, you can absolutely do that.
27:36 So like I said, our Docker image is open source.
27:39 You could go ahead and extend that image to add whatever requirements you want to add.
27:45 And the typical reasons why we see somebody would do that is if you had dependencies that weren't essentially pip installable, but something that was an OS level dependency.
27:54 Maybe you wanted to install the PyTorch package via apt-get on your container, right?
28:00 In that case, custom container is a great scenario.
28:03 The other one is if you were using a distribution of Python that we didn't support out of the box, something like the Anaconda distribution.
28:10 Again, go ahead and customize the container with the Anaconda distribution and you can still use functions with it.
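A hedged sketch of that customization, building on the open source base image (the registry path and tag are illustrative; check the published image for current tags):

```dockerfile
FROM mcr.microsoft.com/azure-functions/python:2.0

# An OS-level dependency that isn't pip-installable, as discussed:
RUN apt-get update && \
    apt-get install -y --no-install-recommends ffmpeg && \
    rm -rf /var/lib/apt/lists/*

# Function code goes where the Functions host expects it:
COPY . /home/site/wwwroot
RUN pip install -r /home/site/wwwroot/requirements.txt
```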
28:16 Okay, then do I put it in like blob storage and point to it in my container or my function config?
28:20 How do I do it?
28:21 You can simply push your Docker image to either Docker Hub or ACR and we also support private repositories.
28:28 But as long as you bring us the image itself, we'd be able to use that to create a function app.
28:34 This is mixing a lot of these cloud and networking ideas together, right?
28:37 Right, and that's why the only caveat I will mention though with custom containers is we will not be offering the consumption or the serverless plan with custom containers.
28:46 We are sticking to the premium plan and the app service plan to optimize for performance benefits.
28:52 Sure.
28:53 So we talked a little bit about being able to run with that func host start command, running our code locally.
29:02 But, you know, if you're going to do like a production release of your code, you probably want to have continuous integration, maybe continuous deployment.
29:12 Like how do I test this stuff?
29:14 Can I test it in like some kind of CI system or something like that?
29:18 We integrate with Azure DevOps.
29:19 Azure DevOps is basically a CI/CD tool.
29:23 Specifically, Azure Pipelines is the CI/CD component of DevOps.
29:27 And it provides you with multiple client touch points.
29:31 You can go to this portal where you'd say, hey, I'd like to set up a DevOps pipeline or a CI/CD pipeline for my function app.
29:38 We're already populating functions and Python templates in there.
29:42 So it'll be able to generate the structure of the various steps that you need to take in order to do CD.
29:48 And you can go ahead and integrate your own CI in there as well.
29:51 We've also got the command line experience, which starts from, if you've already developed a function app,
29:59 you can go ahead and run the az function app devops create command.
30:04 It's documented in the functions documentation.
30:07 But that'll automatically identify that the app you're deploying is a Python app,
30:11 that it contains a requirements.txt with certain packages.
30:14 And go ahead and generate what's called an Azure Pipelines YAML file that contains exactly the script that needs to run for your CD step.
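The generated pipeline ends up looking something like this sketch. The task names and the dependency-install convention are assumptions based on common Azure Pipelines usage for Linux function apps; the service connection and app name are placeholders:

```yaml
trigger:
  - master

pool:
  vmImage: 'ubuntu-16.04'

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.6'

  # Install packages into the folder the Linux Functions host expects.
  - script: pip install --target=".python_packages/lib/site-packages" -r requirements.txt
    displayName: 'Install requirements.txt packages'

  - task: AzureFunctionApp@1
    inputs:
      azureSubscription: '<service-connection>'  # placeholder
      appType: 'functionAppLinux'
      appName: '<your-function-app>'             # placeholder
      package: '$(System.DefaultWorkingDirectory)'
```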
30:23 Okay.
30:23 And then how do I test it?
30:24 Can I test it with like pytest or like what is the story?
30:28 You can definitely write unit tests with pytest.
30:30 For each of the triggers and bindings that we integrate with, we're including in the library mock interfaces.
30:37 Right.
30:38 Okay.
30:38 So if I want to have like say a Cosmos DB or like a blob storage one, I guess that would be hard to test.
30:44 Like it would be hard to simulate something going into blob storage, right?
30:48 Exactly.
30:48 So in that case, you'd use that rich mock type that we're providing to create an object that you can test against in a CI/CD kind of setting.
30:57 If you were to be testing locally, however, we do let you run against real time Azure events.
31:04 So if I started a function that's going to be triggered by a message being added to queue, when I'm running it locally on my local host endpoint, I can go to Azure, add a message to queue, and it will trigger off the function.
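For the simple HTTP case, a unit test doesn't even need the mocks, since azure.functions lets you construct a request object directly. A hedged sketch (the module name HttpExample is an assumption about your project layout):

```python
import azure.functions as func

from HttpExample import main  # assumption: your function's entry module


def test_returns_greeting():
    req = func.HttpRequest(
        method="GET",
        url="/api/HttpExample",
        body=None,
        params={"name": "Test"},
    )
    resp = main(req)
    assert resp.status_code == 200
    assert b"Test" in resp.get_body()
```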
31:16 Yeah, that sounds like a pretty cool way.
31:18 So that's more of a development side of things.
31:20 It's more like the inner loop, outer loop story.
31:22 Okay.
31:22 Right.
31:23 So we think of like, hey, when you're developing locally, just trying to get your function to work in the way you want it to.
31:29 Inner loop is great.
31:31 You develop with a CLI.
31:32 You test against real time Azure events.
31:33 Now, when you think you're ready with your function app and you'd like to move to the cloud, that's when you go ahead and initiate the CI/CD or DevOps create command.
31:42 Okay.
31:42 I'm happy with it.
31:43 I want to put it out there.
31:44 How do I do that without downtime?
31:46 Let's suppose I have a function.
31:48 It's getting 100 requests per second, right?
31:51 Even if it only takes, you know, half a second to deploy this thing, that could be a problem.
31:56 Is there a way to roll it out so that it kind of upgrades without breaking inbound requests?
32:02 We've got this feature called deployment slots.
32:05 Okay.
32:06 And essentially what that means is, hey, you can have different kind of canary environments that you're publishing your application to.
32:14 For instance, I could have an app that's running in a production slot and 100% of my traffic is coming to that slot.
32:20 Now I've got an updated version of that app, a V2 version, say.
32:24 I'll go ahead and create a separate slot, which is, you know, my canary slot, say, and publish my application to that slot.
32:31 And I can start navigating some of my traffic, the incoming traffic to that slot so that, you know, maybe 80% of my traffic is coming to production, 20% is going to canary, and so on and so forth.
32:44 I start to navigate the traffic until I feel very comfortable that my staging slot or my canary slot is doing what I expect it to do, and bring my number of requests on the production slot down to zero before I swap.
32:57 Yeah, that's actually cooler than I expected.
32:59 I figured, like, there might be some kind of, like, Kubernetes sort of upgrade story type of thing.
33:05 But the ability to, like, send, like, 1% of your traffic at it and just go, well, let's just make sure that this is going to hang together.
33:13 Yep.
33:13 Yeah, that's pretty cool.
33:16 This portion of Talk Python to me was brought to you by Datadog.
33:19 Get insights into your Python applications and infrastructures with Datadog's fully integrated platform.
33:25 Debug and optimize your code by tracing requests across web servers, databases, and services in your environment.
33:32 Then correlate and pivot between distributed request traces, metrics, and logs to troubleshoot issues without switching tools or contexts.
33:39 Get started today with a 14-day trial, and Datadog will send you a free t-shirt.
33:44 Visit talkpython.fm/Datadog for more details.
33:50 So, at Build, there's always a lot of announcements from folks at Microsoft.
33:54 Like, this is one of the conferences that you hold back all of your big news for, right, until you, like, let it out here.
34:01 So, I guess, you know, maybe give us a rundown on, like, what's the big news for you in the serverless and Python?
34:08 Yeah, it's interesting because we've actually been releasing pretty exciting announcements over the last month, month and a half almost.
34:17 So, the Elastic Premium plan that I spoke about went out about a month ago.
34:22 And that's the biggest thing, even at Build, we're talking about it, and a lot of people already knew.
34:28 So, I would say that's the biggest announcement of it all.
34:31 The second one that's also super exciting is support for Kubernetes.
34:35 So, Functions was already open source, the runtime and the programming model and the framework itself.
34:42 But there was still a component of the platform that was still proprietary to what Azure was doing.
34:49 That was called the Scale Controller.
34:50 And just to give you a little rundown, Scale Controller is the component that's responsible for event-based scaling.
34:57 So, for instance, if your function was triggering off messages in a queue, our Scale Controller is always watching that queue:
35:05 How many messages are on that queue?
35:06 What's the size of each message?
35:07 How long is it taking to process?
35:09 And depending on that, actually scaling your infrastructure.
35:12 Now, we actually went in and rewrote that Scale Controller in Golang and made it open source.
35:20 So, you can now deploy your functions containers or function apps to Kubernetes and also take advantage of the event-based scaling.
35:27 It's a project we're calling KEDA.
35:29 And it's an attempt to really open source a lot of what we're doing in functions and try to reduce the vendor lock-in that may still be remaining.
35:40 For instance, I mentioned earlier in the podcast, we added support for the Kafka trigger.
35:44 That was motivated by the KEDA project.
35:47 We're also looking at adding support for things like RabbitMQ as triggers.
35:51 That sounds super cool because, you know, to me, I feel like the cloud is the new lock-in, right?
35:56 For both good and bad, like, you know, not just Azure, but AWS and the other ones.
36:01 Like, the more you take advantage of these really cool features, the more you are kind of committed, right?
36:07 But it sounds like with this, as long as you're willing to run a Kubernetes cluster, you can kind of bring it wherever you want it to go.
36:13 Exactly. And it's more for the audience, I'd say, who are already bought into Kubernetes.
36:18 You know, you don't have to do that if you really, truly are serverless.
36:22 Just leave the infrastructure to us and let us do what we do best.
36:27 But if you're a big enterprise and everything you have is going to be running on Kubernetes and you want functions to be a part of that,
36:34 it's a great way to still be able to take advantage of the event-driven serverless functions.
36:38 Okay, so I already have a Kubernetes cluster.
36:40 Can I just take this and, like, run it on the cluster as one of the things and then start calling the endpoints there?
36:47 Exactly.
36:48 Where's the rest of the infrastructure?
36:49 Like, where are the HTTP endpoints that tie it back to the functions and, you know, stuff like that?
36:54 Is that enough to just sort of have my own little private serverless thing in some cluster or what?
37:00 Yeah, absolutely. Without going too much into detail on KEDA, I'd encourage you to look at the documentation.
37:07 It's got some great references on how things are actually working in the Kubernetes space.
37:12 There's a lot going on in terms of your cluster, your pods.
37:15 I'm still learning about Kubernetes, to be honest.
37:18 There is great documentation online.
37:20 I'd encourage you to go look up KEDA, KEDA project.
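For the hands-on side, Core Tools grew Kubernetes commands alongside KEDA; a hedged sketch of the flow (flags are illustrative and may have changed since):

```bash
func kubernetes install --namespace keda      # install KEDA into the cluster
func kubernetes deploy --name my-function-app \
    --registry <your-registry>                # containerize, push, and deploy
```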
37:23 Okay, cool. What else?
37:25 We introduced this concept called extension bundles.
37:28 It's a little underneath the covers, so you don't really need to know about it.
37:32 But really, historically, what happened was, because our core runtime was in .NET,
37:37 there was a lot of .NET that kind of leaked out or snuck up on you when you were developing in Python.
37:43 And really, we don't want that for our Python developers.
37:46 You know, we want, if you're a Python developer, let us come and plug into your ecosystem rather than trying to drive you to .NET.
37:53 Extension bundles abstracts away a lot of that stuff.
37:57 So if you're using an extension or a binding that's written in .NET, you no longer have to have the .NET SDK installed.
38:05 All you need is Python if you're developing in Python.
38:08 And we'll take care of all of that for you.
38:11 It's kind of what you would expect, but I guess it's nice that it's there, right?
38:14 It's an evolution of the platform.
38:15 I think from Functions v1, where you could only develop on Windows and host on Windows, to v2, where you can now develop on any platform, be it a macOS, Windows, or Linux machine, and host on Linux, to finally being able to abstract away the few frictions that were still left.
38:32 I'm really excited about the place that we're at right now.
38:37 I think we're getting pretty close.
38:38 Yeah, that's cool.
38:39 So speaking of environments, what versions of Python are supported?
38:44 Today we support Python 3.6.
38:46 We're looking to add 3.7 support in the future as well.
38:49 We've already got a container image available if that's of interest.
38:52 And I guess if you can do your own container, then put whatever else you want out there, right?
38:56 Exactly.
38:56 Our core runtime is actually already compatible with 3.6 and 3.7.
39:01 We heard feedback from specifically some of the ML users that there were packages like TensorFlow that weren't yet compatible with 3.7 or were still working on the wheels for 3.7 and such.
39:13 So we just decided to delay that by a few months before we go ahead and make a big release.
39:19 Sure.
39:19 Okay, cool.
39:20 One of the things that I saw people doing on AWS Lambda is using sort of abstracting away the deployment of something like Flask, right?
39:32 So there's a project called Zappa, and Zappa will let you take what looks like a Flask app and turn it into a bunch of serverless functions, which is an interesting idea.
39:44 I don't know.
39:45 I saw that in the docs you guys have something kind of like that, but it's much more manual than working directly with Flask.
39:52 Do you have any plans for something like that?
39:54 Not yet.
39:55 And honestly, we're still thinking about it.
39:57 We don't know if that's a direction that most users want to go in.
40:01 Just speaking to customers, and we've got a very good community of preview users.
40:05 They've given a lot of feedback.
40:07 This hasn't come out that much.
40:09 So really, I'd encourage you and everybody who's listening to give us feedback on that.
40:15 Tell us if that scenario is something that's important for you, and help us just understand how we should think about these things.
40:23 Yeah, honestly, I look at that, and it's super interesting, and it's kind of cool to think about it.
40:27 But in practice, I'm like, do I really want to maintain my web app as like 100 separate little things?
40:33 Or does it make more sense to maybe build like 10 specialized serverless bits and then host the rest of it as like a regular web app or something?
40:42 I don't know.
40:42 It's an interesting idea, though.
40:44 Right, right.
40:45 Yeah, yeah.
40:45 Cool.
40:46 Well, I guess that's probably about it.
40:50 Anything else you want to share about the service that you guys got going on right now?
40:53 We've got a few things in motion.
40:55 I would say the biggest one, definitely Python is something that is our biggest focus right now for the Azure Functions team.
41:03 So just keep your eyes and ears open for what we have coming next in terms of improvements for the preview itself and our GA offering.
41:12 That's really my biggest call to action here.
41:15 Yeah, cool.
41:16 One of the things I thought was really nice is I went to the Azure serverless functions bit for Python, and it said, to subscribe to our announcements,
41:24 Just go watch the GitHub repository.
41:26 That's a pretty cool way to like, instead of like, yeah, we're going to shoot you a newsletter or something.
41:31 No, just it'll be in GitHub.
41:32 Subscribe to it.
41:33 Yeah, we definitely spend a lot of time and lives on GitHub these days.
41:38 Open source is great.
41:39 Anytime we want to speak to the community, we pretty much post an announcement.
41:43 Even something like, I'm not sure if you've heard of durable functions in Python.
41:47 We're thinking about, hey, does it make sense to have durable functions for Python?
41:51 Do Python users have scenarios that they need durable for?
41:55 So I posted an announcement about it in a GitHub issue, and there was a ton of interest.
42:00 So it's a great way to just go watch our repository, see what we're doing every day, the questions we're asking, and participate in the discussion.
42:07 Yeah, that's cool.
42:08 What do you mean by durable functions?
42:09 Like functions, if they fail, they'll like restart, kind of like a transaction, or what is this?
42:13 No, actually.
42:14 So going back to a little bit of principles of serverless per se, serverless functions are typically supposed to be short-lived and stateless, right?
42:24 Okay.
42:25 But what we heard was there can be scenarios that are long-running because maybe you're running some kind of an orchestration.
42:31 Maybe you're doing a function, like a map-reduced scenario, a fan-in, fan-out scenario, right?
42:36 I see.
42:36 And that can be a problem because a lot of the server infrastructure, serverless infrastructure will stop your function if it takes too long, right?
42:44 Exactly.
42:44 In fact, on our infrastructure, on our platform, it's a 10-minute default timeout, but you can tweak it to 20 minutes at the max.
42:53 The new Elastic Premium plan actually can run forever.
42:55 That's a premium benefit of the Premium plan.
42:58 But with durable functions, what we do is we are basically providing a framework for you to be able to orchestrate your function executions.
43:07 There are two concepts in durable.
43:09 There is something called an orchestrator function, which can run forever.
43:12 The orchestrator function basically calls into what are called activity functions, which are basically just regular functions and have the 20-minute timeout.
43:21 So if I were doing a scenario like function chaining, I could have my orchestrator function kind of coordinate calling function A, taking the response from function A, and using it to invoke B.
43:32 Similarly, B to C, and so on.
43:35 That's a simple function chaining scenario; for something more complicated, like a fan-in, fan-out,
43:39 I could orchestrate various functions with the long-running orchestrator function.
43:44 Or even the other day, somebody built a scenario which required an OTP pin, for instance.
43:48 Human interaction with a function, and we humans sometimes take time to respond.
43:53 Yeah, for sure.
43:54 So those kind of scenarios are possible with durable functions.
43:57 It kind of solves the problem of being able to maintain state and run for a longer duration.
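A hedged sketch of that chaining pattern, written against the durable extension as it later shipped for Python (azure-functions-durable); at the time of this conversation, Python support was still being discussed:

```python
import azure.durable_functions as df


def orchestrator(context: df.DurableOrchestrationContext):
    # The orchestrator coordinates; each activity is a regular function
    # with the normal timeout.
    x = yield context.call_activity("FunctionA", None)
    y = yield context.call_activity("FunctionB", x)
    return (yield context.call_activity("FunctionC", y))


main = df.Orchestrator.create(orchestrator)
```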
44:02 Does it like serialize a state while it's waiting and like bring it back?
44:06 It basically does.
44:08 Somebody on Twitter today just posted about the black magic of durable.
44:11 So it's a little bit complex, and I won't go into the details here.
44:14 But yes, it's basically using storage to maintain state.
44:17 Okay, cool.
44:17 That's exciting.
44:18 That's a cool idea.
44:19 Yeah.
44:19 All right.
44:19 Well, thanks for sharing all these ideas.
44:21 There's just a couple of questions I'll ask you before I'll let you go.
44:25 One is if you're going to write some Python code, what editor do you use?
44:29 Yeah, so good question because I was traditionally using PyCharm and recently, when I say recently,
44:34 since about the last year or so, I started using VS Code.
44:38 And I'm really appreciating right now the VS Code ability to be able to work with Azure
44:44 because I work a lot with the Azure cloud.
44:45 The extensions do a great job at it.
44:47 Yeah, that's cool.
44:48 There's definitely a lot of integration there.
44:50 Do you have plugins for PyCharm as well?
44:53 I was using a few plugins for PyCharm.
44:55 I honestly don't even remember what plugins there were right now.
44:58 Yeah, sure.
44:59 It's been a while.
44:59 All right.
45:00 And then a notable PyPI package that maybe you ran across.
45:03 You're like, oh, that's cool.
45:04 People should know about this.
45:05 So yesterday, I ran into something called Flask Dance.
45:10 Flask Dance.
45:11 And really, what it helps you do is a lot of the OAuth dance that you would typically do manually.
45:15 I mentioned the Slack application to you, right?
45:18 A lot of that OAuth dance that you would typically do yourself, it makes really, really easy.
45:25 And I was super thankful for that package yesterday when I had to do this in a short time.
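A small sketch of what Flask-Dance handles for you, using its GitHub provider (the client credentials are placeholders):

```python
from flask import Flask, redirect, url_for
from flask_dance.contrib.github import make_github_blueprint, github

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder; required for OAuth session state
app.register_blueprint(
    make_github_blueprint(client_id="...", client_secret="..."),
    url_prefix="/login",
)


@app.route("/")
def index():
    # Flask-Dance handles the redirect/callback dance behind github.authorized.
    if not github.authorized:
        return redirect(url_for("github.login"))
    return f"Hello, {github.get('/user').json()['login']}!"
```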
45:29 Yeah, that's awesome.
45:30 Because OAuth can be kind of annoying.
45:32 You do this, and then it calls back this function, and then it gets to be...
45:35 And of course, every API provider does it differently.
45:38 Yeah, of course.
45:39 And yeah, that's no fun.
45:41 So awesome.
45:41 Flask Dance.
45:42 That's cool.
45:42 I hadn't heard about that one.
45:43 All right.
45:43 Final call to action.
45:44 People are excited about serverless.
45:46 What should they do?
45:46 I think you should go check out Azure Functions.
45:49 We've got some great getting started material.
45:51 Like I said, if you're a CLI person, go ahead, download a CLI tool, start developing.
45:55 Or you can also use VS Code.
45:57 We are all eyes and ears right now for any feedback that you can give us.
46:02 And given that we're out there in the open, all of our discussions, ideas, feature requests,
46:08 you really have a great chance to influence the Azure product, I'd encourage you to come
46:13 participate and talk to us.
46:14 Yeah.
46:15 All right.
46:15 Cool.
46:15 Well, thank you so much for being on the show and sharing all of what you're up to here.
46:18 Yeah.
46:19 Thank you so much for having me.
46:20 Yeah.
46:20 Bye.
46:20 This has been another episode of Talk Python to Me.
46:24 Our guest on this episode was Asavari Tayal, and it's been brought to you by Command Line Heroes
46:28 and Datadog.
46:29 Command Line Heroes is a podcast telling the story of developers.
46:33 This season is all about programming languages and starts off with Python, of course.
46:38 Subscribe at talkpython.fm/heroes.
46:41 Datadog gives you visibility into the whole system running your code.
46:46 Visit talkpython.fm/datadog and see what you've been missing.
46:50 They'll even throw in a free t-shirt for doing the tutorial.
46:52 Want to level up your Python?
46:55 If you're just getting started, try my Python Jumpstart by Building 10 Apps course.
46:59 Or if you're looking for something more advanced, check out our new async course that digs into
47:05 all the different types of async programming you can do in Python.
47:08 And of course, if you're interested in more than one of these, be sure to check out our
47:12 Everything Bundle.
47:12 It's like a subscription that never expires.
47:14 Be sure to subscribe to the show.
47:16 Open your favorite podcatcher and search for Python.
47:19 We should be right at the top.
47:20 You can also find the iTunes feed at /itunes, the Google Play feed at /play,
47:25 and the direct RSS feed at /rss on talkpython.fm.
47:30 This is your host, Michael Kennedy.
47:31 Thanks so much for listening.
47:33 I really appreciate it.
47:34 Now get out there and write some Python code.
47:36 We'll see you next time.