
#121: Microservices in Python Transcript

Recorded on Friday, Jun 2, 2017.

00:00 Do you have a big monolithic web application or service that's hard to manage, hard to change,

00:04 and hard to scale? Well, maybe breaking them into microservices would give you many more

00:09 options to evolve and grow that app. This week, we'll meet up again with Miguel Grinberg

00:13 to discuss the trade-offs and advantages of microservices. It's Talk Python to Me,

00:18 episode 121, recorded June 2, 2017.

00:22 Welcome to Talk Python to Me, a weekly podcast on Python, the language, the libraries, the

00:51 ecosystem, and the personalities. This is your host, Michael Kennedy. Follow me on Twitter,

00:56 where I'm @mkennedy. Keep up with the show and listen to past episodes at talkpython.fm,

01:01 and follow the show on Twitter via @talkpython.

01:04 Talk Python to Me is partially supported by our training courses. Here's an unexpected question

01:10 for you. Are you a C Sharp or .NET developer getting into Python? Do you work at a company

01:15 that used to be a Microsoft shop, but is now finding their way over to the Python space?

01:21 We built a Python course tailor-made for you and your team. It's called Python for the .NET

01:26 developer. This 10-hour course takes all the features of C Sharp and .NET that you think

01:31 you couldn't live without. Entity Framework, Lambda Expressions, ASP.NET, and so on, and it

01:37 teaches you the Python equivalent for each and every one of those. This is definitely the fastest

01:42 and clearest path from C Sharp to Python. Learn more at talkpython.fm/dotnet. That's talkpython.fm/dotnet.

01:50 Miguel, welcome back to Talk Python. Thank you. Thank you for having me. It's great to have you back. Previously, we talked about

02:01 building Flask web applications and web services, and that was really fun. And I think we're going to take it up a notch in terms of sort of abstraction and talk about a more general idea, which obviously is frequently done in Flask, but could be done in Pyramid, could be done in Django, or, you know, Node.js, right? Microservices.

02:19 Yes. So microservices are really interesting, and there's a couple of ideas vying for this, like how do we decompose our big applications, and we'll dig into that. But first, I just want to let people know, if they want to hear your previous episode, that was episode 48, and maybe we could quickly talk about PyCon. We were just both there, right? You have a good time there?

02:41 Yeah, yeah. I had a lot of fun. It always surprises me that people recognize me, and they stop me, and they say, you know, thank you, your tutorials, your blog, what got me into programming, or helped me advance. So I always end up with a smile.

02:57 It's really amazing how many people I was around who were like, wow, people actually really appreciate what I do.

03:04 Right. Yeah, yeah. Likewise. I mean, I just do it because I enjoy it. I mean, to be honest, I do it for myself.

03:11 Yeah. But it's really great to see that you make a difference. Some of the other podcasters at our podcast booth were like, wow, people really appreciate what we do. You know, you do this work often very much in isolation.

03:25 I don't mean it in a bad way, but like, you sit down to write your book, or you write your series of blog posts, or you record a course, or even the podcast is kind of, you know, just two people, right? And then you go to one of these events, you're like, wow, there's a lot of people who this actually affects how cool it is that I'm doing this.

03:43 Yeah, that's something that I really enjoy.

03:45 Yeah. And you did a tutorial, right? On microservices, right?

03:48 I did a tutorial. So these are the three hour long tutorials that happened before the main conference. So yeah, this year, this is the fourth consecutive year that I do a tutorial. And this one was on microservices.

04:01 Yeah, excellent. So we'll definitely have to put that into the show notes.

04:04 Sure. Yeah, it's been on YouTube since, like, the day after I gave it. They put it up. They're very efficient.

04:10 It's amazing. They really seem to have the AV work down at PyCon these days.

04:15 They're very good. Yes.

04:16 I think it's also part of their outreach mission, I would guess. Like there's 3,300 people that came to PyCon, but there's, you know, already, I think some of those videos have more people watching it online than actually attended the conference.

04:28 Right. Yeah.

04:29 Yeah. So what were some of your favorite sessions, like some talks or experiences?

04:34 The dictionary talks, those are always interesting. For example, there were a few this year. And I always end up learning something new that I didn't know.

04:43 It's amazing, right?


05:08 You know, some people know you, some people don't.

05:10 What do you do day to day?

05:11 Where are you coming from?

05:12 What's your perspective?

05:13 In the last few years, I have been working.

05:16 Well, actually, for a very long time, I've been working as a software engineer.

05:20 In the last few years, I've been building cloud applications.

05:23 Right now, I'm working for Rackspace.

05:26 And I am helping build some of the services that our customers use when they go to the Rackspace

05:34 control panel.

05:35 More specifically, the services that I work on are ones that allow customers to deploy

05:41 applications that then we manage.

05:43 They deploy applications very easily by clicking and dragging stuff.

05:47 Basically, we do all the magic.

05:49 And we, of course, we use microservices for all of this.

05:52 Yeah, that's really cool.

05:53 And you have a book, right?

05:54 That we talked about just a little bit in the previous podcast.

05:57 Right.

05:58 And then I have a book.

05:59 The book is called Flask Web Development.

06:01 It's now, I think, a little bit over three years old.

06:05 And I'm currently working on the second edition.

06:09 So, probably later this year, hopefully before Christmas, we'll see.

06:13 The second edition will be out.

06:15 That's going to basically refresh.

06:18 The books largely are going to be the same.

06:20 It's going to update a few things that changed in Flask or some of the extensions and related projects that I referenced.

06:29 Yeah, progress is great.

06:30 But when you create things like books or video courses, it's really frustrating when they change, actually.

06:38 Yeah.

06:39 I mean, it's really amazing, to be honest, that after three years or a little bit more, large parts of the book are still up to date.

06:48 Flask, thank goodness, is not a framework that likes to change things in big ways.

06:53 Yeah, that's right.

06:54 It was a pretty mature framework when you got to it.

06:56 If you did like Japronto or Sanic right now, you might be rewriting that thing in a year.

07:02 Right.

07:03 Yeah.

07:04 Actually, I do have one of my open source projects.

07:06 I have support for Sanic and aiohttp, as well as Flask and Django and many others.

07:13 It's a Socket.IO server.

07:16 It's called python-socketio.

07:18 And I find that aiohttp and Sanic require more attention than the old friends from the WSGI world.

07:27 Yeah, absolutely.

07:28 Yeah, but it's good.

07:30 Those things are changing.

07:31 Those things are growing.

07:32 Those are the frameworks that are pushing the web forward in the Python space.

07:35 Yes, absolutely.

07:36 So it makes living around their orbit more work, but I think it's going to make it all better for everyone in the end.

07:42 Yeah.

07:43 All right.

07:44 So let's start with what are microservices?

07:47 Like, let's keep it on Flask since that's where your book is. If I want to just create like a Flask app, I could just put everything in there, right?

07:56 I could do my user management, I could do my data access, I could do my reporting, all that stuff.

08:01 I could just stick into like one big Flask app and ship that, right?

08:05 Correct.

08:06 And I've covered this many times; it's in the book and in tutorials that I've done in previous years.

08:11 You know, with Flask, you have a way to organize your application when it starts to become large.

08:18 It gives people some trouble.

08:20 There are sometimes issues with circular dependencies, but you can do it and you can end up with a single application.

08:26 In the context of microservices, we call these types of applications monoliths because they're one big thing.

08:33 Yeah, yeah.

08:34 So maybe compare and contrast with what are microservices relative?

08:39 I can draw a parallel.

08:40 So we all know that if you write your application in a single big function, that's really not good, right?

08:47 It's hard to maintain.

08:48 Yeah, you should at least use two functions.

08:50 Two functions.

08:51 You should use two or three, right.

08:53 So basically what you do when you're talking about functions is you write small functions.

08:59 Each function, at least you should try that it does one thing.

09:03 And then the functions call each other, and that's how you achieve the solution to the big problem, right?

09:10 Right.

09:10 And functions, the way I think of it is if I can't give it just a simple name that says what it does, it's wrong.

09:17 There's something wrong with it.

09:18 I need to change.

09:19 I need to change the function so I can name it better, right?

09:21 Correct.

09:22 Right.

09:23 So microservices, it's basically the same idea, but applied to a web service.

09:28 So the traditional way in which you develop a web application in Python, say using Flask, Bottle, Django, or Pyramid.

09:37 Basically, like you said before, you put all the contents in one application.

09:42 And then without realizing it, you have a coupling between the different subsystems, right?

09:48 You have a user subsystem that keeps track of your users.

09:53 And then you have many others.

09:55 And, you know, they all use the same database models.

10:00 And you don't realize it, but you are basically making it harder for that application to grow and be maintained because of all these references that one subsystem has into the other.

10:10 So the solution that microservices bring is that you take all these conceptually separate subsystems and you create a separate web service with each one.

10:22 Right. So maybe you've got like a front end web app that still does the back end server side stuff.

10:29 But instead of going straight to the database or straight to some sub modules in your web app, it calls these related microservices that sort of implement the functionality, right?

10:38 Correct. Right. So that gives you a number of advantages, some disadvantages, too.

10:44 But the clear advantage is that each service is going to be very simple.

10:48 We're going back to, you know, very small code bases for each service.

10:53 For example, with Flask, you can easily write an entire microservice in a single file.
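To give a feel for that point, here is a minimal single-file sketch of a "tokens"-style microservice in Flask. The routes, payload shapes, and in-memory store are invented for illustration; they are not from Miguel's tutorial, and a real service would back this with its own small database.

```python
# Hypothetical single-file "tokens" microservice, sketched in Flask.
import secrets

from flask import Flask, jsonify

app = Flask(__name__)

# In-memory store; a real service would use its own small database.
_tokens = {}

def issue_token(username):
    """Generate an opaque token and remember which user it belongs to."""
    token = secrets.token_urlsafe(32)
    _tokens[token] = username
    return token

@app.route("/api/tokens/<username>", methods=["POST"])
def create_token(username):
    # Issue a fresh token for this user.
    return jsonify({"token": issue_token(username)}), 201

@app.route("/api/tokens/<token>", methods=["GET"])
def verify_token(token):
    # Look up which user a token belongs to, if any.
    user = _tokens.get(token)
    if user is None:
        return jsonify({"error": "invalid token"}), 404
    return jsonify({"user": user})

if __name__ == "__main__":
    app.run(port=5001)
```

The whole service fits in one screen of code, which is exactly the "small function" analogy applied to web services.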

10:57 Right. So give us some examples of microservices that would be reasonable to create.

11:02 Like, would there be a logging microservice?

11:05 Would there be an authentication or like how would you how have you decomposed this before?

11:11 So the example that I used at the PyCon tutorial was a chat application.

11:16 So the chat application exists as a monolith.

11:19 And I showed in class how to break it into microservices.

11:23 And basically there were five microservices.

11:26 So one was users. That's the name. Basically registered users.

11:30 The second one was tokens.

11:32 Basically took care of generating authentication tokens for the client side app.

11:37 The third microservice was messages.

11:41 This is the one that adds and basically stores messages when a user types a message.

11:47 The fourth was the UI application.

11:50 So basically there was a very simple service that served the JavaScript and CSS and HTML files to the client.

11:59 Super simple.

12:00 And then the final one was the one that did the server push.

12:05 So this was based on WebSocket using my socket IO server.

12:09 And anytime there was a change, either a new user or a user leaving the chat or a new message, the service knew how to push those updates to all the clients.

12:19 So the five microservices are all completely independent applications, in this case written in Flask.

12:26 Okay. That makes a lot of sense.

12:28 It definitely adds some complexity, right?

12:30 So you're no longer maintaining the configuration for one app, but you're maintaining five, and then the interplay between them, right?

12:39 Correct.

12:39 So the complexity, I did mention this in the class, the complexity doesn't go away.

12:44 Basically, you're shifting the complexity to a different place.

12:47 And now it's an ops problem.

12:49 Yeah.

12:50 It's more complicated to deploy an application that's based on five services.

12:54 And this was a relatively small app, right?

12:56 Normally you may have dozens or maybe even hundreds of microservices.

13:01 So, yeah, definitely the complexity goes somewhere else.

13:05 What I find that I like to shift the complexity into those places because I'm a software developer, right?

13:12 So from my point of view, I really like clear code that's easy to maintain.

13:18 For example, something that I see done with microservices is if you have a team where you have a beginner, right?

13:25 Usually if you have a big, complex application, you're going to be afraid that this person that doesn't have a lot of experience may inadvertently break something.

13:34 Right.

13:35 And they could break it entirely, right?

13:37 Correct.

13:38 It could be...

13:39 You know, unknowingly, right?

13:41 It's because of all this coupling that, you know, from over time keeps increasing in these types of applications.

13:48 Right.

13:49 And the slightest little problem in like even a trivial part of the app, if it makes it fail to start, like you've taken the entire site down for everyone, for everything, right?

13:59 Right.

14:00 Right.

14:00 It's gone for everybody.

14:01 Right.

14:02 So with microservices, however, you can have a beginner work on one microservice, even own it.

14:07 And if there are any problems with that microservice, that's not going to affect the overall application, right?

14:12 All the other microservices will continue to run.

14:14 So this is in general, not only when a beginner makes a mistake, but in general, if one microservice is sick, it goes down or has problems, that doesn't mean that the whole application goes down.

14:29 It's just that system.

14:30 And many times if you kill that microservice and start a new instance, then you're back up running and you have more time to fix the problem.

14:38 Yeah, that's really an interesting way to think about it.

14:40 And you could probably even just force a rollback to the previous deploy and run that.

14:47 And that could be super hard to do in your regular application because maybe the UI has changed, maybe the database schema in some little part has changed, and SQLAlchemy freaks out or whatever, right?

14:58 Right.

14:59 Yeah.

14:59 Databases are one of the big reasons why deployment for monolithic applications is so hard, right?

15:05 Once you migrate the database.

15:07 I mean, yes, migration frameworks have downgrades, but very few people use them.

15:13 And even those that use them, many times they don't test them, so they're usually broken.

15:18 Yeah.

15:18 And the downgrade could be, remove this column, which had data in it.

15:21 Correct.

15:21 Right.

15:22 So, yes, the idea with microservices in particular to databases is that each microservice has its own database.

15:28 So, if you migrate one database for the messages service, that has nothing to do with the users.

15:35 So, it's a much smaller problem if you end up having problems.

15:39 Yeah, that's really cool.

15:40 There's a ton of advantages to that.

15:42 I like the way, gosh, who was it?

15:46 Martin Fowler was referring to these databases, the ones from the monoliths, the bigger ones.

15:52 He called those integration databases.

15:54 Right.

15:54 And these called application databases.

15:56 I'm not sure if that's quite the right term, but I really like to think of it as like, you can take this one big complex database that's trying to represent everything from every part of the app or multiple apps.

16:07 So, the user's table is as complicated as it could possibly be, right?

16:11 Right.

16:12 The order history table is as complicated as it could be because it has to support every single possible option.

16:17 But if you break it into these little microservices, you know, you could have a really simple like, here's the microservice that handles orders.

16:23 It has a database that handles orders.

16:25 Right.

16:26 It's just that.

16:27 Correct.

16:28 Now, there's a problem with that.

16:30 You lose the ability to run joins because now you don't have everything in one database.

16:36 Right.

16:36 So, if you need to correlate users or customers with orders, you have to do it in the application.

16:42 Yeah, exactly.

16:43 Like you can't join across HTTP requests.

16:46 Correct.

16:47 That doesn't work.

16:48 Not really.

16:49 You have to do it in the Python space in our case.
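To make that concrete, here is a hedged sketch of an application-level join. The `fetch_users` and `fetch_orders` functions are hypothetical stand-ins for HTTP calls to a users service and an orders service; the field names are invented for illustration.

```python
# Joining data from two microservices in application code, since their
# databases are separate and a SQL JOIN is no longer possible.

def fetch_users():
    # In practice: an HTTP call to the users service, e.g.
    # requests.get("http://users-service/api/users").json()
    return [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]

def fetch_orders():
    # In practice: an HTTP call to the orders service.
    return [
        {"order_id": 10, "user_id": 1, "total": 25.0},
        {"order_id": 11, "user_id": 1, "total": 10.0},
        {"order_id": 12, "user_id": 2, "total": 7.5},
    ]

def orders_with_user_names(users, orders):
    """The equivalent of `SELECT ... FROM orders JOIN users`, in Python."""
    by_id = {u["id"]: u["name"] for u in users}  # index one side once
    return [dict(o, user_name=by_id[o["user_id"]]) for o in orders]
```

Indexing one side into a dictionary first keeps the correlation linear rather than quadratic, which is the same trick a database hash join uses.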

16:52 Yeah.

16:53 I don't find that terrible.

16:55 That's my first observation.

16:57 My second observation is that even though people that know me know that I'm a big fan of relational databases, when you're working with microservices and your databases are usually one table or two tables, you know, the reason to use relational databases sort of lessens.

17:16 And now it's starting to make more sense to go to a NoSQL database.

17:21 Yeah.

17:21 Especially a document database.

17:23 Yes.

17:24 The one document that you get kind of contains the pre-joined data as a hierarchy anyway.

17:29 Yeah, that's really interesting.

17:31 So, I can definitely see how that makes like rolling back one of these services if it gets sick much, much easier.

17:37 And the chance that it gets sick is smaller as well, right?

17:40 Because it's simpler.

17:41 There's a lot less chances of making mistakes because you're working with a much simpler code base.

17:47 How about scalability?

17:48 Well, right.

17:49 So, if you have a big monolithic app, you need to scale it.

17:54 You need to scale the whole thing.

17:56 Maybe going back to the chat example, you're probably going to have a lot more activity around messages than around users, even less on tokens.

18:05 So, if you were to scale a monolith, you're going to be basically, you're going to have to provision, you know, for the entire thing.

18:12 Right.

18:13 You have to work.

18:14 You have to aim for the worst case scenario.

18:17 Correct.

18:18 Basically, across any part of it, right?

18:20 So, if handling messages from users requires 10 instances,

18:25 you're going to have to provision 10 instances of everything, right?

18:29 Because it's all one piece.

18:31 Now, when you are doing microservices, you can scale each service independently.

18:37 That's really, really cool.

18:39 So, it's super exciting.

18:40 You can scale.

18:42 I mean, if you use something like Kubernetes, for example, you can scale across different hosts.

18:48 If you have a cluster of container hosts, it automatically does it for you.

18:53 So, you can have not only scalability, but reliability by having your instances of the same service distributed across multiple hosts.

19:01 Yeah, that's really, really neat to think that, okay, I might have two or three of the straight-up web front ends, maybe five of the orders servers, you know, three of the message senders, and just to be able to configure those independently is really cool.

19:19 And then dynamically, as well, right?

19:21 Yeah.

19:22 I mean, the concept of auto-scaling also applies to this.

19:25 So, you know, the messages can, you know, or orders or whatever, anything that's very active, you can decide, okay, I'm going to start one more.

19:33 And some other components we haven't discussed yet help with that dynamicity.

19:38 Sure.

19:39 One of the things that was striking about the Instagram keynote, which is a really cool story of moving Django from Python 2 to Python 3, while you have millions and millions of users, and doing it on a single branch without going down, was super interesting.

19:57 But one of the things they were really obsessed about was how can we get basically being very aggressive with how they work with memory so they can get the best memory usage out of each server that they work with.

20:11 Like, for example, they went so far as disabling garbage collection in their Python apps, period.

20:18 Which is just crazy.

20:20 That sounds scary.

20:21 Yeah, they have a really interesting blog post they wrote up about it that they were able to get much better memory sharing across processes if they did that, like dramatically better.

20:31 It made it...

20:33 Right.

20:34 It probably makes for a cleaner use of memory, right?

20:36 Memory is not coming and going.

20:38 Exactly.

20:39 And apparently the cycles that were leaked were not sufficiently bad that there was a...

20:43 Surprisingly, it worked.

20:44 So the point is they're really, really focused on this.

20:47 And when you scale the monolith over and over and over, maybe it takes 200 megs per worker process.

20:53 Right.

20:54 Yeah.

20:55 If you want 10 of them, that's a gig.

20:56 But you could get these other ones much, much smaller and only scale the parts that are really hot, right?

21:01 Correct.

21:02 It's also a big savings, right?

21:04 And if you need to buy hosting for 100 instances of a monolith, that's going to be very expensive.

21:11 That's going to be a lot of cloud instances.

21:13 Yeah.

21:14 Now, if you're using microservices, you're scaling up, you know, very little things and only the ones that you need.

21:21 Yeah.

21:21 So you have a lot more knobs and you end up saving a lot of money.

21:25 Yeah.

21:26 And this was not the case with Instagram because they were already in this monolith space.

21:31 But had they been in microservices, they could have done their migration from Python 2 to Python 3 and Django 1.3 to modern.

21:39 They could have done that microservice by microservice.

21:42 And it probably would have been dramatically easier.

21:44 One at a time.

21:45 One at a time.

21:45 And if, so that's one, one of the other benefits you get.

21:49 Let's say, I don't know if this is true, probably not, but let's say in this example, if they had microservices, one of them could not be upgraded to Python 3 due to some dependency that hasn't been upgraded.

22:06 That's not a problem.

22:07 You can keep that one running Python 2.

22:10 It doesn't matter.

22:11 So you're not constrained to use the same platform in all your services.

22:15 If you find that some service can be benefited if you write it in Go or in Ruby or in Node.js, that's totally fine.

22:25 You can pick the best tool for each service.

22:27 Yeah, that's really cool that you can break it up.

22:29 And it also means, like say the data level, right?

22:32 Like you talked about relational versus NoSQL, like you could do MySQL on some pieces and MongoDB on others.

22:38 Absolutely.

22:39 And you don't have to say, well, this part's going to have to fit into Mongo or that part's going to have to fit into MySQL when it would more naturally live somewhere else.

22:46 Yeah.

22:47 So basically you can pick the best tools for each service and each service is completely independent from the others.

22:53 But basically you are encouraged to keep this coupling that's always bad under control by having these hard boundaries between services.

23:06 Hey everyone, this is Michael.

23:08 Let me tell you about Datadog.

23:09 They're sponsoring this episode.

23:11 Performance and bottlenecks don't exist just in your application code.

23:15 Modern applications are systems built upon systems.

23:18 And Datadog lets you view the system as a whole.

23:21 Let's say you have a Python web app running Flask.

23:23 It's built upon MongoDB and hosted and scaled out on a set of Ubuntu servers running Nginx and uWSGI.

23:29 With Datadog, you can view and monitor and even get alerts across all of these systems.

23:33 Datadog has a great getting started tutorial that takes just a few moments.

23:37 And if you complete it, they'll send you a sweet Datadog t-shirt for free.

23:41 Don't hesitate.

23:42 Visit talkpython.fm/datadog and see what you've been missing.

23:46 That's talkpython.fm/datadog.

23:48 You know, there are some companies that basically have rules that say you're not allowed to create a web app that has more than 10,000 lines of code in it.

23:59 What you have to do is create a service and then maybe multiple services and then you can construct your app out of these services.

24:07 Right?

24:08 Almost like creating these guidelines that just naturally leads to microservices.

24:13 So we've competed against monoliths.

24:15 The other thing that I feel is really strongly working in this space, trying to achieve the same thing, with some benefits and some trade-offs, is serverless architecture.

24:25 AWS Lambda, Azure Functions, things like this, right?

24:28 Yes.

24:29 What do you think about those relative to this?

24:32 So glad that you asked because that's actually how we at Rackspace, in my team, that's how we deploy our microservices.

24:40 So we haven't discussed this, but one of the main components in a microservices platform is the load balancer.

24:49 You know, to achieve all these scalability and no downtime upgrades and another benefit that you get, you need to have all the services load balanced.

24:58 Even if you run one instance, it needs to be behind a load balancer.

25:01 So what you get when you go to a serverless platform like Lambda on AWS is that AWS manages the load balancing for you.

25:12 So all you need to do is, you don't even need to have a WSGI server.

25:16 All you need to do is write your microservice as a function and then upload the function with all its dependencies to AWS.

25:27 And then anytime, you know, the function gets called, AWS will somehow figure out how to run it.

25:33 It'll start a container, put the code in it, and then run it.

25:36 If you, in a burst, you make a hundred calls, then AWS is running the load balancer and it'll run a hundred containers for you.

25:46 You don't have to worry about it, which is really nice.

25:49 And then if you make an upgrade, you know, the moment you make the upgrade, any calls from then on will use the new code.

25:56 So you got immediate, no downtime upgrades as well.
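For reference, a microservice written for AWS Lambda really does boil down to a plain handler function taking an event and a context. This minimal sketch assumes the API Gateway proxy integration (which is why the return value carries an HTTP status); the query parameter and message are invented for illustration.

```python
# Minimal shape of a microservice written as an AWS Lambda function,
# invoked through API Gateway's proxy integration.
import json

def handler(event, context):
    # API Gateway puts path and query data into `event`; the query
    # string may be absent entirely, hence the `or {}` guard.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

There is no WSGI server and no process to manage; AWS decides when and where to run the function, which is exactly the load-balancing hand-off described above.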

25:59 Yeah, that's really neat.

26:01 Do you think that there's good situations to have like a combination?

26:05 It seems to me like there's certain things that would be really well suited for like a serverless Lambda type of thing.

26:13 And others maybe more stateful, like sort of a bigger microservice that would much better fit somewhere else.

26:20 So I'm thinking like if you wanted to say charge someone's credit card with Stripe, like to do that as a, like a single Lambda function that's really stateless, really straightforward.

26:30 Maybe that would make perfect sense.

26:32 Maybe something more complex, like for example, your message push stuff wouldn't necessarily be as appropriate there.

26:39 Right.

26:39 So here's one very important thing that Lambda does not support.

26:44 It does not support WebSocket services.

26:47 So server push, exactly what you just mentioned.

26:50 Basically for that, you need to have a permanent connection with the client.

26:54 So we have a WebSocket connection.

26:57 All the clients have permanent connection to the server.

27:00 The server needs to handle a lot of connections.

27:02 Now the Lambda services are, or functions I should say, really ephemeral, right?

27:08 They run and then they exit and then they don't exist anymore until you make another call.

27:12 So there's no way to have a permanent presence in Lambda.

27:16 So in that case, you will have to host that in a container or something like an instance, a cloud instance, for example.

27:26 Sure.

27:27 And do you get maybe better response time with, if you run it and say your own container that's more permanent?

27:33 Like, so there's probably a little startup to infrequently called Lambda functions or something, right?

27:39 In general, if you're looking for performance, you will not be using Lambda.

27:43 That's my experience.

27:45 It's, in general, slower.

27:48 So I can give you an example from work.

27:51 You know, we don't have, you know, nothing that's extremely complex.

27:55 But typically we see a REST API call that's hosted on Lambda and it goes, it gets to Lambda through another AWS service called API Gateway.

28:06 And we're seeing that nothing is, nothing takes less than half a second.

28:11 So 500 milliseconds for a simple call.

28:14 There's really, we found no way to bring it below that.

28:18 No matter how simple the actual endpoint is.

28:21 There's that much overhead just to get to your function, basically, get it up and running and so on.

28:26 Right. Yeah.

28:27 There's so many layers of that that go through AWS before, you know, your code gets to run that you really, you can optimize all you want.

28:34 You're still not going to make a difference.

28:36 Yeah.

28:37 Whereas in a Flask or other Python WSGI app, like 10 millisecond response time would be totally reasonable.

28:42 Correct. Right.

28:43 You can see much faster times, and then you have the option to go async if you want something that's very, very fast and can tolerate a lot of clients.

28:53 Yeah. So I guess the takeaway probably is that these serverless components, these serverless building blocks for microservices, are cool, but in general it wouldn't make sense to just go all in and say, I'm only doing that, for the most part.

29:06 Probably.

29:07 It makes sense in many cases, not every case.

29:10 And you have to keep performance in mind.

29:13 Typically, you know, the kind of services that we do, these are all requested by a rich client application, something like Angular or React.

29:23 And, you know, those are background requests.

29:26 So, yeah, if it takes half a second, that's fine.

29:28 Yeah.

29:29 Usually not, not a big deal.

29:30 We actually had one of the little services that we wanted to build at some point,

29:36 an auto-completion service.

29:37 Yeah.

29:38 Lambda is not good for that.

29:39 And was that frustrating on Lambda?

29:40 Yes, exactly.

29:41 That you cannot host that in Lambda.

29:43 That's the typing equivalent of hearing your own voice echo back in a call, right?

29:47 Yeah.

29:48 It's not good.

29:49 Yeah.

29:50 That didn't go the way we wanted.

29:51 Yeah.

29:52 That was.

29:53 So one of the challenges that I can certainly see, especially if you start throwing containers

29:57 into this as well as like, if I have a monolith, it knows how to find the user, the user interaction

30:04 bit and the credit card bit and so on.

30:06 It just, you know, imports and it works.

30:09 But if you break it across all these different servers, like how do you keep it connected without

30:15 hardwiring every bit of the deployment into the code?

30:17 There's a component that all microservices platforms have that's called the service registry.

30:24 So basically there are, you know, each platform does it in a slightly different way.

30:29 But in general, the idea is that when you start a service or an instance of a service, the first

30:35 thing that the service does is talk to the service registry, which is in a known address,

30:40 right?

30:41 So everybody knows where to find the service registry.

30:44 That one, you basically hard code the domain or something in it.

30:48 Right.

30:49 It's hard coded.

30:50 It's usually in production deployments, it's highly available.

30:53 So you are not going to have, you know, a single point of contact.

30:56 Probably you hard-code a few, you know, addresses to talk to the service.

31:01 And if one of them is down, you try the next one.

31:03 So you want to make sure that, you know, this piece of code is always running.

31:06 Basically, the service starts and then it reports itself to the registry.

31:11 It says, hey, I'm here.

31:12 I'm at this address.

31:13 If you're running containers, the address is going to be assigned for you. Docker, for example,

31:18 is going to come up with some port for you.

31:20 So you find out what port you're running on.

31:23 And then you tell the service registry, okay, I'm running on this address and this port.

31:27 So I'm ready to start taking jobs.
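The registration flow just described can be sketched in a few lines of Python. Everything here is illustrative: the `ServiceRegistry` class and its method names are made up for the example, and the registry is a plain in-memory dictionary, whereas a real deployment would use something like etcd or Consul.

```python
import uuid


class ServiceRegistry:
    """Toy in-memory registry; a real one would be etcd, Consul, or similar."""

    def __init__(self):
        # service name -> {instance_id: "host:port"}
        self._services = {}

    def register(self, name, host, port):
        # Each instance reports "I'm here, at this address and port".
        instance_id = str(uuid.uuid4())
        self._services.setdefault(name, {})[instance_id] = f"{host}:{port}"
        return instance_id

    def lookup(self, name):
        # Callers (or the load balancer) ask: where are the instances of X?
        return list(self._services.get(name, {}).values())

    def deregister(self, name, instance_id):
        # An instance going away removes itself (or times out, in real systems).
        self._services.get(name, {}).pop(instance_id, None)


# A service instance starts, finds out its own address, and reports in.
registry = ServiceRegistry()
registry.register("users", "10.0.0.5", 8001)
registry.register("users", "10.0.0.6", 8001)
print(registry.lookup("users"))  # both instances are now discoverable
```

The essential point from the conversation survives even in this toy: the registry really is just a dictionary; the hard part in production is making that dictionary highly available.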

31:29 Yeah.

31:30 Yeah.

31:31 And that service registry can be very simple code, right?

31:34 It could almost just store like a dictionary in memory or something, right?

31:38 Right.

31:39 Essentially, it's a dictionary, right?

31:42 If you think about it.

31:43 The complication is that you want that to be super robust.

31:47 If the server where the registry is running goes down, then nothing can be found, and there's no way for microservices to communicate.

31:56 So it's very important that you host it in multiple locations and, you know, that there is redundancy.

32:03 Do people sometimes use things like S3?

32:06 Just like, I'm going to write to this bucket.

32:09 And then, that's actually an interesting idea.

32:12 I haven't seen that.

32:13 It might get complicated with multiple accesses.

32:16 Yeah.

32:17 You need to implement some sort of a locking mechanism,

32:20 I would think, to keep the file that has the list of services consistent and never corrupted.

32:27 Yeah.

32:28 Maybe each file could have it.

32:29 Each service could have its own file.

32:30 I don't know.

32:31 Or, right.

32:32 You could write different files.

32:33 That, that actually could work.

32:34 Yeah.

32:35 I think that could work.

32:36 Interesting.

32:37 But basically, it's a really simple thing where every server can go: I'm here.

32:41 The things I need, where are they?

32:42 Right?

32:43 Right.

32:44 So then, just to complete this, one of the simplest service registry options, the one that I like, is called etcd.

32:53 It's an open source project from CoreOS.

32:56 And basically, yeah, you send a request.

33:00 You can even do it in a bash script with curl.

33:02 Okay.

33:03 Just, it's a key value database, basically.

33:06 That's very fast.

33:07 So then etcd, in this example, will have the list of all the services that are running.

33:12 On the other end, we have the load balancer.

33:14 And the load balancer will go periodically check the contents of this service registry and refresh its own configuration based on that.

33:24 So, a service starts, it writes itself to the service registry.

33:28 Then on the other side, the load balancer says, oh, there's a new service.

33:31 I'm going to add it.
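The confd-style loop just described, where the load balancer's configuration is regenerated from the registry contents, can be sketched like this. A plain dict stands in for etcd, and the rendered text is only nginx-like; the function name `render_upstream` is invented for the example.

```python
def render_upstream(name, addresses):
    """Render an nginx-style upstream block from registry contents.

    In a real setup, confd watches etcd and reloads nginx when keys change;
    here the 'registry' is just a dict standing in for the key-value store.
    """
    lines = [f"upstream {name} {{"]
    for addr in sorted(addresses):
        lines.append(f"    server {addr};")
    lines.append("}")
    return "\n".join(lines)


# The registry says two instances of the users service are alive.
registry = {"users": ["10.0.0.5:8001", "10.0.0.6:8001"]}
config = render_upstream("users", registry["users"])
print(config)
```

When a new instance registers, the watcher re-renders this block and asks the load balancer to reload, which is exactly the "service starts, load balancer notices" flow from the conversation.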

33:32 Yeah.

33:33 Oh, that's, that's cool.

33:34 I didn't think of using the service registry to drive the load balancer, but that's cool.

33:38 Yeah, that, that, that's very nice.

33:39 There are, there's actually, I know of one load balancer that, that has this functionality embedded.

33:45 It's called Traefik.

33:46 If you go with a more traditional one, like Nginx or HAProxy, which are the ones that I've used for a lot of time.

33:53 With those, you need to add a piece on the side that does the watching of the service registry and then updating the configuration and restarting the load balancer.

34:04 Right, right.

34:05 And the one that I know about, which is actually written by Kelsey Hightower, is called confd.

34:10 That's the one that I, I showed in the class.

34:12 Okay.

34:13 Yeah.

34:14 Yeah.

34:15 That's cool.

34:16 And Nginx restarts pretty quickly.

34:17 So it's, it's pretty good.

34:18 Right.

34:19 Nginx is pretty good about reloading itself cleanly.

34:22 When, when you update the configuration, HAProxy is getting there.

34:26 It's getting better.

34:27 It's a little bit clunky, but basically it starts a new process when you want to update the configuration.

34:33 And the process by which all the connections are passed from the old process to the new process has been problematic for many years.

34:42 It causes some downtime.

34:43 It's much better these days.

34:45 Okay.

34:46 That's great.

34:47 Another challenge I can imagine is if I just start using logging in my monolith app, it will all go to the same file.

34:56 It will go always in order unless I'm doing threading.

34:59 It's a piece of cake, right?

35:00 It's a piece of cake.

35:01 If I have 20 little microservices, how do I like trace through the steps?

35:06 Right.

35:07 You need help.

35:08 You really can't manage 20 log files.

35:12 Sometimes it gets insane.

35:14 So basically you add another piece, basically a log consolidator.

35:20 And there's one that's pretty good, I mean, if you're doing Docker containers, that's called Logspout.

35:26 Logspout.

35:27 Okay.

35:28 So basically what this does is it grabs all the logs from all the containers on a host, the host where Logspout is running.

35:37 And then basically it outputs a stream that you can watch over.

35:42 For example, you can connect with a web browser.

35:45 Over a WebSocket, for example.

35:47 So you can connect over WebSocket and then watch the stream of logs in real time.

35:51 Or you can connect Logspout to something more useful in a production environment, which would be, for example, an ELK stack.

36:00 So Elasticsearch, Logstash, and Kibana, which is this trio of open source apps that basically create a very powerful log storage and search solution.

36:12 Yeah.

36:13 Okay.

36:13 So basically you put in something that brings it back into one place.

36:17 Yes.

36:18 You usually want to have everything in one stream and then you can filter if you want.

36:23 But imagine if you have five instances of the same microservice, you may, even though it's five different ones, you may want to see the entire thing.

36:34 Because if a client is sending requests to this service, requests are going to randomly arrive to any of those five.

36:41 Right.

36:42 It's hitting the load balancer.

36:43 Yeah.

36:44 Right.

36:45 So you probably want to see the sequence regardless of which of the instances get a specific request.

36:52 Yeah.

36:53 Yeah.

36:54 Do you do or do people do things where they somehow identify a request?

37:00 So like flag it with some kind of unique ID at the top and then flow that through so that you can say this, this is the steps that this request went through.

37:10 Yeah.

37:10 That's pretty common.

37:11 Some, some platforms offer that.

37:14 I implemented by hand in some cases myself.

37:17 But basically, yes, the entry point request.

37:21 So the first, the service that receives a request from the outside world assigns an ID to that request.

37:28 And then in any communications that service has with other microservices, it will pass that ID.

37:34 So you always preserve and log the initial ID, and then you get a trace of all the services that worked on a single client request.
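The ID-propagation scheme Miguel describes can be sketched with two small helpers. The `X-Request-ID` header name is a common convention rather than a standard, and the function names here are invented for illustration; the point is simply that only the edge service mints an ID and every internal hop forwards it.

```python
import uuid

REQUEST_ID_HEADER = "X-Request-ID"  # common convention, not a formal standard


def incoming_request_id(headers):
    """Reuse the caller's request ID if present, otherwise mint a new one.

    Only the first service to see a client request generates an ID;
    internal services just carry it forward so logs can be correlated.
    """
    return headers.get(REQUEST_ID_HEADER) or str(uuid.uuid4())


def outgoing_headers(request_id):
    """Headers to attach when calling another microservice."""
    return {REQUEST_ID_HEADER: request_id}


# Edge service: no ID on the incoming request, so one is minted.
rid = incoming_request_id({})
# Internal service downstream: the forwarded ID is preserved unchanged.
assert incoming_request_id(outgoing_headers(rid)) == rid
print("trace id:", rid)
```

In a Flask service, the same logic would typically live in a `before_request` hook plus a small wrapper around the outgoing HTTP client.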

37:45 Right.

37:45 And there's infrastructure to actually do the communication between microservices.

37:49 Is that typically requests?

37:51 There are different ways.

37:52 So the easiest will be to use HTTP as an interface, as a REST API for each service.

37:58 And then, yeah, you use a Python requests.

38:01 Some people prefer something that's less chatty.

38:05 So HTTP, as you know, you know, you have all these requests, the headers, you know, all that stuff.

38:10 When you are talking to a client, that makes sense.

38:13 Right.

38:14 And besides, that's the only way the browser can, or an HTTP client can talk to the server.

38:19 But when you're talking among services, you may say, well, okay, I want something quicker.

38:24 So some people implement RPC schemes where a service can say to the other server, hey, I need to do, I need you to do this.

38:31 And it's, for example, passing messages through a message queue, which could be a Redis queue or SQS if you're on AWS.

38:38 Do people set up socket servers, maybe?

38:41 You could do a socket server too.

38:42 Yeah.

38:43 Yeah.

38:44 If you're looking for really low latency, low overhead traffic.

38:45 The main idea, what I would consider a good design point, you know, when thinking about how microservices communicate, you want to leave the door open to having different, you know, tech stacks on different services.

38:58 You don't want to go with something that, let me give you an example.

39:02 I would probably not use a Celery worker for this, right?

39:06 Because that will basically restrict me to use Python.

39:09 Right.

39:09 And you probably wouldn't ship data across the wire as pickle.

39:12 Right.

39:13 Yeah.

39:14 It has versioning issues, too, not just the Python-only problem.

39:17 Yeah.

39:18 But even within Python, that could be a big problem too.

39:21 Right.

39:21 Yeah.


39:25 So, you know, the way Celery works, I think it's not friendly to the microservices idea.

39:29 The microservices idea pushes you the other way.

39:31 So yeah, I would go HTTP.

39:33 Maybe messages are JSON formatted over a queue.

39:38 Mm-hmm.

39:39 All things that you are sure that any technology can easily communicate over.
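The language-neutral messaging Miguel recommends, JSON messages over a queue rather than pickled Python objects, can be sketched with the standard library. A `queue.Queue` stands in for Redis or SQS, and the message shape (`action`/`payload`) is just an illustration, not any fixed protocol.

```python
import json
import queue

# Stand-in for a real broker such as Redis or SQS.
broker = queue.Queue()


def send(action, payload):
    """Producers serialize to JSON so any language can consume the message."""
    broker.put(json.dumps({"action": action, "payload": payload}))


def receive():
    """Consumers parse JSON; nothing here assumes the sender was Python."""
    return json.loads(broker.get())


send("send_email", {"to": "user@example.com", "subject": "Welcome"})
message = receive()
print(message["action"], message["payload"]["to"])
```

Swapping the in-memory queue for Redis or SQS changes only `broker.put`/`broker.get`; the JSON envelope, which is what keeps the door open to other tech stacks, stays the same.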

39:44 Yeah.

39:45 It makes a lot of sense.

39:46 I imagine there's a lot of JSON happening back in the data center.

39:49 Right.

39:50 Yes.

39:51 Absolutely.

39:52 Yeah.

39:53 Nice.

39:54 So that maybe brings us to an interesting place to talk about the tools.

39:55 We talked about requests and some of the other tools that work and don't work, but what

39:59 are the other Python tools that you might involve here?

40:02 Well, of course you need a framework, right?

40:04 And we discussed this.

40:06 I like, no surprise, I like to use Flask, but really you can use any web framework that

40:12 you like, right?

40:13 As long as it knows how to communicate with the other services.

40:17 As far as other Python tools, there are many packages that talk to service registries, for

40:24 example.

40:25 So if you want your Flask-based or Python-based microservice to be able to talk to the service

40:31 registry, there's packages for certainly for etcd.

40:35 If you use another one like console, for example, this one from HashiCorp, there's packages.

40:41 So you're going to find a lot of support for these tasks that you need to do that are sort

40:46 of specific to microservices in the Python ecosystem.

40:50 Sure.

40:51 Besides that, you're going to be doing things, and this is something that I really like, in

40:56 the same way as you work with a monolith, but you're going to be working with much,

41:00 much smaller code bases.

41:02 So you're going to be doing unit tests the usual way, but you're going to be testing each

41:07 service separately.

41:08 And then you're going to have integration tests if you need, and you probably do.

41:12 But yeah, nothing really changes.

41:14 It's just that the scale goes, you know, you're doing your work on a much smaller scale.

41:20 You're working with smaller applications.

41:22 Yeah.

41:23 It sounds to me like they're easier to work on and maintain and deploy, but maybe more difficult

41:28 to initially set up the infrastructure that wires them together.

41:31 Yes.

41:32 You have more servers to set up.

41:33 You've got the load balancers.

41:34 You've got the service registry.

41:35 Like these things you have to add.

41:37 But once that is in place, it kind of sounds like life gets easier.

41:40 So there's like a bar to cross, but once you cross it, you're good.

41:44 Yeah.

41:45 Right.

41:46 I agree with that.

41:47 Yes.

41:48 It's difficult to set up the platform.

41:49 And of course you can go, if you use Kubernetes, for example, or AWS Lambda, you know, a lot

41:56 of all those pieces are done for you.

41:59 You don't have to worry about load balancers and service registries, right?

42:04 They do it for you.

42:05 This portion of Talk Python To Me is brought to you by us.

42:09 Us.

42:10 As many of you know, I have a growing set of courses to help you go from Python beginner

42:13 to novice to Python expert.

42:15 And there are many more courses in the works.

42:17 So please consider Talk Python training for you and your team's training needs.

42:21 If you're just getting started, I've built a course to teach you Python the way professional

42:26 developers learn by building applications.

42:28 Check out my Python jumpstart by building 10 apps at talkpython.fm/course.

42:34 Are you looking to start adding services to your app?

42:36 Try my brand new consuming HTTP services in Python.

42:39 You'll learn to work with the restful HTTP services as well as SOAP, JSON and XML data formats.

42:45 Do you want to launch an online business?

42:47 Well, Matt McKay and I built an entrepreneur's playbook with Python for entrepreneurs.

42:51 This 16 hour course will teach you everything you need to launch your web based business with

42:55 Python.

42:56 And finally, there's a couple of new course announcements coming really soon.

42:59 So if you don't already have an account, be sure to create one at training.talkpython.fm

43:04 to get notified.

43:05 And for all of you who have bought my courses, thank you so much.

43:09 It really, really helps support the show.

43:11 In this class that I gave at PyCon, I didn't want to just tell, okay, install Kubernetes and

43:17 you're done.

43:18 I wanted to teach what microservices are.

43:20 So I built my own platform, which was a lot of fun.

43:24 And I thank the PSF for approving my tutorial idea and letting me work on this.

43:32 It was a lot of fun.

43:33 And I wanted to demonstrate that really it's not as hard as it sounds.

43:38 You can go pick, you know, the best tools for each of the different tasks, you know, in

a very Flask, you know, fashion, right?

43:46 Where everything is done, you know, you pick the best tool for each task.

43:50 Yeah, it sounds like people who like micro frameworks for their web frameworks might like this as

43:55 well, right?

43:56 Because you kind of get to pick and...

43:57 Yes, they're going to find that there's a lot of affinity, right?

44:01 So I built a platform using Bash.

44:04 So it's all Bash scripts.

44:05 You can do a...

44:06 You've seen Kelsey Hightower do a super cool voice operated demo of a no downtime upgrade,

44:13 right?

44:14 So take away the voice thing.

44:15 I didn't do that.

44:17 But, you know, during class I showed how with a Bash script you can deploy your upgrades

44:23 without the service ever going down.

44:25 You know, your application keeps running while you do the deployment.

44:28 Sure.

44:29 So, yeah.

44:30 Yeah.

44:31 So is there a roadmap or some guidance on how to take maybe a 50,000 line monolith web app

44:38 and turn it into a number of services?

44:41 Well, that's really difficult.

44:43 I think that the best advice I can give you is you probably cannot do it all in one go.

44:49 You're going to have to find a strategy to do, you know, phased migration to microservices.

44:55 You need to have very good unit tests in place because the task of...

45:02 Basically, what you're going to do in most cases is take the monolith and put it inside a microservices platform as a big piece, right?

45:10 And then over time, you're going to start taking little pieces out of it and write microservices, right?

45:15 So the task of writing a microservice when you have, you know, the monolith is basically involves a lot of copying and pasting, right?

45:23 You have to move endpoints that are in the monolith to a standalone application.

45:28 Sure.

45:29 And that's pretty easy in some aspect, but breaking the tight coupling and the dependencies of code that you're moving around, that sounds to me like it could be pretty challenging.

45:39 Yes.

45:40 It's difficult.

45:41 It's actually hard.

45:42 You know, all that...

45:43 Basically, when you work on a monolith, you accumulate technical debt.

45:46 That's pretty common.

45:48 You're going to find that many times that technical debt is going to inform your decisions.

45:53 You're going to make less than ideal decisions when you design your microservices to keep things the same way.

45:59 I can give you an example.

46:00 In this project that I showed during the PyCon class, I was actually migrating this chat application to microservices.

46:08 And I started and I migrated the UI first.

46:11 That was very easy.

46:13 And then I migrated the users.

46:15 And then I went to migrate tokens.

46:17 And I realized that I could do a much better job with tokens.

46:21 The tokens in the old application were sort of inefficient; they were random strings.

46:27 You know, when you're working with microservices, you want tokens that can be verified without calling the token service.

46:34 And when you need that, you usually use JSON web tokens, which you can verify with cryptography.
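The property Miguel wants, tokens any service can verify locally without calling the token service, can be illustrated with a minimal signed-token sketch using only the standard library. This is a JWT in spirit only: real deployments should use a proper JWT library such as PyJWT, and the key handling here is deliberately simplified.

```python
import base64
import hashlib
import hmac
import json

# In practice this signing key would be securely distributed to services.
SECRET = b"shared-signing-key"


def issue(claims):
    """Token service: encode the claims and sign them (JWT-like, simplified)."""
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig


def verify(token):
    """Any service holding the key verifies locally -- no network call
    to the token service, which is the whole point of this token style."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(payload))


token = issue({"user": "miguel"})
print(verify(token)["user"])
```

With the old random-string tokens, every request would have required a lookup against the token service; with signed tokens, verification is a pure cryptographic check.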

46:40 So, I had to decide, I mean, do I keep this and make it inefficient?

46:45 Or do I say, okay, I'm going to draw a line in the sand and I'm going to change the token format, but then everything is going to break.

46:53 I'm going to have to migrate all the services, you know, to the new token style.

46:56 Right.

46:57 And those decisions, you know, on a real application, they're going to be, you know, much harder to make.

47:02 Yeah, I can imagine that.

47:04 But, so, one thing I was thinking of while you were speaking of like how you might break this apart, it seems like you could almost do the partition in the web tier and then the data.

47:16 Like, so, for example, if you have a database that obviously the monolith talks to the database, all of it to the same, through the same connections.

47:23 If you could break this out into services, they theoretically could go back and just continue talking to the same database and you could kind of get the service decomposition piece working.

47:32 And then you could say, okay, now how do we move this into the application database that's dedicated to each one of the services.

47:37 So, that could be a valid approach.

47:39 So, when you do it that way, if you're sharing the database, then the zero downtime upgrades are still difficult.

47:47 Yeah, I'm just thinking as a transitional thing.

47:49 So, it's going to be a transition, right?

47:50 Yeah.

47:50 You want to go all the way eventually.

47:52 But, yeah, definitely.

47:53 Okay.

47:53 Yeah, you have to figure it out.

47:55 It depends on the application what's the best route.

47:57 But, yeah, it's difficult.

47:59 What I've seen some people do is they say, okay, I'm going to migrate to microservices, but only from now on.

48:04 I'm not going to change what I have.

48:07 So, basically, they grandfather this big piece.

48:10 You know, they think of it as a microservice, even though it's not.

48:13 It's a big microservice.

48:14 Right.

48:15 It's a big one.

48:16 But then, you know, from then on, any new functionality, they start writing in microservices.

48:21 And that's actually a very valid approach.

48:24 In many cases, it's the only viable way, right?

48:26 Sure.

48:27 Sure, sure.

48:28 How does software development change for a team when they're transitioning to microservices?

48:33 How does their world get different?

48:35 Well, they don't have to all work in the same code, right?

48:38 So, that's a big plus.

48:40 Fewer merge conflicts.

48:42 Right.

48:43 You basically eliminate merge conflicts.

48:45 For me, I don't remember when I had a merge conflict last.

48:49 Right?

48:50 Yeah, you don't usually see that.

48:51 So, usually, you're going to find that your team, the members will get specialized in specific services, right?

48:58 For example, at Rackspace, I've been doing a lot of authentication microservices, right?

49:03 So, you know, when there's a new need for authentication, I do it usually.

49:10 Some people may not like that, right?

49:12 May prefer to be generalists.

49:15 So, yeah, it depends.

49:16 But you find that, you know, some people basically have more affinity to certain parts of the system, certain microservices.

49:26 Sure.

49:27 And now they can focus more on it now because it's more explicit.

49:29 Right, yeah.

49:30 And they can do a much better job at, you know, that specific task because they don't have all the baggage of the remaining, the rest of the system.

49:40 That's basically that needs to be, you know, that you have to make sure that you don't break.

49:46 Right.

49:47 So, one thing that seems like it might make sense would be to rotate the team around through the different services, potentially.

49:55 If you want to make sure, like, there's many people that kind of know the whole thing.

49:58 Like, you can say, okay, this month you're on this service, that month you're on.

50:01 Yeah, that's actually a good idea.

50:03 You can find a way for the person that's experienced with microservice to sort of mentor a new member, you know, and basically code review the changes that the new person makes, for example.

50:17 Yeah.

50:18 There are a lot of different ways to make sure that everybody gets a little bit of everything, sure.

50:22 Yeah.

50:23 So, yeah, overall, I find that if you like to code, right, I mean, we can talk about the ops side, right?

50:30 If you like to code, then you're going to be coding more and, you know, fixing bugs a lot less.

50:37 Yeah.

50:38 You're going to find that you're going to be working on small code bases, and that leads to fewer mistakes and errors.

50:44 Yeah, that sounds great.

50:45 And you have to write unit tests, which a lot of people don't because it's too complicated.

50:49 And now you're back to a simple application that's very easy to unit test.

50:52 Yeah.

50:53 You have to be more careful on the boundaries, though, because they all talk to each other, right?

50:57 And then you need, right, this is where I would probably put an experienced person.

51:01 You need someone that overviews what are the interfaces that all the microservices expose to the rest of the, you know, to the other microservices and sometimes to clients.

51:12 Right.

51:13 You need to make sure, especially with the public endpoints, that they are consistent.

51:18 So you need one person that's experienced, at least one person that's experienced in API design to make sure that you get good APIs.

51:25 Yeah, of course.

51:26 That's the other thing.

51:27 It's very difficult to change them once they're out there.

51:30 And this is if you want to have no downtime deployments, you cannot really introduce breaking changes.

51:37 So you cannot remove an API.

51:38 You cannot remove a column in the database.

51:40 You know, there are some rules that you need to follow.

51:42 Right.

51:43 So when you design databases and when you design APIs, you need to have people review that very well, make sure that you like what you are designing, because you're going to have to be with those decisions for a long time.

51:55 Yeah, that's a good point.

51:57 Some of the HTTP frameworks for building services have even built in versioning into them.

52:05 I'm thinking of like Hug and some of these things.

52:07 But obviously, you can add it into your own apps pretty easily.

52:11 Just set up a second endpoint rather than calling the same one.

52:15 Is there something that you have to do that?

52:17 You have to do that.

52:18 Basically, you're forced.

52:19 So imagine you have five instances running of this one microservice.

52:24 And now you want to introduce a breaking change in one endpoint.

52:28 And of course, you don't want to go down for the upgrade.

52:31 So you cannot stop the five instances at the same time.

52:34 You're going to have to do a rolling upgrade.

52:36 Right.

52:37 So, you know, during a window of time, you're going to have a bunch of instances on the old API and a bunch on the new.

52:43 And then the rest of the system knows nothing of this.

52:46 And they're going to start sending requests, probably assuming that the old API is in place.

52:50 Yeah, until you upgrade that part, which then I'll go to the new one.

52:53 But it'll be sort of moving around through these services as that process happens.

52:57 Right.

52:57 Right.

52:58 So you need to create a new endpoint for the breaking change.

53:00 Keep the old one working.

53:02 And then once you're sure that the whole system is upgraded, it's on the new one.

53:07 Only then you can go ahead and basically deprecate or remove the endpoint that you don't want to use anymore.
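The rolling-upgrade discipline just described, serve both endpoint versions side by side until every caller has moved, can be sketched with a plain dispatch dict. The route paths and response shapes are invented for illustration; in a Flask service these would simply be two `@app.route` handlers.

```python
# Both versions are served simultaneously during the rolling upgrade;
# /v1 is removed only after every caller has migrated to /v2.

def get_user_v1(user_id):
    # Old response shape, kept alive for callers not yet upgraded.
    return {"id": user_id, "name": "Miguel Grinberg"}


def get_user_v2(user_id):
    # Breaking change: the name field is split, so it needs a new version.
    return {"id": user_id, "first": "Miguel", "last": "Grinberg"}


routes = {
    "/v1/users": get_user_v1,
    "/v2/users": get_user_v2,
}


def handle(path, user_id):
    return routes[path](user_id)


print(handle("/v1/users", 7))
print(handle("/v2/users", 7))
```

Once monitoring shows no traffic on `/v1/users`, `get_user_v1` can be deleted from `routes`, which is the deprecation step Miguel mentions.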

53:13 Right.

53:14 That's probably important as well to like eliminate the cruft building up as well.

53:18 Yeah.

53:19 Yeah.

53:19 I think that, you know, this platform encourages you to be clean and to keep things clean, to think about these important decisions, you know, very carefully.

53:30 Yeah.

53:30 Excellent.

53:31 So we talked about how AWS Lambda has this sort of built in latency.

53:37 And when you think about performance, a lot of times there's really two sides to it. Right.

53:42 One is if I go and I hit that endpoint, how quick does it get back to me?

53:47 And then the other one is if a million people hit it, how much does it degrade from that one person hit it? Right.

53:52 Like how scalable is it and how like single request high performance is it?

53:58 I can certainly see how this generates better scalability. Like you can very carefully tune like the individual parts of your app and scale those up.

54:07 But it also seems like it might add some latency like for an individual request. So how much slower, like what's the typical changes? Like would I add 10 milliseconds to a request? Would I add a hundred milliseconds? What?

54:20 Yeah, that's a good question. So even if we take serverless out of the equation, so.

54:24 Yeah, because it's really bad. Yeah.

54:25 Right. So the performance is not the same as in a monolith, just by the mere fact that in many cases, the client is going to send the request and the service that receives that request cannot carry out the request alone. Right.

54:39 So you have to talk to a bunch of other microservices. Right. So there are a lot of communications among the microservices that of course will take time as well. Right.

54:48 So latencies increase no matter what, right? So if you're looking for raw performance, microservices is not really it.

54:57 It doesn't necessarily have to be microservices versus monolith. It could be more coarse grained microservices, more fine grained ones potentially. Right.

55:06 Right. The times that I've seen, they're not really, you know, that terrible. I mean, we are talking like tens of milliseconds.

55:14 Yeah. You know, you're making requests over the internet, maybe across the country, it might be a hundred millisecond ping time. So if it's a hundred, 110, who cares?

55:21 It could be in the noise. Right. Yeah. Absolutely. In many cases, it's going to be in the noise. Yeah.

55:26 So, you know, compared to the, all the benefits that we already discussed, I think in my view, it's a no brainer. It makes a lot of sense in many cases, but yeah, it's, you know, it's all this complication of, you know, services talking to each other.

55:41 And you might find that in some cases you need to go async. So you can totally have an asynchronous microservice. So you tell it, you need something and the service says, yeah, okay, I'll do it on my time.

55:52 Right.

55:53 But keep going. Don't mind me. Right. That's totally fine.

55:56 Yeah. For example, send this email. You don't have to wait for the email to be acknowledged to be sent. Right.

56:01 That's a great example. Right. Yeah. Yeah. It can definitely, email can be pretty slow actually, given all the stuff that it does.

56:09 Email, yeah, you count it in seconds.

56:11 Yes, exactly.

56:12 Microseconds.

56:13 You got to mail a group, right? Like, I want to send this mail to 2000 people in my class on my platform. Like, okay, that needs to be a thing. Just let me tell you, I've learned that.

56:22 Yeah. And that's actually, that's a good example in which you may, in many cases, you're not interested if the email bounces.

56:30 Yeah. What are you going to do about it anyway? Right. Right. Exactly. If it bounces, there's nothing to do. If you are, you can have that service that's asynchronous record the addresses that are bouncing in a database that's going to be owned by that service. And then later on, you know, in a cron job or whatever, you can clean up the addresses that are bad.

56:49 So some other service can send a request to the email service and ask, you know, what are the bad addresses that you know? And that will be another endpoint. You return the addresses and then they can be cleaned up or, you know, the accounts can be canceled or whatever.
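This asynchronous email service can be sketched end to end with the standard library. Everything here is a stand-in: `queue.Queue` replaces a real broker, a set replaces the service's own database, and `KNOWN_BAD` fakes what the mail server would report as a bounce.

```python
import queue

# Callers enqueue work and move on; they never wait for delivery.
email_queue = queue.Queue()

# Storage owned by the email service alone (stand-in for its database).
bounced = set()

# Assumption for the sketch: addresses the mail server would reject.
KNOWN_BAD = {"gone@example.com"}


def send_email(address):
    """The fire-and-forget call other services make."""
    email_queue.put(address)  # returns immediately


def worker():
    """Background worker: delivers mail on its own time, records bounces."""
    while not email_queue.empty():
        address = email_queue.get()
        if address in KNOWN_BAD:
            bounced.add(address)  # recorded, not reported back to the caller


def bad_addresses():
    """The extra endpoint mentioned above: other services can ask later."""
    return sorted(bounced)


send_email("ok@example.com")
send_email("gone@example.com")
worker()
print(bad_addresses())
```

A cleanup job elsewhere would periodically call `bad_addresses()` and cancel or fix the offending accounts, exactly the cron-style flow described in the conversation.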

57:04 Yeah. It sounds, it sounds really promising.

57:06 It's a great way to think about problems, right? It's, it's all, you know, little pieces. So it's a lot easier to think about solutions there. And, and, you know, at the beginning, it's hard to start thinking this way.

57:18 But, but then you get used to it and all the problems become easier.

57:22 Yeah. I can definitely see how that, that would happen. It might be difficult to think, how am I going to build this huge app? But if I can build, well, could you build 10 small apps and then have them help each other out? Right? Right. Exactly. Yeah. Very cool.

57:36 All right, Miguel. I think we're going to leave it there. That's like, I think a great, great conversation on microservices. So let me ask you the two questions. So if you're going to work on your microservices and Python, what,

57:48 editor do you use? So it's getting complicated. It's complicated. Yeah. That's the correct answer for me. I usually iterate over a few editors. Vim is my go-to editor. Many times I need to edit files on remote hosts, and Vim works anywhere, so that's the one that I use most of the time. Sometimes I need to debug, and for that I sometimes use an IDE. And the two that I've been using, I can't even

58:18 decide on one. The two that I'm using are PyCharm and, lately, Visual Studio Code, which is surprisingly good.

58:26 Yeah. The Python plugin there is doing really quite a bit. The Python plugin is not an official plugin, but the person that wrote it did an awesome job. It's very, very good.

58:38 Yeah. He did a great job. I actually had him on an episode, maybe 20 shows back or something. It's very cool how he took like 10 different

58:46 open source projects and like brought them all together and turned it into the plugin for Visual Studio Code. It was cool.

58:52 Well, he did a great job. It's super powerful. And in particular, I like the way that you set your configuration for a project, which basically opens up a text file and you write JSON.

59:05 Yeah. It is quite interesting for sure.

59:08 That's a, you know, contrasting to PyCharm where you have to enter a dialog, you know, a window and find the setting that you want.

59:15 Yeah. Well, it also makes it very source control friendly.

59:19 Source control friendly. And you can copy configs from one project to the next. It all becomes much, much easier.

59:26 Yeah. Great. All right. And PyPI package.

59:30 So one package that I'm sort of ashamed I didn't know about. I learned about it from my colleagues at Rackspace. It's called Arrow.

59:39 And so this is a package that is a drop-in replacement for the datetime package in the Python standard library, but it fills in all the holes in support that datetime has.

59:52 For example.

59:53 Yeah, definitely. I would definitely second that one. Arrow is awesome.

59:55 Yeah. I've only known about it for a few months, since I've been working with this team, and I use it all the time now.

01:00:03 So for example, datetime starts with this naive time zone approach where there's no time zone.

01:00:10 So by default, Arrow will, will use UTC, which is what you always want anyway. Right.

01:00:15 So you always work with UTC.

01:00:17 Especially if you're working on servers.

01:00:19 Right. Yeah. You want to have common units. So that's the one that everybody uses.

01:00:24 And then there's support to convert to and from ISO 8601. datetime can output ISO 8601, but cannot import from it.

01:00:35 Which is something very common. So another thing that I tend to work on is the billing microservices in my team.

01:00:43 And you have a date and you want to know the first of the month and the last of the month.

01:00:50 It's like in one line you can get it. You don't have to do strange acrobatics to get the first and last of a given month. So yes.
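For context, here is roughly what these date chores look like in plain stdlib datetime. Arrow wraps each of them in a one-liner (e.g. `arrow.utcnow()`, `arrow.get(text)`, `.floor('month')`, `.ceil('month')`); note that `datetime.fromisoformat` only arrived in Python 3.7, after this episode was recorded, which is part of why Arrow filled such a gap.

```python
import calendar
from datetime import datetime, timezone

# 1. Timezone-aware "now" in UTC -- datetime.utcnow() returns a *naive* value.
now = datetime.now(timezone.utc)
assert now.tzinfo is timezone.utc

# 2. Round-trip through ISO 8601.
stamp = datetime(2017, 6, 2, 10, 30, tzinfo=timezone.utc)
text = stamp.isoformat()                      # '2017-06-02T10:30:00+00:00'
parsed = datetime.fromisoformat(text)         # Python 3.7+
assert parsed == stamp

# 3. First and last day of the month for a given date.
first = stamp.replace(day=1)
last_day = calendar.monthrange(stamp.year, stamp.month)[1]
last = stamp.replace(day=last_day)
print(first.date(), last.date())              # 2017-06-01 2017-06-30
```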

01:01:01 Yeah. People should definitely check that out. All right. So people heard this conversation. They're probably excited about microservices. How do they get started? What do they do?

01:01:09 So what I recommend, if you have, three hours to waste, you can check out the, the YouTube video of my tutorial.

01:01:17 I think I've made it very approachable. If you have experience developing with, say, Django, it doesn't need to be Flask.

01:01:23 So any web, you know, monolithic web applications, I think you're going to pick up the tutorial really well.

01:01:29 So the code that comes with the tutorial, which is on GitHub, includes a Vagrant setup.

01:01:36 So you can deploy the system that I show in the class to your machine, to your laptop, on a Vagrant VM, and then you can play with it.

01:01:45 And at the end of the class, I included a list of things that would be great ideas if you want to practice.

01:01:53 So you can take the project that I built and then extend it in many different ways.

01:01:58 That would be my recommendation. You can also look into Kubernetes, which is something that you can also deploy to your laptop.

01:02:05 If you want to use it for testing, I include in the GitHub repository for this class the scripting required to deploy

01:02:15 the same application to Kubernetes, if you're into that. And then the other valid option would be to look into AWS Lambda and API Gateway.

01:02:23 Have you seen Zappo?

01:02:24 You mean Zappa? Yes.

01:02:25 Zappa. That's Zappa. Yeah. Which is a framework that just like uses Lambda as a backend.

01:02:30 Yes, I've seen it. I even wrote a clone of it, which is called Slam, which is in my GitHub account as well.

01:02:38 But yes, the idea is that you take a WSGI application, which is really a function. If you think about it, WSGI,

01:02:45 you know, it's a callable, not only a function, but a callable. The API is quite simple. You can think

01:02:50 about it as a function, right? And Lambda requires a function. So there's really a

01:02:55 match. The only problem is that the way Lambda expects the function is not the way

01:03:01 WSGI applications are formatted. So then Zappa comes in, or my Slam also, and basically

01:03:07 it's an adapter that sits in between Lambda and your application and makes the conversion between the

01:03:12 two formats. I see. All right. Well, that's really cool. That's a really easy way to deploy

01:03:18 any Python web application that speaks WSGI. So Django,

01:03:22 Flask, Pyramid, Bottle, all of those you can get deployed to AWS. Yeah. All right. Well, very,

01:03:29 very cool. So I definitely recommend people check out your tutorial, which I'll put in the show notes.
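The point that a WSGI application is just a callable is easy to see in a few lines. The `call_wsgi` helper below is a hypothetical, stripped-down stand-in for what an adapter like Zappa or Slam conceptually does (build the environ from an incoming event, call the app, capture the response); it is not the actual code of either project.

```python
# A WSGI application is just a callable taking (environ, start_response).
def app(environ, start_response):
    body = f"Hello from {environ['PATH_INFO']}".encode()
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

# Conceptually what a Lambda adapter does: call the app like any function,
# translating between the event format and the WSGI calling convention.
def call_wsgi(application, path):
    captured = {}
    def start_response(status, headers):
        captured["status"], captured["headers"] = status, headers
    environ = {"PATH_INFO": path, "REQUEST_METHOD": "GET"}
    chunks = application(environ, start_response)
    return captured["status"], b"".join(chunks)

status, body = call_wsgi(app, "/ping")
print(status, body)   # 200 OK b'Hello from /ping'
```

Because nothing here requires a web server, the same callable can be driven by gunicorn in a container or by a Lambda event handler with no changes to the application itself.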

01:03:33 And I'm also going to put Kelsey Hightower's talk. Yeah. Those go well together. I think

01:03:39 that's, that's actually a good thing to watch first. If you like that, then you can learn how

01:03:44 those things work by watching the tutorial. Yeah. Kelsey's is high level and flashy and

01:03:49 interesting. And then yours is the detail. Yeah. Right. I wish I could be as good a speaker as he is.

01:03:55 Yeah. That was really great. Yeah. All right. Well, Miguel, thank you so much for being on,

01:04:00 on the show once again. And it's great to chat with you. Thank you for inviting me. You bet. Bye.

01:04:04 This has been another episode of Talk Python to Me. Our guest was Miguel Grinberg. And this episode

01:04:12 has been brought to you by Datadog and Talk Python training. Datadog gives you visibility into the

01:04:17 whole system running your code. Visit talkpython.fm/Datadog and see what you've been missing.

01:04:23 They'll even throw in a free t-shirt for doing the tutorial. Are you or a colleague trying to learn

01:04:28 Python? Have you tried books and videos that just left you bored by covering topics point by point?

01:04:33 Well, check out my online course, Python Jumpstart by building 10 apps at talkpython.fm/course

01:04:39 to experience a more engaging way to learn Python. And if you're looking for something a little more

01:04:44 advanced, try my Write Pythonic Code course at talkpython.fm/Pythonic. Be sure to subscribe

01:04:50 to the show. Open your favorite podcatcher and search for Python. We should be right at the top.

01:04:54 You can also find the iTunes feed at /itunes, Google Play feed at /play and direct RSS feed

01:05:01 at /rss on talkpython.fm. Our theme music is Developers, Developers, Developers by Corey Smith,

01:05:08 who goes by Smix. Corey just recently started selling his tracks on iTunes. So I recommend you

01:05:12 check it out at talkpython.fm/music. You can browse his tracks he has for sale on iTunes and listen

01:05:19 to the full length version of the theme song. This is your host, Michael Kennedy. Thanks so much for

01:05:24 listening. I really appreciate it. Smix, let's get out of here.


01:05:45 Developers, developers, developers, developers, developers, developers, developers.

