00:00 Containers are revolutionizing the way that we develop and manage applications.
00:03 These containers allow us to build, develop, test, and even deploy on the exact same system.
00:08 We can build layered systems that fill in our dependencies.
00:12 They even can play a crucial role in zero downtime upgrades.
00:16 This is great until you end up with five different types of containers, each of them scaled out, and you need to get them to work together,
00:23 discover each other, and upgrade together.
00:25 That's where Kubernetes comes in.
00:27 Today, we'll meet Kelsey Hightower, a developer advocate on Google's cloud platform.
00:31 This is Talk Python to Me, episode 126, recorded June 9, 2017.
00:54 Welcome to Talk Python to Me, a weekly podcast on Python, the language, the libraries, the ecosystem, and the personalities.
01:07 This is your host, Michael Kennedy. Follow me on Twitter, where I'm @mkennedy.
01:11 Keep up with the show and listen to past episodes at talkpython.fm, and follow the show on Twitter via @talkpython.
01:18 This episode is brought to you by Rollbar and us at Talk Python Training.
01:22 Be sure to check out what we're offering during our segments.
01:25 It really helps support the show.
01:26 Hey, everyone. Just a quick heads up on this episode.
01:30 I recorded it on site, on location, with Kelsey Hightower, which is a great experience, cool opportunity,
01:36 but it turns out the audio was a little echoey and not the same as back in the studio where I normally record.
01:42 So you'll have to forgive a bit of an echo on this one.
01:44 It's a great conversation, and I hope you learn a lot from Kelsey.
01:47 Now, let's chat with him.
01:48 Kelsey, welcome to Talk Python.
01:50 Awesome. It's awesome to be here.
01:52 Fantastic show. So I'm honored to be a guest.
01:54 Thanks. It's an honor to have you as a guest.
01:56 You had such a cool keynote at PyCon. It was really fun.
02:00 Yeah, I was honored to do that one.
02:02 That was, I think, me completing the mission.
02:04 You know, a Python meetup is where I first started, and then having the honor to do a closing keynote at PyCon 2017 was amazing.
02:12 You've completed the circle of your Python journey?
02:15 Yeah, I think so. When you think about it, over the years,
02:18 it was probably six or seven years between my very first talk and giving that Python keynote.
02:22 So a lot has changed for me personally during that time frame.
02:26 Yeah, I'm sure it has.
02:27 So I totally want to talk about PyCon, but let's start with your story.
02:31 How did you get into programming in Python?
02:33 I think, like most people, you know, I was looking for my first real programming language.
02:37 So I started with Bash as a system administrator.
02:40 And you kind of run into some limitations with Bash, you know, things you can and can't do.
02:43 So I reached for things like Python.
02:46 So I was working in the financial industry, and I just needed a tool that could actually replace some of our COBOL jobs.
02:51 Okay.
02:52 So we had these old things that like transform data for the mainframe.
02:55 Can you believe that stuff still runs?
02:58 Oh, yeah.
02:58 I mean, and fast.
02:59 Yeah.
03:00 Right.
03:00 And people still make a ton of cash on it.
03:02 So my first, you know, dipping my toes in the water, I learned packed decimal, right?
03:06 Because I had to convert a lot of the formats for the mainframe.
03:09 So you're dealing with EBCDIC, fixed-length formats.
03:12 And then Python was like a really straightforward language, but it also had great libraries to kind of deal with a lot of the math stuff and, you know, things that you would do even in the mainframe world.
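As a rough illustration of the kind of conversion work described here, a minimal sketch of unpacking a packed-decimal (COMP-3) field in pure Python; the sample bytes and sign handling follow the common mainframe convention, but the exact field layouts from that job are assumptions:

```python
def unpack_comp3(data: bytes) -> int:
    """Decode a packed-decimal (COMP-3) field: two digits per byte,
    with the final nibble holding the sign (0xD means negative)."""
    digits = []
    for b in data:
        digits.append(b >> 4)     # high nibble
        digits.append(b & 0x0F)   # low nibble
    sign_nibble = digits.pop()    # last nibble is the sign
    value = int("".join(str(d) for d in digits))
    return -value if sign_nibble == 0x0D else value

print(unpack_comp3(b"\x12\x34\x5c"))  # 12345
print(unpack_comp3(b"\x12\x34\x5d"))  # -12345
```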
03:21 Right.
03:21 Absolutely.
03:21 Okay.
03:22 And so you started out in Python and now, well, started out in Bash, got into Python.
03:29 And now what are you doing?
03:30 Mostly Go?
03:31 Yeah.
03:32 So Golang is kind of my language of choice.
03:34 You know, something happened in the industry where, when I think back, most of the tools you were using
03:39 as a sysadmin were written in Python, whether it was apt-get or yum or, you know, things like Ansible.
03:45 It was kind of like the sysadmin's go-to language, you know; after Perl, there was Python.
03:49 And, you know, something happened where Golang just became like the thing you'd use for like distributed systems and containers.
03:56 So kind of made a switch to that a couple of years back.
03:58 That's cool.
03:59 Because, yeah, you're all about the containers these days, right?
04:01 Yeah, containers, distributed systems, this whole new world where we're like packaging our apps in like this universal format
04:07 and then just giving it to the system to deploy.
04:10 So that's been a pretty interesting journey.
04:12 Yeah, yeah.
04:12 It's very interesting.
04:13 And I think we're just at the beginning of this really, right?
04:16 Yeah, I think a lot of times I think these are patterns we've always been doing.
04:19 If you think about like in the Python world, we were dealing with like virtualenv.
04:23 We were trying to create these self-contained deployments.
04:25 So virtualenv, having pip.
04:27 And when you put those things together, things like Docker is the natural evolution where you say, hey, I create this virtual environment.
04:34 Everything I need is in there and self-contained.
04:36 How about we just package that up and ship that around?
04:38 Right.
04:39 Instead of creating a virtual machine or even a real machine on real hardware, setting it up, setting up a virtual environment, get it all going.
04:48 Now you just have, what, like a Docker file?
04:51 Yeah, and I think the Docker file, if you think about what it looks like, if you've never written one before, it's essentially like an organized bash script in many ways, right?
04:58 There's some semantic meaning to some labels and things you can do in there.
05:01 But for the most part, you're making a repeatable set of steps.
05:04 You know, maybe you install Python or you have a base image where Python's already installed.
05:08 And then within that Docker file, you will say things like pip install my requirements.txt.
05:13 And then when you're finished, you can just say, here's how you start my app.
05:17 So it just gives us a repeatable process to what we've already been doing.
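A minimal sketch of the kind of Dockerfile described here; the base image tag, file names, and start command are illustrative assumptions:

```dockerfile
# Start from a base image where Python is already installed.
FROM python:3.6
WORKDIR /app
# "pip install my requirements.txt" -- the repeatable dependency step.
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# "here's how you start my app"
CMD ["python", "app.py"]
```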
05:20 Sure.
05:21 It's like scripting, automating all of the best practices we should be doing, but maybe aren't.
05:27 Yeah, I think so.
05:28 And also, usually on a team, you have like one person who does know how to build an app from scratch, right?
05:33 You usually keep this like "build my app" type script inside of the directory.
05:36 And a Docker file has become that.
05:39 And it also now produces an artifact so that once you build it once, ideally you can reuse it in multiple environments.
05:45 Right.
05:46 Yeah, that's cool.
05:46 Lets you test closer to production, for example.
05:51 But you definitely don't want to lose that person that knows how to build that thing.
05:54 Oh, yeah.
05:54 We're not replacing people.
05:55 I think what we're doing is also there's a lot of open source projects, right?
05:58 You can imagine going to get like your favorite web app built in like Django, for example.
06:02 And then having to deal with like what version of Python, what dependencies, how to build it.
06:07 Sometimes people just want to play with it right now.
06:10 So an alternative, and this is key because some people are starting to replace their documentation, their scripts with just Docker files.
06:16 And that's infuriating to a lot of people.
06:18 So I think you can complement your installation by saying, hey, here's the raw, you know, maybe if you're still doing eggs or wheels for the actual application.
06:27 But an alternative could be, oh, here's a Docker container if you just want to just run it on your Docker machine.
06:33 Yeah, yeah, absolutely.
06:34 Absolutely.
06:35 Let's talk really quickly about PyCon before we dive deep into Kubernetes.
06:41 So you got to go to PyCon and give this presentation.
06:43 That was super amazing.
06:46 What was your favorite thing about the conference?
06:48 Well, I think the thing I like the most about PyCon is that most people are there because they want to be, right?
06:53 And I think that's a unique thing that we don't see at a lot of tech conferences.
06:56 Either tech conferences could be expensive, so you really need to be reimbursed by your company or you have to take time off work to do it.
07:03 And since PyCon is on the weekend, a lot of people choose to be there.
07:07 Some people, a lot of people pay their own way.
07:10 So just the experience of the, you know, I call it the, I guess kind of the hallway track, right, where you just walk around, you meet new people.
07:16 I met people with the, what do they call it, the holy staff or whatever.
07:20 You know, this thing travels the world.
07:22 Yes, that's right.
07:23 Staff of enlightenment.
07:24 That was Anthony Shaw.
07:25 He brought it from Australia.
07:26 Yeah, so you have people like that walking around PyCon like, hey, we just want to get a picture of you with this staff of enlightenment.
07:32 And you know that there's this organic feel to why people are there.
07:36 It's a deeper community.
07:37 It's like one of these communities that have been around for so long.
07:40 This is a family reunion for most people at PyCon.
07:43 So it just feels different.
07:44 Yeah, yeah.
07:45 I just had Kenneth Reitz on the show.
07:47 He's done a bunch of Python stuff and said, this is my favorite time of the year, right?
07:51 This is actually like the favorite week.
07:53 And so I feel that way as well.
07:54 It's really, really, really amazing.
07:57 We get great keynotes like yours.
07:58 But I really, besides the keynotes, I honestly try to just do the hallway track.
08:02 Like these chances of meeting people who you would not normally get to talk to.
08:06 You just get to stop.
08:07 And then I drop in on YouTube and watch the presentations.
08:10 Yeah.
08:11 And I think the other element, I didn't get to enjoy it this year.
08:13 But my very first real conference was PyCon in Atlanta.
08:17 And the sprints, you know, where you're actually working side by side with, I guess you can call them the big names in Python, the people that are actually doing the hard work, maintaining the packages, maintaining the core.
08:29 And they just make space for you to come in and learn, do your first contribution.
08:33 That was my first contribution to Python.
08:35 I was working on PyPI at the time, and pip.
08:38 Yeah.
08:38 So some of the distutils and those tools.
08:40 So I think that's kind of the thing that's really unique to PyCon is that you can actually sprint with the people that are working on this stuff.
08:46 That's actually a really good point because most conferences don't have that.
08:49 The big names go up, they speak at you, and then they leave.
08:52 And that's that.
08:53 That's right.
08:53 Yeah.
08:54 So very, very cool.
08:55 Okay.
08:55 There's also the after party events, right?
08:58 We went to the dinner at the art museum, for example, where we got to meet up.
09:03 And we played this really cool game, the classical programmer painting naming game, which I think any programmer must do when they go to an art museum, right?
09:12 Yeah.
09:12 I think for those that have never played the game before is you go to a museum.
09:16 Where it's supposed to be serious business.
09:17 Very proper formal art, Monet and so on.
09:20 Yes.
09:20 People there who are appreciating the art.
09:22 And then, you know, you show up and then you start to give names or meaning to paintings you know nothing about.
09:27 And I think you start to learn more about people as they pick names or they give an explanation for what they see.
09:33 It's like, wow, this person really needs to take a vacation.
09:35 Yes, exactly.
09:36 Maybe there's like a picture of with like a burning field in the background, people working.
09:39 You're like, these are like Java enterprise developers, like finishing a sprint or something like that.
09:44 Or you give it a name of Docker 001, right?
09:47 It's on fire all the time.
09:48 Awesome.
09:48 Awesome.
09:49 All right.
09:50 Cool.
09:50 So let's talk about Kubernetes.
09:52 We spoke about Docker and we spoke about containers.
09:55 How does Kubernetes relate to that?
09:57 What is Kubernetes and how does it relate to Docker?
10:00 So I think when you think about like Docker, you think about I have a single machine.
10:04 And, you know, if we think about the way we used to do deployments or still do, right?
10:08 I don't think it's that far removed.
10:09 You can copy some bits to a machine.
10:12 If you don't have the environment set up, you can probably do something like virtualenv, get all your dependencies there.
10:16 Maybe use an init system to start it up or maybe just, you know, fork it into the background.
10:21 So what Docker does is says, hey, let's contain all of that, make a reusable package and then provide an API.
10:26 So no more logging into servers, right?
10:28 You got a Docker machine.
10:29 You have the Docker command line tool.
10:30 You can say, hey, Docker, run my web app.
10:33 Okay.
10:33 Docker, run my SQL.
10:35 Docker, run all my dependencies.
10:37 And then you look at that and you say, well, how do I scale that across a lot of machines?
10:40 And I think that's where Kubernetes comes in, has a new API.
10:43 You can say things like, hey, give me five of these, decouple from a particular machine.
10:48 And you can also do things like service discovery being built in, right?
10:51 So that's kind of a deeper topic where you already have MySQL deployed in this Kubernetes system.
10:56 You have no idea where it's going to land.
10:58 It's going to get its own unique IP.
11:00 How do you connect to it?
11:01 And those are the kind of features that are built into Kubernetes from a bigger cluster management standpoint.
11:05 Yeah, that's fantastic.
11:06 It solves such a great problem because Docker is pretty straightforward when you're doing one container.
11:13 Your app is in the Docker container.
11:15 But if you've got different tiers, a database tier, like a web tier, a load balancing tier, like different services, like if you're doing microservices, it gets really crazy, right?
11:26 Like how do you even like identify all of these things?
11:29 And so Kubernetes is kind of like that, both the management of all the containers plus that glue that binds them together.
11:36 Yeah, it's like a big framework where you can do all these policy-based decisions.
11:39 Like you can have things like autoscalers or a core concept inside of Kubernetes.
11:43 You can say, whenever this aggregate set of my containers reaches 10% of CPU utilization, scale them up based on the step function.
11:53 And you can define that by just using simple YAML.
11:56 You put it in place.
11:57 And the whole system works that way.
11:58 So it gives you a way to think about a lot of computers, like a single computer, just based on this declarative policy-based way of thinking about compute.
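That declarative autoscaling policy can be expressed in a few lines of YAML. A sketch of a Horizontal Pod Autoscaler; the deployment name, replica bounds, and API version (which varies across Kubernetes releases) are illustrative assumptions:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: webapp
spec:
  scaleTargetRef:           # the aggregate set of containers to watch
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 10   # scale up when average CPU crosses 10%
```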
12:06 All right.
12:07 So let's see.
12:08 We take a Docker file.
12:09 I'll start with just one for now.
12:11 We take a Docker file.
12:12 We want to run that in like a cluster.
12:14 We can go to Kubernetes and go, Kubernetes, run this, scale me five of them.
12:19 And you just get like an endpoint and a port, and you just treat it as one thing, and it just round-robins and load-balances across them?
12:26 Yeah.
12:27 So I think for most people it's great to think about this in layers, right?
12:29 So you may use Docker for your day-to-day development and then you're going to produce an artifact.
12:33 Let's call it WebApp001.
12:35 And you push that to a registry.
12:37 So the price of admission to Kubernetes is that container in a registry because that's how it pulls things down.
12:43 This is kind of the fundamental building blocks.
12:45 And then you may have a deployment.
12:46 Hey, deploy three of these.
12:48 And at that point, you just have three of these things sitting there with their own IP addresses.
12:51 They're going to land on the right set of machines.
12:54 And then the next layer is maybe you want to expose it to the outside world.
12:57 Then you can create what we call like a service and say, hey, Kubernetes, anything that looks like that WebApp, which we identify by labels, you know, app equals foobar.
13:07 And then the service will say, okay, I'll go and find those and keep that curated list.
13:11 So if one of them crashes and comes back up, it knows to replace them in the load balancer.
13:15 And then that layer will handle incoming traffic and make sure that it routes directly to the right pods to handle it.
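Those layers might look like the following pair of manifests: a Deployment that asks for three replicas of a container image from a registry, and a Service that selects them by label. The image name, ports, and label values here are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3                     # "deploy three of these"
  selector:
    matchLabels:
      app: foobar
  template:
    metadata:
      labels:
        app: foobar               # the label the service matches on
    spec:
      containers:
      - name: webapp
        image: registry.example.com/webapp:001   # pulled from a registry
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  type: LoadBalancer              # expose it to the outside world
  selector:
    app: foobar                   # "anything that looks like that WebApp"
  ports:
  - port: 80
    targetPort: 8080
```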
13:21 That is really super nice because that's one of the concerns, right?
13:27 Like I'm going on vacation pretty soon.
13:30 I really don't want my sites to go down and just vanish from the Internet.
13:35 But I also don't want to be like woken in the middle of the night to try to bring it back.
13:39 I mean, this is not something that happens often, but it's something that's like really bad if it does.
13:42 And so we used to solve this problem by sticking pagers on people, right?
13:47 Yeah.
13:47 So I think the whole on-call thing is born from that. It's okay to get an on-call page for something that's like, oh, wow, no one could have imagined that.
13:56 That's a true emergency.
13:57 But some of these on-call pages are like, hey, you're one instance of your site is down.
14:02 You're like, so why are you calling me?
14:05 Like, how about you just re-spin that up?
14:06 We know how to do that.
14:07 Exactly.
14:08 Did you try rebooting the server?
14:09 Yeah, exactly.
14:10 How many times?
14:11 And lots of people wake up and they do that.
14:12 They wake up and say, you know what, let me just kick the server and then go back to sleep.
14:16 And that's the thing that kind of disappears in a system like Kubernetes where you just say, look, I want three no matter what.
14:22 So if a machine goes down, it just knows like, hey, let me just spin one up because the declaration you used to create the first one is still there and it can use the same template over and over again.
14:31 Nice.
14:32 And you basically pin it to a version.
14:33 Like right now you should have three of version three of our app.
14:37 If it goes down, recreate that, right?
14:39 Yeah.
14:39 And that's the key about this whole container thing, right?
14:42 Is people say immutable and that's probably a good way to think about it.
14:46 But, you know, while it's running, of course, there's memory being mutated.
14:49 But on disk, you're starting from the same point over and over again.
14:52 So if you do have to restart two or three, you're always going to get back to the same point that you expect.
14:58 Yeah, that's awesome.
14:59 So really, you're almost to the point where the only reason it wouldn't come back alive, okay, is some other service.
15:05 It depends on changes, right?
15:06 Like if you're doing microservices and the API like completely freaks out your app, then you've got to fix it.
15:12 But if something went wrong with that machine, it's just, well, it's just going to fix itself, right?
15:16 Yeah.
15:16 And it'll come back and it'll be fine.
15:18 And I think you bring up an important point.
15:19 Kubernetes doesn't solve all the problems, right?
15:21 It's not magic.
15:22 It's not magic.
15:22 We can't just sprinkle magic on it.
15:23 No, no, no.
15:23 It's definitely not magic.
15:24 You have people, like you're right, if someone were to deploy a new version of the dependency that you have and the API is fundamentally different, yeah, Kubernetes will say everything looks good to me and your app will be down.
15:36 So you have to think about where Kubernetes starts and stops and then what becomes an application concern.
15:41 And that's where your classic metrics monitoring come into play.
15:45 Yeah, but it does make it easier to test as a kind of a cluster, right?
15:49 So if you have this microservice thing with five services, you could just locally or in some staging areas spin up all the various pieces, make sure they click together, and then just make sure what you deploy is all those pieces, right?
16:01 Yes, we're definitely getting to a place where now we have an API to reason about this kind of thing, right?
16:06 You have five different deployments that need to talk to each other.
16:09 You can associate a service, give every deployment a name, and then just reference those services by name, whether there's one or five of them, and you can be sure to connect to one that's available.
16:20 So that's just much easier to do versus tribal knowledge, scripts, and just knowing how to do things.
16:25 Yeah, it sounds better for sure.
16:26 How do you know from within your app?
16:28 Like, suppose I have a Flask app like you had in your demo, and it talks to, say, a database server that's also running in a different container but also managed by Kubernetes.
16:37 Do I talk to it by name, by IP address?
16:40 How do they find each other?
16:42 So today in Kubernetes, what you would do is ideally just say, hey, I want to connect to MySQL by name, MySQL, on port 3306.
16:50 And what will happen is if you have a deployed container in Kubernetes, ideally you'll have a service in front of it, and the service name will be called MySQL.
16:58 So what that does is Kubernetes has these control loops.
17:01 So what the control loop will do is say, all right, let me find the MySQL server, get its IP address, and update the internal DNS server that we have, cluster-wide.
17:09 So all you have to do is call MySQL, and the name maps to the IP address associated to that service; that IP will be fixed for the life of the service.
17:19 So even though the container may go up and down and get a new IP, you'll have this kind of virtual IP that maps to it.
17:25 And it kind of runs its own DNS, so you talk to it by the name.
17:28 You've given even maybe that load-balanced group of things, and it just finds it?
17:32 Exactly.
17:33 So that becomes a virtual IP that allows us to have a stable naming convention, kind of the key to doing service discovery correctly.
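From application code, that stable name is all you need. A minimal Python sketch, assuming a Service named mysql; the user and database names are made up, and Kubernetes also injects `MYSQL_SERVICE_HOST`-style environment variables for each service:

```python
import os

def mysql_dsn(user: str = "app", db: str = "appdb") -> str:
    # The Service name "mysql" resolves via cluster DNS to the stable
    # virtual IP; Kubernetes also injects <NAME>_SERVICE_HOST/_PORT
    # environment variables we can fall back on.
    host = os.environ.get("MYSQL_SERVICE_HOST", "mysql")
    port = os.environ.get("MYSQL_SERVICE_PORT", "3306")
    return f"mysql://{user}@{host}:{port}/{db}"

print(mysql_dsn())
```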
17:39 Okay.
17:40 This sounds pretty cool.
17:42 Does this work pretty well in your own data center?
17:44 Like if I had 10 big Linux machines, could I just set this up?
17:47 Yeah, so that's always been the goal of Kubernetes.
17:49 We think of it like, you know, some people refer to it like the Linux of distributed systems.
17:54 Okay.
17:54 Right, so the goal is it doesn't really matter too much what distro you have or where you run it.
17:58 We have all these pluggable, what we call them, cloud providers.
18:01 So if you're in Amazon or Microsoft or Google, it will kind of detect that and like integrate with all the stuff that's there.
18:06 So if you say, give me a load balancer, it'll spin up the proper load balancer for that environment.
18:10 If you're on-prem, you're free to make your own integration.
18:12 So maybe you have something like an F5 load balancer.
18:15 You can do your integration there.
18:17 But for most functionality, it doesn't matter if it's on your laptop or in your own data center.
18:21 You install it, you hook it up correctly, you have a Kubernetes cluster.
18:25 And you guys at Google also have, you guys do some cloud computing, right?
18:29 Yes, we do.
18:30 We do a lot of cloud computing.
18:31 And you guys have a Kubernetes service as part, like kind of, you've got Google Compute Engine, you have Google's Kubernetes engine, right?
18:39 Yeah, we call it Google Container Engine.
18:41 Container engine, right.
18:42 So it's largely the open source Kubernetes, deeply integrated into Google Cloud.
18:47 And we just try to give you all the things we know you would need, like access to load balancers, storage, metrics, monitoring, logging, audit logs, that kind of thing.
18:55 Okay.
18:55 And so tell me about the persistence stuff.
18:58 This is almost more of a Docker question than it is a Kubernetes question.
19:02 But if I run MySQL in one of these containers, ultimately I want that data to not be transient, right?
19:07 Where does that stuff go?
19:09 On one hand, I could obviously just hook into like RDS and Amazon or some database as a service.
19:14 But assuming I'm not doing that, if I'm storing it on the disk, where does it go?
19:18 So this question is probably the biggest source of confusion because of the defaults.
19:23 If you take Docker out of the equation and I tell you I have a server and I install MySQL on the server, apt-get install mysql, and you write data.
19:33 Where does it go?
19:34 It goes to whatever volume you write to on that server.
19:37 If the server dies, there goes your data.
19:39 Now let's add Docker to the equation.
19:42 You say Docker run MySQL.
19:43 Now the default in Docker is that you're going to get a temporary file system and you won't be able to write to those.
19:49 Inside the container.
19:49 Inside the container.
19:50 Yeah.
19:51 But really inside your true root, right?
19:52 Still on the disk.
19:53 Yeah.
19:53 Except for it's going to get its own unique name.
19:55 And by design, by default, we're just going to clean up when that container dies.
20:00 Here's the temp files.
20:02 We don't need any more.
20:03 Yeah, exactly.
20:03 Now, if you wanted to do this, you could just use the same /var/lib/mysql.
20:07 Just mount that into the container.
20:10 You can say docker run mysql,
20:12 and mount the host's /var/lib/mysql into the container's /var/lib/mysql.
20:16 And everything you know about writing data to this is pretty much going to be the same.
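That host mount is a single flag on the command line. A sketch, where the detached flag and the root password (which the official mysql image requires) are illustrative:

```shell
# Mount the host's /var/lib/mysql into the container at the same path,
# so the data outlives the container.
docker run -d \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v /var/lib/mysql:/var/lib/mysql \
  mysql
```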
20:20 Okay.
20:21 So there's no magic there.
20:22 It's just that the default for a container is complete isolation.
20:25 I see.
20:26 So, yeah, right.
20:27 So you basically configured your MySQL to write to, like, slash datastore or var, datastore, whatever.
20:34 And then that, as long as you map that somewhere on your main machine, you could throw away that container, recreate it.
20:39 It'll read from there again.
20:41 It'll carry on, right?
20:42 Exactly.
20:42 And I think we're all spoiled by just having full access to a machine.
20:45 And then, like, wherever it writes, that's where it writes.
20:48 But inside of the container, since you can run multiple of these containers at one time, you kind of want your own file system space to do whatever you want.
20:55 But remember, you can always just mount things.
20:58 You just have to be explicit versus it being an implicit contract.
21:01 Right.
21:01 And you just put that in the Docker file?
21:02 So that's the thing.
21:03 So in Docker files, that's where Kubernetes starts to be very advantageous to people or something like Docker Compose.
21:10 Some of these semantics around run it like this is kind of where Kubernetes starts to shine.
21:16 So when you look at a Kubernetes manifest, you say, run this container.
21:19 And, oh, these are the volumes that come from the host.
21:22 And I want them mounted here into the container.
21:24 So you look at the full spec, you can see, oh, this is what should happen.
21:28 And you know that that would be the right semantics versus, like, the Docker file, you can express some volumes.
21:33 But you really need to make sure that you do the right thing when you say Docker run, mount all these things up.
21:38 Or do something like Docker Compose.
21:39 Right.
21:40 And it's a little more like your Kubernetes YAML file can just put that all together, right?
21:44 Yeah, you want that to be the whole contract.
21:45 When we think about a pod, we say a pod is the network, the container, and the volumes that it needs.
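A sketch of that contract in a single pod spec, with names and paths as illustrative assumptions: the manifest declares both where a volume comes from on the host and where it's mounted inside the container, all in one place:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql   # where the container sees it
  volumes:
  - name: data
    hostPath:
      path: /var/lib/mysql        # "volumes that come from the host"
```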
21:53 This portion of Talk Python to Me has been brought to you by Rollbar.
21:56 One of the frustrating things about being a developer is dealing with errors.
21:59 Ugh.
22:00 Relying on users to report errors, digging through log files, trying to debug issues, or getting millions of alerts just flooding your inbox and ruining your day.
22:09 With Rollbar's full-stack error monitoring, you get the context, insight, and control you need to find and fix bugs faster.
22:15 Adding Rollbar to your Python app is as easy as pip install Rollbar.
22:19 You can start tracking production errors and deployments in eight minutes or less.
22:23 Are you considering self-hosting tools for security or compliance reasons?
22:28 Then you should really check out Rollbar's Compliance SaaS option.
22:31 Get advanced security features and meet compliance without the hassle of self-hosting, including HIPAA, ISO 27001, Privacy Shield, and more.
22:41 They'd love to give you a demo.
22:42 Give Rollbar a try today.
22:44 Go to talkpython.fm/Rollbar and check them out.
22:47 How do you guys use Kubernetes at Google?
22:50 Like, I saw that this sort of was born out of the Borg, which is a pretty awesome name.
22:58 And now it's an open source project, and it's hosted by, what is it, the Cloud Native Computing Foundation?
23:03 Yep.
23:04 So Kubernetes was born.
23:06 CNCF is a foundation kind of designed for all these cloud native tools.
23:10 Fluentd, Prometheus, Kubernetes, OpenTracing.
23:14 This is where you start to do application level tracing on how web requests flow through a system.
23:19 So all of these collections of tools we think of make up the cloud native stack, kind of cloud native idea.
23:24 And where does Kubernetes come from?
23:26 Internally, we have a thing called Borg, but that's kind of a bit misleading.
23:29 People say Borg, and they mean like maybe six or seven layers of stuff.
23:33 Right.
23:33 Okay.
23:34 So Kubernetes represents one layer of that kind of stack, right?
23:37 So Kubernetes would be the part that knows how to manage resources across a bunch of machines,
23:42 knows how to deploy containers, and then serves as a framework of information for things like metrics and monitoring.
23:49 And you would bring in other tools for that.
23:51 So I think when you say Borg internally, it means a lot to Google.
23:54 It's kind of like a catch-all, but there's lots of stuff in there.
23:57 If you want something like Borg in the real world, you would say Kubernetes plus Prometheus plus there's a new project called Istio that works really great for microservices.
24:06 So Istio's idea is that you have these sidecars that know how to do exponential back-off, retries, TLS mutual auth, and authentication and policy between microservices.
24:16 You take that and maybe some more, then you get what we call Borg, right, for the most part.
24:22 Yeah.
24:23 So is Kubernetes actually running inside Google as part of this thing that's called the Borg now?
24:27 No.
24:27 So Borg is its own system.
24:29 It has a lot of features, and it actually has a lot of the way Google works, the way Google infrastructure works.
24:35 Super specialized to Google.
24:37 Hyper-optimized, right?
24:38 But if you think about Kubernetes, it's used a lot in our cloud offerings.
24:43 Like if you think about what the cloud ML and the TensorFlow team is doing.
24:46 So they use Kubernetes for their stuff.
24:49 And you can also imagine a lot of our customers are of course running on top of Kubernetes.
24:52 Kubernetes can also be used for other product offerings.
24:55 You can imagine something like building hosted services or something like cloud functions.
25:00 So Kubernetes gives us a really nice API for managing containers in a cloud platform.
25:04 Do you, or I don't know if you can even talk about it, but do you have groups of people who are basically running cloud computing services on top of Google App Engine and these types of things?
25:17 Like they're putting their own infrastructure and then reselling it as a service on top of what you guys are doing?
25:22 Well, you're talking about internally.
25:24 No, I'm talking like, is there a company that like is like DigitalOcean or Linode type of company?
25:31 Oh, like a second tier cloud provider just reselling services?
25:35 They've got some special layer on top of what you guys are doing.
25:39 I don't know if there's any of those that I can talk about, but there are public examples like Snapchat, for example.
25:46 You know, they built their platform.
25:48 And with most cloud providers, what you'll see is they'll use, like, our storage, you know, if you need petabytes and petabytes of storage at scale, you'll see something like that.
25:59 And maybe they turn around and sell that as some other thing that they call by another name.
26:03 Sure.
26:03 But typically to be a cloud provider reselling another cloud provider, I think you'll get destroyed on margins.
26:09 The best you could probably do is what Rackspace is currently doing, where they're providing support, where it's your account.
26:15 You'll be paying the cloud provider and they put their support premium on top.
26:21 Yeah, yeah, that makes sense.
26:21 I mean, the margins are, it's a super competitive business.
26:24 Yeah.
26:24 I mean, like, why would you buy a VM knowing that it's running on like another cloud provider and pay more for it?
26:29 Like, that's going to be tough.
26:30 Yeah, there has to be a secret sauce, something extra.
26:32 Okay.
26:33 Yeah.
26:34 Yeah.
26:34 Very interesting.
26:35 So it feels to me like if I use Kubernetes, I'm a little bit more removed from the infrastructure of any one cloud provider.
26:45 So can you maybe speak to like the relative lock-in of like one cloud provider versus the other and using, say, Kubernetes?
26:51 Yeah.
26:51 So this lock-in thing is like, I look at it and I think I made a quote recently.
26:56 If you try to switch products to avoid lock-in, that's just another form of lock-in.
27:00 And Kubernetes, the API is open.
27:03 The project happens to also be open source.
27:06 So if you run it on your own data center, on your laptop, Google Cloud, Azure, or Amazon, you're going to get the same API.
27:12 So essentially, you can be locked into the Kubernetes API across all these environments.
27:17 So the trade-off there is maybe for some people, that's better than being locked into, say, one endpoint like ECS or Heroku.
27:25 So I think when we talk about lock-in, we ask ourselves, what trade-offs and compromises are we willing to make?
27:31 And Kubernetes offers enough value that if you lock into it, right, to go with the terminology we're using,
27:38 then you feel like you can be a little bit more portable than what we were doing with virtual machines.
27:42 Sure. Yeah, I totally agree.
27:43 Also, it depends on how much you kind of buy into the full set of APIs, right?
27:49 Like if you use Google Blob Storage and hosted databases and all of the specific Google App Engine APIs, well, you're more tied into Google's cloud than if you just use Linux and straight up stuff and write to your own disk and your own server.
28:06 Exactly. I think a lot of us are starting to ask ourselves what the trade-off is worth.
28:09 So when I look at the spectrum of trade-offs around lock-in, you've got a couple of options.
28:14 Like let's say you use a hosted database service.
28:16 Well, they manage the database for you.
28:18 They back it up.
28:18 They control the version.
28:20 But you feel it's a good compromise for your time because you're using an open protocol.
28:24 I can go and talk to the MySQL protocol in any environment that I want.
28:29 So you know what?
28:30 You can host that.
28:30 Now, it gets a bit more tricky when you're dealing with something like maybe, let's say, Cloud Spanner.
28:35 That is its own thing.
28:37 It offers its own capabilities.
28:38 Tell people what Cloud Spanner is.
28:40 So Cloud Spanner is this, like, we are challenging this idea that if you have SQL, you can't scale it horizontally.
28:46 You can also have it multi-regional.
28:48 We can also do distributed transactions.
28:51 So this would be, you have the SQL language you know and love.
28:54 So imagine having a SQL or a Spanner database in Asia, in California, and Europe, and you're able to write to them
29:03 and then actually have all the data be available in all the regions and be able to do things like distributed transactions.
29:09 So you're not making the tradeoff between NoSQL, where you have eventual consistency, and, you know, the traditional database stuff.
29:16 So that's one of those things where people look at it and say, you know what?
29:19 It's really hard to maintain my own MySQL shards and all the magic that goes behind that.
29:24 Maybe I'm willing to trade off a little bit for this kind of proprietary thing that only runs in one cloud provider.
29:30 You just got to make that decision.
29:31 Yeah, that sounds, I remember what that is now.
29:33 That sounds really, really advantageous for global companies, right?
29:38 And if you're on the internet, maybe global, right?
29:41 Yeah, what we're hearing is best of breed.
29:42 So what we'll see now is that there'll be some companies that say, hey, we're using Amazon because we like a few of these things where we got a bunch of data inside of RDS, you know, their MySQL management thing.
29:52 And they say, no, we're going to leave those apps there.
29:54 But you know what?
29:54 We like BigQuery.
29:55 So we're going to deploy some stuff and just use BigQuery on the Google side.
29:59 And you're starting to see, you've seen this.
30:01 Companies are huge.
30:02 They may have like 50 different departments producing their own apps, their own business units.
30:07 So what you'll see is each of them choose their own story or how they want to do things.
30:11 And they'll end up having usage on all platforms.
30:14 And maybe there's no reason to reconcile when they're going for best of breed.
30:18 Yeah, you know, this service works really good for this.
30:20 That service works good for that.
30:22 For a Cloud Spanner, it makes a lot of sense to have this geolocated and maybe starting by region like Asia and Europe and US.
30:30 And so when you do a query, it like hits the local version.
30:34 How do you get your web app near each one of those regions?
30:38 So for each query or each interaction with the database is kind of local.
30:40 So the goal is, of course, you would try to optimize in a way where you write the local stuff so it's local to Asia.
30:47 But if you're in the US and let's say you failed over and you need to query some of that data, ideally we're replicating this, right, synchronously across these regions when it's time.
30:57 So the goal of Spanner is that you don't want to make that tradeoff.
30:59 You know that failures are going to happen.
31:01 Ideally, you want your data in as many places as possible.
31:04 So Spanner kind of gives you that ability.
31:06 So you don't think about that in your app.
31:08 You don't think about partitions the same way.
31:10 So Spanner tries its best to do the partitioning for you under the covers.
31:14 It does the scaling underneath the covers for you.
31:17 Depending on how much money you have, you can keep scaling horizontally.
31:20 But some people want that.
31:21 The last thing you want to do is stop everything and repartition your database.
31:25 That's a nightmare for people that have ever done that before.
31:27 If you have petabytes of data.
31:28 Yeah, for some people, you just start a whole new cluster and you're like, we'll just phase that thing out.
31:34 So I think it's one of those things where you just got to think of a tradeoff opportunity in time.
31:38 Yeah, I mean, we feel like we have fast internet and these cloud, they're very fast and whatnot.
31:43 But, you know, Amazon, I don't know about Google, but Amazon has a service where you can literally FedEx hard drives as like the fastest way to upload data.
31:51 Yeah.
31:52 I mean, you're going to see this quite a bit.
31:53 I mean, the more data people produce, you need these white glove services where someone shows up with a truck, and maybe they use that as the, you know, rsync.
32:01 You know, you just, like, copy the big file and then rsync from there.
32:04 And I think that's going to be until we get super fast pipes, we're going to have to figure that out.
32:08 But I think that will probably be the fastest way in some cases is to just ship things around at that scale.
32:13 That's awesome.
32:13 You just gave me a vision.
32:15 So, you know, a lot of these cloud providers like Google and Azure, they've got basically what look like shipping containers, which contain the servers.
32:23 I can just see trucks that just drive up with the full container full of servers.
32:27 You fill it with data, it drives over to the next data center and it unloads.
32:32 Yeah.
32:32 Yeah.
32:32 It's possible, right?
32:34 Yeah.
32:34 I think a lot of backup companies have to do this, right?
32:36 Like if you're truly doing backups, they're off site.
32:38 And if you have lots of data, you will have a vendor come around in a truck and securely grab your store's medium and go lock it down somewhere.
32:46 Wow.
32:46 All right.
32:47 That's awesome.
32:48 So we talked about Kubernetes and it's solving the lock-in problem.
32:53 One of my biggest pet peeves with working with these cloud things, especially when you deeply tie into their APIs, is I like to go work at a coffee shop.
33:02 Or on a plane or I'm on a trip and I have crappy internet access because, you know, I don't have a good international data plan or something.
33:09 Is there a way to like run this stuff locally on my laptop?
33:12 On your laptop, you can run a project called Minikube.
33:15 Okay.
33:16 So Minikube basically takes a lot of inspiration from Docker.
33:18 So Docker for Mac or Docker for Windows.
33:21 Yeah.
33:21 This idea that you'll have this kind of lightweight VM.
33:23 Docker will be installed there.
33:24 And Minikube just says, all right, let's install all the Kubernetes components on a single node because you get the same API.
33:30 So whether you have one node or five, you get the same API.
33:33 And for a lot of people, I guess you could do that.
33:35 But me personally, I develop using my normal flow.
33:38 I don't even use containers during development time.
33:40 Like I'll use like Homebrew on my Mac, give me Postgres, give me Redis, get those protocols up.
33:46 And I just write my app outside of all the container stuff.
33:49 Once it works, then I think about packaging and then making sure that I can deploy it on Kubernetes.
33:55 So I kind of decouple those workflows.
33:57 I know some people want to make it part of their inner development workflow.
34:00 But I look at that like running integration tests.
34:02 I run unit tests locally, integration tests on the integration environment, not on my laptop all the time because they may be too big or take too long.
34:10 Yeah, that makes a lot of sense.
34:11 Okay.
34:11 Yeah, very cool.
34:12 So if I'm going to use Kubernetes, how much of an expert do I need to be in like DevOps type stuff?
34:18 So there's two parts to this.
34:20 There's I want to install Kubernetes and manage it and upgrade it.
34:24 Then you should probably learn quite a bit about Kubernetes, right?
34:27 And I think a lot of people are looking for the 10 minute, like, give me the tool where I can just twist all the knobs.
34:32 That's not reality right now, right?
34:35 There's some hosted offerings where you click the button and then they'll do everything for you.
34:39 Right.
34:39 That's GKE.
34:41 GKE, Tectonic from CoreOS.
34:43 Red Hat has, you know, OpenShift and, you know, some things to help you with Kubernetes.
34:47 But if you want to be the cluster administrator, meaning the company comes to you when Kubernetes breaks or needs to be upgraded or something doesn't work.
34:54 Yeah, you're in for a learning curve, right?
34:57 Like, have you ever watched a developer use Vim for the first time?
35:00 They can't get out.
35:02 This is a text editor, right?
35:04 And I think when you think about a fully distributed system that has a distributed database, has all these moving parts, you need to expect to study a little bit if you want to manage a cluster.
35:12 Now, if you just want to kick the tires in demo mode, then yeah, install Minikube, go at it, find some Hello World tutorials and you can get off the ground in less than a day for sure.
35:22 But if you just are a developer and you just want to use Kubernetes, this is where someone is managing it for you.
35:27 There's an API.
35:28 Then you install a little bit of tooling, kubectl on your laptop.
35:31 You look at a few examples on how to create your, you know, package your app and describe how you want it to run.
35:37 There's also things like Helm.
35:39 So Helm is a package manager for Kubernetes.
35:41 You can say Helm install etcd, Helm install Zookeeper or MySQL.
35:46 And then that will go out and get all the things you need to do, like the service, the volume, the deployment object and deploy it to Kubernetes and manage it as like a single package.
35:56 So then I can like start that pod, which is just.
35:59 Yeah, exactly.
35:59 So you can say Helm install MySQL, Redis and Kafka.
36:02 That's the things I depend on.
36:04 Then write your app, package it up, and then you can refer to them as MySQL, Redis and Kafka because service discovery is all built in.
36:11 And for a lot of people, that is the magic moment.
36:13 It's like, wow, I didn't have to even touch all that stuff.
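As a sketch of what that magic moment looks like from the app's side: inside the cluster, each Service name resolves via cluster DNS, so an app can build its connection strings from the bare service names. The ports and URL schemes below are illustrative assumptions, not the output of any particular Helm chart:

```python
def service_dsn(service, port, scheme):
    """Inside a Kubernetes cluster, a Service named `mysql` resolves
    via cluster DNS, so connection strings can use the bare name."""
    return f"{scheme}://{service}:{port}"

# These hostnames assume Services created by, e.g., `helm install`:
MYSQL_URL = service_dsn("mysql", 3306, "mysql")
REDIS_URL = service_dsn("redis", 6379, "redis")
KAFKA_BOOTSTRAP = service_dsn("kafka", 9092, "kafka")
```

No IP addresses, no hand-maintained config: the app refers to its dependencies by name and Kubernetes service discovery does the rest.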
36:15 Yeah.
36:16 So we're used to doing that at the application level, but you're kind of doing this at the server level.
36:22 Like, yeah, I install this server machine and every bit of its infrastructure.
36:26 Yeah, because some of these projects are huge.
36:28 Like if you think about a production ready to go Kafka setup, you're talking multi nodes.
36:32 They need to be configured.
36:34 They need to be set up a certain way.
36:35 And you may not want to learn all of Kafka just yet, but you may want something that's a little bit more bulletproof so you can actually test how your client does fail over.
36:42 So it's really nice to have something like Helm install Kafka with three nodes.
36:47 And then you can test out that your client does connect to all three of them and fails over correctly.
36:51 Yeah, that's really nice.
36:52 So basically, I guess to sum it up is if someone is managing it within your environment, it's pretty easy to just like deploy your app and use it.
37:01 But if you want to be the person that maintains it, that's pretty risky, right?
37:05 Because now the entire company is sort of balanced upon your...
37:10 Well, think about your Linux distro, right?
37:12 Like, you know, how many people really know how to build a Linux distro anymore?
37:15 Most people do not know that you got to bootstrap and get the kernel, get the user land, get the package manager, make sure it all works.
37:22 You just like, you use Ubuntu and you're kind of beholden to every distro of Ubuntu works right.
37:27 The upgrades don't break everything.
37:28 And infrastructure is a lot like that, like the networking.
37:31 You realize that someone has all your routes set up that you can actually go out to the internet and back again.
37:36 So we tend to think that these things are...
37:40 We think we have control of them, but they just work and they're largely invisible.
37:44 So if you get a Kubernetes setup that works the same way, you can almost forget about it.
37:48 But if you want to be the management of it, then, you know, it's going to be front and center.
37:51 Yeah, of course.
37:52 Of course.
37:53 Okay, cool.
37:54 Let's talk about your keynote.
37:55 Awesome.
37:56 I heard so much good feedback on the internet about your keynote and people really loved it.
38:02 The comments on the YouTube video, right?
38:04 So it's on YouTube.
38:05 People can go and watch it.
38:06 Yeah.
38:07 So maybe just sort of let's talk about what you did.
38:10 So you start out with a Flask app, right?
38:12 Super simple Flask app.
38:13 You said, great, it runs here.
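The keynote's starting point was a simple Flask app; a minimal sketch along those lines might look like this (the route and message are illustrative, not the exact keynote code):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # The smallest possible app: one route, one response.
    return "Hello, PyCon!"

if __name__ == "__main__":
    app.run()
```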
38:15 Well, you kind of riffed a little bit on the challenges of actually running an app, right?
38:20 Because it's one thing to have Python.
38:22 It's another to have all the dependencies and all the configurations and stuff, right?
38:25 Yeah.
38:26 So I think a lot of people, when we say, hey, just use Docker.
38:28 And you just tell the person that's doing Pythons, just use Docker.
38:31 And then when they go out and look around about what's required, they're like lost.
38:36 Because if you look at just a single container and you got to bundle in Apache, uWSGI, Flask,
38:42 set up the Unix socket, then fork them off in the background.
38:45 Oh, did you forget to change the permission on the Unix socket?
38:48 I don't know why it doesn't work.
38:49 Yeah, exactly.
38:50 So by the time you do all that and you only have one Docker file to express all of this,
38:54 you're like, something's not right.
38:55 This doesn't feel like I would do on a VM.
38:57 Because on a VM, you would have two separate processes.
39:00 And you think about them separately, right?
39:02 So I started with the Hello World.
39:04 And you go from Hello World to let's do Hello World with uWSGI and Apache sitting in front.
39:10 And then once people have that understood, and it's okay, now I get it.
39:13 I see that you're using pip to manage your dependencies.
39:15 What does it take to put that in the container?
39:17 And what we did was say, hey, let's show two.
39:20 There's one for your app and a separate one entirely for Nginx.
39:24 Let's not mix the two.
39:25 Right.
39:25 So you basically set up one Kubernetes pod for Nginx and one for uWSGI plus your app.
39:34 And you said, let's run those two together, and one passes traffic along to the other.
39:37 So slightly differently.
39:38 So the pod concept is to capture what it means to be a logical application.
39:42 Okay.
39:42 So we know that Nginx belongs to this one instance of my app.
39:46 They're inseparable, right?
39:47 So in that case, in a pod, you can actually have a list of containers.
39:50 So we know we want Nginx to be the front door.
39:53 Nginx will mount the Unix socket from the other container, your app container.
39:57 And then those two should be expressed in the same Kubernetes manifest.
40:01 That's one pod.
40:02 And then I can scale that together horizontally if I want five copies running.
40:06 Nice.
40:07 And do those two containers always run on the same machine?
40:09 Yes.
40:09 They run what we call the same execution context.
40:12 I see.
40:12 So imagine having a virtual machine.
40:14 You would just install Nginx there and your app there, right?
40:17 So they all live in the same place.
40:19 But in this case, since we're using containers, we're going to put them in their own independent
40:23 chroots and file systems.
40:24 So they're still independent from that aspect.
40:26 But where we do start sharing things is the network.
40:28 So they share the same IP address.
40:30 That means Nginx can talk to your app over local host.
40:34 Now, my app is exporting a Unix file socket.
40:37 So it's a file.
40:37 So what I can do there is say, hey, we're going to write that file to a shared file system.
40:42 On wherever that happens to be running on that Unix machine.
40:45 Yep.
40:46 So on that machine, we'll say, I want a temporary host mount.
40:49 And once that host mount is created by Kubernetes, it's going to have a unique name.
40:53 So no one else mounts it.
40:54 And then we're going to give it to both containers.
40:56 So in their own separate worlds, they see the same path.
41:02 Container A, your app, writes its Unix socket.
41:02 The Nginx container says, oh, there's a file in this mount point.
41:06 And I'm just going to send traffic to it.
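Putting that together, the two-container pod described above might be sketched as the Python-dict equivalent of a Kubernetes pod manifest. The image names, mount path, and volume name here are illustrative assumptions, not the keynote's exact manifest:

```python
# One pod, two containers, one shared emptyDir volume for the Unix socket.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "frontend"},
    "spec": {
        "volumes": [
            # Shared scratch space where the app writes its Unix socket.
            {"name": "sock", "emptyDir": {}}
        ],
        "containers": [
            {
                "name": "app",
                "image": "example/flask-uwsgi:1.0",
                "volumeMounts": [{"name": "sock", "mountPath": "/var/run/app"}],
            },
            {
                "name": "nginx",
                "image": "nginx:1.13",
                "ports": [{"containerPort": 80}],
                "volumeMounts": [{"name": "sock", "mountPath": "/var/run/app"}],
            },
        ],
    },
}
```

Because both containers mount the same volume at the same path, Nginx sees the socket file the app creates, exactly as described above.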
41:11 This portion of Talk Python is brought to you by us.
41:13 As many of you know, I have a growing set of courses to help you go from Python beginner
41:18 to novice to Python expert.
41:19 And there are many more courses in the works.
41:21 So please consider Talk Python training for you and your team's training needs.
41:26 If you're just getting started, I've built a course to teach you Python the way professional
41:30 developers learn by building applications.
41:32 Check out my Python Jumpstart by Building 10 Apps at talkpython.fm/course.
41:38 Are you looking to start adding services to your app?
41:40 Try my brand new consuming HTTP services in Python.
41:43 You'll learn to work with RESTful HTTP services as well as SOAP, JSON, and XML data formats.
41:49 Do you want to launch an online business?
41:51 Well, Matt McKay and I built an entrepreneur's playbook with Python for Entrepreneurs.
41:55 This 16-hour course will teach you everything you need to launch your web-based business with
42:00 Python.
42:00 And finally, there's a couple of new course announcements coming really soon.
42:04 So if you don't already have an account, be sure to create one at training.talkpython.fm to get notified.
42:09 And for all of you who have bought my courses, thank you so much.
42:13 It really, really helps support the show.
42:15 So we started with that and you got it working in Kubernetes.
42:18 And then you said, well, let's scale it, right?
42:20 Right.
42:21 So now it's like you got this pod manifest.
42:23 And I guess before we did that, we made sure our container worked with Docker.
42:26 Right.
42:26 Right.
42:26 And then now we said, let's, how do we get this thing to scale inside of Kubernetes?
42:30 So we make this pod manifest and then we submit it to Kubernetes and we watch it just place
42:35 it somewhere.
42:35 So now that it's placed on the machine, you can now say now Kubernetes is in control of this.
42:40 If I delete it, Kubernetes brings it back, right?
42:42 To my original specification.
42:44 That's way better than getting woken up in the middle of the night.
42:46 Of course.
42:46 Like this whole on-call thing is overrated.
42:48 Like you've never done it before.
42:49 Trust me.
42:50 It's not something you really want to do.
42:51 And I guess to spice things up a bit, I decided to show what does the world look like when you
42:56 have an API to control it all, right?
42:59 We're not just talking bash scripts.
43:00 So this is for the first time I broke out the OK Google, scale the particular application.
43:05 And I think a lot of people, if they've been doing this long enough, never thought we would
43:10 get to a point where you can just ask for something and do it the correct way.
43:15 So part of that keynote, not only did we deploy the app, we scaled it horizontally while having
43:20 like curl run in the background, showing the output from the app.
43:24 And then we just did an in-place update of the application, all voice controlled.
43:28 Yeah.
43:29 And I think people were like, wow, we're there.
43:31 It was really such a cool demo.
43:33 I mean, if you guys haven't watched it, those of you who are listening, you basically pulled
43:37 up your phone and you said, okay, Google, connect to Kubernetes.
43:41 And you would set up an app or something in the background to like wire your cluster over
43:46 to...
43:47 Yeah.
43:47 So some of the magic behind the scenes is when you have something like the Google assistance
43:50 or Google Home, what you're really doing is saying, I'm going to send my speech and
43:55 then there'll be a converter that's text to speech or speech to text.
43:58 And it sends it to, in my case, I was using a thing called API.ai.
44:01 And this is where you design your conversation, all the logic branches.
44:05 And that does a bit of ML to take the speech and say, all right, this is your parameters.
44:10 This is your intent.
44:11 This is your action.
44:12 Right.
44:12 And then it sends it to a webhook.
44:14 And that webhook I just had running written in Python.
44:16 I had it running inside of my Kubernetes cluster that would get my intents, like create this
44:21 deployment.
44:22 Well, what's the name of the deployment?
44:23 What's the version of the deployment?
44:24 And it will take that and then it will interact with Kubernetes.
44:27 And send a response back, which is text.
44:29 And when the text hits your phone, it will read it back to you.
44:32 So that's how we got the whole theatrical, it reading back what it was doing.
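A rough sketch of what such a webhook handler might do in Python: map an API.ai-style intent payload to a kubectl-style command plus a spoken reply. The field names, intents, and command shapes here are illustrative guesses, not the exact schema or code from the keynote:

```python
def handle_intent(payload):
    """Turn a parsed voice intent into an action and a text response
    that the assistant will read back to the user."""
    action = payload["result"]["action"]
    params = payload["result"]["parameters"]
    if action == "scale":
        cmd = ["kubectl", "scale", f"deployment/{params['name']}",
               f"--replicas={params['replicas']}"]
        speech = f"Scaling {params['name']} to {params['replicas']} replicas."
    elif action == "deploy":
        cmd = ["kubectl", "set", "image", f"deployment/{params['name']}",
               f"{params['name']}={params['image']}"]
        speech = f"Rolling out {params['image']}."
    else:
        # Unknown intent: do nothing, but still answer in character.
        cmd, speech = None, "Sorry, I don't know how to do that."
    return cmd, speech
```

The returned `speech` string is what gives the demo its conversational feel: the same pipeline that runs the command also writes the reply.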
44:36 That was beautiful.
44:37 It wasn't just like you could talk and it worked.
44:39 It would have a conversation with you, right?
44:41 Yeah.
44:41 So that was kind of the fun part where we would say, you know, I hope the demo guys are on
44:44 your side.
44:45 And that brings a bit of personality to the system.
44:48 Whereas when you run a command, it either works or it doesn't work.
44:51 It's very minor.
44:52 Status code zero.
44:52 Everything okay.
44:53 Exactly.
44:54 But when you have this thing saying, you know, hey, deployment complete, that was awesome.
44:59 And then people were like, oh, wow, that's a bit unique.
45:02 Yeah.
45:02 So you basically got one of them running in Kubernetes and you said, okay, Google, scale
45:06 this five times and five instances of it scaled up.
45:10 Then the real trick that I think really impressed people was the zero downtime deployment.
45:15 Because I know a lot of companies, when they schedule like deployments, they'll have a page
45:20 and say, our site is down for maintenance.
45:22 Our people will come in on the weekend and we're going to try to get it back with this four hour
45:27 window.
45:27 But if it might take 12 hours to deploy it, like that's the wrong way, right?
45:31 Yeah.
45:32 I think that's a way where we just didn't have a cohesive system, right?
45:35 We talked earlier in the show about having that virtual IP sit in front of everything.
45:39 So given that, and we have these controls behind the scenes.
45:42 So when we do our rolling update, as part of the demo, I showed how, on the Nginx container, there's
45:46 a preStop hook where you can say, drain the connections before shutting down.
45:51 And then we can reject new traffic.
45:53 So when you combine all that stuff together, Kubernetes gives us enough cover where we can take down
45:58 an instance and then remove it from the load balancer, cleanly draining the connections, and replace
46:03 it with the new version and just do this over and over again till it's complete.
46:07 That's where the, like the magic comes.
46:09 I see.
46:10 That really does make it work well.
46:12 I was wondering if you just got lucky and it like, there was that microsecond where one
46:16 was down and it wasn't one other, you know, like there was a transitional period where it
46:20 wasn't quite right.
46:20 But no, you had this very carefully, gracefully shutting down and it already spins up the new
46:26 one.
46:26 And then it starts.
46:27 Yeah.
46:27 So Kubernetes understands those semantics.
46:29 So what we did in the demo, when I asked Kubernetes to update the version, of course, that goes
46:33 with the whole pipeline and we create a new deployment object that has a new version.
46:36 And once that's put into the system, Kubernetes says, okay, let's walk through this.
46:41 The first thing we're going to do is launch the new version and make sure that it's all
46:44 up and running.
46:45 So we go from three to let's say four.
46:47 So the fourth one is now in the load balancer and traffic is flowing through.
46:51 And then we're safe to shut down, let's say number three.
46:54 And then we give it as clean shut down hooks and then it's gone.
46:57 But while we initiate that process, we make sure no new traffic is allowed to flow.
47:01 And this is why we don't actually see those blips or hangups.
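The settings that drive this surge-then-drain behavior can be sketched as the Python-dict equivalent of a Deployment spec. The image name and drain command are illustrative assumptions, not the keynote's exact manifest:

```python
deployment_spec = {
    "replicas": 3,
    "strategy": {
        "type": "RollingUpdate",
        "rollingUpdate": {
            # Bring one new pod up before taking an old one down,
            # and never drop below the desired replica count.
            "maxSurge": 1,
            "maxUnavailable": 0,
        },
    },
    "template": {
        "spec": {
            "containers": [{
                "name": "nginx",
                "image": "nginx:1.13",
                "lifecycle": {
                    # preStop hook: ask Nginx to stop accepting new
                    # connections and finish in-flight ones before exit.
                    "preStop": {"exec": {"command": ["nginx", "-s", "quit"]}}
                },
            }]
        }
    },
}
```

`maxUnavailable: 0` plus the preStop drain is what closes the window where a request could hit a half-dead instance.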
47:04 That's perfect.
47:05 Yeah.
47:05 I mean, it's so nice.
47:06 Yeah.
47:07 That's like out of the box.
47:08 So I think now that this is what you get out of the box, we're going to start to see new
47:12 functionality as the years evolve or years go by.
47:15 People start to try new challenges.
47:16 What kind of apps can we deploy?
47:18 Maybe even new ways of working develop now that this is automatic and not somebody like
47:24 running manually a bunch of things, right?
47:26 Exactly.
47:26 I did a talk in Paris at a conference called dotGo, and in that talk, I had a self-deploying application.
47:32 So in that demo, we talked about making a container and you put it on the registry and you write
47:37 a Kubernetes manifest and you tell it to do things.
47:40 But in that particular talk, I showed an app that deploys itself.
47:43 So it's a statically linked binary.
47:45 And if you run it on your machine, it will just, hello world.
47:48 But if you give it the --kubernetes flag, it will find the Kubernetes API that
47:53 you may have configured in a config file.
47:55 And then what it will do is create its own container in the background.
47:59 It will upload itself, create its own image, and then make all of the configs necessary to
48:04 run in Kubernetes and sit there.
48:05 And if you have five copies running, Kubernetes has an API to get all the logs.
48:09 So you can imagine this thing is now in your laptop running in the foreground and it goes
48:13 out to Kubernetes and says, okay, where am I deployed?
48:15 Give me all three streams of logs and put them together and stream them to the laptop.
48:20 So it looks like it's just running on your machine when in reality it's running on this
48:24 distributed system somewhere in the world.
48:25 That's a really interesting point.
48:27 The logs is also like a big challenge because if you're, anytime you're scaling horizontally,
48:32 putting, especially in microservices where it's going from here to there to there, right?
48:36 It's like, how do I put back together these requests?
48:39 Yeah.
48:40 So that's a thing where when people say they want to move to microservice and it's like,
48:43 we got to have a conversation.
48:44 You're going to go from a monolith where you only have one thing to really worry about to
48:49 breaking it up into a bunch of pieces.
48:50 So now you got to stitch back together your logs.
48:53 And this is where things like trace IDs come in handy.
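One common way to stitch those logs back together is to propagate a trace ID header on every hop: reuse the caller's ID if one arrived, mint one otherwise. A minimal Python sketch (the header name is an arbitrary illustrative choice):

```python
import uuid

TRACE_HEADER = "X-Trace-Id"  # header name is an illustrative convention

def ensure_trace_id(incoming_headers):
    """Reuse the caller's trace ID if present, otherwise mint a new one,
    so every hop of a request can be correlated in the logs."""
    trace_id = incoming_headers.get(TRACE_HEADER) or uuid.uuid4().hex
    outgoing = dict(incoming_headers)
    outgoing[TRACE_HEADER] = trace_id  # forward it to downstream services
    return trace_id, outgoing
```

Every service logs the same ID and forwards it downstream, which is what lets you say "this request spent 25% of its time in service A."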
48:56 You also need to do things like, where's it slow, right?
48:59 First, you were just kind of in memory, making all these calls between, you know, what's on
49:03 the stack or whatever function you need to resolve to, to now it's over the network.
49:07 Well, is it on the same machine?
49:09 Is it on a different machine?
49:10 How do you know?
49:11 So this is where things like tracing become very important.
49:14 So a client hits your front door.
49:15 You may have to talk to three other services.
49:17 You want to have some metric to tell you, you've been spending 25% of your time in service
49:22 A.
49:23 That's your bottleneck.
49:24 You might want to think about moving that either closer to the app or putting it back
49:28 into the app because maybe you cut too deep.
49:29 Right.
49:30 It's too chatty.
49:31 You're making a hundred calls to it.
49:33 That is the number one thing. People go from that to having like 10,000 lines of JSON just
49:39 flying around the infrastructure because you're passing all these objects back and forth.
49:42 And you're like, why is my bandwidth going through the roof?
49:44 Or why is it so slow?
49:45 Why is it slow?
49:46 Yeah.
49:46 You now just have JSON parsing as a service.
49:48 JSON parsing as a service.
49:50 I got it.
49:50 Awesome.
49:51 Yeah.
49:52 So I guess the finale of your demo was to just say, okay, Google, upgrade to version
49:58 two.
49:58 And it just, all five instances just switched magically.
50:01 Right.
50:02 And this is the nice thing about the demo is that when I ask it to do that, and when you're
50:07 building these voice integrations, you have this idea of what we call a follow-up.
50:10 Right.
50:11 So if I say a certain word or phrase, it will go down a different logic branch.
50:16 So when it worked, I was able to say thank you.
50:19 And then it was able to respond.
50:20 I got to admit that was pretty dope.
50:22 Yeah.
50:23 And everyone, you know, gave a good round of applause and then you can just end it right
50:27 there.
50:27 But if it didn't work, I was not going to say thank you because it would be silly for this
50:31 thing to say that was pretty dope.
50:32 Thank you for ruining my demo.
50:34 Yeah, exactly.
50:34 Yeah.
50:36 That was really good.
50:37 So people listening, definitely go check it out.
50:39 I'll link to it in the show notes.
50:40 It'll be great.
50:41 So maybe we'll leave it there for Kubernetes.
50:45 If people want to get started, they have good resources.
50:48 You wrote a class, right?
50:49 A course?
50:49 Yeah.
50:50 So I did it with one of my colleagues, Carter Morgan.
50:52 We did a Udacity course based on a workshop that I used to give hands-on out in the field
50:58 for a few years.
50:58 And we decided to turn it into a Udacity course so people can learn at their own pace.
51:02 And we kind of go through a narrative like you saw in the keynote where you package a
51:06 container, you play with Docker a little bit, and you go from Docker to Kubernetes.
51:10 And you just kind of go through the basics.
51:11 And there's like sections in there where you do a little hands-on.
51:14 Then we have some slides and you kind of see how things work at a high level illustrated.
51:19 And the goal there is like you learn at your own pace and you kind of really understand what
51:23 this thing is trying to do for you.
51:24 All those things you have to do hands-on to learn.
51:26 You can't just kick back and then know it.
51:28 Yeah, exactly.
51:28 I think there's going to be a mix of learning.
51:30 I tell people it's going to be a little while before you get it all.
51:32 But everyone has a different way they like to start.
51:35 And I think the course is great for people who like to kind of learn visually a little
51:38 bit of hands-on and then have that as a kickstart to go into the deeper material like the Kubernetes
51:43 documentation.
51:44 I also have a book coming out with O'Reilly, Kubernetes Up and Running.
51:47 That's written with the co-founders of Kubernetes, Joe Beda and Brendan Burns.
51:51 So we just put the wraps on that.
51:54 So that should come out pretty soon.
51:56 Yes.
51:56 Writing a book is a tall order.
51:58 But hopefully it helps more people kind of learn and get up and running.
52:01 Yeah, yeah.
52:02 I'm sure it will.
52:02 And your course, it's free, right?
52:04 Yes, of course.
52:05 We try to make as much of the documentation free.
52:07 And the course definitely falls into that bucket.
52:10 All right.
52:10 So just because the timing of PyCon and all this was very, very near, I think it was the
52:15 same week as Google I.O.
52:17 Did you happen to go to Google I.O.?
52:19 I'm sure you watched it, right?
52:21 Yeah, I watched a bit of Google I.O.
52:22 That's a big event for Google.
52:24 It does a lot around the consumer side of things, mobile, a lot of the ML stuff that we're doing
52:29 at Google in general.
52:30 But you know, PyCon's a special event.
52:32 So we actually had a lot of Googlers show up there as well.
52:34 You know, a lot of people are into Python.
52:36 Python has a big presence on App Engine, the libraries, you know, Google Cloud, the command
52:40 line tools are written in Python.
52:42 So Python has a very storied history at Google.
52:45 Lots of people are Pythonistas and will always be.
52:49 So that is also a big event that we all mark in our calendars at Google.
52:53 That's awesome.
52:54 What did you make of like the specialized chips that Google's making and the whole machine
52:59 learning AI?
53:00 Well, Google's just trying to show a little bit of their secret sauce.
53:02 Like Google has been pushing boundaries for a long time.
53:05 I think there's a saying at Google, Google doesn't have all the solutions, but we definitely
53:09 have all the problems.
53:11 So a lot of our problems, if you think about them at a large scale, you get into the situation
53:17 where you need to do specialized things like come up with your own chips to process things
53:21 faster.
53:21 Right?
53:22 Like imagine when you do a Google search, you want it instantaneous.
53:25 Like if it took even 500 milliseconds, you'd be like, what's wrong with my internet?
53:30 It's broken.
53:31 Yeah, exactly.
53:31 So people are very impatient at this point.
53:34 So we're always finding a way to reduce perceived latency.
53:37 We got to do things and give you accurate results faster than ever.
53:40 And that pushes our boundaries all the time.
53:43 You've spoiled people for other websites.
53:45 You can go to websites that seem like they're not really big or doing much.
53:49 And they're like two or three, five seconds to respond.
53:52 Like what is wrong with this website?
53:53 Oh, two seconds, people leave.
53:54 Like if your website takes two seconds to come up, people won't put their credit card on a website
53:59 that slow.
53:59 They just have an instant mistrust and say, you know what?
54:02 If you can't load the page quickly, I am not putting my credit card in there.
54:06 It's a really interesting point.
54:07 I saw a study that says that, from a security perspective, people trust software that's faster
54:13 and software that is functionally accurate.
54:16 They believe it is more secure, even though those are kind of separate concerns anyway.
54:20 Yeah, and I think there's a good reason for that.
54:22 I mean, you figure that security is probably the hardest part of the problem.
54:25 And if the thing that you think should be the most straightforward part isn't right,
54:29 you really have your doubts about the security part.
54:31 Yeah, that's a really, really good point.
54:32 All right.
54:33 So I guess we'll wrap it up.
54:35 That was such a great conversation.
54:36 I have two more questions before I let you out of here.
54:39 Okay.
54:39 If you're going to write some code, what editor do you open up?
54:43 Vim.
54:43 Vim.
54:44 And Vim with no plugins.
54:46 Just straight.
54:47 No syntax highlighting either.
54:48 Like, it's just straight dark with white letters and no helpers.
54:54 And I tend to pay attention to every line of the code.
54:56 I don't have any blind spots.
54:58 Like, you know, my variables are this color or my comments are this color.
55:01 So I end up just reading everything top to bottom.
55:04 It's probably slower.
55:05 But I feel like I know the code base better because I'm not ignoring things anymore.
55:09 It's not magically auto-completing with a different word than what you meant.
55:13 Nope.
55:13 Yeah.
55:14 Awesome.
55:15 And a notable PyPI package.
55:18 There's over 100,000 of them now.
55:20 Like, one you maybe ran across lately that was really cool.
55:22 Well, for the keynote, I actually used the Kubernetes package, which allows you to write integrations for Kubernetes and Python.
55:28 And that was a very interesting package, mainly because it's generated from the Swagger API of Kubernetes.
55:34 So if you've never used Swagger before, it's a way to kind of describe your API so that you have this machine representation of, hey, here's every endpoint.
55:43 Here's the input.
55:44 Here's the output.
55:45 And that Python library was generated from that.
55:48 And then the documentation is also generated.
55:50 So there's an example for every function you have.
55:53 And it has pretty good coverage across the board of doing things in Kubernetes.
55:57 So that's a well put together package.
56:00 And that's what I've been using lately.
56:01 All right.
56:02 That sounds really, really cool.
56:03 People want to check that out.
56:04 So good place for final call to action.
56:06 Like, people are excited about Kubernetes.
56:07 How do they get started?
56:08 I think a good way to get started is make sure that you need Kubernetes.
56:12 There's a nice little tweet I saw the other day.
56:14 A guy had this full-size tractor-trailer flatbed.
56:18 And it was just hauling a little piece of wood on the flatbed.
56:23 And this is what happens when people are trying to deploy a single WordPress site on a thousand-node Kubernetes cluster.
56:30 Like, listen, it's okay if you want to learn.
56:32 You know, that's cool.
56:33 Yeah.
56:33 But you don't need it for everything.
56:35 So I would say if you want to learn, Minikube is a great way to start.
56:39 But it's also okay to look at Kubernetes and say, you know what, it's not required for what I'm doing, and exit there.
56:45 But maybe it is a good fit.
56:46 Like, you're running web instances or you want to have mixed workloads.
56:50 You can also do batch jobs on Kubernetes.
56:51 You can do these cron jobs on a timer.
56:53 So Kubernetes kind of gives you this framework for thinking about running multiple workloads on a single set of machines.
56:59 And then you can start going higher up the stack.
57:01 So Minikube, find examples in your language of choice, like how to run my Python app in Kubernetes.
57:07 That's a really kind of good Google search.
57:08 And just take your time with it, right?
57:10 You're going to have a long time to really master this thing.
57:13 Yeah.
57:13 It's something you can get started quick, but it takes a long time to master.
57:16 Exactly.
57:16 Hello, world.
57:17 I always challenge people.
57:18 I say, you know, Kubernetes.
57:19 This is what I want you to do.
57:20 In the language of your choice, I want you to write hello, world, package it, and deploy it in Kubernetes.
57:25 And you watch how many people fall over because they don't actually remember all the commands.
57:29 And they're still looking for their cheat sheet to put things together.
57:32 So make sure you really understand the basics before you jump into all the complexities.
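For anyone who wants to take that challenge, a minimal Python version of step one might look like this. The image name, registry, and kubectl steps in the comments are hypothetical placeholders, not a prescribed workflow:

```python
def app(environ, start_response):
    """The entire app: a stdlib WSGI 'hello, world'."""
    body = b"hello, world"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Run it locally with the standard library server:
#   from wsgiref.simple_server import make_server
#   make_server("", 8080, app).serve_forever()
#
# Then the actual challenge is the packaging (names below are placeholders):
#   docker build -t <your-registry>/hello:v1 .    # needs a small Dockerfile
#   kubectl create deployment hello --image=<your-registry>/hello:v1
#   kubectl expose deployment hello --port=80 --target-port=8080
```

The point of the exercise is exactly what Kelsey says: can you do the package-and-deploy steps from memory, without hunting for a cheat sheet?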
57:35 All right.
57:36 Well, fantastic.
57:37 Thank you so much.
57:38 It's been really great to chat about Kubernetes.
57:40 And congrats on the keynote.
57:42 It was amazing.
57:42 Awesome.
57:42 Thank you.
57:43 Yeah.
57:43 Thanks.
57:43 Bye-bye.
57:44 This has been another episode of Talk Python to Me.
57:48 Today's guest was Kelsey Hightower.
57:51 And this episode has been brought to you by Rollbar and Talk Python Training.
57:55 Rollbar takes the pain out of errors.
57:57 They give you the context and insight you need to quickly locate and fix errors that might have gone unnoticed until your users complain, of course.
58:05 As Talk Python to Me listeners, track a ridiculous number of errors for free at rollbar.com/talkpythontome.
58:12 Are you or a colleague trying to learn Python?
58:15 Have you tried books and videos that just left you bored by covering topics point by point?
58:19 Well, check out my online course, Python Jumpstart by Building 10 Apps, at talkpython.fm/course to experience a more engaging way to learn Python.
58:28 And if you're looking for something a little more advanced, try my Write Pythonic Code course at talkpython.fm/pythonic.
58:36 Be sure to subscribe to the show.
58:38 Open your favorite podcatcher and search for Python.
58:40 We should be right at the top.
58:41 You can also find the iTunes feed at /itunes, Google Play feed at /play, and direct RSS feed at /rss on talkpython.fm.
58:51 Our theme music is Developers, Developers, Developers by Corey Smith, who goes by Smix.
58:56 Corey just recently started selling his tracks on iTunes, so I recommend you check it out at talkpython.fm/music.
59:02 You can browse his tracks he has for sale on iTunes and listen to the full-length version of the theme song.
59:08 This is your host, Michael Kennedy.
59:11 Thanks so much for listening.
59:12 I really appreciate it.
59:13 Smix, let's get out of here.
59:16 I'll see you next time.
59:37 Bye.