


Transcript for Episode #126:
Kubernetes for Pythonistas

Recorded on Friday, Jun 9, 2017.

0:00 Michael Kennedy: Containers are revolutionizing the way that we develop and manage applications. These containers allow us to build, develop, test and even deploy on the exact same system. We can build layered systems that fill in our dependencies. They even can play a crucial role in zero downtime upgrades. This is great, until you end up with five different types of containers, each of them scaled out and you need to get them to work together, discover each other, and upgrade together. That's where Kubernetes comes in. Today we'll meet Kelsey Hightower, a developer advocate on Google's Cloud Platform. This is Talk Python To Me, Episode 126, recorded June 9th, 2017. Welcome to Talk Python To Me, a weekly podcast on Python, the language, the libraries, the ecosystem, and the personalities. This is your host, Michael Kennedy. Follow me on Twitter where I'm @MKennedy, keep up with the show and listen to past episodes at TalkPython.fm, and follow the show on Twitter via @TalkPython. This episode's brought to you by Rollbar, and us at Talk Python Training. Be sure to check out what we're offering during our segments, it really helps support the show. Hey everyone, just a quick heads up on this episode. I recorded it on site, on location with Kelsey Hightower, which is a great experience, cool opportunity, but it turns out the audio was a little echoey, and not the same as back in the studio where I normally record, so, you'll have to forgive a bit of an echo on this one. It's a great conversation, and I hope you learn a lot from Kelsey. Now, let's chat with him. Kelsey, welcome to Talk Python.

1:51 Kelsey Hightower: Awesome, it's awesome to be here. A fantastic show, so I'm honored to be a guest.

1:54 Michael Kennedy: Thanks. It's an honor to have you as a guest. You had such a cool keynote at PyCon. It was really fun.

2:01 Kelsey Hightower: Yeah, I was honored to do that one. That was, I think, completing the mission. A Python meetup was where I first started, so to have the honor to do a closing keynote at PyCon 2017 was amazing.

2:12 Michael Kennedy: You've completed the circle of your Python journey.

2:14 Kelsey Hightower: Yeah, I think so. You just think about it over the years, it was probably six or seven years in between my very first talk and giving that Python keynote. So a lot has changed for me personally during that timeframe.

2:26 Michael Kennedy: Yeah, I'm sure it has. So, I totally want to talk about Python, but let's start with your story. How did you get into programming in Python?

2:33 Kelsey Hightower: I think, like most people, I was looking for my first real programming language, so I started with Bash as a system administrator. And you kind of run into some limitations with Bash, things you can and can't do. So I reached for things like Python. I was working in the financial industry, and I just needed a tool that actually replaced some of our COBOL jobs.

2:52 Michael Kennedy: Okay.

2:52 Kelsey Hightower: So we had these old things that like transformed data from the mainframe, and...

2:56 Michael Kennedy: Can you believe that stuff still runs...

2:57 Kelsey Hightower: Oh yeah.

2:57 Michael Kennedy: On that?

2:59 Kelsey Hightower: And fast.

3:00 Michael Kennedy: Yeah.

3:00 Kelsey Hightower: Right? People still make a ton of cash on it. So at first, I dipped my toes in the water, and I learned packed decimal, right, because I had to convert a lot of the formats for the mainframe. So you're dealing with EBCDIC, fixed-length formats, and then Python was like a really straightforward language, but it also had great libraries to kind of deal with a lot of the math stuff and things that you would do, even in a mainframe world.

3:20 Michael Kennedy: Right, absolutely. Okay. And so you started out on Python, and, now, well started on Bash, got into Python, and now, are you doing mostly Go?

3:31 Kelsey Hightower: Yeah, so GoLang is kind of my language of choice, you know. Something happened in the industry, where when I think back, most of the tools you were using as a sysadmin were running in Python, like apt-get, or yum, or things like Ansible. It was kind of like the sysadmin's go-to language, you know, after Perl there was Python. And you know, something happened where GoLang just became like the thing you'd use for distributed systems and containers, so that kind of made me switch to that a couple of years back.

3:59 Michael Kennedy: That's cool, 'cause yeah, you're all about the containers these days, right?

4:02 Kelsey Hightower: Yeah, containers, distributed systems, this whole new world where we're packaging our apps in this universal format, and then just giving it to the system to deploy. So that's been pretty interesting.

4:12 Michael Kennedy: Yeah, yeah, it's very interesting, and I think we're just at the beginning of this really, right?

4:16 Kelsey Hightower: Yeah, I think, a lot of times I think these are patterns we've always been doing. If you think about the Python world, we were dealing with virtualenv, we were trying to create these self-contained deployments. So virtualenv, having pip, and when you put those things together, something like Docker is the natural evolution, where you say, hey, I created this virtual environment, everything I need is in there and self-contained, how about we just package that up and ship that around?

4:38 Michael Kennedy: Right. Instead of creating a virtual machine, or even a real machine on real hardware, setting it up, setting up a virtual environment, get it all going. Now you just have, what, like a Dockerfile?

4:50 Kelsey Hightower: Yeah, and I think the Dockerfile, when you think about what it looks like, if you've never written one before, it's essentially like an organized Bash script in many ways. There's some semantic meaning to some labels and things you can do in there, but for the most part, you're making a repeatable set of steps. Maybe you install Python, or you have a base image where Python's already installed, and then within that Dockerfile, you will say things like, pip install my requirements.txt, and then when you're finished, you can just say, here's how you start my app. So it just gives us a repeatable process for what we've already been doing.
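A minimal Dockerfile along the lines Kelsey is describing might look like this; the base image, file layout, and entry point here are hypothetical, just to make the shape concrete:

```dockerfile
# Base image where Python is already installed
FROM python:3.6

# Repeatable steps: copy the app in and install its dependencies
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

# "Here's how you start my app" (hypothetical entry point)
CMD ["python", "app.py"]
```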

5:20 Michael Kennedy: Sure. It's like scripting and automating all of the best practices we should be doing, but maybe aren't?

5:27 Kelsey Hightower: Yeah, I think so, and also usually on the team you have one person who does know how to build the app from scratch, right, you usually keep this like build my app type script inside of the directory, and the Dockerfile has become that, and it also now produces an artifact so that once you build it once, ideally you can reuse it in multiple environments.

5:46 Michael Kennedy: Right, yeah, that's cool. It lets you test closer to production, for example. But you definitely don't want to lose that person that knows how to build that thing.

5:54 Kelsey Hightower: Oh yeah, we're not replacing people. I think what we're doing is, also, there's a lot of open source projects, right? You can imagine going to get your favorite web app builder, like Django, for example, and then having to deal with what version of Python, what dependencies, how to build it. Sometimes people just want to play with it right now. So an alternative, and this is key, 'cause some people are starting to replace their documentation, their scripts, with just Dockerfiles, and that's infuriating to a lot of people. So I think you can complement your installation by saying, hey, here's the raw package, maybe if you're still doing eggs or wheels for the actual application. But the alternative could be, oh, here's a Docker container, if you just want to run it on your, kind of your Docker machine.

6:33 Michael Kennedy: Yeah, yeah, absolutely, absolutely. Let's talk really quickly about PyCon before we dive deep into the Kubernetes. So you got to go to PyCon and give this presentation. That was super amazing. What was your favorite thing about the conference?

6:49 Kelsey Hightower: Well, I think the thing I liked the most about PyCon, most people are there because they want to be. Right, and I think that's a unique thing that we don't see at a lot of tech conferences. Either tech conferences can be expensive, so you really need to be reimbursed by your company, or you have to take time off work to do it, and since PyCon is on the weekend, a lot of people choose to be there. Some people, a lot of people pay their own way. So just the experience of the, I call it the, I guess kind of the hallway track, right, where you just walk around, you meet new people, I met people with the, what do they call it? The, the holy staff, or whatever. This thing travels the world.

7:23 Michael Kennedy: Yes, that's right.

7:23 Kelsey Hightower: The staff of enlightenment.

7:24 Michael Kennedy: That was Anthony Shaw, he brought it from Australia.

7:27 Kelsey Hightower: Yeah, so you have people like that walking around PyCon like, hey, we just want to get a picture of you with the staff of enlightenment. And you know that there's this organic feel to why people are there. It's a deeper community. It's like one of these communities that have been around for so long, this is a family reunion for most people at PyCon. So it just feels different.

7:44 Michael Kennedy: Yeah, yeah, I just had Kenneth Reitz on the show, he's done a bunch of Python stuff, and he said, this is my favorite time of the year. Right, this is actually like the favorite week, and I feel that way as well. It's really, really, really amazing, we get great keynotes like yours, but besides the keynotes, I honestly try to just do the hallway track. These are chances of meeting people who you would not normally get to talk to, you can just stop and chat, and then I drop in on YouTube later and watch the presentations.

8:11 Kelsey Hightower: Yeah, and I think the other element, I didn't get to enjoy it this year, but my very first real conference was PyCon in Atlanta, and the sprints. You know, where you're actually working side by side with, I guess you can call them the big names in Python, the people that are actually doing the hard work, maintaining the packages, maintaining the core, and they just make space for you to come in, learn, do your first contribution. That was my first contribution to Python. I was working on PyPI at the time, and pip, so some of the distutils and those tools. So I think that's kind of the thing that's really unique to PyCon, is that you can actually sprint with the people that are working on this stuff.

8:47 Michael Kennedy: That's actually a really good point because most conferences don't have that, the big names go up, they speak at you, and then they leave, and that's that.

8:52 Kelsey Hightower: That's right.

8:53 Michael Kennedy: Yeah, so very, very cool. Okay. There's also the after party event. We went to like, it was the dinner at the art museum. For example, we got to meet up. And we've played this really cool game, the classical programmer painting naming game, which I think any programmer must do when they go to an art museum, right?

9:12 Kelsey Hightower: Yeah, I think for those that have never played the game before, is you go to a museum, where it's supposed to be serious business.

9:17 Michael Kennedy: Very proper formal art, Monet and so on.

9:20 Kelsey Hightower: Yes, people there are appreciating art and then, you show up, and then you start to give names or meaning to paintings you know nothing about. And I think you start to learn more about people as they pick names, or they're giving explanations for what they see. It's like, wow, this person really needs to take a vacation.

9:35 Michael Kennedy: Yes, exactly. Maybe there's like a picture with like a burning field in the background, people working, and you're like, these are like Java enterprise developers like finishing a sprint or something like that.

9:44 Kelsey Hightower: Yes, or you give it a name of Docker 001, right? It's on fire all the time.

9:48 Michael Kennedy: Awesome, awesome. All right, cool, so let's talk about Kubernetes. We spoke about Docker, and we spoke about containers. How does Kubernetes relate to that? What is Kubernetes and how does it relate to Docker?

10:00 Kelsey Hightower: So I think when you think about Docker, you think about a single machine. And if we think about the way we used to do deployments, or still do, I don't think it's that far removed. You copy some bits to a machine, and if you don't have the environment set up, you can probably do something like virtualenv, get all your dependencies there. Maybe you use an init system to start it up, or maybe just fork it to the background. So what Docker does is it says, hey, let's contain all that, make a reusable package, and then provide an API, so no more logging into servers, right? You've got a Docker machine, you have the Docker command line tool, you say, Docker run my web app. Okay, Docker run MySQL, Docker run all my dependencies. And then you look at that and you say, well, how do I scale that across a lot of machines? And I think that's where Kubernetes comes in. It has a new API, where you can say things like, hey, give me five of these, decoupled from a particular machine. And you also get things like service discovery built in, right? So that's kind of a deeper topic, where you already have MySQL deployed, and in this Kubernetes system you have no idea where it's going to land, it's going to get its own unique IP. How do you connect to it? And those are the kind of features that are built into Kubernetes from a bigger cluster management standpoint.

11:05 Michael Kennedy: Yeah, that's fantastic, and it solves such a great problem, because Docker's pretty straightforward when you're doing one container, and your app is in the Docker container. But if you've got different tiers, a database tier, a web tier, a load balancing tier, different services, like if you're doing microservices, it gets really crazy, like how do you even identify all of these things? And so Kubernetes is kind of like both the management of all the containers, plus that glue that binds them together.

11:36 Kelsey Hightower: Yeah, it's like a big framework where you can do all these policy-based decisions, like you can have things like autoscalers as a core concept of Kubernetes.

11:43 Michael Kennedy: Okay.

11:43 Kelsey Hightower: You could say, whenever this aggregate set of my containers reaches 10% of CPU utilization, scale them up based on a step function. And you can define that by just using simple YAML, you put it in place, and the whole system works that way. So it gives you a way to think about a lot of computers like a single computer, just based on this declarative, policy-based way of thinking about computing.
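As a sketch, the policy Kelsey describes could be written as a HorizontalPodAutoscaler manifest like this; the deployment name and replica bounds are made up, and the 10% target just mirrors his example:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: webapp            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp          # the aggregate set of containers to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 10   # scale up past 10% average CPU
```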

12:06 Michael Kennedy: All right, so let's see, we take a Dockerfile, let's start with just one for now. We take a Dockerfile, we want to run that in like a cluster, we can go to Kubernetes and go, Kubernetes, run this, scale me five of them, and you just get like an end point, and a port, and you just, you just treat it as one thing and it just round robin load balances around?

12:26 Kelsey Hightower: Yeah, so I think for most people it's great to think about this in layers, right? So you may use Docker for your day to day development, and you're going to produce some artifact, let's call it Web App 001, and you'd push that to a registry. So the price of admission to Kubernetes is that container in a registry, because that's how it pulls things down, that's just kind of the fundamental building block. And then you may have a deployment, hey, deploy three of these, and at that point, you just have three of these things sitting there with their own IP addresses, and they're going to land on the right set of machines. And the next layer is, maybe you want to expose this to the outside world, then you can create what we call a service. And say, hey Kubernetes, anything that looks like that web app, which I identify by labels, you know, app=foobar, and then the service will say, okay, I'll go and find those, and keep that curated list, so if one of them crashes and comes back up, it knows to replace them on the load balancer. And then that layer will handle incoming traffic and make sure that it routes directly to the right pods to handle it.
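Sketched as YAML, those layers might look roughly like this; the app=foobar label comes from Kelsey's example, while the image name and port numbers are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3                  # "hey, deploy three of these"
  selector:
    matchLabels:
      app: foobar
  template:
    metadata:
      labels:
        app: foobar            # the label the service matches on
    spec:
      containers:
      - name: webapp
        image: registry.example.com/webapp:001   # the artifact pushed to a registry
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  type: LoadBalancer           # expose it to the outside world
  selector:
    app: foobar                # "anything that looks like that web app"
  ports:
  - port: 80
    targetPort: 8080
```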

13:21 Michael Kennedy: That is really super nice because that's one of the concerns, right? Like, I'm going on a vacation, pretty soon, I really don't want my sites to go down, and just vanish from the internet, but I also don't want to be like woken in the middle of the night to try to bring it back, and this is not something that happens often, but it's something that's really bad if it does, and so we used to solve this problem by sticking pagers on people, right?

13:47 Kelsey Hightower: Yeah, so I think the whole on call thing is broken, especially for the simple cases. It's okay to get a call for something that's like, oh wow, no one could have imagined that, that's a true emergency. But some of these on call pages, it's like, hey, one instance of your site is down. And you're like, so why are you calling me? How about you just respin that up, we know how to do that.

14:08 Michael Kennedy: Exactly. Did you try rebooting the server?

14:09 Kelsey Hightower: Yeah exactly.

14:09 Michael Kennedy: How many times?

14:11 Kelsey Hightower: And lots of people wake up and they do that, they wake up and say, you know what? Let me just kick the server, and then go back to sleep. And that's a thing that kind of disappears in a system like Kubernetes, where you just say, well, I want three, no matter what. So if a machine goes down, it just knows that, hey, let me just spin one up, because the declaration you used to create the first one is still there, and it can use the same template over and over again.

14:32 Michael Kennedy: Nice, so you basically pin it to a version, like right now you should have three of version three of our app, if it goes down, recreate that, right?

14:39 Kelsey Hightower: Yeah, and that's the key about this whole container thing. Is people say immutable, and that's probably a good way to think about it. But while it's running, of course there's memory being mutated. But on disk, you're starting from the same point over and over again. So if you do have to restart two or three, you're always going to get back to the same point that you expect.

14:58 Michael Kennedy: Yeah, that's awesome. So, really, you're almost to the point where the only reason it wouldn't come back alive okay is some other service it depends on changing, right? Like if you're doing microservices and the API completely freaks out your app, then you've got to fix it. But if something went wrong with that machine, well, it's just going to fix itself, right?

15:15 Kelsey Hightower: Yeah.

15:16 Michael Kennedy: And then it'll come back and it will be fine.

15:18 Kelsey Hightower: You bring an important point. Kubernetes doesn't solve all the problems, right?

15:21 Michael Kennedy: It's not magic.

15:21 Kelsey Hightower: It's not magic.

15:22 Michael Kennedy: It doesn't sprinkle magic on us?

15:23 Kelsey Hightower: No, no, no. It's definitely not magic. Like you're right, if someone were to deploy a new version of a dependency that you have, and the API is fundamentally different, yeah, Kubernetes will say everything looks good to me, and your app will be down. So you have to think about where Kubernetes starts and stops, and then what becomes an application concern. That's where your classic metrics and monitoring come into play.

15:44 Michael Kennedy: Yeah, but it does make it easier to test as kind of a cluster, right? So, if you have this microservice thing with five services, you can just locally, or in some staging area, spin up all the various pieces, make sure they click together, and then just make sure what you deploy is all those pieces, right?

16:01 Kelsey Hightower: Yes, we're definitely getting into a place where now we have an API to reason about this kind of thing. Right, you have five different deployments that need to talk to each other. You can associate a service with each one, give every deployment a name, and then just reference those services by name, whether there's one or five of them, and you can be sure they connect to one that's available.

16:20 Michael Kennedy: Nice.

16:20 Kelsey Hightower: So that's just much easier to do versus tribal knowledge scripts and just knowing how to do things.

16:25 Michael Kennedy: Yeah, it sounds better, for sure. How do you know, from what's in your app, like suppose I have a Flask app like you had in your demo, and it talks to, say, a database server that's also running in a different container, but also managed by Kubernetes. Do I talk to it by name, by IP address? How do they find each other?

16:42 Kelsey Hightower: So, today in Kubernetes, what you would do is ideally just say, hey, I want to connect to MySQL by name, mysql, on port 3306. And what will happen is, if you have a deployed container in Kubernetes, ideally you'll have a service in front of it, and the service name will be called mysql. So what that does is, Kubernetes has these control loops, and what the control loop will do is, all right, let me find the MySQL server, get its IP address, and update the internal DNS server that we have, cluster wide. So all you have to do is call mysql, and the name maps to the IP address associated with that service. That IP will be fixed for the life of the service, so even though the container may go up and down and get a new IP, you'll have this kind of virtual IP that matches to it.
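Concretely, the app never needs a pod's real IP; it just connects to mysql:3306, and a Service like this (the label selector is hypothetical) keeps the cluster DNS name pointed at whichever pod is currently live:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql          # becomes the DNS name "mysql", cluster wide
spec:
  selector:
    app: mysql         # hypothetical label on the MySQL pod
  ports:
  - port: 3306         # apps connect to mysql:3306
    targetPort: 3306
```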

17:25 Michael Kennedy: And it kind of runs its own DNS so you talk to it by the name you've given, even maybe that load balancer group of things, and it just finds it?

17:32 Kelsey Hightower: Exactly, so then it becomes a virtual IP that allows us to have a stable naming convention. Kind of the key to doing service discovery correctly.

17:39 Michael Kennedy: Okay. This sounds pretty cool. Does this work pretty well in your own data center? Like, if I had 10 big Linux machines, could I just set this up?

17:48 Kelsey Hightower: Yeah, so that's always been the goal with Kubernetes. We think of it, some people refer to it as the Linux of distributed systems.

17:53 Michael Kennedy: Okay.

17:54 Kelsey Hightower: Right, so the goal is, it doesn't really matter too much what distro you have, or where you run it. We have all these pluggable, what we call cloud providers, so if you're in Amazon, or Microsoft, or Google, it would kind of detect that and integrate with all the stuff that's there, so when you say, give me a load balancer, it'll spin up the proper load balancer for that environment. If you're on premise, you're free to make your own integration, so maybe you have something like an F5 load balancer, you can, you know, do your integration there. But for most functionality, it doesn't matter if it's on your laptop or your own data center. You install it and you hook it up correctly, you have a Kubernetes cluster.

18:25 Michael Kennedy: And you guys at Google also have, you guys do some cloud computing, right?

18:29 Kelsey Hightower: Yes. We do a lot of cloud computing.

18:31 Michael Kennedy: And you guys have a Kubernetes service as part, like, you've got Google Compute Engine, you have Google's Kubernetes, right?

18:39 Kelsey Hightower: Yeah, we call it Google Container Engine.

18:41 Michael Kennedy: Container Engine, right.

18:43 Kelsey Hightower: So it's largely the open source Kubernetes, deeply integrated into Google Cloud, and we just try to give you all the things we know you would need, like access to load balancers, storage, metrics, monitoring, logging, audit logs, that kind of thing.

18:55 Michael Kennedy: Okay, and so tell me about the persistent stuff. This is almost more of a Docker question than it is a Kubernetes question, but if I run MySQL in one of the containers, ultimately I want that data to not be transient, right? Where does that stuff go? On one hand, I could obviously just hook into like RDS on Amazon or some database service, but assuming I'm not doing that, if I'm writing to disk, where does it go?

19:19 Kelsey Hightower: So this question is probably the biggest source of confusion, because of the defaults. If you take Docker out of the equation, and I tell you, I have a server, and I install MySQL on the server, apt-get install MySQL, and you write data, where does it go? It goes to whatever volume you write to on that server. And if the server dies, there goes your data. Now let's add Docker to the equation. You say, Docker run MySQL. Now the default in Docker is that you're going to get a temporary file system, and you won't be able to write...

19:49 Michael Kennedy: Inside the container?

19:50 Kelsey Hightower: Inside the container, but really inside your true root, right? Still on the disk. It's at first going to get its own unique name, and by design, by default, we're just going to clean up when that container dies. But if you...

20:02 Michael Kennedy: Here's the temp files, we don't need it anymore.

20:03 Kelsey Hightower: Yeah, exactly. Now if you wanted to do this, you could just take the same /var/lib/mysql and mount that into the container. You're going to say, Docker run MySQL, mount the host /var/lib/mysql into the container at /var/lib/mysql, and everything you know about writing data to disk is pretty much going to be the same.
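As a command line sketch (assuming a Docker host and the stock mysql image), the mount he describes would look like:

```shell
# Mount the host's /var/lib/mysql into the container at the same path,
# so the data outlives the container itself
docker run -v /var/lib/mysql:/var/lib/mysql mysql
```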

20:20 Michael Kennedy: Okay.

20:21 Kelsey Hightower: So there's no magic there, it's just that the default for container is complete isolation.

20:26 Michael Kennedy: I see, so yeah right, so you basically configure your MySQL to write to like /datastore, or /var/datastore, whatever, and then, as long as you map that somewhere on your main machine, you could throw away that container, recreate it, it'll read from there again, and it'll carry on, right?

20:42 Kelsey Hightower: Exactly. I think we're all spoiled by just having full access to a machine, and then wherever it writes, that's where it writes. But inside of the containers, since you can run multiples of these containers at one time, you kind of want your own file system space to do whatever you want. But remember, you can always just mount things, you just have to be explicit, versus it being an implicit contract.

21:01 Michael Kennedy: Right, and you just put that in the Dockerfile?

21:03 Kelsey Hightower: So that's the thing, not in Dockerfiles; that's where Kubernetes starts to be very advantageous to people, or something like Docker Compose. Some of these semantics around, run it like this, that's kind of where Kubernetes starts to shine. So when you look at a Kubernetes manifest, you say, run this container, and oh, these are the volumes that come from the host, but I want them mounted here into the container. So you look at the full spec, and you can see, oh, this is what should happen, and you know those will be the right semantics. Versus, in Docker you can express some volumes, but you really need to make sure that you do the right thing when you say Docker run, mount all these things up, or use something like Docker Compose.

21:40 Michael Kennedy: Right, and it's a little more like your Kubernetes yaml file can just put that all together, right?

21:44 Kelsey Hightower: Yeah, you want it to be the whole contract. When we think about a pod, we say, a pod is the network, the container, and the volumes that it needs.
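In a Kubernetes pod manifest, that whole contract reads in one place; the hostPath below mirrors the /var/lib/mysql example, while the pod name and image tag are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql   # where it appears inside the container
  volumes:
  - name: data
    hostPath:
      path: /var/lib/mysql        # "the volumes that come from the host"
```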

21:52 Michael Kennedy: This portion of Talk Python To Me has been brought to you by Rollbar. One of the frustrating things about being a developer is dealing with errors. Ugh, relying on users to report errors, digging into log files, trying to debug issues, or getting millions of alerts just flooding your inbox and ruining your day. With Rollbar's full stack error monitoring, you get the context, insight and control you need to find and fix bugs faster. Adding Rollbar to your Python app is as easy as pip install rollbar. You can start tracking production errors and deployments in eight minutes or less. Are you considering self-hosting tools for security or compliance reasons? Then you should really check out Rollbar's Compliance SaaS option. Get advanced security features and meet compliance without the hassle of self-hosting, including HIPAA, ISO 27001, Privacy Shield and more. They'd love to give you a demo. Give Rollbar a try today, go to TalkPython.fm/rollbar, and check 'em out. How do you guys use Kubernetes at Google? I saw that this sort of was born out of the Borg, which is a pretty awesome name. And now it's an open source project, and it's hosted by, what is it, the Cloud Native Computing Foundation?

23:04 Kelsey Hightower: Yep. So CNCF is a foundation kind of designed for all these cloud native tools: Fluentd, Prometheus, Kubernetes, OpenTracing, which is where you start to do application level tracing on how web requests flow through a system. So all of this collection of tools, we think, makes up the cloud native stack, kind of the cloud native idea. And where does Kubernetes come from? Internally we have a thing called Borg, but that's kind of a bit misleading. People say Borg, and they mean maybe six or seven layers of stuff.

23:33 Michael Kennedy: Okay.

23:34 Kelsey Hightower: So Kubernetes represents one layer of that kind of stack, right? So Kubernetes would be the part that knows how to manage resources across a bunch of machines, knows how to deploy containers, and then serves as a framework of information for things like metrics and monitoring, and you would bring in other tools for that. So I think when you say Borg internally, it means a lot to Google, kind of like a catch-all, but there's lots of stuff in there. If you want something like Borg in the real world, you would say, Kubernetes plus Prometheus, plus, there's a new project called Istio that works really great for microservices. So Istio's idea is that you have these sidecars that know how to do exponential backoff, retries, mutual TLS authentication between microservices, and policies. You take that and maybe some more, then you get what we call Borg, for the most part.

24:22 Michael Kennedy: Yeah, so is Kubernetes like actually running inside Google as part of this thing that's called the Borg now?

24:27 Kelsey Hightower: No, so Borg is its own system. It has a lot of features, and it's actually tuned to the way Google works, the way Google infrastructure works. Hyper-optimized, right? So if you think about Kubernetes, it's used a lot in our cloud offerings, like if you think about what the Cloud ML and the TensorFlow teams are doing, they use Kubernetes for their stuff. Right, and you can also imagine a lot of our customers are of course running on top of Kubernetes. Kubernetes can also be used for other product offerings, you can imagine something like building hosted services or something like cloud functions. So Kubernetes gives us a really nice API for managing containers in a cloud platform.

25:04 Michael Kennedy: Do you, or I don't know if you can even talk about it, but do you have groups of people who are basically running cloud computing services on top of Google app engine and these types of things? Like they're putting their own infrastructure, and then reselling it as a service on top of what you guys are doing?

25:23 Kelsey Hightower: Well, you're talking about internally?

25:25 Michael Kennedy: No I'm talking like, is there a company that like is Digital Ocean, or Linode type of company...

25:31 Kelsey Hightower: Oh, like a second tier like...

25:33 Michael Kennedy: Yeah like, they've got some

25:34 Kelsey Hightower: Just reselling services?

25:35 Michael Kennedy: They've got some special layer on top of what you guys are doing?

25:39 Kelsey Hightower: I don't know if there's any of those that I could probably talk about, but we know there are services like that, of course, probably like Snapchat for example. They've built their platform on top of a cloud provider, and with most cloud providers, what you'll see is either they'll use our storage, if you need petabytes and petabytes of storage at scale, you'll see something like that. And maybe they turn around and sell that as some other thing that they call by another name.

26:03 Michael Kennedy: Sure.

26:04 Kelsey Hightower: But typically, to be a cloud provider reselling another cloud provider, I think they'll get destroyed on margins. The best you'll probably do is what Rackspace is currently doing, where they're providing support: it's your account, you'll be paying the cloud provider, and they put their support premium on top.

26:19 Michael Kennedy: Okay, yeah yeah, that makes sense. I mean the margins are, it's a super competitive business.

26:23 Kelsey Hightower: Yeah, of course. I mean like why would you buy a VM, knowing that it's running on another cloud provider, and pay more for it?

26:29 Michael Kennedy: Yeah, there has to be a secret sauce, something extra. Okay yeah, very interesting. So it feels to me like if I use Kubernetes, I'm a little bit more removed from the infrastructure of any one cloud provider. So, can you maybe speak to the relative lock-in of one cloud provider versus the other, when using, say, Kubernetes?

26:51 Kelsey Hightower: Yeah, so this lock-in thing is like a, I look at it and, I think I made a quote recently: if you try to switch products to avoid lock-in, that's just another form of lock-in. And Kubernetes, the API is open, the project happens to also be open source, so if you run it in your own data center, on your laptop, Google Cloud, Azure, or Amazon, you're going to get the same API. So essentially you can be locked into the Kubernetes API across all these environments. So the trade-off there is, maybe for some people, that's better than being locked into, say, one endpoint like ECS or Heroku. So when we talk about lock-in, we ask ourselves what trade-offs and compromises we're willing to make, and Kubernetes offers enough value that if you lock into it, right, to go with the terminology we're using, then you feel like you can be a little bit more portable than what we were doing with virtual machines.

27:42 Michael Kennedy: Sure. Yeah, I totally agree. Also it depends on how much you kind of buy into the full set of APIs, right? Like if you use Google's blob storage and hosted databases and all these things and all of the specific Google App Engine APIs, well, you're more tied into Google's cloud than if you just use Linux and straight-up stuff, write to your own disk on your own server, right?

28:07 Kelsey Hightower: Exactly, I think a lot of us have started to ask ourselves what the trade-off is worth. So, when I look at the spectrum of trade-offs around lock-in, you've got a couple of options. Let's say you used a hosted database service: they manage the database for you, they back it up, they control the versioning, and you feel it's a good compromise for your time, because you're using the open protocol. I can go and talk to the MySQL protocol in any environment that I want, so you know what? Let them host that. Now it gets a bit more tricky when you're dealing with something like, let's say, Cloud Spanner. That is its own thing, it offers its own capabilities.

28:39 Michael Kennedy: Tell people what Cloud Spanner is.

28:40 Kelsey Hightower: So Cloud Spanner is, we are challenging this idea that if you have SQL, you can't scale it horizontally. You can have it multi-regional, we can also do distributed transactions, so this would be, you have the SQL language you know and love. So imagine having a Spanner database in Asia, in California, and Europe, and you're able to write to them, and then actually have all the data be available in all the regions, and be able to do things like distributed transactions, so you're not making the trade-off between NoSQL, where you have eventual consistency, and the traditional database stuff. So that's one of those things where people look at it and say, you know what? It's really hard to maintain my own MySQL shards and all of the magic that goes behind that, maybe I'm willing to trade off a little bit for this kind of proprietary thing that only runs on one cloud provider. You just got to make that decision.

29:32 Michael Kennedy: Yeah, that sounds, I remember what that is now. That sounds really, really advantageous for global companies, right? And if you're on the internet, maybe you're global, right?

29:41 Kelsey Hightower: Yeah, what we're hearing is best of breed. So what we'll see now is a company with their MySQL management thing on another provider, and they say, we're going to leave those apps there, but you know what? We like BigQuery, so we're going to deploy some stuff and just use BigQuery on the Google side. And you're starting to see companies, you've seen this, companies are huge. They may have 50 different departments producing their own apps, their own business units. So what you'll see is each of them choose their own story, or how they want to do things, and they'll end up having usage on all platforms. And maybe there's no reason to reconcile, when they're going for best of breed.

30:18 Michael Kennedy: Yeah, this service works well for this, that service works well for that. For Cloud Spanner, it makes a lot of sense to have this geolocated, maybe sharded by region, like Asia, and Europe and the US, and so when you do a query it hits the local version. How do you get your web app near each one of those regions, so that each query or each interaction with the database is kind of local?

30:41 Kelsey Hightower: So the goal is, of course you would try to optimize in a way where you write the local stuff so it's local to Asia, but if you're in the US and let's say you failed over and you need to query some of that data, ideally we're replicating this, right? Synchronously across these regions when it's time. So the goal of Spanner is you don't want to make that trade-off. You know that failures are going to happen, and ideally you want your data in as many places as possible, so Spanner kind of gives you that ability, so you don't think about that in your app, you don't think about partitions the same way. Spanner tries its best to do the partitioning for you under the covers. It does the scaling underneath the covers for you. Depending how much money you have, you can keep scaling horizontally, and some people want that. The last thing you want to do is stop everything and repartition your database. That's a nightmare for people that have ever done that before.

31:28 Michael Kennedy: If you have petabytes of data.

31:29 Kelsey Hightower: Yeah, for some people, like you'd start a whole new cluster, and you're like, we'll just phase that thing out. So I think it's one of those things where you just got to think of a trade off opportunity and time.

31:38 Michael Kennedy: Yeah, I mean we feel like we have fast internet and these clouds are very fast and whatnot, but Amazon, I don't know about Google, but Amazon has a service where you can literally FedEx hard drives as like the fastest way to upload data.

31:51 Kelsey Hightower: Yeah, I mean you're going to see this quite a bit. The more data people produce, you need these white glove services where someone shows up with a truck, and maybe you use that as the baseline, then you just copy the big file and you rsync from there. And I think that's going to be, until we get super fast pipes, we're going to have to figure that out, but I think that will probably be the fastest way in some cases, just to ship things around at that scale.

32:13 Michael Kennedy: That's awesome. You just gave me a vision. So you know, a lot of these cloud providers like Google, and Azure, they've got, basically, what look like shipping containers, which contain the servers. I can just see trucks that just drive up with the full container full of servers, you fill it with data, it drives over to the next data center, and it unloads your server.

32:32 Kelsey Hightower: Yeah, yeah.

32:33 Michael Kennedy: Wow, that's possible, right?

32:34 Kelsey Hightower: Yeah, I think a lot of backup companies have to do this, right? If you're truly doing backups, they're offsite, and if you have lots of data, you will have a vendor come around in a truck and securely grab your storage medium, and go walk it down somewhere.

32:45 Michael Kennedy: Wow. All right, that's awesome. So, we talked about Kubernetes, and it's solving a lock in problem. One of my biggest pet peeves with working with these cloud things, especially when you deeply tie into their APIs, is I like to go work at a coffee shop, or on a plane, or I'm on a trip, and I have crappy internet access 'cause you know, I don't have a good international data plan or something. Is there a way to run this stuff locally on my laptop?

33:12 Kelsey Hightower: On your laptop you can run a project called Minikube.

33:15 Michael Kennedy: Okay.

33:16 Kelsey Hightower: So Minikube basically takes a lot of inspiration from Docker, so Docker for Mac, or Docker for Windows, this idea that you have this kind of lightweight VM, Docker will be installed there, and Minikube just says, all right, let's install all the Kubernetes components on a single node, 'cause you get the same API. So whether you have one node or five, you get the same API. And for a lot of people I guess you could do that, but me personally, I develop using my normal flow, I don't even use containers during development time. I'll use like Homebrew on my Mac, give me Postgres, give me Redis, get those protocols up, and I just write my app outside of all of the container stuff. Once it works, then I think about packaging and then making sure I can deploy it on Kubernetes. So I kind of decouple those workflows. I know some people want to make it part of just the end-to-end workflow, but I look at that like running integration tests: I'm running unit tests locally, integration tests in the integration environment, not on my laptop all the time, 'cause they may be too big or take too long.

34:10 Michael Kennedy: Yeah, that makes a lot of sense. Okay, yeah, yeah, very cool. So if I'm going to use Kubernetes, how much of an expert do I need to be in like Devops type stuff?

34:18 Kelsey Hightower: So there's two parts to this. There's, I want to install Kubernetes, and manage it and upgrade it; then you should probably learn quite a bit about Kubernetes, right? And I think a lot of people are looking for the 10 minute, like give me the tool where I can just twist all the knobs. That's not reality right now. There are some hosted offerings where you click the button and then they'll do everything for you.

34:38 Michael Kennedy: Right, that's...

34:39 Kelsey Hightower: GKE, Tectonic from CoreOS, Red Hat has OpenShift, and some things to help you with Kubernetes, but if you want to be the cluster administrator, meaning the company comes to you when Kubernetes breaks, or needs to be upgraded, or something doesn't work, yeah, you're in for a learning curve, right? Like, have you ever watched a developer use Vim for the first time? They can't get out. This is a text editor, right? And I think, when you think about a fully distributed system that has a distributed database, it has all these moving parts, you need to expect to study a little bit if you want to manage a cluster. Now if you just want to kick the tires in demo mode, then yeah, install Minikube, go at it, find some Hello World tutorials, and you can get off the ground in less than a day, for sure. But if you're just a developer and you just want to use Kubernetes, this is where someone is managing it for you, as an API. Then it's all a little bit of tooling: kubectl on your laptop, you look at a few examples of how to package your app and describe how you want it to run. There are also things like Helm. Helm is a package manager for Kubernetes; you can say helm install etcd, helm install Zookeeper, or MySQL, and that will go out and get all the things you need, like the service, the volume, the deployment object, deploy it to Kubernetes, and manage it as like a single package.

35:56 Michael Kennedy: So I can like start that pod, which is just...

35:58 Kelsey Hightower: Yeah, exactly, so you can say Helm, install MySQL, Redis and Kafka, those are the things I depend on. Then you write your app, package it up, and you can refer to them as MySQL, Redis and Kafka, because service discovery's all built in, and for a lot of people, that is the magic moment. It's like, wow, I didn't have to touch all that stuff.

36:15 Michael Kennedy: Yeah, so we're used to doing that at the application level, but you're kind of doing this at the server level, like

36:22 Kelsey Hightower: Yeah exactly.

36:23 Michael Kennedy: if you install this at the server level, it'll just be part of its infrastructure.

36:27 Kelsey Hightower: Yeah, 'cause some of these projects are huge. Like if you think about a production-ready Kafka setup, you're talking multi-node, they need to be configured, they need to be set up a certain way, and you may not want to learn all of Kafka just yet, but you may want something that's a little bit more bulletproof, so you can actually test how your client does failover. So it's really nice to have something like helm install Kafka, with three nodes, and then you can test that your client does connect to all three of them and fails over correctly.

36:51 Michael Kennedy: Yeah, that's really nice. So basically, I guess, to sum it up is, if someone is managing it within your environment, it's pretty easy to just like deploy your app and use it, but if you want to be the person that maintains it, that's pretty risky, right? Because now the entire company is sort of balanced upon your

37:10 Kelsey Hightower: Well, think about your Linux distro, right? Like, how many people really know how to build a Linux distro anymore? Most people do not know that you've got to bootstrap to get the kernel, get the userland, get the package manager, make sure all of it works. You just use Ubuntu. And you're kind of beholden to the distro, that Ubuntu works right, that the upgrades don't break everything. And infrastructure's a lot like that. Like the networking: you rely on someone having all your routes set up, so that you can actually go out to the internet and back again. So we tend to think we have control of these things, but they just work, and they're largely invisible. So if you get a Kubernetes setup that works the same way, you can almost forget about it. But if you want to be the one managing it, then it's going to be front and center.

37:52 Michael Kennedy: Yeah, of course, of course. Okay, cool. Let's talk about your keynote.

37:55 Kelsey Hightower: Awesome.

37:56 Michael Kennedy: I heard so much good feedback on the internet about your keynote. People really loved it, the comments on the YouTube video, right? So we'll link to the YouTube video, and people can go and watch it. Yeah so, maybe just sort of... Let's talk about what you did. You start out with a Flask app, right? Super simple Flask app, and you said, great, it runs here. Well, you kind of riffed a little bit on the challenges of actually running an app, right? 'Cause it's one thing to have Python, it's another to have all the dependencies and all the configuration stuff, right?

38:25 Kelsey Hightower: Yeah, so I think a lot of people, when we say, hey, just use Docker, and you just tell the person that's doing Python to just use Docker, and then they go out and look around at what's required, they're like lost, because when you look at just a single container, you've got to bundle in Apache, uWSGI, Flask, set up the unix socket, then fork them off in the background.

38:45 Michael Kennedy: How did you forget to change the permission on the unix socket?

38:48 Kelsey Hightower: Exactly.

38:48 Michael Kennedy: I don't know why it doesn't work.

38:50 Kelsey Hightower: Yeah, exactly, so by the time you do all that, and you only have one Dockerfile to express all of this, you're like, something's not right, this doesn't feel like what I would do on a VM, because on a VM, you would have two separate processes, and you think about them separately. Right, so I started with the hello world, and you go from hello world to, let's do hello world with uWSGI and Apache sitting in front. And then once people have that understood, and it's okay, now I get it, I see that you're using pip to manage dependencies. What does it take to put that in a container? And what we did is we'd say, hey, let's show two containers. There's one for your app and a second one entirely for Nginx; let's not mix the two.
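As a rough illustration of the app-only container Kelsey describes, a Dockerfile might look something like this. The file names, socket path, and base image are placeholders, not the keynote's actual files:

```dockerfile
# Hypothetical sketch: just Python, uWSGI, and the Flask app.
# Nginx lives in its own container; the two meet over a unix
# socket written to a shared volume.
FROM python:3.6-slim
WORKDIR /app
COPY requirements.txt ./          # flask and uwsgi pinned here
RUN pip install -r requirements.txt
COPY app.py uwsgi.ini ./
# uwsgi.ini would point the socket at the shared volume, e.g.:
#   socket = /var/run/app/app.sock
#   chmod-socket = 666   <- the permissions gotcha from the show
CMD ["uwsgi", "--ini", "uwsgi.ini"]
```

Keeping Nginx out of this image is exactly the "two separate processes" point: each container does one thing, just like two processes on a VM.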

39:25 Michael Kennedy: Right, so you basically set up one Kubernetes pod for Nginx and one for uWSGI plus your app, and you say let's run those two together, and one passes along...

39:37 Kelsey Hightower: So, slightly differently. The pod concept is to capture what it means to be a logical application. So we know that Nginx belongs to this one instance of my app; they're inseparable. Right, so in that case, in a pod, you can actually have a list of containers. So we know we want Nginx to be the front door, Nginx will mount the unix socket from your app container, and then those two should be expressed in the same Kubernetes manifest. That's one pod, and then I can scale that together, horizontally, if I want five copies running.

40:06 Michael Kennedy: Nice, and do those two containers always run on the same machine?

40:09 Kelsey Hightower: Yes, they run on what we call the same execution context.

40:12 Michael Kennedy: I see.

40:13 Kelsey Hightower: So imagine having a virtual machine, you would just install Nginx there, and your app there, right? So they all live in the same place, but in this case, since we're using containers, we're going to put them in their own independent root file systems, so they're still independent from that aspect, but where we do start sharing things is the network. So they share the same IP address; that means Nginx can talk to your app over localhost. Now my app is exporting a unix file socket, so it's a file. So what I can do there is say, hey, we're going to write that file to a shared file system

40:42 Michael Kennedy: So wherever that happens to be running on that unix machine.

40:46 Kelsey Hightower: Yep, so on that machine, we'll say I want a temporary host mount, and once that host mount is created by Kubernetes, it's going to have a unique name so no one else mounts it, and then we're going to give it to both containers. So in their own separate worlds they see the same path: container A, your app, writes its unix socket, and the Nginx container says, oh, there's a file in this mount point, and I'm just going to send traffic to it.
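A sketch of what such a pod manifest might look like. The emptyDir volume plays the role of the temporary per-pod mount Kelsey mentions; the names, image tags, and paths are made up for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-app          # hypothetical name
spec:
  containers:
  - name: app
    image: example/hello-uwsgi:1.0   # placeholder image
    volumeMounts:
    - name: socket-dir               # app writes app.sock here
      mountPath: /var/run/app
  - name: nginx
    image: nginx:1.13
    ports:
    - containerPort: 80
    volumeMounts:
    - name: socket-dir               # nginx reads the same socket
      mountPath: /var/run/app
  volumes:
  - name: socket-dir
    emptyDir: {}           # scratch volume shared by both containers
```

Both containers see /var/run/app as the same directory, so the unix socket the app creates is immediately visible to Nginx, while each container keeps its own root filesystem.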

41:11 Michael Kennedy: This portion of Talk Python To Me is brought to you by, us. As many of you know, I have a growing set of courses to help you go from Python beginner, to novice, to Python expert. And there are many more courses in the works. So please consider Talk Python Training for you and your team's training needs. If you're just getting started, I've built a course to teach you Python the way professional developers learn: by building applications. Check out my Python Jumpstart by Building 10 Apps at TalkPython.fm/course. Are you looking to start adding services to your app? Try my brand new Consuming HTTP Services in Python. You'll learn to work with RESTful HTTP services, as well as SOAP, JSON, and XML data formats. Do you want to launch an online business? Well, Matt Makai and I built an entrepreneur's playbook with Python for Entrepreneurs. This 16-hour course will teach you everything you need to launch your web-based business with Python. And finally, there are a couple of new course announcements coming really soon, so if you don't already have an account, be sure to create one at training.talkpython.fm to get notified. And to all of you who have bought my courses, thank you so much, it really, really helps support the show. So we started with that, and you got it working in Kubernetes. And then you said, well, let's scale it, right?

42:21 Kelsey Hightower: Right, so now it's like you've got this pod manifest, and I guess before we did that we made sure our container worked with Docker. Right? And then now we said, let's look at how to get this thing to scale inside of Kubernetes, so we make this pod manifest, and then we submit it to Kubernetes and we watch it just place it somewhere. So now that it's placed on the machine, you can now say, now Kubernetes is in control of this. If I delete it, Kubernetes brings it back, right? To my original specification.

42:44 Michael Kennedy: That's way better than getting woken up in the middle of the night.

42:46 Kelsey Hightower: Of course. Right, this whole on-call thing is overrated. Like if you've never done it before, trust me, it's not something you really want to do. And I guess to spice things up a bit, I decided to show what the world looks like when you have an API to control it all. Right, we're not just talking bash scripts. So, this is when, for the first time, I broke out the OK Google, scale this particular application, and I think a lot of people, if they've been doing this long enough, never thought we'd get to a point where you can just ask for something, and have it done the correct way. So as part of that keynote, not only did we deploy the app, we scaled it horizontally, while having my curl run in the background, showing the output from the app, and then we just did an in-place update of the application. All voice controlled. And I think people are like, wow, we're there.

43:31 Michael Kennedy: It was really such a cool demo. I mean, if you guys haven't watched it, those of you that are listening, you basically pull up your phone, and you said, OK Google, connect to Kubernetes. And you would, you'd set up an app or something in the background to wire your cluster over to...

43:47 Kelsey Hightower: Yeah, so basically some of the magic behind the scenes is, when you have something like the Google Voice Assistant or a Google Home, what you're really doing is saying, I'm going to send my speech, and then there'll be a converter that does text to speech, or speech to text. And it sends it to, in my case, I was using a thing called API.AI, and this is where you design your conversation, all the logic branches, and that does a bit of ML to take the speech and say, all right, this is your parameters, this is your intent, this is your action. And then it sends it to a webhook, and that webhook, written in Python, I had running inside of my Kubernetes cluster. It would get my intents, like, create this deployment; well, what's the name of the deployment? What's the version of the deployment? And then it will take that and interact with Kubernetes, and send a response back, which is text, and when the text hits your phone, it will read it back to you. So that's how we got the whole theatrical bit of it reading back what it was doing.
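The intent-to-action step of that webhook could be sketched roughly like this. The payload shape, action names, and response fields here are assumptions for illustration, not the exact API.AI format or Kelsey's actual code; a real webhook would also call the Kubernetes API where the comments indicate:

```python
import json

def handle_intent(payload):
    """Map an API.AI-style fulfillment payload to an action and a spoken reply."""
    result = payload["result"]
    action = result["action"]                  # e.g. "deployment.scale"
    params = result.get("parameters", {})
    if action == "deployment.scale":
        name, replicas = params["name"], int(params["replicas"])
        # A real webhook would PATCH the deployment's scale subresource here.
        return {"speech": "Scaling %s to %d replicas." % (name, replicas)}
    if action == "deployment.update":
        name, version = params["name"], params["version"]
        # A real webhook would patch the pod template's image tag here.
        return {"speech": "Updating %s to version %s." % (name, version)}
    return {"speech": "Sorry, I don't know how to do that."}

# Example request body, shaped like a fulfillment call:
request_body = json.dumps({
    "result": {
        "action": "deployment.scale",
        "parameters": {"name": "hello", "replicas": "5"},
    }
})
response = handle_intent(json.loads(request_body))
print(response["speech"])  # Scaling hello to 5 replicas.
```

The "speech" field going back is what the Assistant reads aloud on the phone, which is where the conversational back-and-forth comes from.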

44:36 Michael Kennedy: That was beautiful. It wasn't just like you could talk and it would work, it would have a conversation with you, right?

44:41 Kelsey Hightower: Yeah, so that was kind of the fun part where we would say, you know, I hope the demo Gods are on your side, and that brings a bit of personality to the system, whereas when you run a command and it either works or it doesn't work

44:51 Michael Kennedy: Status code zero, everything's okay.

44:53 Kelsey Hightower: Exactly, but when you have this thing saying, you know, hey deployment complete, that was awesome. And people are like, oh wow, that's a bit unique.

45:02 Michael Kennedy: Yeah, so you basically had one of them running in Kubernetes, and you said, OK Google, scale this five times, and five instances of it scaled up. Then the real trick, I think, that really impressed people was the zero-downtime deployment. Because I know a lot of companies, when they schedule a deployment, they put up a page that says, our site is down for maintenance, their people come in on the weekend, and they're going to try to get it back within this four-hour window, but it might take 12 hours to deploy it. Like, that's the wrong way, right?

45:32 Kelsey Hightower: Yeah, and I think that's a case where we just didn't have a cohesive system. We talked earlier in the show about having that virtual IP sitting in front of everything. So given that, and we have these controls behind the scenes, when we do our rolling update, and in part of the demo I showed how in Nginx there's a preStop hook where you can say, drain the connections before shutting down, and then we can reject new traffic. So when you combine all that stuff together, Kubernetes gives us enough cover where we can take down an instance, remove it from the load balancer, cleanly draining the connections, and replace it with a new version, and just do this over and over again 'til it's complete. That's where the magic comes in.
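A minimal sketch of the kind of Deployment that makes this work: the readiness probe and preStop hook are the pieces Kelsey mentions, while the names, image, and exact drain command are assumptions, not the keynote's actual manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello              # hypothetical name
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # bring one new pod up first...
      maxUnavailable: 0    # ...before any old pod is taken down
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        readinessProbe:    # only receives traffic once it answers
          httpGet:
            path: /
            port: 80
        lifecycle:
          preStop:         # graceful quit drains open connections
            exec:
              command: ["/usr/sbin/nginx", "-s", "quit"]
```

With maxSurge 1 and maxUnavailable 0, Kubernetes goes from three pods to four, waits for the new one to pass its readiness probe, then drains and removes an old one, repeating until the rollout is complete.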

46:09 Michael Kennedy: I see, that really does make it work well. I was wondering if you just got lucky, and like, there was that microsecond where one was down and another wasn't up yet, a transitional period where it wasn't quite right, but no, you have this very careful, graceful shutting down, and it already spins up the new one, and then it

46:27 Kelsey Hightower: Yeah, so Kubernetes understands those semantics. So what we did in the demo, when I asked Kubernetes to update the version, of course that goes through the whole pipeline, and we created a new deployment object that has the new version, and once that's put into the system, Kubernetes says, okay, let's walk through this. The first thing we're going to do is launch the new version, make sure that it's all up and running, so we go from three to, let's say, four, so the fourth one is now in the load balancer, and traffic goes flowing through, and then when we're safe, we shut down, let's say, number three. And then we give it these clean shutdown hooks, and then it's gone, but while we initiate that process, we make sure no new traffic is allowed to flow. And this is why we don't actually see those blips or hang-ups.

47:04 Michael Kennedy: That's perfect. It's so nice.

47:06 Kelsey Hightower: Yeah, and that's like out of the box, so I think now that this is what you get out of the box, we're going to start to see new functionality as the years evolve, or the years go by, people start to try new challenges, what kind of apps can we deploy?

47:18 Michael Kennedy: Maybe even new ways of working develop now that this is automatic and not somebody manually running a bunch of things, right?

47:26 Kelsey Hightower: Exactly. I did a talk in Paris at a conference called dotGo, and in that talk I had a self-deploying application. So in that demo, we talked about making a container, and you put it in the registry, you write a Kubernetes manifest, and you tell it to do things. But in that particular talk, I showed an app that deploys itself. So it's a statically linked binary, and if you run it on your machine, it'll just say hello world, but if you give it the --kubernetes flag, it will use the Kubernetes API that you may have mapped in a config file, and then what it'll do is create its own container in the background, upload itself, create its own image, and then make all of the configs necessary to run on Kubernetes, and sit there. And if you have five copies running, Kubernetes has an API to get all the logs. So you can imagine this thing is now on your laptop, running in the foreground, and it goes up to Kubernetes and says, okay, where am I deployed? Give me all the streams of logs, put them together and stream them to the laptop. So it looks like it's just running on your machine, when in reality it's running on a distributed system somewhere in the world.

48:26 Michael Kennedy: That's a really interesting point. The logs are also like a big challenge, 'cause anytime you're scaling horizontally, especially with microservices where it's going from here to there to there, right, it's like, how do I put back together these requests?

48:39 Kelsey Hightower: Yeah, so that's a thing where, when people say they want to move to microservices, it's like, we've got to have a conversation. You're going to go from a monolith, where you only have one thing to really worry about, to breaking it up into a bunch of pieces. So now you've got to stitch back together your logs, and that's where things like trace IDs come in handy.

48:55 Michael Kennedy: Yeah.

48:56 Kelsey Hightower: We also need to do things like, where is it slow? Right, first you were just kind of in memory, making all these calls between what's on the stack, or whatever the function needs to resolve to; now it's over the network. Well, is it on the same machine? Is it on a different machine? How do you know? So this is where things like tracing become very important. So a client hits your front door, you may have to talk to three other services, and you want to have some metrics to tell you, you've been spending 25% of your time on service A, that's your bottleneck, you might want to think about moving that either closer to the app, or putting it back into the app, because maybe you cut too deep.
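The trace-ID idea can be sketched in a few lines: the front door mints an ID once, and every downstream call carries it in a header so the log lines from all services can be stitched back together. The header name and helper functions here are made up for illustration, not any particular tracing library:

```python
import uuid

TRACE_HEADER = "X-Trace-Id"  # hypothetical header name

def ensure_trace_id(headers):
    """Reuse the caller's trace ID, or mint one at the front door."""
    if TRACE_HEADER not in headers:
        headers[TRACE_HEADER] = uuid.uuid4().hex
    return headers[TRACE_HEADER]

def call_service(name, headers):
    """Stand-in for an HTTP call: propagate the ID and log with it."""
    trace_id = ensure_trace_id(headers)
    return "trace=%s service=%s" % (trace_id, name)

headers = {}
front_log = call_service("frontend", headers)
backend_log = call_service("service-a", headers)  # same ID flows downstream
# Both log lines share one trace ID, so a request can be reassembled later.
assert front_log.split()[0] == backend_log.split()[0]
```

Real systems do the same thing with instrumented HTTP clients, and timing data attached to each hop is what reveals the 25%-of-time-in-service-A bottleneck Kelsey describes.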

49:30 Michael Kennedy: Right. It's too chatty. You're making 100 calls to it.

49:33 Kelsey Hightower: Oh, that is the number one thing people run into. People have like 10,000 lines of JSON just flying around their infrastructure, because you're passing all of these objects back and forth. And you're like, why is my bandwidth going through the roof? Or why is it so slow?

49:45 Michael Kennedy: Why is it slow?

49:46 Kelsey Hightower: Yeah, you just have now, JSON parsing as a service.

49:48 Michael Kennedy: JSON parsing.

49:49 Kelsey Hightower: I got it.

49:51 Michael Kennedy: Awesome. Yeah, so I guess the finale of your demo was to just say, OK Google, upgrade to version two, and it just, all five instances, just switched, magically, right?

50:02 Kelsey Hightower: And this is the nice thing about the demo, is that when I ask it to do that, and when you're building these voice integrations, you have this idea of what we call a follow up. Right, so if I say a certain word, or phrase, it will go down a different logic branch, so when it worked, I was able to say, thank you. And then it was able to respond, I got to admit, that was pretty dope.

50:22 Michael Kennedy: Yeah.

50:23 Kelsey Hightower: And everyone gave a good round of applause, and then you can just end it right there. But if it didn't work, I was not going to say thank you, because it would be silly for the thing to say, that was pretty dope.

50:32 Michael Kennedy: Thank you for ruining my demo.

50:33 Kelsey Hightower: Yeah, exactly.

50:35 Michael Kennedy: Yeah, that was really good. So people listening, definitely go check it out, we'll link to it in the show notes, it'll be great. So maybe we'll leave it there for Kubernetes. If people want to get started, they have good resources. You wrote a class, right? A course?

50:50 Kelsey Hightower: Yeah, so one of my colleagues, Carter Morgan, and I did a Udacity course based on a workshop that I used to give hands-on out in the field for a few years. And we decided to turn it into a Udacity course so people can learn at their own pace, and we kind of go through a narrative, like you saw in the keynote, where you package a container, you play with Docker a little bit, and you go from Docker to Kubernetes, and you just kind of go through the basics. And there are sections in there where you do a little hands-on, then we have some slides, and you kind of see how things work at a high level, illustrated. And the goal there is, you run at your own pace, and you really understand what this thing is trying to do for you.

51:24 Michael Kennedy: All these things you have to do hands on to learn, you can't just kick back and then know it.

51:28 Kelsey Hightower: Yeah, exactly, I think there's going to be a mix of learning. I tell people it's going to be a little while before you get it all, and everyone has a different way they like to start. I think the course is great for people who like to learn visually with a little bit of hands-on, and then use that as a kickstart into the deeper material like the Kubernetes documentation. I also have a book coming out with O'Reilly, Kubernetes: Up and Running, written with the co-founders of Kubernetes, Joe Beda and Brendan Burns. So we just put the wraps on that, so that should come out

51:55 Michael Kennedy: Oh congratulations.

51:55 Kelsey Hightower: pretty soon, yes. Writing a book is a tall order. But hopefully it helps more people kind of learn and get up and running.

52:01 Michael Kennedy: Yeah, yeah, I'm sure it will. And your course is free, right?

52:04 Kelsey Hightower: Yes, of course. We try to make as much of the documentation free, and the course definitely falls into that bucket.

52:10 Michael Kennedy: All right. So just 'cause the timing of PyCon, and all this was very, very near, I think it was the same week as Google IO. Did you...

52:17 Kelsey Hightower: Yes.

52:18 Michael Kennedy: Did you happen to go to Google IO, or I'm sure you watched it, right?

52:21 Kelsey Hightower: Yeah, I watched a bit of Google IO. That's a big event for Google. It does a lot around the consumer side of things, mobile, a lot of the ML stuff that we're doing at Google in general. But you know, PyCon's a special event, so we actually had a lot of Googlers show up there as well. You know, a lot of people are into Python. Python has a big presence on App Engine, the libraries, Google Cloud, the command line tool's written in Python. So Python has a very storied history at Google, lots of people are Pythonistas, and will always be. So that is also a big event that we all mark on our calendars at Google.

52:53 Michael Kennedy: That's awesome. What did you make of the specialized chips that Google's making, and the whole machine learning AI?

53:01 Kelsey Hightower: Well, Google's just trying to show a little bit of their secret sauce. Google has been pushing boundaries for a long time. I think there's a saying at Google: Google doesn't have all the solutions, but we definitely have all the problems. So a lot of our problems, if you think about them at a large scale, you get into the situation where you need to do specialized things, like come up with your own chips to process things faster. Right, like imagine, when you do a Google search, you want it instantaneous. If it took even 500 milliseconds, you'd be like, what's wrong with my internet access?

53:30 Michael Kennedy: What's broken?

53:31 Kelsey Hightower: Yeah, exactly, so people are very impatient at this point, so we're always finding a way to reduce perceived latency. We've got to do things that give you accurate results faster than ever. And that pushes those boundaries all the time.

53:43 Michael Kennedy: You've spoiled people for other websites. You can go to websites that seem like they're not really big and doing much, and they take two or three seconds, five seconds to respond, and you think, what is wrong with this website?

53:54 Kelsey Hightower: Oh, two seconds, people leave. If your website takes two seconds to come up, people won't put their credit card in a website that slow, they just have an instant mistrust and say, you know what? If you can't load the page quickly, I'm not putting my credit card in there.

54:06 Michael Kennedy: That's a really interesting point. I saw a study that says that, from a security perspective, people trust software that's faster more than software that's functionally accurate; they believe the faster one is more secure. Even though those are kind of cross-cutting concerns anyway.

54:20 Kelsey Hightower: Yeah, and I think there's a good reason for that. I mean, you figure that security's probably the hardest part of the problem, and if the part that should be the most straightforward isn't right, you really have your doubts about the security part.

54:31 Michael Kennedy: Yeah, it's a really, really good point. All right. So I guess we'll wrap it up. That was such a great conversation. I have two more questions before I let you out of here.

54:39 Kelsey Hightower: Okay.

54:40 Michael Kennedy: If you're going to write some code, what editor do you open up?

54:42 Kelsey Hightower: Vim.

54:43 Michael Kennedy: Vim?

54:44 Kelsey Hightower: And Vim, with no plugins. Just straight, no syntax highlighting either. It's just plain text with white letters, and no helpers, and I tend to pay attention to every line of the code. I don't have any blind spots, like my variables are this color, or my comments are this color. So I end up just reading everything, top to bottom. It's probably slower, but I feel like I know the code base better 'cause I'm not ignoring things anymore.

55:09 Michael Kennedy: Right, it's not magically auto-completing with a different word than what you meant?

55:13 Kelsey Hightower: Nope.

55:14 Michael Kennedy: Yeah, awesome. And, a notable Python PyPI package, there's over 100,000 of them now. Like one that maybe you've run across lately that was really cool?

55:23 Kelsey Hightower: Well, for the keynote I actually used the Kubernetes package, which allows you to write integrations for Kubernetes in Python. That was a very interesting package, mainly because it's generated from the Swagger API of Kubernetes. If you've never used Swagger before, it's a way to kind of instrument your API so that you have this machine representation of, hey, here's every endpoint, here's the input, here's the output. That Python library was generated from that, and then the documentation was also generated, so there's an example for every function. And it has pretty good coverage across the board of doing things in Kubernetes, so that's a well put together package, and that's what I've been using lately.
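To make the package Kelsey mentions concrete, here is a minimal sketch using the generated client (the `kubernetes` package on PyPI). It assumes you have a kubeconfig pointing at a running cluster, for example one started with Minikube, so it will only run against a real cluster:

```python
# Minimal sketch: list all pods in a cluster with the Kubernetes Python client.
# Requires `pip install kubernetes` and a configured ~/.kube/config.
from kubernetes import client, config

def main():
    config.load_kube_config()    # read credentials from ~/.kube/config
    v1 = client.CoreV1Api()      # client class generated from the Swagger/OpenAPI spec
    pods = v1.list_pod_for_all_namespaces(watch=False)
    for pod in pods.items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)

if __name__ == "__main__":
    main()
```

Because every method and model here was generated from the API spec, the same naming pattern (`CoreV1Api`, `list_pod_for_all_namespaces`, and so on) covers essentially the whole Kubernetes API surface.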

56:01 Michael Kennedy: All right, that sounds really, really cool, people want to check that out. So, good place for a final call to action, like people are excited about Kubernetes, how do they get started?

56:08 Kelsey Hightower: I think a good way to get started is make sure that you need Kubernetes. Right, there's a nice little tweet I saw the other day, a guy had this full-sized tractor trailer flatbed, and it was hauling just a little piece of wood, and this is what happens when people try to deploy a single WordPress site on a 1000-node Kubernetes cluster. Like, listen, it's okay if you want to learn, that's cool, but you don't need it for everything. So I would say if you want to learn, Minikube is a great way to start. But it's also okay to look at Kubernetes and say, you know what? It's not required for what I'm doing, and exit there. But if it is a good fit, like you're running web instances or you want to have mixed workloads, you could also do batch jobs on Kubernetes. You can do these cron jobs on a timer, so Kubernetes kind of gives you this framework for thinking about running multiple workloads on a single set of machines, and then you can start going higher up the stack. So Minikube, find examples in your language of choice, like how to run my Python app in Kubernetes, that's a really good Google search. And just take your time with it, right? You're going to have a long time to really master this thing.

57:13 Michael Kennedy: Yeah, it's something you can get started with quickly, but it takes a long time to master?

57:16 Kelsey Hightower: Exactly. Hello world. I always challenge people, I say, you know Kubernetes? This is what I want you to do: in the language of your choice, write hello world, package it, and deploy it in Kubernetes. And you watch how many people fall over, because they don't actually remember all of the commands, and they're still looking for their cheat sheet to put things together. So make sure you really understand the basics before you jump into all the complexities.
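The Python half of Kelsey's challenge can be as small as a standard-library web server, so the container image needs nothing beyond Python itself. The image name and the packaging/deploy commands in the comments are illustrative, not from the episode:

```python
# Hello-world web server using only the Python standard library.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello, Kubernetes!"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def make_server(port=8080):
    # Bind on all interfaces so the pod is reachable inside the cluster.
    return HTTPServer(("0.0.0.0", port), HelloHandler)

if __name__ == "__main__":
    # The "package and deploy" half of the challenge, roughly (illustrative names):
    #   docker build -t hello-python .
    #   kubectl run hello --image=hello-python --port=8080
    #   kubectl expose deployment hello --type=NodePort
    make_server().serve_forever()
```

The point of the exercise is doing these three steps from memory: write it, build the image, and get Kubernetes to run and expose it.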

57:35 Michael Kennedy: All right, well, fantastic. Thank you so much, it's been really good to chat about Kubernetes and congrats on the keynote, it was amazing.

57:42 Kelsey Hightower: Awesome, thank you.

57:43 Michael Kennedy: Yeah, thanks, bye bye. This has been another episode of Talk Python To Me. Today's guest was Kelsey Hightower. And this episode has been brought to you by Rollbar and Talk Python Training. Rollbar takes the pain out of errors. They give you the context and insight you need to quickly locate and fix errors that might have gone unnoticed until your users complained, of course. And as Talk Python To Me listeners, track a ridiculous number of errors for free at rollbar.com/talkpythontome. Are you or a colleague trying to learn Python? Have you tried books and videos that just left you bored by covering topics point by point? Well, check out my online course, Python Jumpstart by Building 10 Apps at TalkPython.fm/course, to experience a more engaging way to learn Python. And if you're looking for something a little more advanced, try my Write Pythonic Code course at TalkPython.fm/pythonic. Be sure to subscribe to the show. Open your favorite podcatcher and search for Python, we should be right at the top. You can also find the iTunes feed at /itunes, Google Play feed at /play, and direct RSS feed at /rss on TalkPython.fm. Our theme music is Developers, Developers, Developers, by Cory Smith, who goes by Smix. Cory just recently started selling his tracks on iTunes, so I recommend you check it out at TalkPython.fm/music. You can browse the tracks he has for sale on iTunes and listen to the full-length version of the theme song. This is your host, Michael Kennedy. Thanks so much for listening, I really appreciate it. Smix, let's get out of here.
