#9: Docker for the Python Developer Transcript
00:00 It's time to jump into the brave new world of containers and Docker with Patrick Chanezon.
00:05 This is Talk Python to Me, episode number nine, recorded Monday, May 4th, 2015.
00:10 Hello.
00:40 And welcome to Talk Python to Me, a weekly podcast on Python, the language, the libraries,
00:46 the ecosystem, and the personalities.
00:48 This is your host, Michael Kennedy.
00:50 Follow me on Twitter, where I'm @mkennedy, and keep up with the show and listen to past
00:54 episodes at talkpythontome.com.
00:57 This episode, we'll be talking with Patrick Chanezon about Docker.
01:01 Before we get to Patrick, I'm really excited to tell you that this episode is brought to
01:05 you by Codeship.
01:06 Codeship is a platform for continuous integration and continuous delivery as a service.
01:11 Please take a moment to check them out at codeship.com or follow them on Twitter, where they're
01:16 @codeship.
01:17 Now, let me introduce Patrick.
01:18 Patrick Chanezon is a member of the technical staff at Docker, Inc.
01:22 He helps build Docker, an open platform for distributed applications for developers and sysadmins.
01:28 Patrick is a software developer and storyteller.
01:31 He has spent 10 years building platforms at Netscape and Sun, 10 more evangelizing platforms
01:36 at Google, VMware, and Microsoft.
01:39 His main professional interest is in building and kickstarting the network effects for these
01:44 wondrous two-sided markets called platforms.
01:46 He worked on platforms for portals, ads, commerce, social, web, distributed apps, and of course,
01:53 the cloud.
01:53 Patrick, welcome to the show.
01:56 Hey, Michael.
01:57 Thanks for having me.
01:58 Hey, I'm really glad that you're here, and I'm super excited to have Docker on the show.
02:04 Ever since I learned about Docker... there are a few times I've learned about some technology,
02:09 and it has completely taken me over, and somehow I'm obsessed with learning it.
02:14 And when I came across Docker, when you guys announced it, I don't know, was
02:19 it a year ago?
02:20 Two years, about two years ago, right?
02:22 Yeah, I came across it, and I'm just, oh my gosh, I need to learn this.
02:27 I need to understand this, because this is going to be huge.
02:29 And I really think Docker and these container technologies are going to change the way we
02:34 build and deliver software.
02:36 So let's start with telling the world, what is Docker, for those who don't know?
02:41 Sure.
02:42 So Docker is an open source project that was announced by Solomon Hykes, the founder of
02:50 dotCloud and Docker, two years ago at PyCon.
02:53 So it's relevant for Python developers, because it all started at PyCon.
02:59 And so Solomon and his team were building this platform as a service called dotCloud, and they
03:06 were struggling with adoption.
03:08 At that time, I was at Google, working on Google App Engine.
03:13 There was Heroku, which had been bought by Salesforce.
03:17 And so dotCloud had trouble attracting developers to their platform as a service, which allowed
03:23 you to deploy applications in PHP, Python, and multiple languages.
03:28 Well, at the time, a platform like Google App Engine, for example, was limited to Python
03:33 and Java as runtimes, and Go as well.
03:37 And so they were struggling with adoption, but they had this gem inside of dotCloud that they
03:43 were using to build their platform as a service that was called Docker.
03:49 And that was an easy way to package and distribute, build, ship, and run Linux containers.
03:58 And so they open-sourced the project at PyCon two years ago.
04:03 And then it took the world by storm.
04:05 All developers started adopting it.
04:08 And the whole industry restructured around that notion of Linux containers.
04:14 So Docker essentially is a set of tools that allow you to build, ship, and run distributed applications based on Linux containers.
04:25 So there's the Docker command line, which is a single binary that has both a client and a daemon.
04:34 And the daemon lets you get images for containers on your local machine or on a server and run them in a Linux container, like isolated from other workloads on that machine.
04:49 So you can have your own file system, your own users, your own network namespace.
04:54 And so Docker is based on Linux namespaces and cgroups.
04:59 And then you have Docker Machine that lets you provision machines running the Docker Engine on any cloud provider, like Google, Amazon, Microsoft, but also locally on your machine on VMware, VirtualBox, or Hyper-V.
05:17 So there's Docker itself, and Docker Machine to provision the machines.
05:23 Then there's another project called Docker Compose that allows you to assemble all these containers that you're using to build your microservices into a single unit that you can run on your laptop for development or ship to the cloud as a single unit.
05:41 So typically you would have, for your typical Python app, maybe you're using Django or Flask.
05:47 You would have one container that has Django, your Django application with its code in it.
05:53 And you would link that to another container that has Redis or MySQL.
05:57 And then you can start all this with a single command, Docker Compose.
06:03 So with Docker Compose, you can just start it all.
06:06 Yes, exactly.
06:07 So in Docker Compose, you say, I have my image for my Django app.
06:12 Let's call it, like, my-django-app.
06:18 I want to launch that with port 80 on the server on which it's running mapped to port 8080 internally, where my app is running.
06:28 And then I want to tie that to a Redis instance that's using the Docker image Redis that's on Docker Hub.
06:35 And then inside of my code, I'm just going to refer to Redis and automatically it will just tie that to the other container that's running.
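A minimal sketch of the Compose setup Patrick describes, in the docker-compose.yml v1 format of that era; the image name my-django-app and the internal port are placeholders rather than anything from the show:

```bash
# Write a two-service Compose file: a web container linked to Redis
cat > docker-compose.yml <<'EOF'
web:
  image: my-django-app
  ports:
    - "80:8080"
  links:
    - redis
redis:
  image: redis
EOF

# One command starts both containers and wires them together;
# inside the web container, Redis is reachable at the hostname "redis"
docker-compose up -d
```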
06:53 Codeship is a hosted continuous delivery service focused on speed, security, and customizability.
06:59 You can set up continuous integration in a matter of seconds and automatically deploy when your tests have passed.
07:05 Codeship supports your GitHub and Bitbucket projects.
07:08 You can get started with Codeship's free plan today.
07:11 Should you decide to go with a premium plan, Talk Python listeners can save 20% off any plan for the next three months by using the code TALKPYTHON.
07:19 All caps, no spaces.
07:22 Check them out at codeship.com and tell them thanks for sponsoring the show on Twitter, where they're @codeship.
07:29 So before we get into the details of how we run all that, you know, it's interesting that Docker kind of grew out of what was supposed to be kind of a larger project, right?
07:48 Yeah.
07:48 A platform as a service company, if you will.
07:50 That was just how you guys were trying to internally achieve greater density for your hosting.
07:55 Yeah, that's exactly right.
07:57 And the notion of running Linux containers has been around for a really long time.
08:03 Or actually, containers themselves have been around for a long time, with FreeBSD jails.
08:10 Around 2005, there was already the notion of Solaris containers in Solaris.
08:18 So that notion of containers has existed for a long time.
08:21 And that's the way Google is running its workloads inside of Google data centers, using Linux containers for isolation in order to be able to run workloads with a very high density,
08:36 because it's using operating system virtualization in Linux.
08:43 So that's a super important difference or distinction to make between these containers and virtual machines.
08:53 So if I create a virtual machine, maybe it's a couple of gigs for the file system and the operating system and all my data and everything I've done to it.
09:02 Maybe it uses two gigs of RAM when I start and stop it.
09:05 It's got to boot up like a regular machine.
09:07 You know, it's a big blocky thing.
09:10 How big are these Docker containers?
09:11 Yeah, they can.
09:12 So Docker containers compared to virtual machines can be much smaller.
09:16 They can be big as well, but usually they're much smaller.
09:22 And another advantage they have over VMs is that the image format for Docker images, they are layered.
09:32 So you build a layer, you can say, oh, I'm going to inherit from the Ubuntu layer, and then I'm going to install Python on that.
09:41 And then I'm going to use pip to install my requirements on that.
09:45 Each of these instructions generates a different layer, and when you're using docker pull to pull the image or docker run to run that image,
09:55 what it's doing is that it's looking in its local cache to see, oh, hey, I have Ubuntu already in there.
10:02 So I'm just going to download the image that adds Python to it and then the one with the requirements for my app.
10:09 So there's an additional density in terms of storage as well that goes with it.
10:15 Whereas when you're packaging your app for a VM, you're just building that monolithic VM image with everything from the version of Linux or Windows that you're running in there,
10:26 the full operating system, plus your application code and packaging, and every VM you start has all that data.
10:35 Whereas if you're running, like, 10 different Python applications packaged as containers on one host,
10:43 you would have the data for the Ubuntu base just once, and then maybe one layer for Python,
10:53 and then small layers 10 times for the requirements and the code base for each Python application.
11:00 So the images are lighter.
11:02 And what that means is that they download much faster.
11:05 Another aspect is that at runtime, it's much faster.
11:09 So starting a container, once your image is cached locally, takes just a few milliseconds.
11:15 Whereas, as you know, starting a VM takes a good 20 to 30 seconds in good cases.
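As a rough sketch of the layering just described: each instruction in a Dockerfile like the one below produces its own cached layer, so only the layers that change get rebuilt and re-downloaded. The file names (requirements.txt, app.py) and the image tag are hypothetical:

```bash
cat > Dockerfile <<'EOF'
# Shared base layer, pulled once and cached
FROM ubuntu:14.04
# Python layer
RUN apt-get update && apt-get install -y python python-pip
# Dependency layer: only rebuilt when requirements.txt changes
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# Application code, the layer that changes most often
COPY . /app
CMD ["python", "/app/app.py"]
EOF

docker build -t myapp .
# Show the stack of layers and the size each one adds
docker history myapp
```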
11:24 Yeah, that's really amazing.
11:25 And I like the layering concept.
11:28 So every layer you add brings you closer and closer to the specific type of operating system and environment you need for your app.
11:36 So at first, it's just Linux, and that could be anything.
11:40 Yeah.
11:40 And then it's Linux plus your version of Python.
11:43 And so it's a little closer.
11:44 And then maybe it's that plus Nginx.
11:47 And then it's a little closer.
11:49 And then it's that version of Linux plus Python plus Nginx plus all the packages your web app depends upon.
11:56 And then finally, maybe it's your code.
11:57 And so you can use every one of those layers as a building block to create new ones and things like that, right?
12:04 Yeah, it's very composable.
12:05 So once you have created your base layer, anybody in your team or if you make it public on Docker Hub can inherit from that and add their own stuff.
12:17 Or you can go up in the chain of inheritance of layers and say, oh, I don't like the official Python layer.
12:27 I'm going to see what operating system they're using to run it, and maybe I'm going to create my own.
12:33 So you can branch off at any time.
12:35 It's a system that gives you total flexibility.
12:39 And at the same time, to me, the big differentiator is the speed aspect, the fact that downloading images is much faster because images are layered.
12:53 And starting a container is like 10 times or maybe 100 times faster, depending on the cases.
13:00 When you make things faster, usually it changes the experience.
13:04 And that's what developers found or realized when they start using it.
13:10 When a developer starts to use Docker, usually they have this aha moment.
13:13 Oh, then I can build all the time and create my own images and start breaking my monolithic application into smaller microservices that are easier to compose with something like Docker Compose.
13:26 So it goes hand in hand with two big shifts in software engineering that happened in the past five years.
13:37 One of them is microservices, the practice of breaking up your application into smaller services that can be scaled and developed independently.
13:47 And the other one is DevOps, where dev teams and ops teams are working together, using the same tools to achieve continuous integration and continuous delivery of their application.
14:05 And so the container is a really nice unit of collaboration between devs and ops.
14:09 I think you're right that the size makes that really even possible to some degree.
14:16 So let me give you my mental model for the DevOps story.
14:20 And maybe you can tell me how I have that slightly wrong or improve on it or whatever.
14:24 So in my mind, I'm thinking there's some developers building their app and probably they have some kind of continuous integration set up where they check in, some automated build happens, some test run, stuff like that.
14:35 So instead of just checking in and running my code and my tests directly on, say, the build server,
14:41 I could create a container.
14:42 And when I do a check in, the build server could actually create one of these containers, push my code into it, run that code in that actual container environment.
14:52 And if I'm happy, maybe I could automatically push that out to some cloud provider.
14:57 And the thing that's actually running is the thing I tested and built locally.
15:02 Yeah.
15:04 So there's no sort of environmental differences or anything like that.
15:08 It's just that is the thing.
15:09 I can actually ship almost the infrastructure.
15:11 Yeah.
15:11 I think you got it right.
15:13 One of the things I would add is that this kind of factoring out of applications into microservices works really well for people who follow the 12-factor app manifesto that Heroku published a few years ago
15:31 where you try to build your apps as stateless as possible.
15:35 You run data services in a centralized way.
15:38 And then you configure the connectivity between different microservices with environment variables.
15:47 And so typically what would happen is that you would run locally as a developer on a single box with a test MySQL container
15:57 that has a volume attached with test data in there.
16:01 And then when you push to a build server, the build server deploys that into a cloud QA environment where automated tests are run.
16:13 And then there the container would be tied to a test MySQL database that maybe is maintained by operations.
16:21 So this is where the split between dev and operations happens where the devs specify all the requirements that they have for their app in the container.
16:31 And then the ops are providing the right plugs to the right services in different environments.
16:37 And then eventually you would push that to production where typically there would be someone, a sysadmin, that would decide, hey, this build is ready to go to production.
16:50 I'm just going to trigger the fact that in production the image is going to be pulled.
16:56 It's the same image that I had on my dev box.
16:59 But there it's configured to go to my production MySQL cluster.
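A sketch of that "same image, different plugs" idea using environment variables; the image name myorg/orders and the DB_HOST variable are made up for illustration, and the MySQL tag is just an example of the era:

```bash
# Dev laptop: link the app to a throwaway MySQL container with test data
docker run -d --name mysql-test -e MYSQL_ROOT_PASSWORD=secret mysql:5.6
docker run -d --link mysql-test:db -e DB_HOST=db myorg/orders

# Production: the very same image, pointed at an ops-managed MySQL cluster
docker run -d -e DB_HOST=mysql.prod.internal myorg/orders
```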
17:03 That's really amazing.
17:07 So how do I deal with persistent data?
17:11 You know, I've got this.
17:12 Let's say I create one of these Docker containers that contains like MongoDB.
17:18 MongoDB has to have a data store somewhere on the file system.
17:22 How do I kind of deal with versioning that?
17:25 Is that actually in the container?
17:26 Is that somewhere else?
17:28 How do I, you know, if the database schema changes as the new container comes up, what do I do?
17:34 That seems a little tricky.
17:35 Yeah, yeah, yeah.
17:36 Definitely.
17:36 So Docker has this notion of volumes, where in your container you can mount a directory that is on your host machine.
17:47 And this is where you would keep the data.
17:49 And typically there's also another notion that is called a data container where you just launch a container that just has a volume attached to it.
18:00 And then you mount the volumes from that container into your runtime container.
18:07 And then changing databases, like from development to production, would just be a matter of changing the container you're mounting your volumes from.
18:18 So that's the notion of volumes that lets you have persistent containers.
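Roughly what those two volume patterns look like on the command line; the host path and container names are placeholders:

```bash
# Bind-mount a host directory into the container for MongoDB's data files
docker run -d -v /srv/mongo-data:/data/db --name mongo-a mongo

# Data-container pattern: a container that exists only to own a volume...
docker create -v /data/db --name mongo-data busybox
# ...and a runtime container that mounts its volumes
docker run -d --volumes-from mongo-data --name mongo-b mongo
```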
18:24 There's one issue with it in Docker today that lots of people are running into when they try to deploy in clusters, which is that volumes can be attached only to local file systems, on the host on which the container is triggered.
18:42 And there's a company called ClusterHQ, which is building an add-on to Docker called Flocker.
18:49 We're working with them to make it a plugin, like a real plugin in Docker.
18:55 So there's work underway to make that happen right now.
18:59 And what Flocker is doing is that they're addressing this issue of moving persistent containers from one host to another.
19:06 If you're deploying in a large orchestrated Docker setup, when the orchestrator tells the engine, hey, I want to move that
19:20 Docker container that has a persistent volume from host A to host B, what they're doing is that they're using ZFS under the hood,
19:31 and they're doing a ZFS snapshot, which are pretty fast.
19:34 Then they're moving the snapshot to the other machine, which can take a few hours.
19:38 And once the snapshot is moved, they can stop your container, take a diff of the snapshot, and push it to the other side, which is faster.
19:47 And then they restart your container on the other side.
19:50 That sounds really helpful, because it seems like there's a lot of complexity as more and more containers get involved
19:54 and more horizontal scale gets in there.
19:56 Yeah.
19:56 Yeah.
19:57 Container persistence is really an interesting issue where there's lots of research right now and lots of companies looking into that and how to enable this.
20:05 Excellent.
20:06 So you've spoken a little bit about Docker Hub.
20:08 Can you tell the listeners what that is?
20:10 Yeah, definitely.
20:11 So Docker itself is a client and a daemon that's running either on your local machine or on your server.
20:19 But what you're building with the Docker client is an image of the system on which you're running.
20:26 Once your image is ready, typically you would push it to Docker Hub, which is a hosted service by Docker.
20:32 So it's software as a service.
20:34 It's a hosted service from Docker that lets you share your images, either with the world,
20:41 where you can make your image public.
20:42 And there are lots of open source and community images in there.
20:47 Or you can host them privately.
20:50 So Docker Hub is free for public images.
20:54 But if you want a private image, you can buy a subscription that allows you to have several private images for your own projects.
21:03 And then your colleague can just download the images from the hub as well.
21:08 And it's integrated with Docker client.
21:10 We also have a version that's called Docker Hub Enterprise,
21:14 which enterprises who want to adopt Docker for their internal workflows can install behind the firewall.
21:20 So they have complete control over the policies for the images that they're creating.
21:25 Right.
21:25 That makes sense.
21:26 You might not want to just go grab a bunch of public images and download them.
21:29 Is this actual source code or actual binary executables that I'm grabbing?
21:34 Or is there another way to get an image from Docker Hub?
21:37 So what you grab from Docker Hub, these images are actually tarballs that are gzipped for each of the layers, with a bunch of metadata explaining to the Docker Engine how to reconstitute the image of your operating system.
21:53 Okay, excellent.
21:54 So it might tell me how to install Nginx or how to install Python 3 or something like that.
21:58 Yeah, exactly.
21:59 So we host official Docker images for many open source projects.
22:05 So for example, for Python, there's an excellent one.
22:08 I used to take images from the community initially.
22:11 And then I switched to the official image, which is just called python.
22:15 And so in order to run Python in Docker, you just say docker run python.
22:22 Eventually, you could qualify that to say if you want Python 2 or Python 3.
22:27 Or there are tags on these images that let you specify a more fine-grained version of Python.
22:34 And once you run the image, it comes with a...
22:40 I don't remember which flavor of Linux.
22:42 And then Python is all installed, with the tools for developing or for running Python workloads.
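Getting a Python shell or running a local script with the official image looks roughly like this; the tags shown are examples from that era, and your-script.py is a placeholder name:

```bash
# Interactive Python 2 and Python 3 shells, nothing installed on the host
docker run -it --rm python:2.7 python
docker run -it --rm python:3.4 python

# Run a script from the current directory inside the container
docker run -it --rm -v "$PWD":/usr/src/app -w /usr/src/app python:3.4 python your-script.py
```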
22:49 That sounds like a really easy way to get started for the Python developers.
22:52 That's cool.
22:53 Yeah, and that's one of the things that I love.
22:56 So I joined Docker just two months ago.
23:01 Before that, I was at Microsoft where my role as part of the developer experience team
23:08 was to bring the whole Docker partner ecosystem onto Azure.
23:12 And so as part of doing that, I built a bunch of tools to deploy clusters on Azure.
23:19 And the tool I built for that was in Python.
23:22 And I interacted with lots of startups using Python.
23:25 And when I started talking about my tools to other Microsoft people and I started giving trainings about that,
23:32 these guys were on Windows.
23:34 They didn't have Python installed.
23:36 They said, how do I run that script?
23:38 And in order to make that easier, I just bundled all my scripts in a Docker image.
23:45 And I just told them, hey, you just run that image and you'll have Python 2 installed,
23:51 all the requirements for the tool that I built, like the Azure Python SDK, all pre-installed.
23:57 And you don't need to install anything.
23:59 It's just a Docker run away from running.
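A hedged sketch of how a CLI tool like that could be packaged so colleagues only need Docker; the file names, image tag, and entry point are hypothetical, and requirements.txt would list dependencies such as the Azure Python SDK he mentions:

```bash
cat > Dockerfile <<'EOF'
FROM python:2.7
# Install the tool's dependencies (e.g. the Azure Python SDK)
COPY requirements.txt /tool/requirements.txt
RUN pip install -r /tool/requirements.txt
# Add the deployment script and make it the container's entry point
COPY deploy.py /tool/deploy.py
ENTRYPOINT ["python", "/tool/deploy.py"]
EOF

docker build -t myorg/azure-deploy .
# Colleagues run it with a single command, no local Python required
docker run --rm myorg/azure-deploy --help
```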
24:03 That's really cool.
24:05 It sounds like an interesting world to live in, working with a lot of the Windows stuff.
24:09 And actually, one of the things I'd kind of like to talk about in a minute is the Docker view from different operating systems.
24:16 I mean, if I'm on Linux, it makes perfect sense.
24:18 It's kind of like a Linux thing.
24:19 I can run Docker right on Ubuntu, things like that.
24:22 Yeah.
24:23 But if I'm on, say, let's say, on my Mac, what do I do there?
24:27 Yeah, so on a Mac, you can download an installer from docker.com.
24:31 So we always release all our versions on Linux, Mac, and Windows.
24:38 And so you download that installer and what it's doing because the engine relies on primitives in the Linux kernel.
24:47 It needs to run on Linux.
24:48 And so we have this tool that's downloaded as part of the installer that's called Boot2Docker
24:53 that starts, behind the scenes, a VirtualBox VM running Linux where the engine is installed.
25:01 And then you run the client on your Mac, and your client is targeting that VirtualBox instance.
25:07 And then by changing the environment variables, you can just target, instead of targeting your local machine,
25:15 your local VirtualBox instance on your Mac, you can target a box in Azure or Google Compute Engine or Amazon.
25:24 So that's pretty easy.
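On a Mac (or Windows) at the time, that workflow looked roughly like the following; the machine name azure-box is a placeholder for a cloud host provisioned with Docker Machine:

```bash
# Start the Linux VM that hosts the Docker daemon and point the local
# client at it via environment variables
boot2docker up
eval "$(boot2docker shellinit)"
docker ps

# Swap the environment to target a cloud host instead of the local VM
eval "$(docker-machine env azure-box)"
docker ps
```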
25:26 Yeah, in terms of operating systems, one of the things that I'm super happy about, that happened
25:31 a few weeks ago and was announced at Microsoft's Build conference last week, is that Microsoft really bet heavily on Docker.
25:44 And what they've done is that they have contributed a lot of code to Docker so that now the Docker client runs on Windows.
25:52 So you can run the Docker client and then target Linux machines.
25:56 You can use Boot2Docker on Windows as well.
25:59 So you can run your Linux workloads on Windows with that and develop.
26:02 That's cool.
26:04 So that kind of puts it on par with OS X maybe.
26:06 Yeah, exactly.
26:07 Puts it on par with OS X.
26:09 But they're going to run the Docker client.
26:20 So that's cool.
26:30 So that's really big news.
26:32 So Microsoft likes the Docker workflow for devs and ops so much that they wanted to give that to Windows developers and Windows sysadmins.
26:43 So they modified Windows Server to have the same isolation primitives that exist in Linux.
26:50 And then they implemented the Docker daemon in terms of that.
26:53 So that's not out yet.
26:56 They just gave a demo last week.
26:58 But it's going to ship with the next version of Windows Server.
27:02 And there will be a preview this summer.
27:05 So that means that you'll be able to develop .NET applications running on Windows server inside of Docker containers.
27:13 Which means that for developers and sysadmins in enterprise who are typically working half and half with Java and .NET,
27:21 they'll be able to just deploy or to build their microservices in the technology that fits the best.
27:30 Yeah, that's really cool.
27:32 I'm glad you brought that up.
27:33 I was watching the Build conference last week as well.
27:35 And just for the listeners, because of time shifting, we're recording on Monday, May 4th.
27:40 And Build, I think this was announced last Wednesday, so four or five days ago.
27:43 And one of the very first things they announced at the entire conference was your CEO from Docker, Ben Golub, coming out and saying, you know,
27:55 look, we're working with Microsoft doing amazing things.
27:57 And we started out maybe trying to bring it on par with OS X.
28:01 But the Microsoft guys wanted to go even farther and actually add features to Windows, the OS,
28:07 that would allow it to have containers for its operating system and its processes.
28:11 Because until this moment, there was no possibility of something like Docker on Windows for Windows apps.
28:18 Yeah, yeah, it's pretty incredible.
28:20 So, I mean, to me, that means like now we have the two major operating systems that run software on the servers in the data centers,
28:28 both supporting containers, both working with Docker.
28:31 And that's a huge boost for you guys, right?
28:32 Yeah, yeah, it is.
28:33 It is definitely.
28:34 And I think it's very good for the Windows ecosystem as well, because it will bring them the same kind of tools that Linux developers and admins have been enjoying for a long time.
28:45 And it really allows for more dense workloads on your servers and in the data center.
28:51 Yeah, that's really cool.
28:52 So the DevOps story just gets way better over there, I think.
28:54 Yeah, exactly.
28:55 So I'd like to talk a little bit about Python with you.
28:58 But before we do, what do you think the future holds for Docker and containers?
29:01 Where are things going in this whole world?
29:04 Yeah, so we have DockerCon, which is our big conference that's happening in June.
29:09 And one of the things we've been working on, as I explained to you before, Docker takes care of a lot of aspects, but it still needs to mature in lots of areas.
29:20 And especially when people start putting it in production, there are lots of needs that are still unfulfilled by the tool.
29:27 So one of the big aspects that we've been working on recently is to make it pluggable.
29:33 So we're working on a plugin model with people like Weave, who are building virtual networking, or ClusterHQ, who are building portable volumes,
29:43 so that they can run as plugin in Docker Engine, and so that you would be able to define networks for your containers that span your own data center,
29:57 and maybe Azure and Google, or like two public clouds.
30:02 So that's on the Weave side, and on the ClusterHQ side, it's like moving volumes from one to the other.
30:08 We're also working on making sure Docker Machine lets you provision Docker daemons everywhere.
30:14 There's been lots of improvement to Docker Compose.
30:18 And also, one of the big topics that's happening right now is once you start building all your apps as microservices running in containers,
30:26 the next step is how do you orchestrate them?
30:29 And I have a whole deck on that that I'd send you where there's at least six or seven containers in that space.
30:39 Again, as I used to say, the whole industry reorganized around this.
30:44 Before, people were building platforms as a service, and that whole industry refocused towards orchestrating Docker containers.
30:53 So the unit now is the microservice package as a container.
30:58 And in the orchestrators, you have Docker Swarm, which is one from Docker itself.
31:04 So it's a native orchestration solution that works multi-cloud.
31:07 There's Kubernetes from Google that's inspired by what's running internally at Google, and that's open source.
31:15 There's Apache Mesos, done by Mesosphere.
31:18 And they're running in production in lots of different places.
31:22 They're coming from Twitter originally.
31:24 I think Netflix runs it as well.
31:26 And there are smaller players like Deis or Tutum, which is running on Azure as well.
31:34 So there's lots of activity in that space of our Docker container orchestration.
31:38 Yeah, that seems like where the real challenges lie.
31:41 And if that gets easier and easier, then, you know, the sky's the limit.
31:45 So that's amazing.
31:46 Yeah, on the Docker side, we're trying to make the whole experience easier, like with machines,
31:52 like one command to provision new machines in different clouds.
31:55 And then it's integrated with Swarm.
31:58 So you can say, oh, I want all these machines to be in a single cluster.
32:01 And then with Docker Compose, you can say, oh, I want to, or there's the start of an integration
32:08 where you can say, my app that's composed of multiple services, please deploy that to that Swarm instance.
32:14 And then it will just provision it in the right place in the cluster where there's some room.
32:19 So we're really trying to make the experience much easier for developers and sysadmins.
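A hedged sketch of the Machine-plus-Swarm flow being described, using the VirtualBox driver and the token-based discovery Swarm offered at the time; the machine names are placeholders and the Compose integration was still experimental then:

```bash
# Create a cluster ID for discovery
TOKEN=$(docker run --rm swarm create)

# Provision a Swarm master and a node with Docker Machine
docker-machine create -d virtualbox --swarm --swarm-master \
  --swarm-discovery "token://$TOKEN" swarm-master
docker-machine create -d virtualbox --swarm \
  --swarm-discovery "token://$TOKEN" swarm-node-1

# Point the client at the whole cluster and inspect it
eval "$(docker-machine env --swarm swarm-master)"
docker info
```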
32:24 Cool.
32:25 Yeah.
32:25 The more you guys encourage and make it easy to break our apps into small little services,
32:30 then the challenge becomes linking them together, right?
32:33 So it sounds like you're doing some great work there.
32:35 Fantastic.
32:35 Yeah.
32:36 And one more advantage of this microservices approach that Docker enables
32:42 is once you start breaking your app in microservices, you can start innovating in terms of what language
32:51 and platform you're using for each of the microservices.
32:54 So as opposed to being stuck, for example, with a Django app running in Python 2.7 and you made that choice
33:02 and there's some legacy code that cannot be ported to Python 3, you could say, oh, I just break my app in 10 different microservices
33:10 and for this microservice, I don't have any legacy.
33:12 I'm going to write it in Python 3.
33:14 And for the developers, it's just a matter of saying, oh, I'm inheriting from Python 3 as opposed to Python 2 when you're creating your image.
33:25 Oh, that's really neat.
33:26 So maybe it's a gateway for a little more flexibility in the technology to...
33:31 Yeah.
33:31 And I think especially for the Python community where that big gap between Python 2 and Python 3 developers happen,
33:40 I think this breaking up of large monolithic applications into smaller microservices where you have more freedom to test new stuff
33:49 may be a good way of introducing Python 3 to your environment.
33:53 For sure.
33:53 Do you see anything like that happening in the data side of things?
33:57 So, you know, if I'm using MySQL and relational databases in some big monolithic way,
34:02 like, could my different microservices be using, like, NoSQL, say, MongoDB or Redis here and there,
34:09 and then maybe one other part still using a relational database?
34:12 Oh, yeah.
34:12 I've seen a lot of that where people are starting...
34:15 Once your microservices...
34:17 Your microservice has a smaller footprint and, like, a single functional goal,
34:23 then it's easier to say, oh, I'm going to back that up with Redis or MongoDB
34:28 as opposed to storing everything in MySQL because maybe I'm building an e-commerce application and I need the relational aspect for parts of it,
34:40 but maybe for, like, tracking referrals or the social media aspect of how people are tweeting about products on my website,
34:51 maybe these small services can use different types of databases or data backends.
34:57 Yeah, that seems like a perfect way to choose the right database for the right job there.
35:01 Cool.
35:03 So you guys, do you use Python internally?
35:06 You know, obviously you can run Python in Docker containers really well,
35:09 but do you guys use it as a language for yourself?
35:11 Actually, we do.
35:12 Not everybody.
35:14 So Docker itself is written in Go as well as Docker Machine and Swarm.
35:20 But Docker Compose is actually written in Python, so it's a Python project.
35:26 Nice.
35:26 And how's that?
35:26 People are pretty happy with Python internally?
35:28 Yeah, and I think they even tried to port it to Go at some point, but then they said, hey, we have high velocity with Python,
35:37 we have a code base that people understand well, so let's stick with that.
35:42 So I think Compose for now will stay in Python, and the developers who are working on it are pretty happy with it.
35:49 Yeah, that's really cool.
35:50 So if, you know, I'm a listener and I'm out there hearing this, and I'm like, this is awesome, I have to get started.
35:56 What do I do?
35:56 How do I get started?
35:57 Yeah, to get started, you go to docker.com.
36:00 We have lots of tutorials.
36:02 If you're on a Mac, you just download Docker for Mac. For Linux,
36:06 depending on the distro, either there are packages or you can download the binaries directly.
36:11 And for Windows, there's an installer as well.
36:13 There's also another project you may want to take a look at, that's called Kitematic.
36:18 That one runs only on OS X, and it's a Docker client GUI.
36:25 So it shows you all the containers that you can find on Docker Hub, and you can say, oh, just start this one,
36:32 launch a terminal where I can script it or connect to it, and it shows you the logs, the volume that you have attached,
36:41 so you can go edit the files and all that.
36:43 So that's a very easy way to get started, Kitematic, I would say.
36:48 And else?
36:49 Yeah, I have Kitematic installed, and I enjoy it.
36:51 It's nice.
36:52 It's clean and works well.
36:53 Yeah, there's lots of stuff to add in there, but it's really a good start, and it's very easy to get started.
36:59 Excellent.
37:00 So I think it's probably a good place to call it a show, Patrick.
37:05 What else would you like to add while we've got a chance to talk to everyone?
37:08 I mean, thanks very much for the opportunity.
37:10 I really encourage Python developers to go take a look at that.
37:16 To me, one of the angles that's most interesting for Python developers is giving you the portability for your customers to run your code anywhere
37:26 without having a complicated environment to set up.
37:30 As long as they have Docker, they can just start your app packaged as a container.
37:36 And the other angle that's, I think, pretty important is starting to introduce Python 3 microservices into your app.
37:44 If all your apps are packaged as containers, it just doesn't matter.
37:49 You don't need to set up complicated environments.
37:52 It's just a change of the tag in the FROM directive in your Dockerfile
38:00 to start using Python 3.
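In practice, that "change one line" step is just editing the FROM directive and rebuilding; the tags and image name here are illustrative:

```bash
# Switch the base image of an existing Dockerfile from Python 2 to Python 3
sed -i.bak 's/^FROM python:2.7/FROM python:3.4/' Dockerfile
docker build -t my-service:py3 .
```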
38:02 I am very excited about Docker and this whole container world.
38:07 And I appreciate you taking the time to share it with everyone.
38:09 Thanks, Michael.
38:10 You're welcome, Patrick.
38:11 Bye.
38:12 Bye.
38:13 This has been another episode of Talk Python to Me.
38:16 Today's guest was Patrick Chanezon, and this episode has been sponsored by Codeship.
38:22 Please check them out at codeship.com and thank them on Twitter via @codeship.
38:26 Don't forget the discount code for listeners.
38:28 It's easy.
38:29 TALKPYTHON, all caps, no spaces.
38:31 Remember, you can find the links from this show at talkpythontome.com slash episodes slash show slash nine.
38:40 And if you're feeling generous, please check out our Patreon campaign at patreon.com slash mkennedy to contribute and support the show.
38:48 And be sure to subscribe to the show.
38:49 Visit the website and choose subscribe in iTunes or grab the episode RSS feed and drop it into your favorite podcatcher.
38:56 You'll find both at the footer of every page.
38:58 This is your host, Michael Kennedy.
39:00 Thanks for listening.