
#33: OpenStack: Cloud computing built on Python Transcript

Recorded on Friday, Oct 16, 2015.

00:00 You've probably heard of infrastructure-as-a-service cloud providers such as Amazon's AWS and to a lesser degree Microsoft's Azure cloud platform. But have you heard of OpenStack? It's an incredibly powerful infrastructure-as-a-service platform where you can buy it as a service or install it in your own data center to build your own private cloud. Yeah, private cloud, that's a thing.

00:00 Flavio Percoco who works at Red Hat and spends his days writing Python code for OpenStack is here to tell us all about it!

00:00 This is Talk Python To Me, episode #33, recorded October 16th, 2015

00:00 [music]

00:00 Welcome to Talk Python To Me, a weekly podcast on Python- the language, the libraries, the ecosystem and the personalities.

00:00 This is your host, Michael Kennedy, follow me on Twitter where I am at @mkennedy, keep up with the show and listen to past episodes at talkpython.fm and follow the show on Twitter via @talkpython.

00:00 Let me introduce Flavio. Flavio Percoco is a software engineer at Red Hat, where he spends his days working on OpenStack. In his spare time, he speaks at conferences, contributes to Rust, plays with MongoDB, smokes his coffee and drinks his pipe.

01:34 Flavio, welcome to the show.

01:35 Thank you Michael. Thanks for having me here, it's really exciting.

01:37 Yeah, it's really exciting, and it's great to catch up with you. It's been a few years since we met up at the last MongoDB Masters summit in New York City, right?

01:46 Yeah, it is, I think it's exactly like two years from that, yeah.

01:50 Cool. So, we are not going to talk about MongoDB today, even though it is excellent. We are going to talk about OpenStack and your work at Red Hat, and cloud computing and all that kind of stuff, right.

02:01 Yeah. I'm really excited about sharing more stuff about OpenStack, since I spend most of my time there.

02:11 Yeah, that's excellent. So, before we get into what OpenStack is and all the details, what's your story, how did you get into programming?

02:17 I guess there are many people like me out there, but my story about how I got into programming is not like most of the cases where you were born knowing that you wanted to be a programmer. In my case, I actually didn't know what I wanted to do until very late. I started digging into different areas, I wanted to study medicine and then I wanted to be a psychologist, and then I wanted to learn foreign languages. And at some point, I saw this LAMP course somewhere and I was like, ok, doing web pages sounds cool, and I just gave it a try and I just fell in love with it, and I started doing more.

03:00 I guess that is how I actually started. I tried to go to college, and it just didn't work for me actually. I dropped out of college at some point, and that was very early in my career, in my studies, and I just went straight to working. I already knew how to program, and I had done a couple of courses anyway, and I got to my first job and started improving and learning a lot from OpenSource. And so, I guess it would be fair to say that my background in programming is actually just OpenSource. I started learning from other projects, and sharing everything I did; I joined the IRC channels and spent a lot of time there and learned from many other good people out there. And I guess I just made my way through it, and got where I am at today.

03:53 Yeah, that's excellent. You know, I had a similar path in that- I sort of became an accidental programmer. I just learned programming because I was trying to do something else and I needed to know it and I realized, "Wait, this is way better", so yeah there is a lot of us out there like that. Yeah I feel really lucky to have just sort of stumbled into such a cool area.

04:14 Yeah, same here.

04:15 Yeah, so that's how you got into programming. How did you get into Python?

04:17 Well, like I said, my first approach to programming was that LAMP course I did, a long time ago. So that obviously means I learned PHP there, and from PHP I just decided that I wanted to create my own web browser, so I started learning C# for some reason, because I didn't know enough then, and I just started playing with it and I had all the plugins and stuff. And then I realized that that wasn't actually a very good idea. And I found Python through a friend of mine; he just started explaining it to me, and I also did a Python 101 course, and I just took off from there. My first job was actually in Python and it allowed me to learn way more about it and improve my skills, and I just kept doing Python ever since then.

05:10 Yeah, that's excellent. And today, you get paid to write OpenSource code with Python, right?

05:16 Yeah.

05:18 That's awesome, so what do you do?

05:18 I get paid to write OpenSource code and it's 100% upstream, so I'm even happier, because sometimes I feel like I am giving back to all those guys that at some point gave me a lot and allowed me to learn. And so, one of the things I love is not just the technology I am working on, but the fact that I know that it will be used by some other people to learn how to program. Like mentoring all these people- there are several programs in OpenStack that do that and I'm always trying to participate there, so that is part of what excites me the most about my current job, actually.

05:55 Yeah, that's great. And you work at Red Hat, right?

05:57 I work at Red Hat, I'm a software engineer there. And like I said, I work 100% upstream on OpenStack, mostly oriented towards storage projects. OpenStack has many pieces of [inaudible 6:11] now.

06:12 Yeah, there is something like 375 GitHub repositories or something like that, right?

06:17 Yeah, it is insane. Many of those are services, tons of them, probably most of them are like libraries, and then we have a lot of projects that we use for CI that we have written ourselves. And there are many projects that started off as part of OpenStack, but they just improved and became standalone services, and some others that just died and we just kept the code.

06:49 Let's start with what is OpenStack, and I should just tell everybody, I really don't know a whole lot about OpenStack, I looked at it a little bit, but I am basically new to it as well, so tell me- what is OpenStack?

07:01 I'm going to tell you how OpenStack started. It started as an infrastructure-as-a-service provider. And it focused a lot on making sure that you could use all the hardware to provide services. So it would use all your compute servers and power, all your compute power, and it would just allow you to use it as a service, and it would do the same for your networking resources and everything that you have in there, and it will create like private networks, IP addresses and everything that you would need to have a cloud, and it will do the same with the storage and everything. But it was very focused on infrastructure-as-a-service. Infrastructure-as-a-service is basically using all your bare metal to provide like a cloud service, so that you can reuse all of that and make sure you can run several services and virtual machines and everything in there.

07:54 If I had like a Python web app I wanted to host, just a simple blog or something, I probably wouldn't go to OpenStack, but if I wanted to create my own data center or my own hosting company or something like this, then I would look to use OpenStack to make that happen, is that accurate?

08:11 Right, that's quite accurate. And that is how it started actually, but there is more to it now. At some point, the community realized that there was more to OpenStack than just infrastructure. And I like to say that it just outgrew itself, to the point that it became an entire cloud provider, instead of just focusing on infrastructure. And you now have a whole lot of other services that are required to actually maintain a cloud. And this is something that the community learned from its own experience while working on OpenStack, and it is a fact that to run a cloud you need more than just infrastructure. If you want to be a good cloud provider, or you want to have a cloud that you can manage easily without suffering a lot in management and monitoring and ops and everything, you need more than that.

09:07 You need like DNS, you need to make your networking easy to use, you probably need database services if you have a big developer team, and you want those devs to be able to create databases easily without spending much time maintaining those. But you can use that in production as well. So there are many other services that are not necessarily infrastructural; you would have like [inaudible 9:27] as a service so you can store all your files and things in your cloud. So there is a lot more to cloud, cloud management and cloud services than just infrastructure. That's something the community learned by itself while working on OpenStack, and it just became more than infrastructure and is now like a cloud provider. That's how I like to think about it, that's how I like to present OpenStack: as a cloud provider that will give you everything you need in the cloud.

09:57 To me it feels somewhat on par with something like EC2 and the related services at AWS, much more so than somewhere where you can just go and get a virtual machine, somewhere like Digital Ocean where they have great virtual machines but there is not a lot more than that around to help you, right, there is not-

10:20 Exactly, exactly.

10:21 Virtual private networks and the load balancing, and the storage and the persistent disks across machines, all that kind of stuff, right?

10:29 That is correct, and I've been giving this talk this year, it is called "Infrastructure-as-a-service: beyond infrastructure", where I kind of talk about what I think your cloud should have in order for it to be considered a cloud. Which doesn't mean that companies or services like Digital Ocean are not good, but they are not exactly clouds, they just provide a different kind of service; Digital Ocean basically just provides you private servers, the server and the storage. So you create your server and you have to manage it yourself; you don't have everything that you would need to run a cloud right there, right? And if you want to put your application in a cloud, you would just go to either OpenStack or any of the cloud providers that have all those services that you would need to run your application and not die while trying to run it.

11:22 Yeah, so I feel like services like Digital Ocean are excellent, but if you build increasingly complex and large scale systems, you will probably outgrow them at some point and then you are going to start looking around and say, "ok, well I need more than just a bunch of good VMs, I need all this orchestration and putting it together". And a lot of people go to places like EC2, or maybe Azure, but OpenStack is- is there like an online service where I can go pay a monthly fee and get access to the system, or is this something I've got to put in just my own data center?

12:04 Oh, not at all, there are many public cloud providers running OpenStack right now. A good example- I'm actually not sure if it is the oldest one, but it is probably one of the oldest ones- is Rackspace, they run on OpenStack, and there is also HP Cloud, there is Vexxhost, there are tons, there is Enter Cloud in Italy as well. So there are many public cloud providers running OpenStack right now; they run different versions, they are not all on the latest version.

12:35 But when you have to decide where you want to go, like AWS, Azure or just use OpenStack, you have to ask yourself many questions. One of the things that I believe makes OpenStack the best solution for you is the fact that OpenStack is interoperable. Whatever your application looks like, if you run it in an OpenStack cloud and you want to migrate it from this cloud to another cloud that is still OpenStack, you are guaranteed to be able to do that in a painless way, because different OpenStack versions are interoperable and keep backwards compatibility, so you can be sure that whatever works in HP Cloud will also work in Rackspace Cloud, if you write scripts on top of it and everything.
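
To make that interoperability point concrete, here is a small sketch using today's openstacksdk and a clouds.yaml file; the cloud names 'rackspace' and 'internal' are hypothetical placeholders, and the same calls would work against any OpenStack cloud.

```python
# Hypothetical sketch: the same script pointed at two different OpenStack clouds.
# The cloud names come from a clouds.yaml file holding auth URLs and credentials.
import openstack

for cloud_name in ('rackspace', 'internal'):
    conn = openstack.connect(cloud=cloud_name)
    # Identical API calls work regardless of who operates the cloud.
    for server in conn.compute.servers():
        print(cloud_name, server.name, server.status)
```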

13:30 Yeah, that's a pretty unique proposition, because AWS is entirely proprietary even though internally it runs a ton of OpenSource stuff; it's all behind the scenes, super secret, same thing with Azure and some of the other hosting places, right? So the fact that you can take this maybe from a cheap hosting company over to Rackspace and then maybe even later into your own data center, if you really want to do that- that's a possibility and something you don't really have to worry about, right, because it's all OpenStack.

14:04 Exactly, and it's not just about migrating your application from one cloud to another cloud, it also applies to using several clouds at the same time, because not all public clouds are supposed to run all the services. They don't all need to run everything to be OpenStack clouds; I can decide to have my own public cloud running just Trove, which is database-as-a-service, and there are other clouds like HP Cloud that run much more of OpenStack. But as a user, if I want to have my internal cloud- so let me give an example, let me put it another way.

14:42 I may want to have all my webheads and my compute nodes running in one of the public clouds, so let's say Rackspace, or HP Cloud. But I want all my data to be in my cloud, so I can use my own servers and install database-as-a-service there, or whatever database of my own, and have a hybrid cloud. So it is not just a matter of deciding what you want to have, whether you want to use a public cloud or a private cloud; you can also mix them together, and you are guaranteed that they will be interoperable and can all use the same services with the same data and everything.

15:21 Really interesting point, I hadn't thought of just taking an individual piece and running that locally but yeah, that makes a lot of sense. So, cloud computing is of course important to Python developers, but OpenStack is especially relevant to Python developers because Python is used a little bit inside the development, right?

15:43 Yeah, I always say that like OpenStack right now- well, when I joined OpenStack it was like 500% Python, there was like this 0.00001% of Javascript because we have a UI dashboard, and now I say it's not true anymore, because we now also maintain Puppet manifests ourselves, and that's not Python obviously, we maintain Ansible scripts, there are teams focusing a lot on operations and making OpenStack installable and easy to manage, and reducing the maintenance burden for users. So it's not just like 500% Python anymore, but all the services right now are written in Python and they are mostly pure Python; there are some services doing some experiments with Go as well, but most of them are Python, yeah. The community is definitely 100% Python oriented, and there is probably some taste for other programming languages as well.

16:53 Yeah, of course, that's really excellent, and I think that's a great testament to Python itself, right, that you can build such an amazing infrastructure-as-a-service system with it, right?

17:04 Yeah, it is, and I would say it goes even beyond that. It's not just the fact that you can build such a cloud system with Python, but the fact that it has allowed us to make it easy enough, and to be more welcoming as a community, to welcome more people, because Python is easy to learn, because Python has quite a big community. Even for beginners- we have many people still in college or right out of college coming to OpenStack saying, "ok, I still don't have a job but I want to do something interesting", and they apply for all these mentorship programs, there is Google Summer of Code. And they just learn as they go from OpenStack itself, and all the CI system and the fact that we can test all the programs, all the services, easily. The number of CI jobs that we run daily is insanely high; I already forgot the exact number, but when I heard it the first time it was like, that's insane- you wouldn't be able to do that with other programming languages that would require you to compile things before you can actually test them, for instance.

18:30 It's interesting to compare OpenStack against places like AWS, because with OpenStack, if you are wondering how some service works, you really can go in there and look, and it's just out there for everybody to see what is going on, right?

18:43 Yeah it is. One other thing that I like and I also hate at the same time about OpenStack is the fact that it is not opinionated in many different areas. By non-opinionated I mean that you have- let's talk about Nova for example. Nova is the compute service and it is the one that will allow you to create virtual machines. But there are many hypervisors out there, so several of the services in OpenStack are just provisioning services; they sit on top of something else and they just manage that. So, the default hypervisor for Nova right now is KVM, but you can also have Hyper-V and VMware- you can have different hypervisors under the hood and you just pick whatever you want, when you want to deploy OpenStack.

19:38 What it gives you is the opportunity to pick your own flavor of what the underlying virtualization layer should look like, and run whatever you prefer there. It is good to some extent because it gives you all that flexibility, but it also forces you to grow more and more abstractions on top of that. Which I don't think is necessarily bad in many areas, but there are some other areas where being more opinionated would be good. And the reason I am saying this is because I am pretty sure AWS is just focused on a single hypervisor, whereas in OpenStack you can also look at how you can orchestrate not only the cloud service, but a cloud service that can run on top of these several different underlying technologies, which allows you to just pick your favorite flavor and run with it.

20:38 Yeah, that is interesting. You know, that kind of stuff is good for the flexibility but it also makes it potentially harder for people who are getting started to know what to choose, right, because they have got to make a decision instead of just following some opinionated sort of guidance, right?

20:55 Exactly. And you just made a very good point, and that is something that we started looking at in the last, I would say, two cycles probably. We wanted to have some kind of "starter kit" for people that are coming to OpenStack and say, "ok, I heard OpenStack is cool, so where should I start?" And this starter kit is like four services, running in a very simple and light way, so that you can just run those four and play with OpenStack a little bit, but you are not going to hit a dead end, because as soon as you start liking it and you want to grow OpenStack in your own deployment, you can just start from there and install more services or change some configurations and make it better. So that starter kit is really important, because it lets you know where to start from, and it is kind of like the essential services to actually have a cloud.

22:05 Yeah, that sounds really helpful. Excellent. I'll link to that in the show notes as well.

22:09 Awesome.

22:09 [music]

22:09 This episode is brought to you by Hired. Hired is a two-sided, curated marketplace that connects the world's knowledge workers to the best opportunities.

22:09 Each offer you receive has salary and equity presented right up front and you can view the offers to accept or reject them before you even talk to the company. Typically, candidates receive 5 or more offers in just the first week and there are no obligations, ever.

22:09 Sounds pretty awesome, doesn't it? Well did I mention the signing bonus? Everyone who accepts a job from Hired gets a $2,000 signing bonus. And, as Talk Python listeners, it gets way sweeter! Use the link hired.com/talkpythontome and Hired will double the signing bonus to $4,000!

22:09 Opportunity is knocking, visit hired.com/talkpythontome and answer the call.

22:09 [music]

23:20 So, speaking of starter kits and so on, when would it make sense to think, “Ok it's time for me to stop just running a virtual machine here and there and try to focus on maybe bringing OpenStack into my software deployment data center type scenario”, like what kind of apps do I run, what kind of- I guess what sort of experience did you have with people getting started and then growing into this?

23:47 That's actually a very good question, because many people think that the requirement for running OpenStack is having a huge public cloud service, and that is not true. We have people running small deployments of OpenStack; I myself run a small deployment of OpenStack in my test environment, and that is what I use to create new virtual machines where I test OpenStack and everything I am doing. So, I would say the moment you start needing more than just one virtual machine, that would be a good moment for you to start considering something like OpenStack. Or even starting from the moment when you need a virtual machine, and the reason I am saying this is because if you are working on a service you are likely going to scale it at some point; it's not going to scale enough as a single virtual machine and you will need more compute power or some sort of [??] network.

24:47 At least for durability, right, or-

24:50 Exactly. And so you likely need something that will allow you to add more compute or whatever to your deployment. And from that moment, you already need something like OpenStack, like a cloud provider. And if you are starting, you definitely don't want to give it to other people, but OpenStack doesn't run on air, you obviously need some compute, and it really depends on what your needs are in that case. But seriously, when you start needing virtual machines to deploy your stuff, that is probably a very good moment to start using OpenStack.

25:33 Because it really will allow you to do that easily and it will manage all the virtual machines for you; and the fact that you don't have to worry about starting the virtual machine yourself, about making sure that it is running there, that your data is in the right volume or whatever, and that your network [inaudible 25:55] all of your networking, which is very painful- all those things OpenStack makes easier for you. It will pay off later by saving you a lot of time on development and maintenance and deployment.

26:09 Yeah, that is good advice. And, one of the things that OpenStack has that I am not entirely sure that AWS has- I haven't seen it, maybe I'm wrong- is like a local dev version of OpenStack, so if I am disconnected from the internet or whatever and I just want to build out something, I could actually do that locally, right?

26:31 Yeah, you can do that locally with OpenStack. You can install OpenStack on your own laptop and just have it there running. And that's also very helpful in OpenStack, the fact that you can just run it there and start not only playing with it but also using it for your own purposes and making something good out of it. That's something you don't get from AWS or any other public cloud that is managed by a private company, because obviously they wouldn't allow you to access the code and install it locally. It's their business I guess.

27:10 Yeah it is their business. But, I think that is a really cool advantage, because I haven't seen anything like this for AWS, and you are usually on the internet, but at the same time there are certainly situations where you have spotty connectivity and you would rather be able to just work. But also you don't necessarily want to pay for that; depending on whether the company is paying or you are paying, right, that might or might not be a consideration.

27:37 Sure. I mean, it's not just about paying- paying is definitely a big point here, but it's also about using your resources wisely, and by resources I also mean time. The other thing is that people think that you need a lot of power to run OpenStack. It may have been true in the past, but it has come a long way on using fewer resources from your compute nodes, to the point where you can also just run OpenStack on top of Docker. Instead of using a hypervisor you can just use Docker and create Linux containers, which are cheaper for your laptop.

28:27 That's awesome. I definitely want to come back and talk about Docker. But maybe before we do, maybe we should talk about the various building blocks of OpenStack because people that are kind of new to it, they maybe don't know what it offers and what it doesn't.

28:41 If we start from top to bottom, from the user perspective, the first service that a user would meet is Keystone, which is the one that provides authentication. Whenever you want to log in to an OpenStack cloud, you are likely going to talk to Keystone to get a token or something that will create a session for you, so that you can talk to all the services and get all the information you need from there.
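
As a rough illustration of that token step, here is a minimal sketch using the keystoneauth1 library; the auth URL and credentials are made-up placeholders, assuming a Keystone v3 endpoint.

```python
# Minimal sketch: authenticate against Keystone v3, get a token and look up
# the compute endpoint from the service catalog. URL and credentials are
# placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session

auth = v3.Password(auth_url='http://controller:5000/v3',
                   username='demo', password='secret',
                   project_name='demo',
                   user_domain_name='Default',
                   project_domain_name='Default')
sess = session.Session(auth=auth)

print(sess.get_token())                            # the token used by every other call
print(sess.get_endpoint(service_type='compute'))   # where Nova lives, per the catalog
```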

29:08 Sure, and you guys have like a web front end, a UI, and a command line interface, and an API, right, so any of those-

29:15 That is correct.

29:16 Ok.

29:16 That is correct. I would say we have a UI and an API. We have a lot of CLI tools, but you could also write your own in your own language if you want, right? The CLI tools are super useful and we all use them, and we tend to think about them as the reference way to access OpenStack, but the actual reference is the API that is deployed, and that's the way you actually talk to OpenStack. And so we have a UI; it used to be based on Django and now they have moved a lot of things to just Javascript.

29:53 It allows you to access all the services that you have deployed in your OpenStack deployment- again, you don't need to have them all deployed to use the UI, you can just deploy the UI service and have a few of the OpenStack services running, and it will just show you the tabs, so to speak, for those services that you have enabled. But even to log into the UI you need to access Keystone, which is the one that provides authentication.

30:20 So you have the UI like I said, and then you have a whole bunch of other services that will provide that cloud thing for you. One of them is Nova; Nova is the one that manages all the hypervisors and compute resources for you. And then you have Neutron; Neutron is the one responsible for doing all the networking, it will create networks and it has support for different network layers- I'm honestly not a network expert, so right now I'm just saying the things I know are there, but I don't necessarily know what all those things mean.

31:03 And then you also have Cinder, which is the one that provides block devices; it talks to different storage devices, and it allows you to attach those devices, those volumes, to the virtual machines you are running, so you have all your data in your storage. What else? You also have Swift, which is an object store- it's pretty much like S3, in case people are not very familiar with object stores. It has its own API but it also has support for the S3 API, so if you have a service or a script that talks to S3 and you want to give Swift a try, you can just install the Swift S3 layer and keep talking S3, even using the boto [31:54] library.
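
As a hedged example of that S3 compatibility layer, here is what pointing boto3 (the successor of the boto library mentioned above) at a Swift deployment might look like; the endpoint and EC2-style credentials are assumptions, and the deployment needs the S3 middleware enabled.

```python
# Sketch: talk to Swift through its S3 compatibility layer with boto3.
# Endpoint and credentials are placeholders for a real deployment.
import boto3

s3 = boto3.client('s3',
                  endpoint_url='http://swift.example.com:8080',
                  aws_access_key_id='ec2-access-key',
                  aws_secret_access_key='ec2-secret-key')

s3.create_bucket(Bucket='backups')   # shows up as a Swift container
s3.put_object(Bucket='backups', Key='hello.txt', Body=b'hello from Swift')
print([obj['Key'] for obj in s3.list_objects(Bucket='backups')['Contents']])
```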

31:54 Oh that's cool, that's a nice feature.

31:57 Yeah, some of the services have that as well. I forgot, one of the services is Glance. So Glance is the service that provides images for your cloud. Whenever you want to start a virtual machine, you've got to tell Nova what image you want to run; you've got to tell Nova, hey, I want to run this Fedora image, can you boot it for me? And in that moment, Nova will talk to Glance and it will say, hey Glance, do you have this image? I was asked to boot this image with this ID. And then Glance will just provide the image to Nova, and Nova will do all the magic. So Glance is an image registry; you will have all your images there, and whenever you create a snapshot from your virtual machine, the snapshot will be created in the form of an image and it will be uploaded back to Glance so that you can boot it afterwards whenever you need it.
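
For the curious, the boot flow Flavio describes looks roughly like this with python-novaclient; the image and flavor names are made up, and the Keystone details mirror the sketch shown earlier in the transcript.

```python
# Sketch: boot a server; Nova resolves the image by asking Glance.
# Credentials, image and flavor names are placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from novaclient import client as nova_client

auth = v3.Password(auth_url='http://controller:5000/v3',
                   username='demo', password='secret', project_name='demo',
                   user_domain_name='Default', project_domain_name='Default')
sess = session.Session(auth=auth)

nova = nova_client.Client('2', session=sess)
image = nova.glance.find_image('fedora-23')     # Nova looks the image up in Glance
flavor = nova.flavors.find(name='m1.small')
server = nova.servers.create(name='demo-vm', image=image, flavor=flavor)
print(server.id, server.status)
```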

33:02 Nice, can you make images like locally on your machine and then upload them to be booted?

33:08 Absolutely, yes.

33:08 That's cool.

33:10 Yeah, and it has support for different image formats, it is not tied to a single format; you can upload raw images, VHDs, whatever image format you want to upload there, and as long as your hypervisor knows how to run those, I think you are going to be just fine.
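
Uploading a locally built image to Glance might look roughly like this with python-glanceclient; the file name, image properties and credentials are assumptions.

```python
# Sketch: register and upload a locally built qcow2 image to Glance.
# File name, properties and credentials are placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from glanceclient import Client

sess = session.Session(auth=v3.Password(
    auth_url='http://controller:5000/v3', username='demo', password='secret',
    project_name='demo', user_domain_name='Default', project_domain_name='Default'))

glance = Client('2', session=sess)
image = glance.images.create(name='fedora-custom', disk_format='qcow2',
                             container_format='bare', visibility='private')
with open('fedora-custom.qcow2', 'rb') as f:
    glance.images.upload(image['id'], f)   # the image can now be booted via Nova
```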

33:27 Yeah, very nice. So those are all the services involved in sort of running and creating the virtual machines; and then you also have some deployment tools- you said there are actually people working almost dedicated to Ansible and Chef and Puppet work.

33:42 Yeah, there are many other services, the ones that help with all the operations of maintaining and deploying OpenStack: we have Puppet manifests, we have Ansible, we have a project called TripleO which is basically using OpenStack to deploy OpenStack. What else? There are several installers around the community, maintained by different companies as well, that will allow you to install OpenStack. And one fact about OpenStack is that the problem is not just installing it- installing OpenStack was hard at the beginning and it has come a long way- but what really matters and what is really interesting is the life cycle of an OpenStack deployment. It's not just installing it but also upgrading it, making sure it grows well, making sure there are no pieces falling apart in some of the OpenStack areas. So all that life cycle of OpenStack is what installers are specializing in now.

34:56 Yeah, that can be a real challenge in infrastructure-as-a-service scenarios, right?

35:00 Right, especially because you are running actual software on top of it; you are running your production environments in many cases, or test or analysis environments, and you want to make sure that your virtual machines don't die out of the blue, that your data is still persistent, that you are going to be able to access it afterwards, and that it is not going to die in one of the upgrades from one version to another. And there are many folks working on this around the community to provide live upgrades and live migrations for some of the resources, like live migrating virtual machines from one compute node to another compute node, or migrating volumes from one storage to another storage; those are very interesting things.

35:50 Yeah, that is a big challenge but really super valuable if you get it done right, yeah?

35:55 Yeah.

35:55 [music]

35:55 This episode is brought to you by Codeship. Codeship has launched Organizations: create teams, set permissions for specific team members and improve collaboration in your continuous delivery workflow. Maintain centralized control over your organization's projects and teams with Codeship's new Organizations plan.

35:55 And as Talk Python listeners, you can save 20% off any premium plan for the next 3 months. Just use the code TALKPYTHON.

35:55 Check them out at codeship.com and tell them "thanks" for supporting the show on Twitter where they are at @codeship.

35:55 [music]

36:48 So all of that sounds like infrastructure-as-a-service. But then, on top of that, you have some more platform-as-a-service type of things, like queuing and DNS and so on, right? Do you want to talk about what you've got there a little bit?

37:03 Absolutely. So, all the services that are in OpenStack- we call it OpenStack's Big Tent, like this huge big tent where we welcome projects that are aiming to make the cloud better. We have Trove which is database-as-a-service, we have Zaqar which is messaging-as-a-service, we have Cue which is brokers-as-a-service- instead of providing messaging as a service, it provisions brokers for you, so you don't have to maintain your own RabbitMQ when you are running one; it will maintain it for you and it allows you to create clusters and everything.

37:44 What else have we got? We've got DNS-as-a-service, we've got load-balancing-as-a-service, we've got telemetry and metering services so that you can monitor your cloud environment. Every OpenStack service, or most of them, emits notifications every time something happens within the service; whenever you boot a new virtual machine it will say, hey, I have a new virtual machine.

38:13 Whenever you do that with a block device it will do the same, and there is this metering service that will just get all those notifications and create statistics for you, or allow you to see what is going on in your cloud, whether those virtual machines started or not. It will also allow you to bill on top of that: based on that information you can bill for virtual machines and compute resources or storage, or for the images that have been used, and you can even bill based on the images they are using, like different features that can be enabled or disabled in the cloud environment. What else? There are many others and I may have just forgotten about them.
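
For a sense of what those notifications look like from code, here is a hedged sketch of a listener built with oslo.messaging; the broker URL and topic are assumptions, and real metering services like Ceilometer do something similar at much larger scale.

```python
# Sketch: consume OpenStack notifications (e.g. "instance created") from the
# message bus. Broker URL and topic are placeholders.
import time

from oslo_config import cfg
import oslo_messaging as messaging


class NotificationEndpoint(object):
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # event_type is something like 'compute.instance.create.end'
        print(publisher_id, event_type, payload.get('display_name'))


transport = messaging.get_transport(cfg.CONF,
                                    url='rabbit://guest:guest@localhost:5672/')
targets = [messaging.Target(topic='notifications')]
listener = messaging.get_notification_listener(transport, targets,
                                               [NotificationEndpoint()])
listener.start()
try:
    while True:
        time.sleep(1)          # keep consuming until interrupted
except KeyboardInterrupt:
    listener.stop()
    listener.wait()
```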

38:54 Yeah, there is a lot, aren't there? So, could you maybe walk us through what it's like to- I've already got my OpenStack all set up, but I would like to have a database and I would like to have 3 web front ends and a load balancer in a virtual private network. Like, what are the steps to make that happen?

39:11 See, you want to have a database, you want to have a load balancer and you want to have like 3 webheads.

39:17 Yeah, exactly, something like that.

39:19 So, I guess the first step would be creating your database, so that you get your [inaudible 39:21] to your database. After that you would create, obviously, your private network, so that whenever you boot your virtual machines they will connect, they will use that network and they will get some IPs assigned from that network. And once you have your network, you will start your virtual machines, you will deploy your webheads there- obviously they will talk to your newly created database- and then you'll configure your load balancer to balance the load across those three webheads. And you can do all that without installing- I am trying to make sure I'm not saying something wrong here- you can do all that without installing a single service that is not OpenStack. You can do that by using just the CLI tools and say, give me a load balancer, balance load across these 3 nodes, give me 3 virtual machines; you don't have to install a single service yourself, you don't have to install HAProxy yourself, or NGINX, you don't have to do that. You can do all that just using OpenStack tools.

40:46 Yeah, that's really cool. Could I actually put all that together in like an Ansible playbook and just run it and make it all happen?

40:53 Yeah, that just makes it even better. You can just script that, because every single CLI tool is built on libraries in OpenStack, all written in Python. You can get those libraries and you can write scripts on top of that, either using Ansible, Puppet, or even your own script.
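
As a rough sketch of what such a script could look like with the openstacksdk library: the cloud name, network, image and flavor are hypothetical, and the database and load-balancer steps from the walkthrough are left out for brevity.

```python
# Sketch: script a private network plus three webheads on an OpenStack cloud.
# Cloud name, image, flavor and CIDR are placeholders.
import openstack

conn = openstack.connect(cloud='mycloud')

net = conn.network.create_network(name='app-net')
conn.network.create_subnet(network_id=net.id, name='app-subnet',
                           ip_version=4, cidr='10.0.10.0/24')

image = conn.compute.find_image('ubuntu-14.04')
flavor = conn.compute.find_flavor('m1.small')

for i in range(3):   # the three webheads from the walkthrough
    conn.compute.create_server(name='webhead-%d' % i,
                               image_id=image.id, flavor_id=flavor.id,
                               networks=[{'uuid': net.id}])
```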

41:26 Yeah, yeah, that's cool, these are amazing times we live in, right?

41:30 Absolutely. I normally talk about this and I say, "you know, if your cloud doesn't provide you load-balancing-as-a-service, I believe your cloud is incomplete". And now that you just asked me to walk through that whole setup off the top of my head and how it would work- it's just like, wow, the fact that you can actually do it just using CLI tools and get your application running in no time basically, it's just amazing.

41:59 Yeah. It really is. So what's Red Hat's involvement in OpenStack, like why- you work for Red Hat but you work on OpenStack, what's the story there?

42:08 The story is that Red Hat is fully immersed in OpenStack. Like we are 100% on the project and we believe 100% in it. And we are betting a lot on cloud and different flavors and versions of clouds and whatever will make your cloud better, and your development workflows better, and that will allow you to scale your application and your environments either private or public. Which basically means that when we see something like OpenStack we say, "we've got to be there". And that's what we did- we just jumped into OpenStack, we started working in OpenStack full time upstream, right away, we got developers and we just put them there. And that's actually my case as well, and I am a part of that upstream only team that dedicates full time to making or trying to make OpenStack better every day.

43:09 That's a great mission. We talked a little bit about why it's good for OpenStack to be built with Python- it has this great community, it's easy for newcomers to come on board and sort of learn from OpenStack, but also contribute to it. But what I recently learned, that I thought was pretty interesting, is that it works the other way as well: Python is benefiting from OpenStack, right?

43:35 It is, it is. I believe one of the good things that OpenStack is giving back to Python is actually Python. The OpenStack community is huge, there is a lot of Python going on there. And it's not just huge; many of the key members of the Python community are also part of the OpenStack community. Which means we are interconnected with the Python community, and everything we do, we do it in a way that we can give back to the community. In the OpenStack community, we are very much against reinventing the wheel and redoing things that exist already in the Python community.

44:19 Whenever we can just use a Python library that exists already and make it better, which has happened a lot, we just do it. And if we end up writing our own library, we make it in a way that it is not OpenStack dependent, so that we can just give it back to the Python community. When it is OpenStack dependent, we just mark it in a way that you would know- it's like all the Zope libraries, where you know those are part of Zope. Well, we do the same with OpenStack. There are some libraries that we have built that are meant to be used by OpenStack, and it doesn't necessarily mean that you cannot use them outside OpenStack, but it definitely means the API has been designed for OpenStack, you know, in a way that makes OpenStack better or makes the library easier to consume across OpenStack.

45:09 So, I believe one of the best things that OpenStack is giving back to Python is actually Python; there are a lot of contributions back to the language itself and to the testing libraries. Like I said before, our CI system is huge and it brings a lot of patches. All of the things that have been found and fixed in our CI system that are not necessarily a problem just in OpenStack but probably a problem outside OpenStack as well, all those things have been contributed back. Many new workflows from a testing perspective, and from a requirements and dependencies management perspective, have been contributed back to the community somehow, even in the form of documentation.

45:53 I believe that's a huge benefit the Python community is getting from OpenStack. It recently happened, like two weeks ago: there was a new release of Requests, and you know Requests vendors its dependencies, and the latest release just broke some of the OpenStack client libraries. We just found it, and we went back to Requests and it was like, you know what, you just broke OpenStack.

46:30 Call Kenneth Reitz and say, "hey Kenneth, we have a problem".

46:32 Exactly. We've got a problem. It turns out some of the Requests maintainers are also part of OpenStack, so it was even easier to just say, hey, do you mind? And the issue was resolved in no time: the patch was written in one day, the release came a couple of days later, and in the meantime we kind of tried to apply a workaround. We resolved the whole issue within a week. We are talking about Requests, which is a widely used library everywhere, and OpenStack, which has a huge ecosystem that is hard to maintain consistently. Making all that happen in a single week means that we've managed to create a workflow to contribute back to the Python community in a way that is effective as well.

47:22 Yeah, that's a really great story. And one of the points that I heard someone make was that you guys are really betting on Python, and if for some reason, say, some operation or something in Python is too slow, instead of saying, "oh forget this, we are going to switch over to Go" or something like that, you guys across Red Hat and Rackspace and so on actually employ some of the core contributors and just make Python better to fix those problems, and that benefits everyone, right?

47:52 Absolutely. I can talk about Red Hat because I am a part of it and I know how it works there- it's not just about talking up the company that pays my salary. But we've hired people to take care of OpenSource libraries, to work on those OpenSource libraries that OpenStack consumes. They don't even work on OpenStack, and we do that because we believe in OpenStack and that library is a huge part of OpenStack. And if that library breaks, or that person just decides to leave the library and drop it and never maintain it- I'm not going to say that the community won't pick it up, but why wouldn't you give that guy or girl who has dedicated a lot of time to that library an opportunity to just keep doing it, and have OpenStack benefit from that, right? So it's not just a matter of hiring people to work on OpenStack full time, but also having people working on libraries that OpenStack uses, to make sure that OpenStack stays sane and those libraries won't break in the future.

48:57 Yeah, that's a great stabilizing force in the whole OpenSource industry, right?

49:02 Yeah, exactly, and like I said, OpenStack is a huge ecosystem, and keeping it sane, consistent and stable takes a lot of minds and resources. I'm not going to say it is rocket science, but definitely there are many separate independent pieces that need to talk to each other, and if one of those pieces does not do that, it will all just fall down, right away.

49:27 Yeah. Thinking of all the interdependencies between 375 different repositories, that's complicated.

49:34 Right, exactly.

49:35 But, you guys are awesome. So one thing I did want to come back to is Docker- what's the story with Docker and OpenStack?

49:44 We are just embracing it. It is not a matter of whether you have to pick Docker or you have to pick OpenStack. Even if you have Docker, you still need to manage all of your containers, right? There are tools to do that out there, and OpenStack is also one of them; you can have services that will allow you to manage all your Docker containers like you would do with Nova and virtual machines, basically. So you don't really need to pick one or the other. We believe in Docker, we believe Docker is an amazing tool; it gives you a lot of power and resources, it does that in a different way, and it has its own issues just as virtual machines have their own. But we believe that containers and Docker itself are also part of what a good cloud is, which is why we also have services that allow you to just use Docker.

50:42 So, much like I can go up to OpenStack and say I would like 3 virtual machines with this image and then put them into a private network; I can say I would like these 3 containers based on these images and I want them in a private network- is that possible?

50:57 Yes, absolutely. And you can also say I want two virtual machines and just one container. And make them work together.

51:04 That's awesome.

51:06 Yeah, you just pick your flavor, and like I said, everything in our industry depends on the context and your needs, right? And that's probably one of the best excuses we have in our industry- whenever you don't want to answer something you just say that. That was a joke. But even for OpenStack, it all depends on your needs actually; if you don't need all the security layers and features that a virtual machine provides and gives you, you can also use Docker. It's not just a matter of resources; it's not like a virtual machine will always use more resources than a Docker container. To some extent that is true, but the footprint a virtual machine has nowadays has been improved a lot, and while it isn't as cheap as running a Docker container, it is not as expensive as it used to be.

51:59 Yeah, but Docker enables different kinds of workflows-

52:02 Exactly.

52:05 And super responsive scaling, right- in 100 milliseconds you've got a container up and running. There are just different things you can do and tradeoffs you can make, so it's really cool you guys support it.

52:14 Yeah. Absolutely.

52:16 Awesome. So, Flavio, we are coming up kind of near the end of the show, do you have any final calls to action or things you want to let people know about that we haven't spoken about yet?

52:26 What I would like to say again is that you don't need to have a huge public cloud for OpenStack to work for you; just install it and improve your development workflow using OpenStack. You don't need to deploy it on virtual machines; you have a starter kit and you can just start small, give it a try, and if you like it you can just grow from there.

52:46 All right, that's awesome. And they can go out and check out the starter kit to get started maybe, right?

52:51 Right, and if it doesn't work, you know where to find us.

52:54 That's right. All right, two final questions before you go: if you are going to write some Python code what editor do you open?

53:00 Right now, Emacs. I love both Vim and Emacs; I used to be a Vim-only developer, but then I gave Emacs a try and there is a TRAMP mode that I love, especially when travelling. So, I run Emacs. That's what I open now.

53:18 Excellent, they've converted you, huh?

53:21 Yeah man. But I can go back- I use both honestly, it's very weird; sometimes I just run Vim and start doing it, but right now most of the time it's just Emacs.

53:32 Cool, and of all the thousands of PyPI packages out there, what are some you think people should know about that maybe they don't?

53:43 There is this package that I was actually creating as part of OpenStack and I like it a lot, it's called oslo.messaging. It's an RPC library; it does not just provide pure messaging across brokers, it is a non-opinionated RPC library that will allow you to use either RabbitMQ, Qpid or some other supported broker to create your distributed system, and the different parts talk through RPC calls. It's widely used in OpenStack and is probably one of those sticks that, if you remove it, OpenStack will just fall down; most of the services that have a distributed architecture in OpenStack are using oslo.messaging right now to make sure that all the pieces in the service can talk to each other. So if you are in need of having different nodes talking to each other and RPC is one of the things you might want to give a try, well, oslo.messaging is an amazing library for that.
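
To give a flavor of the library, here is a minimal hedged sketch of an oslo.messaging RPC server and client; the broker URL, topic and the `ping` method are made-up examples, with the server and client meant to run as separate processes.

```python
# Sketch: a tiny RPC service over RabbitMQ with oslo.messaging.
# Run one process with "server" as an argument, another without it.
import sys
import time

from oslo_config import cfg
import oslo_messaging as messaging

TRANSPORT_URL = 'rabbit://guest:guest@localhost:5672/'


class DemoEndpoint(object):
    def ping(self, ctxt, message):
        return 'pong: %s' % message


def run_server():
    transport = messaging.get_transport(cfg.CONF, url=TRANSPORT_URL)
    target = messaging.Target(topic='demo', server='node-1')
    server = messaging.get_rpc_server(transport, target, [DemoEndpoint()])
    server.start()
    try:
        while True:
            time.sleep(1)      # keep serving RPC calls
    except KeyboardInterrupt:
        server.stop()
        server.wait()


def run_client():
    transport = messaging.get_transport(cfg.CONF, url=TRANSPORT_URL)
    client = messaging.RPCClient(transport, messaging.Target(topic='demo'))
    print(client.call({}, 'ping', message='hello'))   # dispatched to DemoEndpoint.ping


if __name__ == '__main__':
    run_server() if 'server' in sys.argv else run_client()
```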

54:50 Perfect, very good recommendation. If you are out there listening, go check out OpenStack and check out that starter kit you can find on the homepage. Definitely a cool project, and I love the whole Python component of it. Flavio, thanks.

55:06 Awesome, thank you for having me here.

55:07 Yeah, thanks for being on the show, see you later.

55:07 This has been another episode of Talk Python To Me.

55:07 Today's guest was Flavio Percoco and this episode has been sponsored by Hired and CodeShip. Thank you guys for supporting the show!

55:07 Hired wants to help you find your next big thing. Visit hired.com/talkpythontome to get 5 or more offers with salary and equity right up front and a special listener signing bonus of $4,000 USD.

55:07 Codeship wants you to ALWAYS KEEP SHIPPING. Check them out at codeship.com and thank them on twitter via @codeship. Don't forget the discount code for listeners, it's easy: TALKPYTHON

55:07 You can find the links from the show at talkpython.fm/episodes/show/33

55:07 Be sure to subscribe to the show. Open your favorite podcatcher and search for Python. We should be right at the top. You can also find the iTunes and direct RSS feeds in the footer on the website.

55:07 Our theme music is Developers Developers Developers by Cory Smith, who goes by Smixx. You can hear the entire song on talkpython.fm.

55:07 This is your host, Michael Kennedy. Thanks for listening!

55:07 Smixx, take us out of here.
