#166: Continuous delivery with Python Transcript
00:00 We've evolved from, if it builds, ship it, to continuous integration, where every check-in
00:06 is automatically verified by something like Travis CI.
00:09 Taking that further, some people today are using continuous delivery.
00:14 This means once a check-in is validated by the CI system, it's deployed automatically.
00:19 There are many moving parts in these processes.
00:22 On this episode, you'll meet Chris Medina, who has put together a world-class CI/CD system.
00:27 And he's here to share how he did it and what tools and libraries are involved.
00:31 This is Talk Python to Me, episode 166, recorded June 11th, 2018.
00:36 Welcome to Talk Python to Me, a weekly podcast on Python, the language, the libraries, the
00:55 ecosystem, and the personalities.
00:57 This is your host, Michael Kennedy.
00:59 Follow me on Twitter, where I'm @mkennedy.
01:01 Keep up with the show and listen to past episodes at talkpython.fm.
01:05 And follow the show on Twitter via @talkpython.
01:07 This episode is sponsored by Linode and Rollbar.
01:11 Please check out what they're offering during their segments.
01:13 It really helps support the show.
01:15 Chris, welcome to Talk Python.
01:17 Hi.
01:17 How's it going?
01:18 Glad to be here.
01:19 It's going great.
01:20 Yeah, I'm happy to have you here.
01:21 It feels like we were just hanging out in Cleveland just a little while ago, right?
01:27 Just a little bit ago.
01:28 Yeah, everybody's scattered back to where they came from.
01:31 And it's sad that PyCon is over.
01:33 But that was really fun to spend some time together there.
01:35 Yeah, for sure.
01:36 It was great to meet the PyBytes guys, too.
01:38 I hadn't met them before.
01:39 So that was pretty cool.
01:39 Yeah, they traveled quite far to get there.
01:42 From both sides of the globe.
01:43 So, yeah, I mean, PyCon is such a special place.
01:46 And PyCon US seems to be where the most gravity is.
01:49 I know EuroPython is also large.
01:52 But I feel like PyCon US is probably the biggest.
01:55 Just the sense that I get.
01:56 Anyway, I think just, you know, recommend that people next year, if they didn't get a chance
02:02 to go, definitely go.
02:03 Like, wouldn't you say?
02:04 Did you have a good time?
02:04 Yeah, definitely.
02:05 So, like, I started doing PyCon maybe three years ago now.
02:09 And it's definitely been a lot more interesting to see so much stuff from so many different people
02:15 doing so many different things with the language.
02:17 Because Python has such a wide usage.
02:21 It's just great to just be out there and just see what everybody's doing.
02:24 And I like just hanging out in the expo hall and just kind of just talking to everybody.
02:29 So you're like, oh, what do you do with Python?
02:31 And definitely the open sessions, people don't quite, you know, understand how cool that is
02:38 versus, you know, your average conference where everything is just kind of like pre-configured
02:42 for you.
02:43 Yeah, definitely.
02:43 Both you and I ran some open sessions, right?
02:46 That's true.
02:46 Yep.
02:47 Yeah.
02:47 What were yours on?
02:48 I did two.
02:48 I did one on blogging and stuff, for folks that are content creators for Python.
02:53 And I did one on virtual reality and augmented reality, to see, you know, what experiences
03:00 people have had with Python and that type of stuff in those environments.
03:03 Yeah, that's pretty awesome.
03:05 So now that you're back to doing what you do day to day, maybe we could get your story
03:09 and your background.
03:09 Like, tell us how you got into programming in Python and what you do day to day.
03:12 My story is more like a classic story.
03:14 Dad grew up doing software.
03:16 He had a consulting business.
03:18 There's stories of two-year-old Chris sitting in his lap typing in the keyboard somehow.
03:23 Not that any of that was intelligible, obviously.
03:25 But yeah, so he started stuff with old IBM systems.
03:30 Like, you know, monochrome monitor, eight-inch floppy, integrated keyboard,
03:33 no hard drive things.
03:34 No hard drive.
03:35 That still blows my mind that computers came with no hard drives.
03:37 Yeah.
03:38 So I remember the first time I actually got an idea of what, like, software was.
03:43 It's because my dad wrote something for me to sit in front of his computer and learn the
03:50 times tables.
03:52 And it was just a short little program, but I could go into the program and fiddle with it.
03:57 So I was really young for that.
03:59 And I still remember it.
04:00 It was pretty cool.
04:01 It was an IBM system 23.
04:02 Nice.
04:03 Would it, like, randomly pick, like, two numbers to multiply, you know, and ask you what the answer
04:08 was and say you got it right or wrong, basically?
04:10 Pretty much.
04:10 That's pretty nice.
04:11 Like interactive flashcards, basically.
04:13 I remember doing a data entry form back in the days where you can't slurp anything in
04:17 from any APIs or anything like that, right?
04:19 Somebody would just hand deliver an invoice and you had to type it into the computer.
04:23 When computers were not actually connected to anything, they were just there on a desk.
04:28 How weird, right?
04:28 Right.
04:29 Exactly.
04:29 So some of the first stuff I did was System/360 BASIC.
04:34 My hello world was like a menu for, like, opening an invoicing app or something like that.
04:40 Back when you had to type line numbers and make sure you left enough room between them in
04:45 case you needed to add more lines before.
04:47 Yeah, that's right.
04:48 Like, if people don't know this, that used to be a big deal.
04:51 So it used to say, you know, 10 would be like a line number and you would put a command and
04:56 then 20, you'd put like another.
04:57 And the reason it didn't go one, two, three is you might have to do 11, 12, and 13 someday
05:02 so you don't have to rewrite the whole program.
05:04 It's so insane that you had to explicitly call out the line numbers.
05:08 But I guess with, like, the GOTO to navigate, like a sort of branching mechanism, you had to say
05:14 go to this line.
05:14 So it had to be really clear what line that was.
05:17 Yep.
05:17 And then we added GOSUB.
05:19 Oh, man.
05:19 That was advanced concepts right there.
05:21 Huh?
05:23 That was amazing.
05:25 All right.
05:26 So you started there.
05:27 Like, did you go and get a computer science degree?
05:29 So I did computer engineering.
05:31 I actually was like, well, I kind of halfway understand at least some of the software stuff.
05:36 So let me see what all the software is built on.
05:38 And I went off and figured out how to do hardware.
05:41 So I did computer hardware, computer architecture, that type of stuff.
05:46 And, you know, from there I went into IBM, which was the first guys that hired me.
05:51 So I spent many, many years with IBM doing systems tests.
05:54 Were you doing hardware stuff for IBM?
05:56 I was on the side of we developed a new server.
05:59 It would come into our organization and we ran validation on it.
06:08 So before it made it out to the customer, we would go and check a bunch of things.
06:08 It was a gigantic organization, tens of thousands of tests that would execute.
06:11 So we had like a small piece of that, more at the integration
06:16 level of the server with the firmware and the hardware.
06:20 So I'd have to build hardware tools that I'd have to code, like some sort of what is today
06:26 embedded systems, right?
06:28 All the way up to like high level software tools to, you know, maybe interface with that stuff
06:33 or even business apps too, which was the biggest thing I wound up doing,
06:37 which was kind of a management system for keeping track of all our test organizations and stuff,
06:42 including status reporting, test execution, procedures, all that type of stuff.
06:47 That sounds pretty interesting.
06:48 I don't know a whole lot about the hardware side of computers.
06:52 I mean, obviously I have some concepts, but like I couldn't, you know, design RAM or anything like that.
06:59 Right.
07:00 It's just, that's always, that's sort of like where, you know, the "there be dragons" sort of aspect
07:04 of programming is for me.
07:05 Like I have a conceptual idea of it and I don't know how close that is to reality.
07:09 So it's, it's pretty cool that you kind of get a bridge to that world.
07:12 Yeah.
07:13 So some of the most interesting... so why some of this stuff is important, too,
07:16 is because you get to understand a little more about some of the more recent security things, like
07:21 Rowhammer, if you've heard about that.
07:22 Oh yeah.
07:23 Tell people what Rowhammer is.
07:24 So Rowhammer is a way of essentially kind of hacking your machine to run code by accessing
07:33 certain parts of memory at certain speeds such that you would make an adjacent memory cell
07:39 have the data that you wanted or the code that you wanted to execute.
07:44 And then maybe that adjacent memory cell is the one that's in like the privileged memory.
07:49 I see.
07:49 That's so tricky.
07:51 Does it use, like, cache hits and misses and stuff like that?
07:55 Or the prefetch stuff, or where does that come from?
07:57 So this actually winds up in your actual... so it's a general problem with DDR3 in
08:03 general.
08:04 So anything that has DRAM from DDR3, you could do that in, as long as the refresh rates... you
08:12 know, your processor kind of controls how often memory is refreshed.
08:16 So it would go in and restore, you know, whatever charge has accumulated.
08:20 So if your refresh rates... if you wait a long time between refreshes, you have a larger
08:26 window in which you can get in and make those changes.
08:28 So it'll actually happen on your DIMMs.
08:30 Oh, wow.
08:31 These are crazy ideas.
08:32 Like we're seeing a couple of these, right?
08:35 So there's Rowhammer, there's Spectre, there's Meltdown.
08:39 I mean, these are like not even software problems, right?
08:42 These are down in the chips.
08:44 Yeah.
08:44 So then the other, the other ones you mentioned are down at the processor level.
08:48 And a lot of those are because of optimizations for trying to execute instructions
08:55 faster.
08:56 And the way, the way that processor pipelines work, you want to prefetch some information.
09:01 So I know some, I know one of them, I don't remember which one, which one was what, but I
09:06 know one of them was more related to branch prediction.
09:09 So if you, if you're going through code and you have an if then else kind of thing, it'll
09:14 prefetch both sides of your if, whether it meets the condition or it doesn't.
09:18 And then you can play with that a little bit and have it prefetch some memory information
09:25 that, or data that, you know, it shouldn't have because it's at such a low level, stuff
09:31 like that.
09:31 Yeah.
09:32 It's crazy.
09:32 Yeah.
09:32 It's going to be interesting.
09:33 I think we'll probably see more of those types of things, but it, you know, it really gets
09:38 scary when you mix that with cloud computing and we're going to talk about cloud computing
09:41 a lot, actually.
09:42 So maybe start with, yeah, maybe start with what you do day to day, let people know what
09:46 you're working on.
09:47 So I'm part of a small group of folks that worked for a company called Nimble Storage that was
09:52 acquired by Hewlett Packard Enterprises last year.
09:55 And Nimble makes storage arrays.
09:58 So as in external enclosures where you have a bunch of hard drives that you access through
10:05 iSCSI or Fiber Channel.
10:06 And these are for like data centers, right?
10:07 These are not like NAS for your home, are they?
10:10 Right.
10:11 These are built for data centers.
10:12 So these are expensive things that have, you know, way higher performance than you get
10:18 out of a consumer product.
10:19 And so they have a bunch of systems management and stuff around that as well.
10:22 And a bunch of guarantees in terms of data savings due to deduplication or compression and
10:31 things like that.
10:32 Enterprise class features, snapshotting and things of that nature.
10:36 Okay.
10:36 So nice.
10:38 So if you run like a hundred VMs, like most of the OS is probably the same across all of
10:43 them.
10:43 You just need that one copy of those files.
10:45 Right.
10:46 So we do instant snapshots.
10:47 So if you, if you have a virtual machine, so one of the use cases is say you have, like,
10:51 a virtual machine with, say, your database in it, and that's all contained
10:55 in a couple volumes which you can group together.
10:58 You can go in and say snapshot that and it happens like instantly and then spin off a
11:05 new VM based off of those volumes you just made over there.
11:08 And now you have a copy of your data essentially.
11:10 And it keeps track of the diffs.
11:12 So it's kind of like a, if you, if you bring it back to the software world, it feels a little
11:17 bit like you were playing with Git really.
11:20 And, or Docker where you have like a special commits, which have your data.
11:26 And then you have the diffs of your data into the next set of commits kind of thing.
11:30 So when you do your snapshots and your clones, instead of having a, an entire duplicate of
11:35 all of the data, you only have the difference that you write afterwards.
11:38 So that's also pretty helpful.
11:40 So those are the arrays that we make.
11:42 And so one of the things that we're kind of experimenting with, cloud stuff, we decided
11:46 to go off and build what is Cloud Volumes.
11:49 So the product is called HPE Cloud Volumes.
11:52 And the purpose is, if you have an Azure or an AWS virtual machine and you want
11:59 to tie it to some of our storage arrays, you can go to the website and request a volume of a
12:07 certain size and other characteristics along with it and configure it so you can plug it into
12:15 your VM.
12:16 That's pretty interesting.
12:17 So tell me why would I like pick that say, instead of just like creating a volume in AWS
12:23 or Azure in their mechanism, right?
12:26 Right.
12:26 So one of the main things is that you can go cross cloud.
12:29 So you could have your volume that you made with say your Mongo data or something.
12:34 And, you know, it's attached to your AWS, but you know, if you're running a super critical
12:39 application and say your AWS region goes down, you can just clone that volume and attach it
12:45 to Azure and you can use it on Azure side of the world.
12:49 The other thing you get out of it is we can do higher IOPS, and
12:56 IOPS is a measure of how many storage operations, IO operations, you do on your drives over the
13:01 array,
13:02 than you can get out of a regular EBS.
13:04 And as far as I understand, the instant snapshots as well, I forget if EBS does instant snapshots
13:11 or not.
13:12 I don't think they do.
13:13 So you also get that ability.
13:14 There's also extra stuff that we provide for our actual physical enclosure customers.
13:21 So if you own one of our physical arrays, you can actually replicate your data up into the
13:27 cloud through our service.
13:28 And you don't have to worry about the ingress costs through AWS or Azure.
13:32 So we provide a way of seeding information out to say different regions and things like that
13:37 as well.
13:37 Oh, that's pretty interesting.
13:38 Cause yeah, that can get pricey real quick.
13:39 Yes, it can.
13:40 Yep.
13:40 Yes, definitely.
13:41 I know a little bit about bandwidth charges and whatnot.
13:45 I think last month I paid $600 in AWS bandwidth.
13:48 Oh, geez, man.
13:49 You're getting lots of downloads.
13:50 Woo.
13:50 No, these are good problems to have, but that's a lot of bandwidth.
13:52 This portion of Talk Python to me is brought to you by Linode.
13:58 Are you looking for bulletproof hosting that's fast, simple, and incredibly affordable?
14:02 Look past that bookstore and check out Linode at talkpython.fm/Linode.
14:07 That's L-I-N-O-D-E.
14:09 Plans start at just $5 a month for a dedicated server with a gig of RAM.
14:13 They have 10 data centers across the globe.
14:18 So no matter where you are, there's a data center near you.
14:18 Whether you want to run your Python web app, host a private Git server, or file server,
14:23 you'll get native SSDs on all the machines, a newly upgraded 200 gigabit network, 24-7 friendly
14:31 support, even on holidays, and a seven-day money-back guarantee.
14:34 Do you need a little help with your infrastructure?
14:36 They even offer professional services to help you get started with architecture, migrations,
14:42 and more.
14:42 Get a dedicated server for free for the next four months.
14:45 Just visit talkpython.fm/Linode.
14:48 Another thing you do is you spend a little time writing some fairly popular articles on
14:55 your blog.
14:56 I know because Brian and I end up covering them often on Python Bytes.
15:00 Yeah, no, I appreciate that too.
15:01 So I don't know if it's like a chicken before the egg thing.
15:04 It's like something comes up a little bit, but then you guys post it up, and then it gets
15:07 a lot more reads.
15:08 But yeah, so I have tryexceptpass.org, and the posts include a bunch of stuff, usually
15:17 things I play with.
15:18 Most of the articles that get the most views usually are how-tos.
15:23 I try to do a lot on asyncio stuff because I'm trying to do more asyncio, and that is not
15:28 an easy concept as it is today in Python.
15:30 Yeah, the work that you're doing there is really nice because I feel like that's a lot of
15:34 areas where there's really not very much coverage.
15:37 I'm definitely planning on writing a course on asyncio because I feel like either people
15:42 are just, they know about it and they're confused, or like, ah, it's too hard, or they just don't
15:47 even know, right?
15:47 They're like, I'm switching to Go because Go has better async than Python, but you don't
15:52 understand.
15:53 It does too.
15:53 You're just not using it.
15:54 I understand it's not as integrated into the web frameworks as it should be, but still.
15:58 Great.
15:58 But even that's changing now, so it's getting pretty good.
16:02 And so I just want to try to keep playing with that and post that up there.
16:06 And I also do a couple of things just on general engineering and software and a little bit of
16:11 testing since I spent so many years in tests.
16:13 I got this series going on called Practicality Beats Purity, mostly about that part of the Zen of Python,
16:22 and how one thing sounds great, but how good is it really when you implement it, kind
16:28 of thing.
16:28 Right.
16:29 Like one of the popular articles you had was microservices versus monoliths and all
16:34 the interesting trade-offs you make there.
16:36 So we'll definitely dig into those, but I kind of want to focus on the whole reason we started
16:41 talking about having you on the show, which is the continuous delivery you're doing around
16:45 your work you're doing in HPE, right?
16:47 And so maybe let's just start with what is continuous delivery.
16:52 Like I know there's continuous integration and that's like something watches my repository
16:57 and does some sort of build verification on check-in.
17:00 So what's continuous delivery?
17:02 How's that typically work?
17:04 So continuous delivery expands on top of the continuous integration concept and says, that
17:08 build I just built, I want to do everything that I need to do to that build to make it deployable
17:14 to production and have it available to be deployed to production, if not already deployed to production
17:20 automatically.
17:21 And so the idea is to be able to deliver code into production as quickly
17:27 as possible in a way that's maintainable while still having a set of status checks around
17:33 it that makes your job easier.
17:36 Right.
17:36 Okay.
17:36 So maybe the Holy grail is like, I've got a GitHub repository and it has different branches.
17:42 So maybe a branch is called production and a branch is called staging.
17:46 And if you commit into staging, you know, after some delay of the builds and creation
17:53 of the servers and whatnot, there is now a new staging server or services based on top
17:58 of that where you did nothing but wait a little bit.
18:00 And that's kind of how we have our stuff set up.
18:03 We have two branches, and everything that's on our
18:06 production branch is things that have already gone out to production.
18:11 So everything there is a known working build that we can deliver to a customer.
18:14 And we actually use the master branch for all our staging stuff, which is things
18:19 that are fully tested, or where we can guarantee that all the basic functions
18:25 are tested.
18:26 And the whole thing is completely built into a deliverable that we can actually go and put
18:33 out in staging.
18:34 And so as we go through our pull request cycle flow, GitHub flow type thing, Git flow,
18:41 we can automatically put the build up in our staging environment.
18:46 Right.
18:46 Okay.
18:47 So you guys use the Git flow style of work.
18:49 Now this is really common in open source.
18:51 Like some random outside person wants to make a contribution to a project.
18:56 They don't have right privileges.
18:58 So they'll fork the repository, make the change, do a PR back, and then the people can review
19:02 and accept it.
19:03 But some organizations, sounds like you guys, do that even for yourself on your own projects
19:08 as a way of sort of like formalizing it, right?
19:11 Right.
19:11 So we use GitHub enterprise internally, but things like GitLab also have a similar concept for
19:17 this.
19:17 It's just called differently.
19:23 So our master and production branches are protected branches.
19:23 And the only way to get in there is to go through a pull request merge.
19:33 So we don't necessarily require the developer to have a separate, like, forked
19:33 repo.
19:33 They just have permissions to push their own branch into our main repo.
19:38 So once they push their branch and they open a pull request, we have a whole set of automation
19:44 systems in place which receive the webhooks for the pull request and kick off automated builds,
19:50 our style checking, linting, and all of the testing that goes around that pull request.
19:57 And so you can even in GitHub say, I require the following statuses because the pull request
20:04 object in GitHub has the concept of statuses.
20:07 As you run those webhooks, the code that kicks off from those webhooks, the status is reported
20:15 straight back into that pull request.
20:17 And in GitHub, you can say, well, if all of these have passed, only then is your pull
20:23 request valid to be merged into whatever branch it is that you want to go.
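For reference, reporting one of those statuses back to the pull request's head commit boils down to a call like this against the GitHub statuses API. This is a minimal sketch using requests; the repository name, token, and context string are hypothetical:

```python
import requests

GITHUB_API = "https://api.github.com"
REPO = "my-org/core-api"   # hypothetical repository
TOKEN = "ghp_example"      # a token with repo:status scope

def report_status(sha: str, state: str, description: str) -> None:
    """Post a commit status (pending/success/failure) that GitHub shows on the PR."""
    url = f"{GITHUB_API}/repos/{REPO}/statuses/{sha}"
    payload = {
        "state": state,                     # "pending", "success", "error", or "failure"
        "context": "ci/integration-tests",  # the name you require in branch protection
        "description": description,
        "target_url": "https://ci.example.com/builds/1234",
    }
    resp = requests.post(url, json=payload, headers={"Authorization": f"token {TOKEN}"})
    resp.raise_for_status()

# e.g. report_status(pr_head_sha, "pending", "Tests are running")
#      report_status(pr_head_sha, "success", "All tests passed")
```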
20:26 Yeah, that's a really awesome feature.
20:28 And to me, it sounds like this is a really nice way to sort of pre-vet what would be standard
20:35 code review, right?
20:36 Like instead of going, well, you've done your work.
20:38 Now let's review it and see if it's good.
20:40 It's going to be like it's on the verge of being merged.
20:43 And then you've already had all the tests done and everything is good.
20:46 You push the button.
20:46 And when that happens, it automatically deploys with no further work as well.
20:50 So it's just like that one gate, right?
20:52 Right.
20:52 Once all our tests are done and once the review is approved, we click our button.
20:57 We do some other niceties around it.
20:59 We kind of squash everything into our merge commit a little bit so that it's useful.
21:04 And we press the button.
21:06 Right.
21:06 Maybe talk about that a little bit because, you know, with Git, like, if I'm going to do, excuse
21:11 me, I'll create a branch, I'll do like 20 commits, a bunch of little tiny ones.
21:16 And then eventually I'm going to create a PR from that.
21:18 You might want that to not look like 20 small operations, but one holistic one, right?
21:23 Right.
21:24 So we try to follow a system where we abstract more as we go down the production side.
21:28 So we have as much detail as possible on the branch that the developer put their things
21:33 together in.
21:33 So all the commits on that branch are going to be the itty bitty things.
21:36 Started to work on this feature, went to lunch, came back.
21:40 Oh, didn't work.
21:41 Trying it again.
21:43 Yeah.
21:43 Yeah.
21:43 Yeah.
21:44 You know, there's like a bunch, there's a bunch of really funny commits that go into that.
21:47 And then you say, okay, so I'm ready.
21:49 So here's what I'm going to take all that group of 20 commits and merge it as one commit
21:54 up into my master branch.
21:56 And so I go back and clean all that stuff up and say, these are the features that are
22:01 going in.
22:01 These are the issues that are getting closed because GitHub has all that automation.
22:04 And for us, you can say closes hashtag issue number and automatically closes the issue for
22:09 me.
22:10 And then when I press a squash and merge button, all that stuff gets squashed and you only get
22:16 one bubble in your master branch with the summarized changes.
22:18 Yeah, that's really awesome.
22:19 And of course, you can go back to the other branch and see it, right?
22:22 And see all the details.
22:23 Yep.
22:23 Yeah, that's really nice.
22:24 I wonder how many people actually use that GitHub automation around like interacting with
22:29 issues.
22:29 I use that all the time, even just for myself, you know, hashtag some ID of an issue or PR
22:36 and say, this is related to that.
22:38 And it's really nice to just get those automatic links in there.
22:40 Yep.
22:41 And like, I'm very opinionated on issues.
22:43 So I love the way that GitHub does issues, not in the UI or anything like that, just like
22:48 the idea of it's an issue.
22:50 It has a title, it has a description, some comments in it, and some tags, some labels,
22:54 right?
22:54 I don't need anything more than that.
22:57 Everything else can be described with labels.
22:58 Like Jira to you feels like you're swimming in too many, like UI soup.
23:03 Yes.
23:03 And so I've used like half a dozen different issue trackers.
23:07 At the moment, the one we use internally actually is Jira.
23:11 So, you know, we have the usual GitHub versus Jira thing.
23:16 So we actually wound up writing a kind of a bridge to help us out, follow our business
23:21 logic in Jira as we do stuff on GitHub.
23:24 So we also listen for issue webhooks and update Jiras for us automatically.
23:28 But this way I can go open an issue in two seconds by typing it into GitHub.
23:32 And then all the stuff that goes into Jira just kind of gets all rolled into place as it
23:36 should be.
23:37 Oh, that's a really, really awesome way.
23:39 Like, I don't really want to work in Jira.
23:40 So I'm just going to automate working in Jira like my robot will.
23:44 Yeah.
23:44 And you got to be careful with that stuff because then you'll wind up maintaining it.
23:48 But right.
23:50 Nice.
23:51 But yeah.
23:51 So once our pull request is actually merged, more of our webhooks also say, oh, there was
23:57 this pull request that got merged into master.
23:58 So that means we need to deploy code.
24:00 So we open a new pull request to go to production.
24:03 And so that's what we call our deploy pull request.
24:06 And so that one does a little bit of a different thing where it actually builds.
24:12 So our deliverables are container images.
24:15 Docker containers, right?
24:16 Because our service runs using Docker in AWS's Elastic Container Service.
24:23 So just to step back on that a little bit.
24:27 So a lot of people get confused.
24:28 Docker containers is one thing, but there's really two concepts.
24:34 There's the image.
24:35 And then there's the instantiation of that image, which is your actual container, right?
24:40 So when I go and say, I want an image of my REST API, that means I have the file system in
24:47 place.
24:47 So that when I say Docker run, I can instantiate a version of that image and execute my code inside
24:56 that environment.
24:56 So we deliver two different images, one for our web service or web UI and one for our core stuff.
25:04 So all of the orchestration that we have to do in order to make our service work, which involves cloud
25:11 orchestration with AWS and Azure, some third party data center orchestration, switch management,
25:18 array configuration, and then all resource allocation algorithms, user management, all that stuff.
25:25 That's all kind of bundled into one container image.
25:27 And we run it with different environment variables to have it perform different functions.
25:32 So we have a microservices architecture, but with two images.
25:37 The way that AWS works, when you have the container service, you define
25:44 a service, like my core REST API service.
25:48 And I say, I want this to run several tasks or one task.
25:55 And I want it to run this container, but I want this image tagged in such a way.
26:03 So for example, in our repository for that, the REST API backend, when we deploy an image, we push up the code
26:12 and then we say, okay, we're ready to move to staging.
26:15 So we tag it with staging latest.
26:17 So then I can go to AWS, and my deployment activation work of, make this image
26:23 now the valid one in this environment, is just: stop the containers.
26:27 And then AWS will automatically restart them.
26:30 And when they start to come back up, they say, oh, there's a new staging latest image.
26:33 Let me download that and use that one.
26:35 I see.
26:35 That's really cool.
26:36 So basically the AWS container service just knows I'm going to run out of this, this repository
26:43 with this tag.
26:45 And I just always look for that.
26:47 If necessary, rebuild it.
26:48 Right.
26:49 A Docker image.
26:50 A Docker image.
26:50 Yeah, yeah, yeah.
26:51 So, and right.
26:53 So everything just builds off of that.
26:55 And it's really helpful also because if you broke things for one reason or another, all you
27:03 have to do is move your staging latest tag back to your previous one and restart the containers.
27:09 Go now, undo it, undo it.
27:12 Yep.
27:12 That's all you got to do.
27:13 You don't have to worry about anything else because you know that was working code and you're
27:16 back in time.
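As a rough sketch of that tag-based deploy and rollback with the Docker SDK for Python (pip install docker); the registry, repository, and build tags here are made up:

```python
import docker

client = docker.from_env()
REPO = "123456789.dkr.ecr.us-east-1.amazonaws.com/core-api"  # hypothetical registry/repo

def promote(build_tag: str) -> None:
    """Point staging-latest at a specific build and push it to the registry."""
    image = client.images.get(f"{REPO}:{build_tag}")
    image.tag(REPO, tag="staging-latest")
    client.images.push(REPO, tag="staging-latest")

# Deploy build 1234 to staging; rolling back is just promoting the previous build again
# and restarting the containers so they pull whatever staging-latest now points at.
promote("build-1234")
# promote("build-1233")  # undo it, undo it
```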
27:17 Now, there's other.
27:18 That's really nice.
27:19 Complexities when it comes to database migrations and things like that.
27:25 But 90% of the time, you don't have to worry about it.
27:27 Right.
27:28 Okay.
27:28 That's really nice because like it's one thing to roll the code back, but you're potentially
27:32 making infrastructure changes and OS changes at that level as well.
27:37 And the ability to go, oh no, just put everything in the back of, put it back like it was.
27:43 It was working.
27:43 It's pretty cool.
27:44 Yeah.
27:44 Because like, once the code underwent all the testing and all that worked, the
27:49 container image that we used to build that code,
27:53 and I'll get a little bit into that in a minute, is slightly different than the one I actually
27:58 wind up putting in staging because it has more stuff in it to maintain the test infrastructure
28:03 or be able to get to the test infrastructure.
28:06 The one that actually makes it up into staging, which is equal to the one in production at that
28:11 point also has a bunch of other things like an NGINX configuration that's a bit different
28:16 or a uWSGI config along with it.
28:20 So I can easily like mess that up.
28:22 And it's only a problem I see when I go to deploy it.
28:25 Right.
28:25 Right.
28:25 I just, today I would have liked to have something like this.
28:28 I mean, I have multiple like staging and production servers for my various things.
28:34 And one of them, I got an indication there was an upgrade for IDNA, I think, which
28:41 is some low-level dependency on my system.
28:44 And then there's requests.
28:46 And so I upgraded the low-level thing, and it said, oh no, requests forces you, you know, to use an
28:52 older version of that.
28:53 So guess what?
28:54 Your site won't even start.
28:55 It's just dead.
28:56 But luckily it was like running on, like I had taken that one out of the load balancer and
29:00 like, oh my goodness.
29:01 But I had to do a lot more work than just move the tag back.
29:04 Right.
29:05 It was like, all right, well now how do I unravel this?
29:08 How do I make it know that it's supposed to install the right one and all that kind of stuff.
29:11 So it, yeah, it sounds, I can definitely see the advantage here.
29:14 And that actually happens quite often with a bunch of things, not just your Python libraries
29:20 that you have to worry about and their interactions, but also your Docker images.
29:23 So what happened to us in the past couple of weeks while we were going through testing, like
29:27 in the middle of, like, tests are passing and then all of a sudden everything's failing, what's
29:31 going on?
29:32 Like a couple minutes.
29:33 We actually hit where whoever maintains the base Python Docker image that we depend on
29:39 iterated on it.
29:40 They changed it.
29:41 Oh boy.
29:42 Yeah.
29:42 Yeah.
29:43 And the new one, and obviously we can go back to the older one, but at this point we
29:47 wanted to move with a new one.
29:48 The newer one changed the base.
29:51 I think it moved the major version.
29:54 So all the apt packages and stuff like that had updated.
29:57 So I needed to use different, different names to pull some stuff.
30:00 So that was fun.
30:02 Yeah, that's fun.
30:03 And if you do that on the real machine in production while it's running, not so good.
30:07 Yeah.
30:08 Not so good.
30:08 Right.
30:09 That's why all this stuff is in place.
30:10 That's right.
30:11 This portion of Talk Python to Me has been brought to you by Rollbar.
30:16 One of the frustrating things about being a developer is dealing with errors.
30:20 relying on users to report errors, digging through log files, trying to debug issues, or getting
30:26 millions of alerts just flooding your inbox and ruining your day.
30:29 With Rollbar's full stack error monitoring, you get the context, insight, and control you
30:33 need to find and fix bugs faster.
30:35 Adding Rollbar to your Python app is as easy as pip install Rollbar.
30:40 You can start tracking production errors and deployments in eight minutes or less.
30:44 Are you considering self-hosting tools for security or compliance reasons?
30:48 Then you should really check out Rollbar's compliant SaaS option.
30:51 Get advanced security features and meet compliance without the hassle of self-hosting, including
30:57 HIPAA, ISO 27001, Privacy Shield, and more.
31:01 They'd love to give you a demo.
31:02 Give Rollbar a try today.
31:04 Go to talkpython.fm/Rollbar and check them out.
31:08 One question I did have while you're describing what you're up to, and we talked about the
31:13 GitHub hooks like hashtag closes, hashtag one, two, three, or whatever.
31:17 Is there a way to make that happen only when it merges into the main branch?
31:23 Or does that PR commit itself trigger the closing of that issue?
31:28 Yes, I understand what you're asking.
31:29 Yes, it only happens when you do the merge.
31:32 Oh, really?
31:33 Okay.
31:33 Because I type it in my commits all the time.
31:36 Interesting.
31:36 Okay.
31:37 Well, that's awesome.
31:37 Yeah.
31:37 Very, very nice.
31:38 All right.
31:39 So maybe one of the things, I think maybe the most interesting thing to cover is like,
31:43 we've now set the stage of what you're building, but all the various pieces, there's so many
31:48 cool little libraries and packages and things involved in the act of building this whole pipeline
31:55 that you've created.
31:56 So do you want to walk us through that?
31:57 In order to run all the testing, we have a Docker swarm internally on premises in our data
32:03 center where we orchestrate all of this stuff.
32:05 So I have a container running that is my webhook receiver.
32:09 And I built that one, that REST endpoint for that, using Hug.
32:15 Hug is a Python 3 REST API framework, kind of like Flask, but it's a bit smaller and a little more expressive
32:23 because it uses annotations when you're defining your functions to define the input types of your
32:30 parameters for your REST API.
32:32 It also automatically generates documentation.
32:35 Yeah, that's cool.
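A minimal sketch of what such a Hug webhook receiver could look like, with an annotated parameter on a second endpoint; the routes, fields, and build logic are hypothetical:

```python
import hug

@hug.post('/webhooks/github')
def github_webhook(body):
    """body is the parsed JSON payload GitHub sends for the pull request event."""
    pr = body.get("pull_request", {})
    sha = pr.get("head", {}).get("sha")
    # ... kick off the build, lint, and test containers for this sha here ...
    return {"received": True, "sha": sha}

@hug.get('/builds')
def build_status(build_id: hug.types.number):
    """The annotation tells Hug to validate and convert build_id for you."""
    return {"build": build_id, "state": "running"}
```

Running it with `hug -f receiver.py` serves the endpoints, and those annotations are what feed the automatically generated documentation mentioned above.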
32:36 Hug is really interesting in that it's like one of these REST only frameworks.
32:42 It's not built, as far as I understand it, mostly for building web applications,
32:46 but more for building web services.
32:48 And there's a...
32:49 Yeah, there's a host of these that are really amazing at sort of leveraging Python 3.
32:54 So hug is definitely in there, which is super cool.
32:58 API star.
32:59 And API star.
33:00 Yeah.
33:00 Yeah.
33:00 That's the new one I'm playing with because API star fully supports asyncio.
33:05 So I can have an async function and have API star serve it up.
33:11 That's awesome.
33:11 Yeah.
33:12 I can just await in the function.
33:13 And in the meantime, it'll go off and do other things.
33:15 It's pretty cool.
33:16 How about hug?
33:16 Does it do asyncio?
33:17 Do you know?
33:18 I don't think so.
33:19 Last I checked, it did not.
33:21 But it was...
33:22 I think it was able to handle it better because it was all Python 3.
33:27 Yeah, it's definitely all Python 3, which is cool.
33:29 And if you look at the performance, Hug is built on a framework called Falcon.
33:33 Falcon, yes.
33:34 Which is also another cool web framework that probably no one else has heard...
33:38 Not many people have heard of.
33:39 But I had the guys building Falcon on my show.
33:41 And it's like a really low level, high performance...
33:43 It is.
33:43 ...web framework.
33:44 And then Hug is actually built on top of Falcon, which is pretty cool.
33:47 But they're definitely...
33:48 Both of those are like right near the absolute top of performance in terms of request per second on some random piece of hardware.
33:55 So, yeah, pretty cool.
33:56 I find that a lot of things like Django, Pyramid, or Flask, right?
34:01 These things have been around for a while.
34:03 So they have a bunch of things they do for you.
34:05 And the higher level of abstraction that you get out of a framework, which is what you want from a framework, usually.
34:12 The more careful you got to be with performance, because in order to give you that abstraction, they needed to put you through a number of other levels, especially...
34:23 Usually, function calls, which in Python are a little bit expensive.
34:27 They're surprisingly expensive, actually.
34:28 Yes.
34:29 Yes, they are.
34:30 Yeah.
34:31 And so, for example, one thing that Hug says, and I think this is partly coming through the Falcon side of things, is it's compiled with Cython to basically get much higher performance, which is a pretty cool aspect as well.
34:44 Yep, yep.
34:44 Okay, so you've got this, and this is one of the really important things about this Docker stuff, is it's awesome to have your database in a Docker container and your web framework and then your backend services.
34:56 But they all need to know, okay, where are you?
34:59 We just all got rebuilt.
35:00 Where are you now?
35:00 Who are you, right?
35:01 How do I find my backend?
35:02 Right?
35:04 So that's the role of this thing that you built.
35:06 Right.
35:06 So that receiver, the web receiver also can communicate with the Docker swarm using DockerPy.
35:15 And then orchestrate, oh, I need to build a new container.
35:18 I need you to start a new container.
35:21 I need to build a new image.
35:22 I need you to start a new container with this existing or newly built image.
35:26 So, for example, one of the things we do is when the webhook comes in, we go in and use requests to go up to GitHub, grab some information on the repository, search for a file.
35:36 That works kind of like how Travis CI does.
35:40 We have a YAML that says, oh, here's how to set up for testing.
35:43 Here are your install instructions.
35:45 The actual tests are these things.
35:48 And there's a bunch of other settings we can do.
35:50 So one of the things in there might say, well, I want all these tests to run in parallel.
35:55 So that means I got to orchestrate getting the container built, the container image built off of the repository the way that the instructions say they're supposed to be done.
36:04 Then taking that and committing that new image to an internal registry that we have and then telling our Docker swarm to start five to six parallel images to go and execute tests based off of that new image.
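That lookup presumably boils down to something like fetching the config file for the pull request's commit through the GitHub contents API and parsing it with PyYAML; the file name and the fields shown are invented for illustration:

```python
import base64
import requests
import yaml

def load_ci_config(repo: str, ref: str, token: str, path: str = ".ci.yml") -> dict:
    """Fetch a CI config file for a specific commit and parse it."""
    url = f"https://api.github.com/repos/{repo}/contents/{path}"
    resp = requests.get(url, params={"ref": ref},
                        headers={"Authorization": f"token {token}"})
    resp.raise_for_status()
    content = base64.b64decode(resp.json()["content"])  # contents API returns base64
    return yaml.safe_load(content)

# The parsed config might look something like:
# {"install": ["pip install -r requirements.txt"],
#  "tests": ["pytest tests/"],
#  "parallel": 6}
```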
36:22 And then those tests all have to require resources of their own.
36:27 So we have another infrastructure piece, which is a resource manager, and all it does is it sits there and waits for a WebSocket.
36:39 So in order to, essentially, check out a resource, I open a WebSocket connection and I say, oh, I want this type of resource.
36:49 And while that WebSocket connection is open, I have a reservation on that resource.
36:55 So that makes it so that I can write tests and not worry about releasing the resources when they fail.
37:00 Oh, that's pretty interesting because when the thing goes away, it just, it, because it breaks.
37:05 Yeah.
37:06 Socket closes.
37:06 Boom.
37:06 It's all done.
37:07 Huh.
37:08 I didn't realize, I thought you were just doing push notifications.
37:11 I didn't realize the WebSocket like session had such an important role.
37:15 That's pretty cool.
37:16 Yep.
37:16 So we do that.
37:17 And so for that, I used Autobahn before, which is something I used inside Sophie, one of my open source modules.
37:23 But I recently moved it to WebSockets.
37:26 It's a module called WebSockets.
37:28 It's a lot more, it's built around AsyncIO a lot better, more Pythonic, using async for and async with.
37:35 So it makes it a lot easier to interact with in a coroutine kind of way.
37:39 Yeah.
37:39 Interesting.
37:40 So your test might just do async with a WebSocket connection and then do its stuff?
37:44 Something like that.
37:45 So the test will, the receiver will do an async for around the WebSocket, around receiving something in a WebSocket.
37:55 And so that's on the server side.
37:57 On the client side, we just open the socket and, I think on the client side it's an async with, where you just sit there and just kind of wait for messages.
38:06 Interesting.
38:06 That's pretty awesome.
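A rough sketch of that reservation pattern with the websockets library, where simply holding the connection open is the lease; the resource pool and message format here are invented:

```python
import asyncio
import json
import websockets

RESOURCES = {"array-01": None, "array-02": None}  # hypothetical pool: name -> current holder

async def manager(websocket):
    """Server side: hand out a resource, hold the reservation while the socket is open."""
    # (Older versions of the websockets library also pass a second `path` argument here.)
    granted = None
    try:
        async for message in websocket:          # async for over incoming requests
            request = json.loads(message)
            granted = next((n for n, holder in RESOURCES.items() if holder is None), None)
            if granted:
                RESOURCES[granted] = request.get("owner", "unknown")
            await websocket.send(json.dumps({"resource": granted}))
    finally:
        if granted:                              # socket closed: test finished or crashed
            RESOURCES[granted] = None            # reservation is released automatically

async def checkout():
    """Client side: the test keeps the connection open for as long as it needs the resource."""
    async with websockets.connect("ws://resource-manager:8765") as ws:
        await ws.send(json.dumps({"owner": "pr-1234", "type": "storage-array"}))
        reply = json.loads(await ws.recv())
        # ... run the test against reply["resource"]; closing the socket releases it ...

async def main():
    async with websockets.serve(manager, "0.0.0.0", 8765):
        await asyncio.Future()  # run the manager forever

# asyncio.run(main())
```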
38:07 So then another thing that you do after the test pass, then you build your artifacts, right?
38:13 Like your packages.
38:14 And you use proper Python packaging as part of this, right?
38:19 My two main deliverables are container images, but I also have a, those are built on top of a bunch of other repositories that I have.
38:26 Two or three of those repositories are, their deliverables are actual Python packages, which are internal.
38:33 And we use an internal Python package index for that, which we later migrated to a tool called Artifactory.
38:41 I had never heard about Artifactory.
38:42 This is a thing by JFrog.
38:44 Yep, it's by JFrog.
38:46 It is, oh my goodness.
38:47 That is one serious piece of like enterprise software management software there.
38:52 Yeah, it's a lot of stuff.
38:54 A lot of stuff.
38:55 Python package indexes, NPM indexes, whatever those are called, just NFS, Docker registries.
39:02 And then you can mirror.
39:03 So if you have stuff in the outside world, you can mirror those and you can have it automatically push things for you.
39:09 And you can, you can add like tags and properties to things.
39:14 It's, it's quite complex.
39:15 It has a REST API too, to get to it.
39:17 It's pretty interesting.
39:18 Yeah, it's really interesting.
39:20 Their website has like a bunch of cool little animations.
39:22 It just makes you, it kind of draws you in.
39:24 So to me, it looks like you've taken, you guys in general have taken a lot of the awesome stuff from the public open source and maybe sort of made your own private version of it.
39:34 So you've got, you know, GitHub Enterprise.
39:36 You've got like a private PyPI server, private Docker repositories, all sorts of stuff, registries.
39:43 Yep.
39:43 It just makes the whole thing easier to work with because you have, you have an existing ecosystem that can work with all of this.
39:49 You don't have to build your own modules to talk to them.
39:52 Yeah.
39:52 Super cool.
39:53 I'd never heard of Artifactory, but it definitely looks like, like worth checking out.
39:57 It's no small piece of software as far as I can tell.
39:59 It looks like a big, a big thing that does a whole bunch of stuff, but it definitely looks like it.
40:04 It's pretty cool.
40:04 Yeah.
40:05 And it comes with its own complexities.
40:07 So if you want, if you just want an internal package index, really, there's a bunch of existing things already you can use, or you could just build your own.
40:15 I built one with Hug before Artifactory.
40:17 It's just a web server.
40:18 Right.
40:19 It's just a web server and a couple of interactions.
40:21 It's not super complicated, but yeah.
40:22 Pretty cool.
40:23 And so we talked a little bit about some of what happens next.
40:26 You have your GitHub hooks and your PRs and all that kind of stuff.
40:31 What else is involved?
40:32 So you have your Hug service that you've talked about.
40:35 That's pretty awesome.
40:36 You used PyDocker, which you mentioned in passing there.
40:40 DockerPy, sorry.
40:41 Which is just pip install Docker, right?
40:44 Yes.
40:44 Yes.
40:45 Nice.
40:46 pip install Docker.
40:46 Okay.
40:46 So if you wanted your Python app to, say, orchestrate creating new containers or spinning them up, that would be what you use?
40:54 Right.
40:54 And it's got two client layers.
40:59 One is like a lot lower level, which is an API client kind of thing.
41:03 I think they call it API.
41:04 I forget what they call it.
41:05 And then there's one which is like the Docker client.
41:07 So the Docker client operates more at an object level.
41:11 So you can say, so you point it to where your Docker, your main Docker master is of your swarm.
41:19 And you can just do .images, .list, .create, stuff like that.
41:27 Same thing for .containers.
41:29 And then with the swarm in general, things get pretty complicated when you go out to swarm or Kubernetes, mostly because the containers that are managed by the swarm are not really containers.
41:42 They're services.
41:44 And it all makes perfect sense if you're running, say, a web service.
41:48 And you say, I want a web service that needs to always be up and I want two instances of it.
41:53 So you run one service with two tasks, each one of this type of container.
41:59 But for us that are actually creating essentially one container or two containers individually to run every time, we have to make a new service for it.
42:08 So there's a lot of layers there that complicate things a little bit.
42:12 But it's very easy to manage with DockerPy because it's all kind of built by the Docker guys.
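A sketch of those two layers and the service-per-container pattern with the Docker SDK; the image and service names are placeholders:

```python
import docker

client = docker.from_env()   # object-level client; docker.APIClient() is the lower-level one

# Everyday object-level calls:
client.images.list()
client.containers.list()

# On a swarm, each one-off "container" you launch is really a service with one task:
service = client.services.create(
    image="registry.example.com/test-runner:pr-1234",
    name="tests-pr-1234",
    env=["ROLE=integration-tests"],
)
# ... poll service.tasks() until the test run finishes, then clean up ...
service.remove()
```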
42:16 Yeah.
42:16 Okay.
42:16 That's really, really cool.
42:17 And then another thing that you use is something called ChatOps.
42:22 What is ChatOps?
42:23 Is that like something for DevOps?
42:25 Sure.
42:25 When our deploy PR is complete, that means it built a container image and it pushed that image out to our Amazon container registry.
42:37 So then we need to, we could automate this, but I still want to have some manual checks in place.
42:44 So what we did was we have a chat system and just made a bot.
42:52 And I can tell the bot, hey, I want this image to be my staging image.
42:57 Go do it.
42:58 And then the bot will go in and tag that image with staging latest, and it'll go in and stop all my containers in AWS, which will automatically restart, and essentially do my flip over to a new version.
43:12 That's awesome.
43:12 Now, ChatOps in general is kind of like a concept of being able to run, to manage a bunch of services or deliverables or code or whatever you want to do over a chat system using a bot, essentially.
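The AWS side of that bot command might look roughly like this with boto3, after the image has been retagged as in the earlier Docker SDK sketch; the cluster and service names are hypothetical:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

def flip_to_staging() -> None:
    """After staging-latest has been retagged, force ECS to restart the service.

    The tasks come back up, pull whatever image staging-latest now points at,
    and that is the flip over to the new version."""
    ecs.update_service(
        cluster="staging",    # hypothetical cluster name
        service="core-api",   # hypothetical service name
        forceNewDeployment=True,
    )
```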
43:26 Yeah.
43:26 That's a pretty cool idea.
43:27 I mean, we saw Kelsey Hightower's thing at the 2017.
43:32 Yeah, it was 2017 PyCon where he got basically Google's voice assistant to do his Kubernetes stuff, right?
43:40 Yep.
43:41 So I was kind of laughing through his presentation a little bit because I was like, yep, that's what I do, except I can't talk to it.
43:49 But, you know, not through voice, right?
43:52 Well, you're not far away from getting some Google Home or some Alexa.
43:57 Yeah.
43:58 You could use some of the Alexa stuff going.
44:00 And my, sorry if everyone's Alexa is going off, mine is as well now.
44:03 The Amazon assistant, let's call it that.
44:06 There's a Python one called Calliope we were just looking at today.
44:09 And there's another one, I forget the name, that's also pretty famous in the Python world.
44:15 We were just laughing at it this morning saying that we should hook all our stuff up together and just say, hey, deploy to staging.
44:21 Hey, restart our stuff.
44:24 There's some pretty easy ways to do it, actually.
44:27 Yeah.
44:27 Just some random dude walks into the cubes and just kind of kills all our stuff.
44:32 Deploy production.
44:33 No, no, no, no.
44:34 Yeah, that's pretty awesome.
44:37 Another tool that I don't think I've heard of that was really impressive to me is Locust.
44:43 Yes.
44:43 Yeah, tell us about that.
44:45 Like, use pytest for your standard level, your automated testing.
44:49 But Locust is more on the performance side, right?
44:51 That's right.
44:52 So, Locust, the idea of Locust is to test web services.
44:58 So, you can write tasks in the forms of scripts or actions that represent users of your service.
45:09 And you can have, like, a whole set of kind of like setup and teardown type of stuff, kind of like your regular type of test environment.
45:16 But then Locust can manage that over a large amount of virtual machines to go off and test your API and then come back and tell you, well, you managed to receive these many requests per second on this endpoint and these many on that endpoint.
45:32 And this one was rate limited at this point.
45:34 And this one was, you know, errored out when you did this thing.
45:37 And so, you get a report.
45:38 That's really cool.
45:39 So, you've got, like, all the different parts of your site.
45:42 And it shows you, here's the number of requests in a big grid.
45:45 Like, this URL got this number of requests with this many failures.
45:48 And, you know, average response time is this.
45:50 And, yeah, it's super cool.
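A minimal locustfile in the style of the Locust API from around the time of this episode (newer releases rename HttpLocust and TaskSet to HttpUser); the endpoints and host are placeholders:

```python
from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    @task(3)
    def list_volumes(self):
        self.client.get("/api/v1/volumes")   # weighted 3x: hit more often

    @task(1)
    def create_volume(self):
        self.client.post("/api/v1/volumes", json={"size_gb": 100})

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 1000   # milliseconds between simulated user actions
    max_wait = 5000

# Run with:  locust -f locustfile.py --host=https://staging.example.com
# To spread the load across many machines, start one node with --master
# and the rest with --slave (--worker in newer Locust versions).
```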
45:52 Like, one of the big problems with load testing is actually generating enough pressure on your web server, right?
46:01 Like, if you just do that over, say, your broadband connection at home on your laptop, like, maybe the limit is your outbound network or something, right?
46:10 Something like that.
46:10 Whereas, like, if you could put it on 100 VMs, spin them all up and, like, you know, turn those loose slowly.
46:17 Like, that's awesome, right?
46:18 Yeah.
46:19 You can have a different amount of virtual machines and it just kind of orchestrates all of them for you.
46:23 You just got to have Locust installed on them and the scripts that they got to run.
46:26 But, yeah.
46:27 Yeah, this is looking super cool.
46:28 I definitely would like to look more into it.
46:31 So, yeah, it says define user behavior with Python code and swarm your system with millions of simultaneous users.
46:37 You know, when tools like this exist, I'm just blown away when there are websites that fail so badly when they get a lot of traffic.
46:47 You know, like, I understand there's some limit where it's like, okay, it just is not going to take more.
46:52 But that limit should be many thousands, not a couple hundred, right?
46:57 Right.
46:58 And so, another thing we did, so we used Locust to figure out where we might break.
47:03 And then what I do is, in our Docker image, whenever that actually gets instantiated into a container, there are a few instructions that go in and replace environment variables in our Nginx configuration.
47:20 So, I can go in and tweak the requests per second.
47:25 So, I do it at the Nginx level so I never hit the Python code.
47:28 So, if I know I'm going to break at whatever, I can put in a limit at 8 or whatever on my...
47:37 And then it just queues in Nginx until...
47:40 Correct.
47:41 ...until uWSGI is done with it.
47:42 So, you can configure Nginx to do that per IP address or just in general and what error codes to return, all that stuff.
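A sketch of that container-startup step, assuming a REQS_PER_SEC environment variable and a simple template; the file paths and limits are invented, and limit_req is standard Nginx rate limiting:

```python
import os
from string import Template

# Hypothetical template shipped in the image; $$ escapes the literal $ Nginx variables.
NGINX_TEMPLATE = Template("""
limit_req_zone $$binary_remote_addr zone=api:10m rate=${reqs_per_sec}r/s;
server {
    listen 80;
    location / {
        limit_req zone=api burst=20;
        limit_req_status 429;
        include uwsgi_params;
        uwsgi_pass unix:/tmp/uwsgi.sock;
    }
}
""")

def render_nginx_conf(path: str = "/etc/nginx/conf.d/app.conf") -> None:
    """At container start, bake the env var into the config before Nginx launches."""
    rate = os.environ.get("REQS_PER_SEC", "8")
    with open(path, "w") as f:
        f.write(NGINX_TEMPLATE.substitute(reqs_per_sec=rate))
```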
47:52 Oh, that's awesome.
47:53 So, a couple of other tools that are at play here are some of your projects.
47:59 One is Sophie and one is Corv.
48:01 So, Sophie falls into a pretty interesting realm of Python, I would say.
48:07 Yeah.
48:08 Maybe tell people what Sophie is.
48:09 Some people talk a lot about user interfaces and Python.
48:15 I don't know any of those.
48:17 Oh, yeah.
48:17 Actually, I think it was after one of your very first, like a long time ago, conversations about user interfaces and stuff.
48:23 And I was like, oh, you know, I'm pretty sure this is exactly what I was thinking.
48:28 I've built so many, you know, Bootstrap-based quick interfaces with like some just jQuery default stuff.
48:37 I don't want to write that anymore.
48:38 I want to just write it in Python.
48:40 Yeah.
48:40 So, what I did is essentially...
48:43 It's a module that lets you do that where I kind of wrapped the widgets that you'd get out of Bootstrap, the HTML kind of library to help put that together.
48:52 But it evolved because the way I do it is through WebSockets and AsyncIO.
48:58 So, in the back end, you can kick off a web page that loads off a basic JavaScript library that you only have to write once, which tells that web page how to interact with Sophie.
49:11 And you run a Python web server, WebSocket server, which is Sophie, that actually sends commands out to the web page.
49:19 So, you open up a website and all of your interaction and eventing can go all the way back to your Python code and you can react on that and come back out to the UI.
49:29 And after that evolution, I realized that Sophie is actually really a WebSocket protocol to help you do all of this and kind of like library to help you do all of this.
49:40 Because you can just drop in and replace other ways of doing these conversations between client and the server.
49:46 And so, what I did after that was I went to a game engine, Unity 3D, and dropped in a WebSocket client written in C Sharp.
49:57 So, now I can, from Python, spawn game objects and things like that in Unity.
50:04 That's pretty awesome.
50:05 So, it's a little, a tiny bit like Electron.js type apps where there's like a Python backend.
50:12 It's some sort of web front end, at least the first incarnation of it.
50:15 It's also deployable kind of like that if you want as well.
50:18 But you can also do it.
50:20 So, the original idea was to go down a desktop application type thing, in which case you would want to build it like that, like an Electron.js thing where you distribute Chromium, the browser that Chrome is based on, which is completely open source as your front end.
50:34 But you can just deploy the backend by itself onto like some service in a Docker swarm, which is what we do.
50:40 And just open up a web page and talk to it.
50:42 Yeah, nice.
50:43 And your other project, Corv, is about sort of skipping the whole REST API entirely, right?
50:49 And using actually SSH?
50:51 So, in the process, over the years, working on different services, right?
50:57 You always want to have, there's always the customer facing one.
50:59 But then you always want some data or something you want to have in some admin mode for.
51:04 So, it's always a risk to put those admin endpoints in the customer facing one.
51:08 Because if they're there, somebody's going to fiddle around and bump into them.
51:11 And then you've got to worry about security and all that.
51:15 Right.
51:15 It just takes one forgotten security check and all sorts of badness happens.
51:19 Right.
51:19 Especially when those checks are usually decorators around Python functions, which you could forget to put in.
51:24 Right?
51:25 Yeah.
51:25 So, the idea is, instead of using HTTP, use SSH.
51:30 The first time I came up with this was when I ran into AsyncSSH, which is the base library for this.
51:38 Because I wanted an async way of doing SSH calls.
51:41 And they let you do this.
51:43 So, I let SSH take care of the authentication.
51:46 You've got to have your client key and the allowed clients set up on both sides, you know, your known hosts on your client and your acceptable public keys for your client on the server.
51:58 And so, SSH handles your authentication.
52:01 And then after that, you open a TCP socket over SSH and just send information back and forth.
52:07 I just wrapped it in JSON and kind of used HTTP-ish, REST-like mechanisms for, like, get, store, update, and delete.
52:16 So, I do that as my admin interface.
52:19 It's only accessible to me.
52:20 But even if I break, even if I mess it up and it somehow exposes the ports out to the internet, it's still SSH.
52:27 So, you still need the proper keys to get in.
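As a client-side sketch of that idea, built on AsyncSSH; the hostname, port, key paths, and JSON shape here are hypothetical and not Korv's actual wire format:

    # hypothetical admin client: SSH handles authentication via keys and
    # known_hosts, then JSON requests flow over a TCP channel tunneled
    # inside the SSH connection
    import asyncio
    import json
    import asyncssh

    async def admin_get(key):
        async with asyncssh.connect(
                "admin.example.com", port=2222,
                client_keys=["~/.ssh/id_ed25519"],          # your client key
                known_hosts="~/.ssh/known_hosts") as conn:  # pins the server's host key
            # open a TCP socket over SSH and speak a small JSON protocol on it
            reader, writer = await conn.open_connection("localhost", 8022)
            writer.write(json.dumps({"action": "get", "key": key}).encode() + b"\n")
            response = await reader.readline()
            writer.close()
            return json.loads(response)

    print(asyncio.run(admin_get("service-status")))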
52:30 Yeah.
52:30 That's really awesome.
52:31 I think that's quite a cool idea.
52:33 Like, I have certain things where you can only get to them through SSH.
52:38 You can't access them or interact with them, you know, without that.
52:42 And this is kind of like, instead of just exposing, like, tunneling that through or something, you're like, no, let's just make that the API network layer.
52:50 Yeah.
52:50 Exchange layer.
52:51 Both of those are async.
52:53 So, you can just have it kick off long-running things.
52:57 Doesn't matter.
52:58 You'll get a callback when it's done.
52:59 Kind of thing.
52:59 Yeah.
53:00 Really cool.
53:00 Let's see.
53:01 Another couple of things that were really interesting that you're using.
53:03 One is PyAutoGUI.
53:05 And that's from Al Sweigart.
53:06 What's that?
53:07 These are kind of more of an experimentation thing.
53:10 So, we have pytest to execute all of our tests, which is great.
53:14 We use Selenium for a few things.
53:17 And then we use some other JavaScript node-specific runners for the WebUI stuff as well.
53:24 But you just, like, browser compatibility is always going to be an issue.
53:29 We want to try to do something more at the...
53:33 We ran into a couple of issues where you say, you open this thing in Firefox, but if you open it in Chrome, this one little piece of it is kind of wonky.
53:42 It's larger than it should be or it's off the screen and things like that.
53:46 Yeah.
53:46 So, PyAutoGUI lets you...
53:49 Does a lot of stuff.
53:50 It's about automating your OS through GUI things.
53:55 Move the mouse here.
53:56 Click on this.
53:58 Type this from the keyboard and things like that.
54:01 Now, it also helps you take screenshots.
54:04 So, one of the things that I was thinking was we could put something together that says, open browser, type this into location bar, load web page, type username, type password, click login.
54:18 Right?
54:18 And then take a screenshot of the result and then use OpenCV to compare that screenshot to an already existing screenshot that I should have and maybe find the next button I have to click on.
54:33 And if that button is not on the screen, then error, move on.
54:37 Kind of way.
54:37 The advantage of using the OpenCV stuff is you can have a confidence level on it.
54:41 So, if things are off, you'll know.
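A rough sketch of that flow with PyAutoGUI; the URL, credentials, and reference image are stand-ins, and locateOnScreen's confidence parameter uses OpenCV template matching when opencv-python is installed:

    # hypothetical browser check: drive the UI through the OS, then verify the
    # result against a previously approved reference screenshot
    import pyautogui

    pyautogui.hotkey("ctrl", "l")                                # focus the location bar
    pyautogui.typewrite("https://staging.example.com/login\n", interval=0.05)
    pyautogui.typewrite("test-user\t", interval=0.05)            # username, tab to password
    pyautogui.typewrite("not-a-real-password\n", interval=0.05)  # password, enter to log in

    # template-match the expected button against the live screen; the confidence
    # threshold tolerates small rendering differences between browsers
    try:
        button = pyautogui.locateOnScreen("expected_next_button.png", confidence=0.9)
    except pyautogui.ImageNotFoundException:   # newer PyAutoGUI raises instead of returning None
        button = None
    if button is None:
        raise AssertionError("Next button not found, page rendered incorrectly")
    pyautogui.click(pyautogui.center(button))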
54:44 That's totally cool.
54:46 Yeah.
54:47 There's a project by a friend of mine named Llewellyn Falco called Approval Tests.
54:52 And it will do something similar, basically.
54:55 It will go and instead of having a whole bunch of tests, it just says, here's the output.
55:00 Is that good or bad?
55:01 You say, yeah, this is good.
55:02 And then it records that.
55:03 And then unless that output changes.
55:05 Oh, sure.
55:05 Right?
55:06 And it could do that.
55:06 I think it does that with pictures as well.
55:08 Right?
55:09 So, you could screenshot something and go, this is the verified version.
55:12 If this changes, I need to check it out.
55:14 Otherwise, just keep running the tests and saying they pass.
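To illustrate the idea in plain pytest terms, here is a home-grown golden-file check; this is not the ApprovalTests library's actual API, and the report function and file name are made up:

    # minimal "approve the output once, then diff against it" style of test
    from pathlib import Path

    APPROVED = Path("approved_report.txt")   # hypothetical reference file

    def generate_report():
        # stand-in for the real code under test
        return "total users: 42\nactive today: 7\n"

    def test_report_matches_approved_output():
        received = generate_report()
        if not APPROVED.exists():
            # first run: record the output so a human can review and approve it
            APPROVED.write_text(received)
            raise AssertionError("No approved output yet, review approved_report.txt")
        assert received == APPROVED.read_text()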
55:16 Cool.
55:17 Yeah.
55:17 So, something like that is what we wanted to do.
55:19 Yeah.
55:19 Yeah.
55:20 That sounds really cool.
55:20 And you were talking about using OpenCV as well, huh?
55:22 That's cool.
55:23 And I actually tried out a few different things and OpenCV wound up being the faster one.
55:26 Yeah.
55:27 Awesome.
55:27 Cool.
55:29 All right.
55:30 There's just so many little interesting tools and steps along this whole process that I think a lot of organizations are trying to get to, right?
55:38 Like I said at the beginning, right?
55:39 I check in, I merge a PR, I wait, magic appears on the other side, right?
55:44 With zero downtime.
55:45 But it sounds like you guys have really got it pretty nearly there.
55:49 That's awesome.
55:50 So, was it worth it?
55:51 Oh, for me, it is, right?
55:52 I mean, you have to, but you have to step back.
55:55 You have to do your engineering work behind it, right?
55:58 Don't just do it because everybody's doing it kind of thing.
56:00 If we have a service, right?
56:05 So, it is in our best interest to make it as fast and as easy as possible to release a fix out to the customer.
56:16 Yeah.
56:16 And the last thing you want to do is like try to release a fix and then take the whole thing down and make it worse.
56:21 Exactly.
56:22 So, the way to go through all of this is you have to step back.
56:26 You have to look at your process.
56:27 You have to, it's a lot of pieces and a lot of moving parts.
56:31 So, you have to say what checks do I need at which point in time of my delivery flow does it make sense to check what?
56:41 So, if I say all these tests need to pass here, that means I've guaranteed this basic function is working.
56:47 These tests pass here, that means my infrastructure is working, et cetera, et cetera.
56:52 Right.
56:52 And it also depends on the quality of your test, right?
56:55 Like you need to know that if the tests pass, pushing to production without further question is okay, right?
57:03 Whereas if you only test a few things and maybe they don't test that well, like if a lot of stuff slips through, then this isn't so helpful, right?
57:11 It's got to be a good net.
57:12 Right.
57:12 And we made the decision early on to put, to invest into that because, you know, we think it's going to bear fruit for us and it has been very useful.
57:21 We very, very seldom have a really broken function make it into staging.
57:26 And when we do, the first thing to fix is not the function.
57:30 It's the test to make sure that you can't push again with it broken.
57:36 Yeah.
57:36 That's a really good point.
57:37 It's like, why did this get through?
57:39 There's actually a problem in the continuous delivery system that it got this far.
57:44 Now let's fix that, right?
57:46 That's awesome.
57:47 Yeah.
57:47 And take advantage of the situation that you're in, which is you have a real failure, not something that you thought you might have.
57:55 You have a real one.
57:56 So make sure that your tests fail when you have a real failure and then go fix the code to make sure that it passes.
58:02 That's awesome.
58:03 I think that's really great advice.
58:04 All right.
58:05 I want to take just a moment, let you maybe list off some of your popular articles that you've written, but don't want to take too much time since we're running out of that.
58:13 The most viewed article over time from the blog has been Threaded Asynchronous Magic and How to Wield It, which is pretty much about an intro into AsyncIO and what you can do with it.
58:25 Yeah, it's a really good one.
58:27 Just how to manage tasks and stuff like that.
58:29 The one that's like most read, as in the most time people have spent going through all the details, was A Python Ate My GUI, which was the starter article for making Sofi and kind of the state of GUIs in Python.
58:45 And the most recent one I have, which is now no longer true, the most recent one is about GDPR.
58:54 And the implications of the European, the new European regulation on, for software developers.
59:01 But the one before that, which was the one in my list was Practicality Beats Purity about microservices and monoliths, which we talked a little bit about already.
59:10 Yeah, people should check that out if they're considering one or the other.
59:13 And there's a lot of interesting trade-offs that you highlight there.
59:16 All right.
59:17 So if this was a few weeks ago, I might ask you a little more about the GDPR and get your thoughts on that.
59:22 But there's something bigger to talk about.
59:23 So you talked about using GitHub Enterprise.
59:26 Like I'm super invested in GitHub.
59:28 I just checked like right now, the time of recording, I have 134 repositories in GitHub.
59:34 That's a lot.
59:36 And many of those are private ones, like supporting my various things, but a lot of them are public as well.
59:40 So the big news, like last week, was that Microsoft acquired GitHub.
59:45 Yes.
59:46 What was your first thought?
59:47 Oh boy.
59:47 That was my first thought.
59:49 Yeah.
59:50 I'm not a Microsoft fan, but I am willing to admit that they have a different direction, which I like, which is way more embracing of open source.
59:59 They are the world's biggest open source contributor today.
01:00:05 Which is like, think about where we are, just that you said that.
01:00:08 Like, that's crazy.
01:00:09 Yep.
01:00:10 They have incorporated Linux into Windows, sort of.
01:00:17 They have contributed a significant amount of work to Docker.
01:00:21 And GitHub, or Git itself, actually.
01:00:23 The Git virtual file system.
01:00:25 And Git, yes, with the Git virtual file system, which was a huge contribution, especially for folks doing large single repository code bases.
01:00:33 So that's, for me, that's a good direction.
01:00:38 Unfortunately, the track record so far hasn't been all that great, even with their most recent acquisitions.
01:00:44 The most common one that I hear and that I have problems with is Skype and where that wound up with.
01:00:50 But some folks also complain about how LinkedIn is going.
01:00:54 I don't know.
01:00:55 So I don't know.
01:00:57 Internally, there was a lot of outcry from the community.
01:01:01 But equally, there's a lot of Microsoft developers that are like, well, but we at Microsoft love GitHub just as much as you guys do.
01:01:09 And I genuinely believe that.
01:01:11 It's not about whether the folks contributing to the code like it or not or want to keep it going or not.
01:01:19 It's the fact that now you've concentrated the majority of open source projects in a business that has its own languages, its own platforms, its own software, which you could maybe wind up getting biases for.
01:01:40 And once it's there, it's just going to be more complicated.
01:01:43 There's also questions about IP that people brought up.
01:01:47 I don't know how much that's a thing.
01:01:50 But, you know, technically, everybody had access to all those repos anyway.
01:01:55 But I guess now they get access to all the private ones, too.
01:01:57 So I don't know.
01:01:59 Yeah.
01:01:59 So a couple of thoughts.
01:02:00 Yeah, I definitely sort of felt similar to what you're saying.
01:02:03 Like, I don't think Microsoft has any sort of bad intentions towards GitHub.
01:02:08 I think they do.
01:02:09 Right.
01:02:10 I think they do really love it.
01:02:11 They're really invested in it.
01:02:12 However, you know, they could fumble it and just make it not so nice.
01:02:17 Right.
01:02:18 I don't think they would like intentionally shut it down or do something to make it less good.
01:02:22 But they certainly could try to make it better and wind up making it worse.
01:02:25 That is a thing that could happen.
01:02:27 The IP part is pretty interesting.
01:02:29 What really surprised me is there's an article called, it was on Ars Technica.
01:02:34 That's one of my favorite places to read this kind of stuff because the comments are great.
01:02:38 It says, everyone complaining about Microsoft buying GitHub needs to offer a better solution.
01:02:41 And they really went through point by point, like how GitHub was actually in pretty big trouble.
01:02:47 Yeah.
01:02:47 And at some point, this is almost like, would you rather not have GitHub or have a GitHub that Microsoft owns?
01:02:54 Rather than, well, I want GitHub to be this free thing.
01:02:57 Like to me, the biggest negative here is, it's just consolidation.
01:03:00 Right.
01:03:01 Like there was this sort of independent place where open source could go be its thing.
01:03:06 And everybody was on sort of equal footing.
01:03:08 And now it's been consolidated into one of the big five tech companies.
01:03:12 And that just, that's just different and not necessarily better.
01:03:17 But after reading this Ars Technica article, I felt much better about it because I didn't realize the alternative was as bad as it could be.
01:03:24 But I mostly don't like the fact that it's just consolidating further in the whole tech space.
01:03:31 I think that is the main concern.
01:03:32 I agree.
01:03:33 I think after going through this, I think it becomes a bit more obvious that we kind of, I think I said this over Twitter, we kind of need like a Mozilla software foundation of open source repositories kind of thing.
01:03:46 Yeah.
01:03:47 Like an independent body that's somehow funded that whose sole purpose is to just, you know, you put stuff up there and it's going to stay there and, you know, we'll keep it up kind of thing.
01:03:57 And that would alleviate the concerns a bit.
01:03:59 It would.
01:04:00 I think on the positive side, I think, you know, Microsoft has done a good job with Xamarin, right?
01:04:05 And that was open source sort of, you know, they, I think that that's stronger now than it has been as part of them taking it over.
01:04:12 So there's one, you know, check in the win column, maybe. And Microsoft's part of the Linux Foundation.
01:04:17 And they're like, there's signs that this is going to go well.
01:04:20 There's like you said, Skype example.
01:04:22 So there's also signs where it might not go so well.
01:04:24 So I think it's, it's up in the air.
01:04:26 My concern is the two things like it could just get fumbled and messed up.
01:04:29 But the fact that they're running GitHub as an independent organization, it's really good.
01:04:34 The fact that Nat Friedman, the guy that was one of the co-founders of Xamarin, is going to be the CEO of GitHub, where apparently they were struggling to get a CEO at all.
01:04:44 There's like a big problem there.
01:04:46 So anyway, it's, I think it's, it's pretty interesting.
01:04:49 Fingers crossed for a positive result.
01:04:51 Yeah.
01:04:52 And from the business perspective as well, you know, Microsoft, I'm sure it makes perfect sense for them.
01:04:57 They just put all this time into the Git virtual file system stuff and they just moved all their stuff over to that.
01:05:03 So they want to secure a future for that.
01:05:06 Right.
01:05:06 And this is their, this is their way of doing that.
01:05:09 Yeah.
01:05:09 I guess the other thing is that sort of gives me sort of a positive outlook, I guess, is at least the way I've seen it these days.
01:05:17 It's like, if you want to understand what Microsoft is doing or why they're doing it, the, the answer is Azure.
01:05:22 And then you've got to figure out what the question is.
01:05:24 Like, obviously it's for Azure, Azure, Azure.
01:05:26 They're just trying to grow Azure.
01:05:28 Like they couldn't care less about Windows or, to some degree, Office, right?
01:05:32 Like they see the new lock-in is the cloud.
01:05:35 And how do we go be part of that?
01:05:37 And you know, all the different technologies run there.
01:05:39 So I think that that's going to put some pressure to keep it more fair-handed, rather than say it's only .NET or it's only Windows, or any of these sorts of pressures that you kind of hinted at at the beginning.
01:05:49 Yep.
01:05:49 Agreed.
01:05:50 All right.
01:05:50 Well, I guess we'll leave it there.
01:05:52 We could go, we could have a whole show on the thoughts about the GitHub acquisition.
01:05:54 Maybe I will at some point.
01:05:56 At first I was like, oh boy, this is probably going to get messed up somehow.
01:06:01 But after spending a week doing more research, I'm kind of like, well, it looked kind of like it was necessary and it's probably the least bad outcome that we're going to get.
01:06:10 So, you know, fingers crossed.
01:06:11 Yeah, for sure.
01:06:12 Cool.
01:06:13 All right.
01:06:13 So let me hit you with the last two questions before you get out of here, Chris.
01:06:16 So notable PyPI package.
01:06:18 We covered a bunch actually.
01:06:19 The two that I would bring out of the whole list, since we're kind of talking about async stuff: one is AsyncSSH.
01:06:27 Forget about Paramiko.
01:06:29 AsyncSSH does it better.
01:06:31 And the other is the websockets module, which is pretty good if you're doing anything with WebSockets.
01:06:37 Nice.
01:06:37 And so final call to action, people want to bring continuous delivery into their whole workflow, their life, their team.
01:06:44 How do they get started?
01:06:45 The first thing to do, and we kind of touched on it a little bit earlier, is to step back and analyze what benefits you get out of it and what problems you're trying to solve.
01:06:55 And then slowly go through it and put something in place where, you know, my end result is going to be X, a delivered package of this here.
01:07:03 And I need to guarantee that it works in pieces at different steps of the way.
01:07:08 And figure out the effort to make that happen.
01:07:11 If the effort is really, really, really, really, really large, then maybe it's not worth it for you.
01:07:16 Cool.
01:07:17 Well, I really appreciate you coming and sharing what you guys are up to because you definitely have it pretty dialed in.
01:07:21 Yeah.
01:07:23 Yeah, no problem.
01:07:24 Glad to be here.
01:07:24 It's always fun to have these conversations.
01:07:26 Yeah.
01:07:26 Thanks, Chris.
01:07:27 This has been another episode of Talk Python to Me.
01:07:31 Our guest has been Chris Medina.
01:07:33 And this episode is brought to you by Linode and Rollbar.
01:07:35 Linode is bulletproof hosting for whatever you're building with Python.
01:07:39 Get four months free at talkpython.fm/Linode.
01:07:44 That's L-I-N-O-D-E.
01:07:45 Rollbar takes the pain out of errors.
01:07:48 They give you the context and insight you need to quickly locate and fix errors that might have gone unnoticed until your users complain, of course.
01:07:56 As Talk Python to Me listeners, track a ridiculous number of errors for free at rollbar.com slash Talk Python to Me.
01:08:03 Want to level up your Python?
01:08:05 If you're just getting started, try my Python Jumpstart by Building 10 Apps or our brand new 100 Days of Code in Python.
01:08:12 And if you're interested in more than one course, be sure to check out the Everything Bundle.
01:08:15 It's like a subscription that never expires.
01:08:18 Be sure to subscribe to the show.
01:08:20 Open your favorite podcatcher and search for Python.
01:08:22 We should be right at the top.
01:08:23 You can also find the iTunes feed at /itunes, Google Play feed at /play, and direct RSS feed at /rss on talkpython.fm.
01:08:33 This is your host, Michael Kennedy.
01:08:35 Thanks so much for listening.
01:08:36 I really appreciate it.
01:08:37 Now get out there and write some Python code.