
#43: Monitoring high performance Python apps at Opbeat Transcript

Recorded on Wednesday, Jan 13, 2016.

00:00 What does it take to track detailed analytics and errors from literally thousands of web applications all at once? Could you build such a system entirely in Python?

00:00 The answer is yes, and we'll hear from Ron Cohen from Opbeat about how they do it for Django, Flask, and even NodeJS apps. This is episode number 43, recorded January 13th 2016.

00:00 [music intro]

00:00 Welcome to Talk Python to Me. A weekly podcast on Python, the language, the libraries, the ecosystem, and the personalities. This is your host, Michael Kennedy, follow me on twitter where I'm @mkennedy.

00:00 Keep up with the show and listen to past episodes at talkpython.fm and follow the show on twitter via @talkpython.

00:00 This episode is brought to you by Hired and Snap CI. Thank them for supporting the show on twitter via @Hired_HQ and @snap_ci.

00:00 News

00:00 Hi everyone. Do you remember that T-shirt kickstarter I did last summer to create a cool Talk Python To Me Podcast T-shirt? It was super successful reaching its funding goal within just 2 hours!

00:00 Well, this shirt is back!

I've worked with our friends at pythongear.com to make the shirt available on demand for just $25. Visit talkpython.fm/shirt and get yourself one while they're hot. The proceeds support the show and the shirt helps spread the word about the podcast!

00:00 Now, let's get right to the conversation with Ron Cohen, the CTO and co-founder at Opbeat.

01:49 Ron, welcome to the show.

01:49 Thank you. Thank you so much, Michael.

01:51 You are welcome. I'm a big fan of Opbeat and I know you guys do a ton of stuff with Python. It's going to be a really interesting conversation. Before we get into all that though, as usual, how did you get started in programming in Python?

02:03 I started programming initially because one of my friends borrowed some books from the library- this was many years ago- and they were about the programming language Basic, as I am sure a lot of people got started with Basic. And I found it very intriguing to be able to tell a computer what to do, and sort of interact with it, and my friend lost interest but I sort of kept going. And, my dad was actually also working as a programmer, so a lot of people think I got into programming because of him, but it turns out it actually was in spite of him. And the way I found out about Python was another friend of mine who really thought I should take a look at this thing called Django, because at the time I was working as a consultant doing web applications and I had done web applications in Rails and PHP, and also some other languages and frameworks. But he really insisted that I try out Django, and then I had to learn Python to actually work with Django, and I have been pretty much stuck with it ever since.

03:13 That's excellent. When was that, what time, what year?

03:17 That was 3 or 4 years ago.

03:20 Yeah, Django is excellent, Python is just such a fun programming language to work with, it's such a fun ecosystem, it's hard to not love it.

03:28 Yeah it really is. The thing I like about it is that it is so explicit, there are no surprises.

03:35 Yeah, there are very few gotchas. You often hear about JavaScript gotchas and all of the things you've got to be careful of; you don't typically hear about Python gotchas. I love it.

03:45 JavaScript is interesting.

03:45 Yes, it is, I'll ask you more about that later actually. You like Python so much you started a company called Opbeat, right?

03:52 Yeah, exactly. So I started Opbeat with a friend; in the beginning it was just him and me, his name is Rasmus. We had been working as consultants building web applications for a while together, and every time we built something, we found that we were also the people who ended up running it, and maintaining it to make sure that it actually worked. But what we found was that all the tools were sort of targeting technical ops people, so ops people basically. Developers just really think in a different way than ops people do, and they also need to know about different data. So we thought we should build a product that helps developers operate the applications they built, one that was targeted at developers directly. And that's how we got started with Opbeat. At the time I was really into Python, so we started building it in Django and Python, and today it's pretty much all Python.

04:52 That's awesome. So, the whole backend of what you guys have going on there, not just what plugs into the app, it's all Python, mostly Python let's say?

05:01 Yes. We started working on support for Node, Node.js, because that's also a big opportunity for us, and it's really interesting to see how that goes.

05:14 Yeah, I just saw today that you guys have a beta program that people can go sign up for, so that's cool. So, let's talk about monitoring apps in general. You said there is sort of the infrastructure side of folks and they have one side of things they want to know about. We've got the developers, and then also we've got this growing sort of devops continuous delivery set of people that maybe have kind of a mix of those things. But what do people want to know about their apps, and what kind of things can be tracked in applications like Opbeat, things like that? How does that all work?

05:47 Right. So you are absolutely right, there are different disciplines within the devops ideology or terminology or whatever you want to call it. Let's start with Opbeat. Opbeat is really focused on delivering actionable metrics for developers. And for us, that means that all the data that we give you has to relate to your application. It has to be something that you can use to improve the performance of your application. That means we don't deal with the machines that the application runs on very much, more with the code that you have written, and that's something that we hear is really attractive to developers, because that's also their angle of attack, if you will; basically it all depends on the code and deals with the code. And things like response times for an application, that's of course really interesting, and then also delivering it to developers in a way that they are used to thinking about stuff.

06:56 So some tools deal with something called Apdex, which isn't really a metric that people are very used to working with; it's probably something that ops people are more used to, or even just business people who need to deliver some kind of SLA. Developers are more familiar with averages and percentiles, so that's, for example, what we have picked instead of Apdex. Then there is error logging. There are a lot of ways that you can be logging; the most basic thing is to write into a file, a lot of people do that, and then they ship those lines in the file off to some service and then they search through the lines of the files to see what kind of errors they had. And that's also, I feel, something that caters to ops people more than developers. Developers are used to dealing with code, so when an error happens they would like to get an email, or get notified on their mobile phone that something is broken.
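
For a concrete sense of the metrics Ron mentions, here is a minimal illustration of computing an average and a 95th percentile from raw response times. The numbers are made up and this is purely illustrative, not Opbeat's pipeline:

from statistics import mean, quantiles

# Hypothetical response times (in milliseconds) collected for one endpoint.
response_times_ms = [120, 95, 480, 102, 88, 131, 240, 97, 1100, 115]

avg = mean(response_times_ms)
# quantiles(n=100) returns the 1st..99th percentile cut points; index 94 is the 95th.
p95 = quantiles(response_times_ms, n=100)[94]

print(f"average: {avg:.0f} ms, 95th percentile: {p95:.0f} ms")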

08:00 Right, they don't want to just tail some log and sit there and watch it go by; they want detailed stuff they can go back and actually track down the bug with. Not just knowing there is a problem, but all the details, right, which maybe don't necessarily fit on a line in a log file.

08:15 Exactly, and when you have the details in a structured way, there is a lot of stuff you can do to make it more useful, more actionable.

08:26 Yeah, I guess you could say like what are the most common exception types, how many exceptions do we have per hour, right, whose check-ins cause the most exceptions.

08:35 Yeah, and we actually do that, or we do something called automatic assignment. So based on who checked in the code, we will automatically assign that error to the person who checked in that code-

08:46 That's really fantastic.

08:47 Thanks. It's something you can only do if you have the data in a structured way. It's just one example.
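
As an illustration of the idea (not Opbeat's actual implementation), a structured error report that includes a file and line number can be mapped to the last person who touched that line using git blame. The repository path, file, and line below are hypothetical:

import subprocess

def last_author(repo_path: str, file_path: str, line: int) -> str:
    """Return the email of the last committer for a given line, via git blame."""
    out = subprocess.check_output(
        ["git", "-C", repo_path, "blame", "--porcelain",
         "-L", f"{line},{line}", file_path],
        text=True,
    )
    for porcelain_line in out.splitlines():
        if porcelain_line.startswith("author-mail "):
            return porcelain_line.split(" ", 1)[1].strip("<>")
    return "unknown"

# Hypothetical frame taken from a structured error report.
print(last_author(".", "app/views.py", 42))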

08:56 Yeah, I love the blame feature of source control, "All right, this looks ridiculous, who wrote this?" And that's kind of the error equivalent of blame, right? [laugh]

09:03 Yeah and it's- you know, it's not a good way to think about it, blame, but it's also not very effective for you to sit and worry about something that someone else on your team knows much more about.

09:18 Yeah, well there is the negative way of looking at blame, like whose fault is this, but then there is also the positive perspective of, whoever wrote that code and probably just checked it in, they are more likely to be able to quickly fix it and quickly go, "Oh yes, I understand, let me do this." If you just give it to somebody out of the blue and go, "Here is a random problem with some app, you probably didn't write it but fix it," it's much harder to get it fixed quickly.

09:43 Yeah, absolutely. It's also about accountability in some sense, and we can talk about that a little bit later, but in my experience good developers like to be held accountable, so they like to know where they actually made a mistake, and this is sort of the way to complete the circle, if you will: after you write something, you will also be assigned to the errors that your code caused.

10:07 Yeah. I think especially when you are learning to program, when you are a fairly junior developer, the whole error handling, dealing with malformed data and unexpected things and so on, that's much harder to get your head around. It's much easier, I think, to write the code so it's supposed to take this, and it's supposed to do these things, and it's done, right, just the happy path, if you will. But knowing to be aware of all these other errors, that's something that takes more experience I think, and so if you can help catch those sooner, you maybe learn those lessons sooner, and that's also good.

10:45 Yeah, absolutely.

10:45 Nice, so you talked a little bit about performance and you talked about errors. You guys also talk about like deployment workflow, what's the story with that?

10:52 Right. Again, coming back to the devops paradigm, if you will. What we found is that developers are now more and more empowered to deploy their own code whenever they feel like it's finished. And whenever the CI tests pass, it's usually the case nowadays that developers actually have the power to deploy that code. Now, that's really cool, but it also means that it can get pretty difficult to figure out what was actually deployed and at what time, because you probably used to have an ops department that would make a little note in a changelog somewhere to keep track of what was deployed, but developers don't really do that. So, we do that with what we call release tracking, which is basically a list of releases; each item on the list will contain the commits of the specific release, and it makes it really easy to go back and see exactly what you deployed and at what time.

11:50 Yeah, so maybe you can link those back to a series of GitHub issues that have been closed or something like that, right?

11:57 Yeah, exactly. Or errors that started happening after a specific release, etc. Yeah, it's another case where the tools and the workflows that you used to have don't really fit anymore, and that's why we built release tracking.
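
The gist of release tracking is recording, at deploy time, which revision went out and when. Below is a minimal sketch of what a deploy step might send; the endpoint URL and field names are hypothetical placeholders, not Opbeat's actual API:

import subprocess
from datetime import datetime, timezone

import requests

# The revision that is about to go live, read from the local git checkout.
rev = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()

# Hypothetical release-tracking endpoint and payload; a real service defines its own.
requests.post(
    "https://example.com/api/releases",
    json={
        "rev": rev,
        "deployed_at": datetime.now(timezone.utc).isoformat(),
        "branch": "main",
    },
    timeout=5,
)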

12:15 Yeah, that's really cool. I mean, it definitely ties together with continuous integration and continuous delivery services like Snap CI; those guys do the checking before it goes out, but you guys are kind of on the other end, right? Once it hits production, if something happens you can say after this release these errors started happening, is that right?

12:38 Yeah, yeah, exactly, and CI is obviously still a really important part of the workflow; this is definitely not a replacement.

12:49 Yeah, absolutely, but you know, the thing is, there are the unit tests you write and the scenarios you test for and look for, and then there is the real world. Right? And no matter how good your CI system is or your tests are, chances are on some major application there is something that's going to happen that you just didn't account for. Like, why are there browsers on my page that have no user agent? I didn't plan for this, right?

12:49 [music]

12:49 This episode is brought to you by Hired. Hired is a two-sided, curated marketplace that connects the world's knowledge workers to the best opportunities.

12:49 Each offer you receive has salary and equity presented right up front and you can view the offers to accept or reject them before you even talk to the company.

12:49 Typical candidates receive 5 or more offers in just the first week and there are no obligations ever.

12:49 Sounds awesome, doesn't it? Well did I mention the signing bonus? Everyone who accepts a job from Hired gets a $2,000 signing bonus. And, as Talk Python listeners, it gets way sweeter! Use the link hired.com/talkpythontome and Hired will double the signing bonus to $4,000!

12:49 Opportunity is knocking, visit hired.com/talkpythontome and answer the call.

12:49 [music]

14:19 Yeah absolutely, and it turns out that users are really creative in what they enter into a form, and you basically have no chance of guessing what all the different scenarios are going to be.

14:29 Yeah, absolutely. So, right now you guys support Django, and that's where you started. And recently you added Flask support, and you also are about to add Node support, or you are beta testing it. What about other frameworks? A lot of the apps I work on are Pyramid apps. Is there a way to add tracking to apps that are not one of those three?

14:51 Yes, there is actually, and we have not been very good at documenting this, but the Opbeat module has a very simple API. It comes down to calling begin transaction whenever you start a new request or a background task starts, and then you call end transaction whenever you send back the response or your background task has finished, and the Opbeat module will automatically pick up all the information it needs in between those; that's the performance metrics, the performance monitoring. And usually there is a way to hook into the framework's unhandled exception signal or something like that, so it should definitely be doable, we just haven't really had the time to look at it yet-

15:39 Ok, that's interesting, so if I was able to trigger that, like do a begin transaction and end transaction, then all the calls I am making, say to SQLAlchemy or to other web services, those would get tracked?

15:51 Actually, what you need to do is just call begin transaction whenever your request starts, and we already instrument most of the modules you use, I hope, so they should automatically show up. What you need to do is call begin transaction when the request starts, and then end transaction when the request ends, the web request let's say.
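
Here is a rough sketch of what that might look like for a framework without built-in support, based on the begin transaction / end transaction calls Ron describes. The client arguments, the status codes, and the capture_exception call are assumptions, so check the opbeat module's documentation before relying on them:

from opbeat import Client

# Credentials elided; the keyword names are assumptions based on Opbeat's setup docs.
client = Client(organization_id="...", app_id="...", secret_token="...")

def track(transaction_name):
    """Wrap a request handler or background task with transaction tracking."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            client.begin_transaction("web")            # request or background task starts
            try:
                result = func(*args, **kwargs)         # instrumented calls (DB, HTTP) happen here
                client.end_transaction(transaction_name, 200)
                return result
            except Exception:
                client.capture_exception()             # assumed error-capture call
                client.end_transaction(transaction_name, 500)
                raise
        return wrapper
    return decorator

@track("GET /hello")
def hello_view():
    return "Hello, world"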

16:12 I may have to go play with this, after the interview.

16:16 Cool.

16:17 Yeah, very cool. So, what's the craziest sort of monitoring example you have seen? Like, there has got to be some company or some piece of software that's just done something way crazier than you expected?

16:32 That's a really good question. I can tell you, the first time our main database server just went down, it was a very unpleasant experience. That was a bit of a rough night; we had to replicate the database and the site was down while we did it. We did it very quickly, I would say it was like 15 minutes, but it was still not a very nice experience.

16:59 Yeah, I guess so, because you guys are running real time data collection from many potentially popular apps, and so you guys must have a lot of load, a lot of requests?

17:12 Yeah, we have quite a lot of requests. If we are down, then people will get a little notification in their log whenever they try to send something to us, and that's really not cool.

17:12 You want your monitoring service to be up all the time, otherwise it's sort of useless. So we spend a lot of time trying to make sure we can't go down and so what we recently did was we changed it so that we should still be able to receive data even if the main database server is down, so the data is just going to keep being accepted by us, and whenever the server is back up, it will start processing the data.

17:50 What infrastructure are you guys running on, is it like Amazon Web Services or something else?

17:54 Yeah. So it's all on Amazon Web Services, we've set it up ourselves, it runs on EC2. We don't use too many of the Amazon services on top of it; we use a bit of S3, but mostly EC2.

18:08 Yeah, EC2 and S3, those are the main ones, right?

18:11 Yeah, exactly.

18:14 You talked about performance and collecting data even if it can't necessarily be processed. And one of the really nice ways to do that is to add some sort of queue, asynchronous queuing mechanism to the whole process, right?

18:27 Yeah, exactly. And we do that a lot, so whenever some data comes in it gets put into a queue. The queueing software we use is very powerful, and that gives us a lot of freedom in scaling out, etc.

18:44 Right, because it's much easier to keep a queue alive than maybe a complex database with a schema that could change so that you can no longer insert into it, or something like this, right?

18:52 Yeah, exactly.

18:55 Maybe talk me through what pieces are involved and what it looks like from some web app external to you guys, sending some piece of data until it actually gets totally stored in like some database.

19:08 Right, so we have a separate service called the intake, which is responsible for basically accepting data and putting it into a queue. It's also what does authentication and authorization of the data, so whenever data comes in we need to make sure that it has the right tokens, etc.; if you send us a lot of data we need to validate that the structure of the data is actually correct, so we also do that, and then we put it into a queue. So that service is very simple in the sense that it just needs to accept the data and put it into a queue. And that makes it easier for us to scale up when we have a lot of data coming in.

19:57 Yeah, I bet it's almost stateless, right. Other than knowing the authorization part it's like entirely stateless, right?

20:03 Yeah, exactly, so we can cache the authentication stuff really heavily, and it's very resilient in the face of failures because it's read only; beyond the authentication lookups it just needs to be able to put data into a queue.

20:22 Yeah, excellent. So then there is something else that gets the data back out and really does the processing right?

20:27 Right, so we have a separate service that pulls the data out and then processes it, and that's also very convenient for us because it means that we can scale that out very easily. We have more freedom; let's say we need to do some maintenance, we can stop that for a short period of time and then keep going, and things will just be in the queue waiting for us to process them.
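
A toy sketch of that intake/processor split, using an in-process queue purely to show the shape. Opbeat's real setup uses a dedicated message broker and separate services, and the token and event structure here are made up:

import json
import queue
import threading

events = queue.Queue()
VALID_TOKENS = {"secret-token-123"}   # hypothetical auth data, cached and read-only

def intake(raw_body: str, token: str) -> bool:
    """Authenticate and validate an incoming event, then enqueue it; otherwise stateless."""
    if token not in VALID_TOKENS:
        return False
    try:
        event = json.loads(raw_body)   # validate the structure before accepting
    except ValueError:
        return False
    events.put(event)                  # hand off; processing happens in a separate service
    return True

def processor():
    """Pull events off the queue and process them; can be paused and resumed freely."""
    while True:
        event = events.get()
        print("processing", event.get("type"))   # stand-in for the real aggregation work
        events.task_done()

threading.Thread(target=processor, daemon=True).start()
intake('{"type": "error", "message": "boom"}', "secret-token-123")
events.join()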

20:48 Yeah, I think queues are somewhat underused; they are so easy to use and yet they provide so much architectural flexibility, response time flexibility and so on, and even, as you say, from a deployment and infrastructure management perspective: as long as you don't leave things offline until the queues can't take any more, then you are kind of golden, right?

21:07 Yeah, we do have some requirements for processing time, so you can't leave stuff around forever, but it does give us a lot of flexibility in switching out things while things are running, so that's great. And I agree with the point that queues are undervalued and also probably not that well understood by the vast majority of developers, and there are also, as you mentioned, architectural benefits.

21:37 Yeah. There is a lot of talk about microservices and building more, smaller pieces of software, and putting a queue between those pieces makes it really easy, rather than having a monolithic thing that does all the intake, all the processing, all the reporting, and on and on and on, right?

21:52 Yeah, absolutely. And also people use the queue to talk between the services. So it basically becomes a sort of communication medium: instead of, for example, using HTTP, people will put a request in the queue and then expect a response on some other queue. And that's also quite useful because it comes with some additional architectural advantages when it comes to timeouts and things like these.

22:26 Yeah that's really cool. I have a big question for you-

22:28 Hit me.

22:28 Python 2 or Python 3?

22:31 Python 3 for the sake of progress.

22:35 Beautiful. I know a lot of people are on Python 2, but any chance we get to kind of move forward, we should take that chance, right?

22:42 Yeah, I agree. It's a cost that we have to pay now, sort of upfront, but it's the right thing to do in my opinion.

22:50 Yeah, excellent. I agree. A lot of the systems that people are writing that are kind of in the realm of what you guys are doing, they are maybe choosing languages like Go and Rust. Do you know what the advantages and disadvantages of those are, have you guys considered them? Not that I am necessarily encouraging you to do so, but I know a lot of people are thinking about that.

23:15 Yep. So, I have written some Go, and a little bit of Rust, and I think it's really interesting. What is still very clear to me is that Python gets things done very quickly and with very little code in a very robust way. If we start with Go, for example, I think Go is most interesting in the way that concurrency works in Go. I think the main reason why you would write something in Go instead of Python is the concurrency that exists in Go. Python has a really sad concurrency story in my opinion, and usually, like if you do Django, it's not a big problem, but as soon as you have to write a service that talks to the outside world and you want to talk to many different web services at the same time, or something like that, then that becomes kind of difficult in Python. And it basically comes down to the event loop, in my opinion. Go builds on top of an event loop that is seamless to you and your program, while getting an event loop into Python usually involves some kind of monkey patching, for example with gevent, or some really strange, at least if you come from the Python world, modifications that you must make to your application to get this kind of event loop concurrency. And then there is Rust, which I think is also super interesting and has a much more interesting type system than Go, but at the same time Rust feels more like a replacement for C++. So with Go, some of the things that you would typically use Python for, it also makes sense to use Go for, but I would say Rust is in a different category, and I feel like a lot of people are comparing those; recently there are a lot of people talking about whether they should use Go or Rust, but in my opinion they fit two different use cases. Rust is more low level, you have to deal with memory management yourself, and that's important, but it's other sorts of applications that you would write in Rust than in Go or Python.
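
For reference, here is a small example of the gevent-style monkey patching Ron alludes to, fetching several URLs concurrently on an event loop. It assumes gevent and requests are installed, and the URLs are placeholders:

from gevent import monkey
monkey.patch_all()          # must run before other modules that do I/O are imported

import gevent
import requests

urls = ["https://example.com", "https://example.org", "https://example.net"]

# Each request yields to the event loop while waiting on the network,
# so the three fetches overlap instead of running one after another.
jobs = [gevent.spawn(requests.get, url) for url in urls]
gevent.joinall(jobs, timeout=10)

for job in jobs:
    if job.value is not None:
        print(job.value.url, job.value.status_code)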

25:15 I see it almost more as a replacement for things like, I would have done this in C, so now I'll do it in one of these languages, but that could just be my lack of experience with them, right?

25:24 I think that's right when it comes to Rust. I think Go is somewhere in between; it's statically typed, but it feels a lot more high level, I would say, than C or Rust.

25:24 [music]

25:24 Gone are the days of tweaking your server, merging your code and just hoping it works in your production environment. With Snap CI's cloud-based hosted continuous delivery tool, you simply do a git push and they auto-detect and run all the necessary tests through their multi-stage pipeline. If something fails, you can even debug it directly in the browser. With one-click deployment that you can do from your desk or from 30,000 feet in the air, Snap offers flexibility and peace of mind. Imagine all the time you'll save. Thank Snap CI for sponsoring this episode by trying them for free at snap.ci/talkpython.

25:24 [music]

26:35 So, let's talk about shipping software a little bit more.

26:40 Yes!

26:40 You said you had some recommendations for sort of how to make your team a high performance shipping machine, what's the story there?

26:46 My role here at Opbeat has transitioned from coding every day to more and more trying to get my team to be efficient. And along that path, different things became clear. There are just some things that you should be aware of, I think, when you are managing a team of developers, or even if you are just a single developer building an application, especially for the web, since that's what we have been focusing on, building web applications.

27:19 That's really interesting. I think a lot of people who are working by themselves don't necessarily adopt what you would think are best practices and tooling that teams would automatically adopt, things like continuous integration, sometimes even source control, but also things like application monitoring and so on. So do you think even if there is one person working on a project, maybe they should put this stuff in place?

27:48 Yeah, absolutely. Especially things like CI, I think you should definitely have CI even if you are just a one person team.

27:56 So if I have like my files on the hard drive and I just zip them up periodically and put a date on it, that's probably not enough? [laugh] I've seen that before, no really, that's not... ok, let's talk first about source control.

28:14 Yeah.

28:16 It's been a few years but still... So maybe you could make it concrete, like what do you guys do to ship software at Opbeat, to push out new versions and so on?

28:26 Good question. So one of the things we really focus on is getting things shipped early, in the sense that whenever we have something that is an improvement over what we have today, we'll generally try to ship it. And what typically happens when you are sitting and programming and working on some feature is that you've gotten started on this feature and it's going well, but your feature relies on something else, it depends on some other code, and you take a peek into that code and it has some of those bad code smells that we as developers are familiar with and are trained to recognize. So you consider whether you should just quickly refactor that other thing that your feature is going to depend on, and what usually ends up happening is that you end up actually spending time both working on your feature and refactoring that other thing that you found. And then that thing relies on some third thing and you think, "Oh, well I might just refactor that thing too," and then it ends up being a huge release when you finally get it shipped, and that's a big red flag for us. Big releases are a problem for multiple reasons. First of all, they are much more cumbersome to review; everything we do gets reviewed, and if you have a big release that just takes much more time, and it's much harder to get an overview of the impact of a particular release when it's huge. It also takes much more time before your stuff actually comes out to the users. The sad thing about that is that we know as developers that you can think of a lot of scenarios: you have some idea about how your users are going to use a specific feature and you obviously think the feature is really valuable to them, but it turns out that our assumptions are often wrong about this kind of stuff. So getting new features into the hands of the users is really important, because then you will learn how they are actually going to use them.

30:26 It's really hard to predict what people will find valuable, what they want, like you said, how they are going to use it. Speed is an advantage in the software business, right?

30:37 Yeah, small releases actually give you a lot of speed in my opinion. There are also things like when something breaks: if you have just released a huge change, it's really difficult to figure out what part of that change actually made things break, but if you are releasing smaller releases all the time, it's much easier to go back and see exactly what caused a specific problem. So we are proponents of smaller, incremental releases at Opbeat, and it's something we iterate on and talk about often.

31:08 The other side of that story is if everything you are releasing is a small little feature or a small piece, and something goes wrong, it's pretty painless to just say, "Oops, we are just going to roll it back to the way it was before."

31:22 Yeah, exactly.

31:23 But if you've had to do some massive database migration to roll out a huge new thing, then you are kind of stuck, right?

31:38 Another thing that is really useful when you are building applications is to make sure you try to break down silos. For us, that means that getting something shipped is a cross-discipline process: we have a product person who works with a visual designer, who works with a developer, and finally with my team. That means everybody is aligned on shipping this feature. Of course there is a lot of talk about devops, and one of the points there is that your operations people should be aligned, or should have the same goals, as developers, and that makes for a much better process and a much nicer product in the end.

32:24 That's really interesting. So I feel like teams used to be structured like, here are the data guys, here are the middle-tier server guys, here are the front end people, right?

32:35 Yeah.

32:37 That doesn't seem like a great workflow. So you are saying a vertical slice through that is a much better way to group people and features and work, right?

32:44 Yeah, exactly. And even including the marketing people and designers in that slice, so that you have a wide range of capabilities on a specific team who are aligned together to ship a feature. My point was that people talk about devops, but really we should be talking about collaboration between all of the different roles or disciplines that are involved in shipping a feature. So devops, the collaboration between ops and developers, is just the very start; you should have marketing and product design, etc. in there as well.

33:26 Yeah, so you are proposing like a devops-marketing-product team. But I think you are right, that makes a lot of sense, and it's really easy as software people to forget that once you write the software, unless you have a very established business or you are writing internal software, that's only part of it, right? You've got to have a marketing effort and product development; like you said, it's a whole team effort and the software is only a part of it.

33:54 Yeah, exactly. Another thing that's important for us when we ship code is that the person who wrote the code is also the one who actually presses the button to get that code into production. That's something we have insisted on from the very beginning, because it means that if something breaks, the developer who wrote the code will be around to fix it, so he didn't go home, and the developer who actually presses the button knows exactly what code is going to go out. But it's basically about accountability, so we want to make sure that the developers you are working with, and you yourself, understand that if you've built something, if you've written some code, you are the person who ships it, and then if it breaks, you are the person who fixes it. That's something we have really been adamant about from the very beginning, and it seems to work very well.

34:37 Yeah, I think it's great advice. There are always people on teams who embrace things like continuous integration and unit testing more, and there are people who embrace it, let's say, less. But making everyone be actively part of the shipping puts the accountability on them, which means maybe they will rely on the process a little more, maybe they will run those tests before they send it out, right, something like this.

35:00 Exactly.

35:03 Let me ask you a few more questions, we are kind of getting to the end of the show here. What's your favorite editor, if you are going to write some Python code what do you open up?

35:12 These days it's actually mostly PyCharm. I find it to be really useful and it has a lot of interesting features; it helps me find the stuff that I need very easily. It's a bit heavy, but it feels like the JetBrains people have spent a lot of time making it fast. I'm pretty happy with PyCharm.

35:32 Yeah, that's cool, like I've said a bunch of times on the show, that's the one I use as well, and like you said, it's a bit heavy, but if you are willing to wait 5 seconds, the whole next few hours are a lot nicer. And there are thousands and thousands of packages out there that you can use in Python. What are some of the ones that maybe not everyone knows about, where you are like, "Oh, this thing is awesome, you should know about X"?

35:58 Oh, of course I'm going to have to say the Opbeat module, but apart from that, for example there is something called docopt, which is basically a way to write command line applications; it helps you parse command line arguments, and the way it works is the opposite of how all the other ones work. Here you actually write your usage document, the stuff that comes out when you do --help, and then it will parse that, and from that it will know how to parse the arguments to the application. So that's really useful in my opinion; the modules that come with Python, optparse and argparse, are really not very good in my opinion.

36:46 Yeah, that's a really cool package, I've heard of that one before, and what surprised me was that there is actually a specification for the way that you write that help documentation, so it's well structured, and this thing just looks at that and builds the actual command line parser for you, right?

37:01 Oh yeah. Exactly.

37:03 That's really awesome.
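
A minimal docopt sketch, in case it is new to you: you write the usage text (what --help would print) and docopt derives the argument parser from it. It requires pip install docopt, and the program name and options below are made up:

"""Ship mover.

Usage:
  ship.py move <x> <y> [--speed=<kn>]
  ship.py (-h | --help)

Options:
  -h --help       Show this screen.
  --speed=<kn>    Speed in knots [default: 10].
"""
from docopt import docopt

if __name__ == "__main__":
    arguments = docopt(__doc__)
    print(arguments)   # e.g. {'move': True, '<x>': '10', '<y>': '20', '--speed': '10', ...}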

37:05 I think some other things that we are using... Requests is always a good module, I'm sure most people know that-

37:14 Requests is amazing, you can't get away without Requests, right?

37:16 Right. Exactly.

37:18 All right Ron, so how about final call to action- how do people get started with Opbeat, what should they do?

37:24 To get started with Opbeat you go to opbeat.com and you sign up, create your first Django application or Flask application, and the instructions are right there. It takes like 5 minutes to set up, probably, and it's free to get started, so everyone can go and check it out. Please let us know if you run into any issues, or if you have any feedback; we are always trying to improve it, so yeah, looking forward to hearing the feedback.

37:47 All right, very cool. Thanks for the look inside what you guys are doing in there, there is a lot of cool stuff happening at Opbeat.

37:53 Thanks for having me Michael.

37:55 Yeah, thanks for advice on shipping software, it's great.

37:57 Talk to you later, bye.

37:57 This has been another episode of Talk Python To Me.

37:57 Today's guest was Ron Cohen and this episode has been sponsored by Hired and SnapCI. Thank you guys for supporting the show!

37:57 Hired wants to help you find your next big thing. Visit hired.com/talkpythontome to get 5 or more offers with salary and equity right up front and a special listener signing bonus of $2,000 USD.

37:57 Snap CI is modern continuous integration and delivery. Build, test, and deploy your code directly from github, all in your browser with debugging, docker, and parallelism included. Try them for free at snap.ci/talkpython

37:57 You can find the links from the show at talkpython.fm/episodes/show/43

37:57 Be sure to subscribe to the show. Open your favorite podcatcher and search for Python. We should be right at the top. You can also find the iTunes and direct RSS feeds in the footer on the website.

37:57 Don't forget to check out the podcast T-shirt at talkpython.fm/shirt

37:57 Our theme music is Developers Developers Developers by Cory Smith, who goes by Smixx. You can hear the entire song on our website.

37:57 This is your host, Michael Kennedy. Thanks for listening!

37:57 Smixx, take us out of here.
