#267: 15 amazing pytest plugins Transcript
00:00 Do you write tests for your code? You probably should.
00:02 And most of the time, pytest is the industry standard these days.
00:06 But pytest can be much more than what you just get from installing it as a tool.
00:11 There are many amazing plugins that improve pytest in many aspects.
00:15 That's why I've invited Brian Okken to the show to tell us about his favorites.
00:19 Listen in and your Python testing will be faster, stronger, and more beautiful.
00:23 This is Talk Python to Me, episode 267, recorded June 4th, 2020.
00:29 Welcome to Talk Python to Me, a weekly podcast on Python, the language, the libraries, the ecosystem, and the personalities.
00:48 This is your host, Michael Kennedy. Follow me on Twitter where I'm @mkennedy.
00:52 Keep up with the show and listen to past episodes at talkpython.fm.
00:56 And follow the show on Twitter via @talkpython.
00:59 This episode is brought to you by Linode and Sentry.
01:01 Please check out what they're offering during their segments.
01:04 It really helps support the show.
01:05 Brian, welcome back to Talk Python to Me.
01:08 Thanks. It's fun to be back.
01:09 Yeah, it's good to have you back.
01:10 You've been on a lot of episodes.
01:12 You've been on five.
01:14 2017 Python Year in Review, 2018 Python Year in Review, the Python Testing Column is a Thing,
01:20 30 Amazing Python Projects, which is one of the most popular episodes, actually.
01:25 And the book author panel discussion.
01:29 So happy to have one more on the list to have you here.
01:32 Yeah, it's nice.
01:33 I love your show.
01:33 Listen to it all the time.
01:34 Thanks.
01:34 Yeah, and it's good to catch up with you.
01:36 It's been a while.
01:36 Yeah.
01:38 Well, we, I mean, we talk to each other weekly.
01:41 That's right.
01:42 Of course.
01:43 Of course.
01:44 Yeah.
01:44 So those of you listening who don't know about Python Bytes, that's our other podcast that
01:50 Brian and I co-host.
01:51 We each have separate podcasts, right?
01:53 Talk Python I do alone.
01:54 You do Test and Code, which is really good.
01:57 Yes.
01:57 Where do people find Test and Code?
01:58 At testandcode.com.
02:01 Perfect.
02:01 But we also do Python Bytes, which is like a newsletter for Python with analysis, in audio
02:06 form.
02:07 It's a lot of fun.
02:08 Yeah, it is.
02:09 People should check that out as well.
02:10 So I want to have you tell your story because you probably told it four times out of five there
02:15 anyway.
02:15 So let's just sort of check in and see what you've been up to in the last year or so.
02:22 It's been actually pretty exciting.
02:24 I think I've told the story before, but I work at Rohde & Schwarz.
02:29 So I work at a test and measurement company.
02:30 I'm an embedded software developer and a team lead and a lead engineer.
02:34 But I've been focusing, for the last 10 to 15 years,
02:39 about half of my career, on communication testing.
02:43 Most of it's been cellular stuff and we're switching over the last year or so.
02:47 I've been focusing mostly on Wi-Fi testing.
02:50 So that's been a lot of fun.
02:52 Okay, nice.
02:52 Like Wi-Fi 6, that kind of stuff?
02:54 Yes.
02:54 One of the fun things lately is some of the work on 802.11ax, which is just starting to roll
03:00 out around the world in different places.
03:02 Yeah, that's cool.
03:03 I'm actually very excited about that.
03:05 Yeah.
03:05 Yeah.
03:05 It's neat.
03:06 Also, one of the reasons why I kind of focus on testing a lot is I am a proponent of developing
03:12 software with testing hand in hand so that you develop tests and code at the same time.
03:17 Yeah.
03:17 It's always very error prone and unlikely to succeed to say, now that you've written the
03:24 code, let's go back and add all the tests to it.
03:26 Yeah, definitely.
03:27 But people don't learn how to write tests when they're in school.
03:30 I mean, even college classes hardly ever talk about how to write good tests.
03:36 Yeah, there's definitely that sort of splintering there.
03:39 Like on one hand, the stuff you're building in college, it's probably not going to be long
03:44 lived, right?
03:45 It's an assignment or a short project.
03:48 You just want to get it done because you also got that English paper and you didn't really
03:52 even want to take English.
03:53 But now you're in there and you got to do it.
03:54 So let's just get it done.
03:55 And it's a very different thing to think a team of people have to live with this for a
04:00 period of time.
04:01 Yeah, definitely.
04:02 And with people more and more reliant on open source, it's a little bit easier for me to try
04:07 to tell people the importance, because they're used to working with it.
04:10 A lot of people are used to working with open source projects now and you would never pull
04:14 an open source project off the shelf and use it if it had zero tests around it.
04:18 Yeah.
04:18 And a lot of times you've got to have tests to accompany your contributions back if you're
04:23 doing a PR, right?
04:24 Yeah.
04:24 Yeah, definitely.
04:25 It's getting more and more attention.
04:27 And I think that's a good thing.
04:28 What's the work from home story for you these days?
04:30 Do you got to go to the office?
04:32 I mean, the world scrambled for everyone, but the only part of my world that hasn't changed
04:36 is my work because I've already been fully distributed and whatnot, I guess, other than
04:42 conferences.
04:42 But how about you?
04:43 Yeah.
04:43 So it was a hard shift.
04:45 I think it was early March, was it?
04:48 Really early March.
04:50 There was a day where we just got told, grab your stuff and go home.
04:54 And we're working from home from now on for a while, at least.
04:57 And we had just moved into a beautiful office in July.
05:01 So it's...
05:02 It is really nice.
05:03 Like, it's a place you would want to go to because it's got cool decor and whatnot.
05:07 Yeah.
05:07 And these giant training rooms and stuff.
05:09 It's a good building.
05:11 So working from home, the entire...
05:13 I had just hired a couple people for the team.
05:15 And so training people remotely was...
05:18 I thought it was going to be a trick, but it's actually been...
05:20 It's gone really smoothly.
05:21 Oh, that's good.
05:22 And yeah, so we're a remote team.
05:24 We are...
05:25 So in our county...
05:27 So I'm in Washington County, the county we're in around Portland.
05:30 We are supposedly entering phase one.
05:33 We're doing this three-stage phase going back to work thing.
05:36 We don't know.
05:37 We haven't allowed people to go back yet, but hopefully there will be...
05:41 There's some people that can more effectively work from the office.
05:45 So we're going to try to get those people back as soon as we can, but do it safely.
05:49 Do you think you'll have more work from home remote options?
05:53 Generally, at the company because of this?
05:55 Yeah, I think so.
05:56 I think we've...
05:56 One of the fears was...
05:57 Then I think a lot of companies have seen this.
06:00 The fear is that people working from home won't...
06:02 They'll be more distracted.
06:04 They won't be as productive.
06:06 And I don't even know if this is a fair test because everybody's also...
06:10 It's not just working from home, but their kids are home also.
06:13 Yeah, exactly.
06:15 There's also the stress of just what's going on in the world is a stressful situation.
06:21 So even though it's not a fair comparison, we're comparing anyway.
06:25 And things are going okay.
06:27 We're still being productive, just about as productive or more so than we were before in
06:32 a lot of cases.
06:33 The lack of a commute is really easy.
06:35 The ease of discussion.
06:37 I mean, you can...
06:38 I don't know.
06:39 It just is...
06:39 I get more time focused, but also I think I'm talking with my team more often now because
06:45 we make it a point to talk every day as a team.
06:48 Right, right.
06:48 Focus type stuff, not just, oh, I passed them in the hall sort of thing.
06:51 Yeah, that's cool.
06:52 Now, one more thing, I guess while we're catching up on this stuff is you've got these really
06:57 beautiful training rooms with nice giant screens.
07:00 And one of the things that we were doing together is the Python Portland West meetup, which kind
07:06 of got put on hold with all this as well, right?
07:08 Yeah, and that's a big bummer because I think it was really serving a community in the west
07:13 side of Portland that didn't have a meetup.
07:15 And I would really like to...
07:17 I have no idea when we're going to get back to being able to do that, but I'd like to be
07:21 able to do something.
07:22 Yeah, I don't even know when I can go to a restaurant.
07:24 So, you know, things are all up in the air.
07:26 The world is crazy.
07:27 Yeah, definitely.
07:29 Yeah.
07:30 All right, cool.
07:31 Well, good to catch up.
07:32 Now, let's talk some pytest.
07:35 I'd like to do just a little bit of high-level stuff to just like set the stage, and then
07:38 we're going to dive into the plugins.
07:40 So, pytest is an external, separate way to write and test Python code.
07:47 Just not, say, part of Python like unittest.
07:49 So, maybe let's just do like a quick survey of the modern testing tools that people might
07:55 potentially use, and then like how we'd write tests in pytest for those who maybe have only
08:00 done unittest-type testing, or they're like, testing, maybe I should do that.
08:03 Yeah, sure.
08:04 Sure.
08:04 Well, actually, you had a few on the list that I deleted, but...
08:07 Because they're no longer modern, right?
08:08 Well, I would say not.
08:11 There still is unittest, and unittest is being supported, and it's part of
08:16 the Python distribution.
08:18 That's the one winning point, I think, right?
08:21 It's like, I don't have to have a requirements.txt or pip install anything, potentially, and I
08:27 can still write tests.
08:28 Yeah.
08:28 One of the main reasons why it's there is to test Python and test the standard library.
08:34 And I think that that's a good reason for it.
08:36 There's a bunch of people that have built on tools around it, and I think that's fine, but
08:41 it's just not my thing.
08:43 One of the things, while we're talking about other testing app things, is it's a good thing
08:48 to note that Nose is still available, but I don't recommend anybody use Nose or Nose2 or
08:54 any of its variants, because, I don't know, Nose is not being supported at all anymore.
09:00 And Nose2 has very little traction, and I don't know if it's, I don't currently know
09:06 if it's active.
09:07 So, if you're currently working on Nose, sorry about the disparaging remark, but maybe correct
09:12 me if I'm wrong.
09:13 Yeah, yeah.
09:14 Well, I mean, the thing is, you've got all these choices, right?
09:16 Like, if it's pytest or some of these other things like Nose or something, it's like, well,
09:21 okay.
09:22 Which one are you going to pick, right?
09:24 When I first got into it, I was looking for the easiest.
09:27 And I thought that maybe the easiest and the most powerful would be two different things,
09:30 but it turns out that pytest is both the easiest and the most powerful, which is nice.
09:35 Nice.
09:35 There's some other ones, though, that you might consider.
09:38 Yeah.
09:38 Well, in conjunction with pytest and other testing things, there's a lot of other tools that people
09:43 use.
09:44 Tox is used a lot. It is not just a testing tool, but one of the neat things about it is you can
09:50 do a test matrix, running your tests across different Python environments.
09:53 It tests a lot of your application.
09:56 It's almost an episode in itself.
09:58 Right.
09:58 You can test on like various versions, like Python 3.5, 3.6, 3.7, and stuff with that, right?
10:04 Yeah.
10:04 And typically, if you're running pytest, pytest doesn't automatically build your
10:10 application for you, but with Tox, part of its process is creating the wheel. Like if you're
10:16 doing a package, it'll create the package for you and then install it.
10:21 And it does the whole round trip thing.
10:24 So that's cool.
10:25 Nice.
10:25 Okay, cool.
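To make the matrix idea concrete, a minimal tox.ini sketch might look something like this, assuming a packaged project with pytest as its only test dependency; the interpreter list is just an example:

```ini
[tox]
envlist = py36, py37, py38

[testenv]
deps = pytest
commands = pytest
```

Tox builds the package, installs it into each environment, and then runs the commands listed for that environment.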
10:26 A lot of people are moving to, so continuous integration is definitely a thing that's important
10:31 everywhere.
10:32 And a couple years ago, it was probably Jenkins or Travis that people were using, but now
10:39 I think more, a lot of people are moving to GitHub Actions because they're just right there
10:44 with your code.
10:44 So why not?
10:45 Yeah, that's super easy.
10:47 And it totally solves the problem that is, I think, one of the really big challenges of
10:51 testing in teams is you get different levels of buy-in from different participants in the
10:57 team.
10:57 And some people are just reluctant to run the tests before they'll do a check-in.
11:03 Yeah.
11:04 They'll break the test, they'll do a check-in, and they don't even know they broke it, but
11:07 they don't care that much.
11:08 I mean, people care to differ in degrees, but it's like, well, they're kind of a nuisance,
11:11 but I guess if you're going to make me write tests, I will.
11:14 Right.
11:14 And so the continuous integration is like, everyone gets checked before we accept your
11:19 stuff, right?
11:20 Well, right.
11:20 So hooking it up within a GitHub Action, you could do this before, but it's really easy with
11:26 these, to just say, for instance, a merge request
11:30 doesn't even go very far if the tests fail.
11:33 So yeah.
11:35 I'm listing coverage.py because, if you care about coverage, that's how you do that in Python.
11:40 I don't do a lot of web testing, but I wanted to point out that Selenium and a wrapper around
11:45 Selenium called Splinter are very commonly used.
11:49 And then these are all packaged very well within a plugin called pytest-splinter so that you
11:54 can easily control web applications.
11:56 Nice.
11:57 I hadn't heard of that one.
11:58 I've heard of Selenium, but not Splinter.
11:59 Yeah.
12:00 Property-based testing is gaining a lot more traction in conjunction with functionality testing.
12:06 And the solution within Python is Hypothesis for that.
12:10 Nice.
12:10 So everything you've listed here is pytest Plus, not instead of, right?
12:15 Well, sure.
12:15 It's pytest Plus these other things.
12:17 However, they, I mean, none of them are pytest only.
12:20 So you can use unit test with all of these things as well.
12:23 Yeah.
12:23 Or yeah, testing framework plus, right?
12:25 They're not necessarily alternatives.
12:27 Yeah.
12:27 Right.
12:28 All right.
12:28 Cool.
12:28 So those are definitely neat things to look into.
12:31 They're separate projects.
12:33 This portion of Talk Python to Me is brought to you by Linode.
12:37 Whether you're working on a personal project or managing your enterprise's infrastructure,
12:41 Linode has the pricing, support, and scale that you need to take your project to the next level.
12:46 With 11 data centers worldwide, including their newest data center in Sydney, Australia,
12:51 enterprise-grade hardware, S3-compatible storage, and the next-generation network,
12:57 Linode delivers the performance that you expect at a price that you don't.
13:01 Get started on Linode today with a $20 credit, and you get access to native SSD storage,
13:06 a 40-gigabit network, industry-leading processors, their revamped cloud manager,
13:11 cloud.linode.com, root access to your server, along with their newest API, and a Python CLI.
13:17 Just visit talkpython.fm/Linode when creating a new Linode account, and you'll automatically get $20 credit for your next project.
13:25 Oh, and one last thing.
13:26 They're hiring.
13:27 Go to linode.com/careers to find out more.
13:30 Let them know that we sent you.
13:34 Now, like I said, just for people who maybe haven't written a test in pytest,
13:39 like, what's the really quick flow?
13:41 Like, why is it so simple?
13:42 pytest has a really great discovery mechanism based on
13:47 some naming conventions.
13:52 So the easiest thing is to have a test file that has test_ in its name,
13:56 and then start your test functions with test_, and then it'll all just get run.
14:01 So pytest will find all of the files that start with test_ and run all of the functions
14:07 within them that start with test_, and then that's it, really.
14:11 And so how does a test pass or fail?
14:13 It fails if an exception is raised, and the easiest way to raise an exception is to,
14:18 if you're intentionally checking something, is to use assert.
14:21 So you, like, assert a == b or something, and if the assertion fails, your test fails.
14:27 It's that easy.
14:28 Yeah, so you don't have to have decorators or derive from certain classes or anything like that.
14:33 It's just the name involves test_, and you assert stuff.
14:36 That's pretty good.
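As a minimal sketch of that convention (the file and function names here are made up), the whole test file can be nothing but plain functions and asserts:

```python
# test_calc.py: discovered because the filename starts with test_
def add(a, b):
    return a + b

def test_add():
    # discovered because the function name starts with test_;
    # the test fails only if this assert raises
    assert add(2, 3) == 5
```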
14:37 Yeah, you don't even have to import pytest into your test files.
14:40 They can be just functions.
14:42 Yeah, that's right.
14:44 So unless you're using some of the unique features that, like, extend pytest,
14:47 like you talked about the way a test fails if there's an exception.
14:51 Well, if you're testing for error handling, sometimes you want to trigger an exception and verify it has happened.
14:56 So in that case, you'd have to include some pytest stuff to do, like, the with pytest.raises sort of stuff on it.
15:01 But in general, for, like, the straight-ahead test, you don't even need to import pytest.
15:07 Which is kind of weird.
15:08 That freaked me out a little bit at first.
15:09 I'm like, why am I not interacting with pytest?
15:11 This is so weird at a code level.
15:13 No, it's beautiful.
15:14 It's one of the things I like is that it's just easy to start.
15:17 Absolutely.
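And for the error-handling case Brian mentioned, the with pytest.raises block is about the only time a straightforward test needs the import; a small sketch:

```python
import pytest

def divide(a, b):
    return a / b

def test_divide_by_zero():
    # the test passes only if ZeroDivisionError is raised inside the block
    with pytest.raises(ZeroDivisionError):
        divide(1, 0)
```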
15:17 Now, one other thing I want to cover before we dig into the list of plugins
15:22 that we're going to cover is there's some non-obvious other ways to extend pytest, right?
15:29 Like, it has this concept of fixtures that allow you to create and initialize stuff,
15:35 and then automatically pytest has a way to, like, pass it along to your tests and whatnot.
15:39 You'll see some of these plugins actually use fixtures, like some stuff about modifying time and whatnot that we'll talk about.
15:46 What's the deal with those?
15:48 Well, so the unittest model, and unittest is similar to a lot of the xUnit-type frameworks, like JUnit and PHPUnit and stuff,
15:58 they have a similar model where things are just named a certain way and the tests get run.
16:03 But there's often times where you need stuff to happen before your test gets run,
16:07 like setting up a resource or generating data or something like that.
16:11 And so there are setup and teardown functions that are associated with these other test frameworks.
16:17 pytest actually also supports those sorts of setup and teardown functions.
16:21 If you want to use them, they are supported.
16:23 But it's really not recommended because fixtures are so awesome.
16:27 So a fixture is a way to write a function that is handling a resource or handling some data
16:35 and be able to do the setup and teardown within one function, if necessary.
16:42 And then also just be able to scope those.
16:45 And you can, you can have them have the resource or data shared between one test or many tests or even across the entire project.
16:53 And you can control, you know, like if you're initializing a database or connecting to a web resource,
16:59 you might not want to do that for every test.
17:01 You can do it once per session and then do things like rollbacks and stuff per test.
17:07 And that ability to shift the granularity of your setup and teardown is super powerful
17:14 and really helps make pytest shine.
17:16 Yeah.
17:17 And the way that you write it is super cool as well, right?
17:19 Just have a test function that just takes a parameter.
17:21 The parameter is named the same as the fixture and pytest just says, okay, I'll call this function and pass the result over.
17:27 It could do really cool things.
17:29 Like you could pass off your database data access layer and it could start a transaction before, hand it to you,
17:34 and then roll back the transaction after the test is over.
17:38 Right.
17:38 And you don't have to worry about it.
17:40 It could just take care of that for you.
17:41 Yeah.
17:41 Yep.
17:42 And the fixtures are really what brought me to pytest.
17:44 Some of the, there's some massively cool things you can do with that.
17:48 Like one of the things that people don't think about is if you've got a system under test that has error logs somewhere,
17:55 one of the things you can do within a fixture is check your error logs after the test runs to make sure there's no fatal errors that got saved somewhere.
18:02 Yeah.
18:03 Yeah.
18:03 Very cool.
18:03 Love it.
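A rough sketch of that fixture pattern, using an in-memory SQLite database as a stand-in resource: the session-scoped connection is set up once, and each test gets its work rolled back afterwards. Names and scopes here are illustrative, not anything specific from the show.

```python
import sqlite3
import pytest

@pytest.fixture(scope="session")
def db():
    # setup: runs once for the whole test session
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.commit()
    yield conn
    # teardown: runs after the last test that used it
    conn.close()

@pytest.fixture
def db_session(db):
    # per-test granularity: hand the connection over, roll back afterwards
    yield db
    db.rollback()

def test_insert_user(db_session):
    db_session.execute("INSERT INTO users VALUES ('brian')")
    count = db_session.execute("SELECT count(*) FROM users").fetchone()[0]
    assert count == 1
```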
18:04 Okay.
18:05 So that's one way.
18:05 The other one is this plugins in general, right?
18:08 Like there's ways to extend pytest with plugins as a concept.
18:13 The pytest has a whole set of hook functions as well.
18:16 There's, there's a lot of things you can do.
18:18 You can, you can write fixtures and package those fixtures for other people to use.
18:22 So some plugins are just packaged fixtures.
18:25 Other things you can do with hook functions.
18:27 I actually was intimidated when I first was reading about hook functions, but they're just a way for pytest to allow different parts of the way pytest runs to be hooked into.
18:40 So you can, you know, alter a few things if you need to, like, like some of the test ordering ones.
18:45 Once all the tests are collected, you can hook into that section and reorder them if you need to, or, or filter them.
18:52 Choose which ones execute, right?
18:54 Yeah.
18:54 There's a whole bunch of ways you can hook into it and it's fairly complex, but a lot of people have come up with some really clever things, and plugins allow you to do that.
19:05 And I mean, not just third party plugins that you can pull off of PyPI, but you can also just, if you want to share some code that you have that works with your tests amongst different projects, you can package your own plugins to do that.
19:19 Yeah.
19:19 Oh, that's cool.
19:20 You could even have them for like your company, right?
19:23 Like we always do this in our CI and whatnot.
19:26 You can just grab a plugin and do stuff.
19:28 Yeah.
19:28 Like I've got plugins that I've written that just are particular to one particular piece of test equipment.
19:34 So to control this piece of equipment, you can use this plugin and it helps you with some of the common things.
19:39 Oh, that's cool.
19:40 Makes it a little easier for new people.
19:41 Yeah.
19:42 Yeah.
19:42 Lots easier.
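A small illustration of the hook idea: a conftest.py can implement pytest_collection_modifyitems to reorder or filter the collected tests. The slow marker below is made up for the example.

```python
# conftest.py
def pytest_configure(config):
    # register the example marker so pytest doesn't warn about it
    config.addinivalue_line("markers", "slow: tests that take a long time")

def pytest_collection_modifyitems(config, items):
    # runs after collection; stable sort pushes tests marked "slow" to the end
    items.sort(key=lambda item: 1 if item.get_closest_marker("slow") else 0)
```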
19:43 Nice.
19:43 All right.
19:44 I think it's time we go down the list.
19:47 We've set the stage.
19:48 We know what fixtures are.
19:49 We know what plugins are, at least in general.
19:52 Let's put some sugar on pytest to make it sweet.
19:55 Yeah.
19:56 So we're going to run through a whole bunch of really cool pytest plugins, right?
20:00 Yeah, absolutely.
20:01 All right.
20:02 What's this first one here?
20:03 So sugar is a fun one, actually.
20:04 I ran into this right away with looking at pytest.
20:07 So the default output of pytest is, when you run it, a dot shows up for every test that gets run.
20:16 And it does list the files, but a little dot shows up.
20:19 And maybe that's not enough for you.
20:22 So pytest sugar shows you, like, green check marks instead.
20:26 So, and then, you know, instead of, like, Fs for failure, I think there's, like, red Xs or something that show up.
20:33 So it alters the output of pytest so that it looks a little nicer.
20:38 Yeah, it also looks like it might be suppressing the module name, the folder names, and only, like, highlighting the actual file names.
20:46 So it does, like, gray out a little part of the UI.
20:49 So, like, the actual test file being run stands out.
20:51 And there's a vertical progress bar going along that runs as well.
20:56 That's pretty sweet.
20:57 Yeah, that's pretty cool.
20:58 And that was especially really helpful before pytest added the percentage done.
21:02 So now, for each file, pytest lists the percent done now by default.
21:07 But it didn't always have that, and pytest sugar had that.
21:10 It's pretty cool.
21:11 Yeah, it's cool.
21:11 And there's a nice little animated GIF, as all things should have, on there that I'll link to in the show notes so people can check it out and see it going.
21:20 So, yeah, pytest sugar.
21:21 And this is cool because all you've got to do is install this, and it doesn't require effort, right?
21:26 And, like, oh, and now I can do this thing in code.
21:28 It's just, like, I install this, and it looks better.
21:31 Yeah.
21:31 How do you install plugins?
21:32 Just do you pip install them?
21:34 Like, do I need to register it with pytest, or is it sufficient to just pip install it?
21:37 Just pip install it.
21:38 It's about the design of the plugin as well.
21:41 So some plugins, like Sugar, you just install it, and it starts working.
21:45 You don't have to do anything.
21:46 All right.
21:47 But some of them intentionally don't start right away.
21:50 So some plugins, they are installed, but you have to turn on the functionality with a flag or something like that.
21:56 Got you.
21:56 Okay.
21:56 Cool.
21:57 All right.
21:58 What's the next one?
21:58 Number two.
21:59 So this one's definitely something, pytest-cov. It's a way to run coverage and pytest together.
22:08 So, I mean, there's really two ways to run coverage on pytest tests.
22:13 Coverage doesn't require pytest, right?
22:15 So you can use any test framework.
22:18 So you can run pytest from coverage, or you can run coverage from pytest.
22:22 It's kind of a choice.
22:24 But I usually use pytest cov because it's just really well supported.
22:29 It does some really cool things.
22:31 One of the things that it does really nicely is, we'll talk about xdist later, but xdist allows
22:36 parallel runs.
22:37 And pytest-cov cleanly collects all of the different parallel runs and pulls the coverage
22:44 report together for you.
22:45 Oh, that's cool.
22:46 Which is really nice.
22:50 Yeah, it has xdist support, which we'll talk about what that means.
22:50 But that's a pretty high-end piece of kit there to go grab a parallel executing stuff and bring
22:56 it all together.
22:57 Yeah, and there's also, it's really good about catching the coverage right from the beginning.
23:01 So you've got like some startup code that happens, like your fixtures and stuff that you've set
23:06 up that run before your tests, and it makes sure that those have coverage turned on as soon as
23:13 possible so that you don't have startup code that doesn't get counted in coverage.
23:17 That's one of the issues with hooking up coverage often.
23:20 But pytest cov handles that really cleanly.
23:23 Yeah, awesome.
23:24 It also has sub-process support, so you can fork things into a sub-process and it'll get
23:29 covered automatically, which you might think, okay, who's going to do that?
23:33 And like, do they really care about tracking the coverage of it?
23:37 But, you know, multiprocessing, right?
23:39 They're doing parallelism.
23:40 Yeah, definitely.
23:41 Yeah, that's good as well.
23:42 Very nice.
23:44 The other nice thing about it is just, so if you're, it makes it easy to have a text output.
23:49 So normally when you run coverage outside of, like if you run pytest from coverage, once
23:55 you're finished, you have to then tell coverage to give you the reports.
23:59 You have to run coverage report to get your coverage report.
24:02 But pytest cov just prints that at the end.
24:05 So you get that automatically.
24:06 Don't have to run it again.
24:08 Cool.
24:08 Very nice.
24:08 Very nice.
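As a sketch of how that usually gets wired up, the pytest-cov flags can live in the pytest config so every run collects coverage; mypackage is just a placeholder name:

```ini
# pytest.ini (a sketch): --cov names the package to measure,
# --cov-report=term-missing prints the report, with missing line numbers, at the end
[pytest]
addopts = --cov=mypackage --cov-report=term-missing
```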
24:09 Would you say it's low stress?
24:11 It is low stress, yes.
24:12 Unless the stress is trying to get 100% coverage.
24:16 Well, that's true.
24:17 Actually, what are your thoughts on, I'll share my thoughts with you as well, but, you know,
24:22 a lot of people feel like they're not testing well if they don't have 100% coverage or they
24:27 don't have any coverage.
24:27 So they don't even know like how much of their code is being run by tests.
24:31 Like what are your thoughts there?
24:32 I am a strong proponent for open source projects to have 100% coverage because it eases new
24:39 people, new developers into the project.
24:41 When they add code, it's easy for them to find the code that they added that isn't being covered
24:45 yet.
24:45 It doesn't necessarily mean that your tests are good.
24:48 It just means that their code's covered.
24:51 For open source stuff, it's good.
24:52 And then if you take that argument, then it's good for internal projects as well because it's
25:00 easier to run stuff.
25:01 However, I'm not a zealot about it.
25:03 I mean, if you're like jumping into a legacy project and adding software tests to a project
25:08 and you're testing the new stuff and the things that you change, you're never going to hit
25:13 100% coverage of like the rest of the legacy code.
25:16 And, you know.
25:17 Yeah, I agree with that.
25:19 I feel like a lot of people get hung up on like the last mile for applications, like for
25:26 open source projects where it's like a library or something.
25:28 Sure.
25:29 It should totally all be there.
25:30 But there's like in certain apps, there's just little edge cases.
25:34 Like if I'm running on Windows, I got to do this other thing, but I'm only testing this
25:38 on macOS or on Linux.
25:40 Right.
25:41 So like how much should I stress about trying to like get that little edge piece of code to
25:46 run, or, like, there's this
25:47 one utility that only I run in a cron job to do this thing.
25:51 I don't know.
25:52 There's just all these little edge cases that can be like a whole lot of work to get to.
25:56 But I feel like, you know, most of the time, you know, you should probably not stress too
26:01 much about it.
26:01 Yeah.
26:01 So all those edge cases are the reasons why I think you should test those.
26:04 Because.
26:06 Okay.
26:07 So we have ways to combine coverage reports.
26:09 So you can use something like GitHub Actions or Travis or something to run your
26:16 code on multiple platforms and combine the coverage from the test runs on multiple platforms.
26:21 So why not run them on multiple platforms?
26:24 If you have platform-specific code, I think
26:30 there's a reason to test it: because some of your customers use it.
26:33 And if it's different than what the developers use, I think you should test that code.
26:38 Yeah, that's probably true.
26:39 That's a good point.
26:39 That's the only place it gets run before it gets deployed.
26:43 All right.
26:44 Well, my transition to the stress one, we got kind of sidetracked.
26:48 But this next one is all about stress.
26:50 Yeah.
26:50 So I think we should actually just cover a couple of them.
26:53 So the next two.
26:54 Yeah.
26:54 They're kind of a good pair.
26:55 Yeah.
26:56 Yeah.
26:56 And pytest repeat.
26:57 So pytest stress is a nice thing.
27:00 So let's say you've got some code that you just want to run overnight.
27:03 I know that I'm going to leave it running.
27:06 I know everybody's going to be out of the building by midnight.
27:09 And then I want to just beat on the code and run it as many times as I can for eight hours.
27:16 So pytest-stress allows you to run for a set amount of time, to say, hey, run this for eight hours, or run it for half an hour, run it for a certain time period.
27:24 And that's really cool if that's where you're thinking.
27:27 And then repeat.
27:28 pytest repeat allows you to run your test suite or test functions or whatever a certain number of times.
27:34 So I want to set it up for a thousand times.
27:37 I bring these up together because I found pytest-repeat first when I needed it.
27:41 I had a test that was failing once in a while and I just wanted to run it all night long.
27:47 And so I just set it for a large number like 2000.
27:51 I don't know.
27:51 Like I just guessed.
27:53 But really what I wanted it to do was run all night.
27:56 And so pytest stress would have been better for that case.
27:59 Oh yeah, that's a cool one.
28:00 You know where I would see these as being useful?
28:03 Sometimes I'm trying to do profiling, like CPU profiling to figure out or maybe even memory profiling.
28:09 And the startup of getting the pytest or whatever it is up and running and all the stuff in place and then running a little bit of code and then tearing it all down.
28:19 It's like gets dwarfed.
28:20 You know, the actual bit you're trying to understand gets dwarfed by all that startup.
28:25 But here you could just let it run for 10 minutes and then it'll be really focused on what it was slow at.
28:30 Then that's where actually with profiling, I think that would be a good place to use the repeat.
28:34 So you could say do a nice round number like 100 times or a thousand times because it's easy to divide by that.
28:40 Yeah, yeah, that's a good point.
28:42 Exactly.
28:42 Nice.
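A sketch of how that might look for the profiling case: pytest-repeat takes a count on the command line, and, as I recall, also ships a marker to pin the repetition to one test.

```python
import pytest

def parse_reading(text):
    # stand-in for the code being profiled or shaken out
    return float(text)

# repeat the whole run with something like: pytest --count=100
# or repeat just this test with the marker (pytest-repeat's marker, as I recall)
@pytest.mark.repeat(100)
def test_parse_reading():
    assert parse_reading("3.14") == 3.14
```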
28:43 What's this next one?
28:44 InstaFail.
28:45 So if you had a whole bunch of like print statements and whatever in your code, it doesn't get shown normally with pytest.
28:51 So pytest will capture the output by default.
28:55 I mean, you can turn it off.
28:56 But by default, it captures the output and any errors and failures and stuff.
29:00 Those get captured also, like tracebacks and stuff.
29:03 And then afterwards, so after the test is done, you get your report and then you get a lot of this output that you can help debug your test failures with.
29:15 With long running tests, sometimes it'd be really nice if you really could see those errors while it's happening.
29:24 So you can start looking at the code and start debugging right away.
29:26 And that's where InstaFail will report your failures and errors while the test run is happening, even with output being captured.
29:34 Nice.
29:35 Is there a way with plugins where, if I've installed InstaFail but I only want to see it some of the time, to, like, in code say, you know, don't run InstaFail right now?
29:44 But sometimes when I want it, let it run without pip uninstalling it.
29:48 I'm not sure, actually.
29:50 That would be sweet.
29:51 Yeah.
29:52 I don't know.
29:53 It could be up to them individually, but it would be nice if maybe a command line or something.
29:57 I don't know.
29:58 Some way to say, actually, I want this other one turned on now, but not always.
30:01 Well, I'm just looking at it.
30:02 So it looks like even if you have it installed, it doesn't.
30:05 This is one of those that it doesn't run automatically.
30:07 You have to pass in the flag.
30:09 I see.
30:09 Okay.
30:09 Cool, cool.
30:10 To turn it on.
30:10 Nice.
30:11 Speaking of passing flags and whatnot, pytest metadata is the next one.
30:15 Number six.
30:15 Yeah.
30:16 This is a kind of an obscure one.
30:17 And this is often used with pytest HTML, which actually we're not covering.
30:22 Maybe we should.
30:23 But pytest InstaFail, or not InstaFail.
30:26 Metadata.
30:28 Metadata.
30:29 Metadata gives you a couple of things.
30:33 It has a command line flag where you can pass in extra information to go into your test report.
30:39 And then also you can within, so like for instance, if you're running against a particular set of servers,
30:46 you could have the server name getting listed in the report.
30:50 The other thing is maybe the data that you really want to log, the extra metadata about a test run, is generated in real time or gotten from within a fixture or something.
31:03 So a fixture or a test can set this extra metadata and it gets output with your test data.
31:09 Oh, cool.
31:10 Like maybe what GitHub branch you were running on or what environment like staging or production testing or whatever.
31:17 Yeah.
31:18 Yeah.
31:19 Yeah, cool.
31:19 Or maybe adding, you know, what operating system.
31:22 I mean, your operating system's already being logged.
31:24 But if there's other information like, yeah, what branch, what version, what some of the extra specifics.
31:30 Yeah.
31:31 Yeah.
31:31 That's cool.
31:32 Yeah.
31:33 That sounds useful.
31:33 Like if you're saving the report and you're going to look at it later, like maybe continuous integration or something like that.
31:40 Yeah.
31:40 So like for instance, when I'm testing against electronic test equipment, we add the instrument that we're testing against.
31:49 So that's just part of the log.
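A rough sketch of the fixture side of that, assuming, as I understand the plugin, that pytest-metadata exposes the collected metadata as a dict-like metadata fixture; the instrument name here is purely hypothetical:

```python
# conftest.py
import pytest

@pytest.fixture(autouse=True, scope="session")
def record_instrument(metadata):
    # "metadata" is the fixture pytest-metadata provides (as I understand it);
    # anything added here shows up in reports such as pytest-html output
    metadata["instrument"] = "spectrum-analyzer-01"  # hypothetical device id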
31:51 This portion of Talk Python to Me is brought to you by Sentry.
31:56 How would you like to remove a little stress from your life?
31:58 Do you worry that users may be having difficulties or are encountering errors with your app right now?
32:04 Would you even know it until they send that support email?
32:07 How much better would it be to have the error details immediately sent to you, including the call stack and values of local variables, as well as the active user stored in the report?
32:17 With Sentry, this is not only possible, it's simple and free.
32:21 In fact, we use Sentry on all the Talk Python web properties.
32:25 We've actually fixed a bug triggered by our user and had the upgrade ready to roll out as we got the support email.
32:31 That was a great email to write back.
32:33 We saw your error and have already rolled out the fix.
32:35 Imagine their surprise.
32:37 Surprise and delight your users today.
32:39 Create your free account at talkpython.fm/sentry and track up to 5,000 errors a month across multiple projects for free.
32:47 And if you use the code talkpython, all one word, it's good for two free months of Sentry's team plan, which will give you up to 20 times as many monthly events and some other features.
32:58 So create that free account today.
33:02 Yeah, I don't want to just be random.
33:04 Not just random.
33:05 Yeah.
33:06 But sometimes you do.
33:07 So one of the things about pytest, actually, I think I've intentionally forgotten the test order.
33:12 I think it's, no, it's not alphabetical.
33:15 It's the order that the tests are in in the file.
33:19 That's the default order that things get run in.
33:22 But you really don't want your test to be order dependent.
33:26 So you want to be able to run any test in any order.
33:30 And sometimes a good way to test that is with the pytest Randomly plugin.
33:35 So it does a couple of things.
33:37 One of the things you can do is it can reorder your tests so that they're running in a random order.
33:43 But also you might be using random as a seed for some randomized information.
33:48 Like say, like Faker, for instance, and other fake data generators use random seeds.
33:54 So pytest Randomly can help you randomize that stuff so that it's more random.
33:59 Yeah, that's cool.
34:00 It says that if Faker, which is a really cool library, if Faker is installed, it'll automatically update it.
34:07 So that Faker is different this time than another time.
34:10 Same thing for NumPy.
34:11 Yeah, this is really cool.
34:13 Anytime you have your tests running reliably in the same order, you could unintentionally pick up some form of dependency on order.
34:23 Like if you call this test, it like initializes the database connection string.
34:27 And then you can call this other thing and it'll work.
34:29 But if you call it in the other order, it would crash.
34:30 I mean, maybe that's a bad example because you really want to avoid that dependency.
34:34 But you know what I mean?
34:35 Like it could set something where the next thing depends on it being set that way.
34:39 And if you randomly order it, then you're going to discover those.
34:42 Yeah, especially with scoping your fixtures to try to speed up test time, you might inadvertently have dependencies, like system dependencies.
34:52 For instance, you get the system in a particular state in one test and then you're running another test.
34:57 One of the problems often is you got a test that runs fine when you're running it from your editor.
35:04 And then you go into on the CI system where it's running everything.
35:09 It fails.
35:10 And you can consistently fail it with the entire suite, but not by itself.
35:14 Those are just a pain in the rear to find out where the problem really is.
35:18 And I mean, randomly can help you.
35:20 I'm not sure how well it can help you debug that problem, but it can help find dependency.
35:25 Yeah, at least it won't leave it covered up, hidden.
35:29 Yeah.
35:29 Yeah.
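A tiny, made-up example of the kind of hidden coupling that random ordering flushes out; deliberately order-dependent tests like these pass in file order and fail once shuffled:

```python
_state = {}

def test_configure_connection():
    # quietly leaves shared state behind
    _state["dsn"] = "sqlite:///:memory:"

def test_query_uses_connection():
    # only passes if the test above happened to run first;
    # with pytest-randomly shuffling the order, this fails sooner or later
    assert _state["dsn"].startswith("sqlite")
```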
35:29 Well, you already mentioned xdist earlier when we talked about pytest-cov.
35:36 So what's this xdist?
35:38 Well, xdist is super powerful.
35:40 It's a little bit of a grab bag of a few features.
35:43 One of the things that it does, and people usually grab it for this, is to run tests in parallel.
35:49 So you can tell it how many CPUs to run on and it'll just run a whole bunch of tests in parallel.
35:54 And that's pretty cool.
35:56 It can speed up tests.
35:57 It does have a little bit of startup time because it's communicating with the different, you know,
36:03 it's doing multi-process communications, a little bit of startup time.
36:06 So if you have independent tests that don't share a resource, this can be really cool.
36:11 The other thing that you can also just offload it onto different processes or different computers,
36:16 you can have it be running on different machines as well.
36:19 So that's pretty cool.
36:20 That's cool.
36:20 That's really cool.
36:21 Yeah, that's very fascinating.
36:22 And then I don't know why this is bundled into it, but this is a really cool feature of
36:27 pytest-xdist: the loop-on-fail feature.
36:29 So you can pass it the looponfail flag and it'll run your tests repeatedly in a subprocess.
36:35 And then you get to see what the failures are.
36:39 But if it detects you've changed your project at all, it'll rerun the test, but it just reruns the failing tests.
36:46 So it just keeps rerunning all of your failures while you're trying to fix it.
36:50 And then once everything's passing and fixed, it'll rerun everything.
36:57 So when they all pass, it'll rerun the whole suite.
37:00 Yeah, super cool.
37:00 So it's a little bit like in PyCharm where you can say, run the test when anything changes,
37:07 but this only does it for the failure.
37:08 So you're like, you got to, you write a failing test or you have a failing test you're trying to fix and you can just work on it.
37:13 And it only focuses on that thing.
37:15 Yeah.
37:16 I mean, it kind of is all the passing ones too, because if your suite is passing, it just keeps rerunning it every time it sees a change.
37:22 I see.
37:22 Okay.
37:22 Yeah.
37:23 Cool.
37:24 All right.
37:24 Well, one way your code can fail is it doesn't execute properly, makes a mistake.
37:31 The other one is it could just not be formatted the way you like or break some of the idioms that you're supposed to write.
37:37 Yeah.
37:38 There's a whole bunch of extra plugins that are like this, but they're style guide stuff.
37:42 So pytest Flakes is one where it can run Flake 8 and other things against your code as part of the pytest run.
37:51 It can test both your source code and your test code if you wanted to.
37:54 I actually think this is a pretty cool idea, but there is some controversy as to whether this should really be part of
38:00 your pytest run or it should be part of, like, say, a pre-commit hook or something.
38:06 But I think having it in both places is fine.
38:09 Maybe your workflow is mostly that people run pytest to make sure all their code's working and then they're ready to check it in.
38:17 It's just your workflow, whether you like to catch your flake errors during commit time or during test time.
38:26 Yeah.
38:26 And I just noticed the one that I grabbed to throw into the list, there was pytest Flakes, I think.
38:35 And they say you should use pytest-flake8.
38:35 So I'm going to link to that one.
38:37 But same idea.
38:38 Yeah.
38:38 Yeah.
38:39 There's also a pytest PEP 8, I think it is.
38:43 There's like various different ones for this.
38:46 Yeah.
38:46 And I got to admit, I'm like a little, I guess, over the top.
38:48 I use both pytest-flake8 and hook up Flake8 as a pre-commit hook.
38:53 So just because, I mean, pre-commit is so fast.
38:56 Why not?
38:57 Yeah, absolutely.
38:58 Cool.
38:59 Cool.
38:59 Speaking of being fast, you know, your tests sometimes are slow because it's a lot of work to run your tests.
39:04 Or they could just be stuck waiting on something.
39:08 Yeah, I really like, so pytest Timeout is a plugin that is a fairly big hammer to a problem.
39:15 But you've got tests that maybe it loops forever.
39:19 You forgot to put a timer on something.
39:21 Or it's stuck on a resource.
39:22 Or for any reason, it's hung.
39:25 And I suggest people use pytest Timeout for any long-running suites all the time.
39:30 For really short, fast tests, there is a little bit of overhead.
39:35 But for any sort of long-running stuff, the overhead is negligible and you don't see it.
39:40 But what it does is it runs your tests in another thread and then is able to kill it.
39:46 So it will just put a timer around it and kill the thread if it doesn't come back in time.
39:52 Why I call it a big hammer is because if it fails, you don't get your test output.
39:57 But it also doesn't hang forever.
39:59 And these are especially good for CI systems.
40:03 Because you really don't want to come back in the morning and notice that your test suite has been running for 12 hours.
40:08 And it's only really supposed to run for 20 minutes.
40:11 Yeah, absolutely.
40:12 Yeah, if it's unattended or it's really long-running or you're talking to something that who knows how long it's going to take to respond.
40:19 Yeah, maybe even have that with instruments that you're testing and whatnot.
40:23 Yeah, we definitely use it.
40:24 And the nice thing is you can have a global timeout to say, hey, no tests should run longer than this long.
40:31 So whatever, 10 minutes.
40:33 Whatever is in your system is a reasonable amount of time.
40:35 And then you can override that within.
40:38 We can put decorators on individual tests to say, well, this test is really fast.
40:43 It really should only be this long.
40:44 And whatever, you can fine-tune it if you need to.
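Roughly what that looks like in practice: a suite-wide default set once in the config or on the command line, and a per-test override using the plugin's marker; a sketch:

```python
import time
import pytest

# the suite-wide default would typically be set elsewhere (a timeout option
# in the ini file or on the command line); the marker overrides it here
@pytest.mark.timeout(5)
def test_instrument_responds():
    time.sleep(0.1)  # stand-in for talking to a slow resource
    assert True
```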
40:47 Nice.
40:47 All right, another one.
40:48 I think I threw this one on the list is spec.
40:50 Yeah, I don't know anything about this.
40:53 Yeah, so spec, it's a little bit like sugar.
40:55 It allows you to sort of control the output.
40:58 So you can group tests by classes or by files.
41:02 The failed, passed, and skipped ones are marked and colored.
41:06 You can remove like test underscore and underscore stuff from the test.
41:11 So it'll like, you might have a name where it's like test user can log in with underscores.
41:17 And I think it'll take those out so it looks more like English, like user can log in or something like that.
41:22 So it's just a nice little one for customizing.
41:25 There's some options you can pass to it and whatnot.
41:27 Cool.
41:28 Yeah, yeah.
41:28 I've heard you talk about pytest Picked before.
41:31 Yeah, that's a fun little plugin.
41:33 So do you remember what this does?
41:36 Yes.
41:36 Yeah, so this is the one.
41:38 And we weren't totally sure like how smart it was.
41:41 We had a bit of a conversation about it.
41:43 But this one will run tests that are not checked into Git or have changes that are not checked into Git.
41:51 Oh, that's nice.
41:52 Yeah, so it'll run tests related to unstaged files.
41:57 And if it sees some that are, you know, there, it'll basically, if you're in a Git repo, you can run tests that are modified, but not checked in, not committed.
42:07 Yeah, and that's usually often what you want to be doing, right?
42:09 Exactly.
42:10 What I would really like it to do, and I don't think it does, and people can add a comment at the bottom of the show page if that's changed or still misunderstood.
42:21 But I would love it to somehow use coverage to see what are the files that need to be retested.
42:29 Like what tests touch, like I've got a non-test file I change.
42:34 What tests need to be rerun?
42:35 I think it only just reruns the tests that are changed.
42:39 I don't think it uses coverage, but I could be wrong about that.
42:41 But nonetheless, it's still pretty cool.
42:43 Yeah, I mean, all of the pieces are together for somebody to do something like this.
42:47 Yeah.
42:47 Because coverage has a contexts feature now.
42:52 And with pytest-cov, it automatically sets it up so that you can tell which tests touched which parts of your code.
43:00 And the idea seems like great.
43:01 One of the issues with that is there's a whole bunch of your code that gets hit by every test.
43:07 Yeah.
43:07 Because it's like startup code or something like that.
43:10 Right, right.
43:11 Well, you just don't change the startup code.
43:12 You can't edit that part.
43:13 Yeah.
43:14 No, but still, it is a really cool idea to say like just run the stuff that might have had some influence on.
43:24 That's really cool.
43:24 Yeah.
43:24 So pytest-picked doesn't do that, but it has this.
43:28 It's a really nice addition to be able to run the tests that you're working on.
43:33 Yeah, I could totally see just running this and then using like a pre-commit hook to run everything or just, you know, right before you check in, just run everything.
43:40 Or even just check it in and it runs in CI and doesn't get merged if it fails, right?
43:44 Like if there's some other gated mechanism further down, then this seems pretty cool.
43:48 Yeah, definitely.
43:49 Yeah.
43:49 So this next one, FreezeGun, is all about time.
43:53 And this one I feel like is probably packaged as a fixture.
43:56 At least it has a fixture concept.
43:59 Definitely as a fixture.
44:00 It doesn't run by default, but you have to kind of turn it on when you know it.
44:04 Yeah, this one, people know it once they hear what it does.
44:07 You don't want this to run by default.
44:08 I like to list this one, but just, it's such a great name, FreezeGun.
44:12 Oh, yeah.
44:13 It's for changing time.
44:15 You can easily, there's fixtures around it so that you can easily either freeze time so that you're, anytime you grab the time of day or the date time stuff, it's always going to be the same.
44:24 Or you can, you know, fast forward or change the date or do things like that.
44:29 So one of the hard things is around testing code that deals with time because you have to know what the answer is going to be.
44:36 And it's going to be different every time.
44:38 So FreezeGun allows you to test time-related tests really easily.
44:42 Plus, it's just such a great name.
44:44 It is.
44:45 And you have these functions, they can take a freezer object, and then basically behind the scenes that probably patches, like, the time functions, datetime.now and whatnot.
44:54 And then you can also do like freezer, move to some other time.
44:58 And then like instantly you are transformed either into the future or the past.
45:02 Yeah.
45:02 And I think some people might be using mock for that sort of thing in the past, but this is cleaner.
45:08 Use this.
45:09 Look, you could say, let's mock out datetime.datetime.now, or you could use FreezeGun.
45:13 Like, which do you want to do?
45:14 Come on.
45:14 Exactly.
45:15 Yeah.
45:15 FreezeGun.
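The underlying freezegun API that the plugin's freezer fixture wraps looks roughly like this: freeze_time works as a decorator or a context manager, and the object it yields can move to another point in time.

```python
import datetime
from freezegun import freeze_time

@freeze_time("2020-06-04")
def test_report_date():
    assert datetime.date.today() == datetime.date(2020, 6, 4)

def test_time_travel():
    with freeze_time("2020-06-04") as frozen:
        issued = datetime.datetime.now()
        frozen.move_to(datetime.datetime(2021, 1, 1))
        assert datetime.datetime.now() - issued > datetime.timedelta(days=200)
```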
45:16 All right.
45:17 We've got one more that's on your list and then an extra one I threw in at the end.
45:20 pytest-check.
45:21 Yeah.
45:21 I wrote a plugin called pytest-check and solved a problem I had.
45:26 So the problem is the normal way to stop or to fail a test is to use assert or raise an exception.
45:34 Both of those stop the execution.
45:36 I wanted to be able to do lots of things.
45:39 Like if I want to test lots of attributes of some object, I want to be able to have a lot of things.
45:44 I want to have lots of checks in there.
45:45 And I want to see all of the failures, not just the first one.
45:48 So pytest-check solves this by allowing multiple failures per test with these little check functions.
45:54 It also provides a context manager, so that within this context, you know, you can use normal asserts, but the test will continue after that context.
46:06 So a similar solution is available called subtests, with the pytest-subtests plugin.
46:12 But I still like pytest-check better because it works with stop-on-fail and xfail, whereas subtests do not.
46:18 Okay.
46:18 Yeah, this is really cool.
46:19 So you can pass in a check fixture to your test and you can say check.greater(a, b), less than or equal to this.
46:28 This is not in that.
46:30 This is not in that.
46:30 And then it'll tell you like it was not greater and it's not in there as the error.
46:35 It's just something like that, right?
46:36 It's a fairly simple thing.
46:38 It's a fixture that just collects failures.
46:40 It like wraps things with a context manager or a try-except block.
46:45 And if a failure happens, instead of stopping the test, it records it into a list.
46:50 And then after the test is complete, it'll like kind of hack with the pytest guts to make it fail.
46:57 Okay.
46:57 That's really cool.
46:58 I definitely like this.
46:59 I'm thinking like if I get just the text printout that the build failed or the CI failed, you know, I just see it.
47:08 And then when I'm running it in like PyCharm or something, I'd like to just look at it and go, oh, it failed because this is true and that's true and this,
47:15 like, I see, right? You don't have to go and actually debug through it; it might be that there's enough information here to actually know what's going on and just fix it.
47:24 Yep.
47:24 And also just more information is good sometimes dealing with, I mean, I definitely use it a lot with test equipment, with signal processing and stuff.
47:31 A signal might be wrong, but why is the signal wrong?
47:34 It might be the wrong frequency.
47:36 It might be the wrong bandwidth.
47:37 It might be, I mean, lots of different things.
47:39 And if I'm checking four or five attributes of the signal, I really want all of that information for a failure.
47:46 Yeah, I agree.
47:46 There might be different aspects that all come together to make it successful.
47:50 So report all the aspects.
47:51 Yeah.
47:51 That's cool.
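A sketch of that style using pytest-check's check fixture, so every attribute gets reported in one run; the signal values are made up for the example:

```python
def test_signal_attributes(check):
    signal = {"frequency_mhz": 2412, "bandwidth_mhz": 20, "power_dbm": -30.0}
    # each failing check is recorded and the test keeps going,
    # so a single run reports every attribute that is off
    check.equal(signal["frequency_mhz"], 2412)
    check.greater(signal["power_dbm"], -40.0)
    check.is_in(signal["bandwidth_mhz"], (20, 40, 80))
```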
47:52 All right.
47:53 I want to throw one more in that's not really part of pytest per se, but absolutely can be used with pytest because, hey, it raises exceptions.
48:01 And that's called fluentcheck.
48:03 So this one allows you to use a, well, a fluent API, you know, where you say operation one dot operation two dot operation three to have a more English language thing.
48:14 And to me, it feels a little bit like your check as well, but it's not a pytest plugin or fixture.
48:19 So it has a couple of APIs.
48:22 I like the is API.
48:23 So you can say like is in dot not none dot float dot between zero and one, and it'll actually check all those things and report like nice messages.
48:33 Like if it's not a float, it would say, you know, whatever it is, here's its value is not a float and you expected it to be right.
48:41 So a little more readable.
48:43 Some people like it.
48:44 Some people just want to assert stuff and either one's fine.
48:47 But if you like the style, this is a pretty cool library.
48:49 It's not mine, but I did do some PRs.
48:52 I did basically create the is API for it.
48:54 Cool.
48:54 Does it have not a hot dog?
48:56 It should also have "is a teapot" or something like that for web testing, you know, the 418.
49:02 Yeah, we should throw some of those in there.
49:05 Like, why not?
49:05 We do it.
49:06 Anyway, those are kind of fun for just like inside your test expressing what you're testing a little bit better.
49:12 I think check is really cool as well.
49:14 And it's a bit similar.
49:15 Yeah, but this one doesn't keep going.
49:17 It just throws on the first one.
49:18 All right.
49:19 Well, that's a great, a great list.
49:21 You know, maybe just really quick, we could check in.
49:23 How's your book doing?
49:24 You wrote a book about some of the stuff.
49:26 Yeah.
49:26 So I wrote Python Testing with pytest.
49:28 That was, gosh, it's been a couple of years now that it's been out, and it's still going strong.
49:36 Of course, the big chunk of sales came right after the book was released, and quite a few during beta too.
49:43 And that was neat.
49:44 But it's still selling really well.
49:47 And I'm really happy with it.
49:52 And it's still valid, even though it's a couple of years old now.
49:52 And I wrote it in conjunction with talking with a lot of the pytest core maintainers to make sure I wasn't putting anything in there that would be deprecated soon.
50:01 Yeah.
50:02 So it's still valid.
50:04 It's good.
50:05 Awesome.
50:05 Yeah.
50:05 People can take that up.
50:06 They want to read, you know, dive into this much more.
50:08 And then also you did an episode that talked about pytest plugins over on Test and Code.
50:13 Yeah.
50:14 It was episode 104 with Anthony Sottile.
50:17 The episode that you and I just did, we kind of picked our own, some of our favorites, some of the things that we think people should know about.
50:23 What we did with episode 104, we took download counts and covered some of the top 28 most-downloaded plugins.
50:30 I see.
50:30 Popular by usage, not by curation.
50:33 Yeah.
50:33 Yep.
50:34 Exactly.
50:34 Awesome.
50:35 All right.
50:36 Well, this is really fun, Brian.
50:37 But before we get out of here, two final questions as per usual.
50:40 If you're going to write some Python code, what editor do you use?
50:43 PyCharm.
50:43 All right.
50:44 Right on.
50:44 It's got some sweet built-in test runners.
50:46 Yeah.
50:46 It's really clean with testing.
50:47 Yep.
50:48 And then notable PyPI package.
50:50 I mean, I guess many of these are.
50:52 Anything in particular comes to mind you want to throw out there?
50:55 It's something different.
50:56 I'm kind of a huge fan of Flit for packaging.
51:00 So that's the easiest way to package right now, I think.
51:04 All right.
51:04 As opposed to some of the other alternatives like pip and Pipenv and setuptools and all those
51:12 things, right?
51:13 Yeah.
51:14 Yeah.
51:14 Sweet.
51:14 All right.
51:15 Yeah.
51:15 What is that?
51:16 There's poetry and...
51:17 Yeah.
51:17 Poetry as well.
51:18 That's right.
51:18 All right.
51:20 Well, final call to action.
51:21 People are excited about pytest and pytest plugins.
51:23 What should they do?
51:24 They should buy my book, actually.
51:26 So one of the things I made sure to do is to tell people how to create their own plugins
51:31 within the book.
51:32 So I think it's a really good introduction to that.
51:36 Also, I guess, final call to action, go subscribe to the Test and Code podcast also and test your
51:43 code.
51:43 Go out and test something.
51:44 Do you guys talk about pytest over there sometimes?
51:46 Sometimes.
51:47 Yeah.
51:47 A lot of times we do.
51:48 Yeah.
51:49 That's great.
51:50 Awesome.
51:51 Cool.
51:52 Well, thanks for being on the show.
51:53 Nice to have you back as always.
51:55 Yeah.
51:55 Thank you.
51:55 You bet.
51:55 Bye.
51:56 Bye.
51:56 This has been another episode of Talk Python to Me.
51:59 Our guest on this episode was Brian Okken, and it's been brought to you by Linode and Sentry.
52:05 Start your next Python project on Linode's state-of-the-art cloud service.
52:10 Just visit talkpython.fm/Linode, L-I-N-O-D-E.
52:14 You'll automatically get a $20 credit when you create a new account.
52:17 Take some stress out of your life.
52:20 Get notified immediately about errors in your web applications with Sentry.
52:25 Just visit talkpython.fm/sentry and get started for free.
52:29 Want to level up your Python?
52:31 If you're just getting started, try my Python Jumpstart by Building 10 Apps course.
52:36 Or if you're looking for something more advanced, check out our new async course that digs into
52:41 all the different types of async programming you can do in Python.
52:44 And of course, if you're interested in more than one of these, be sure to check out our Everything Bundle.
52:48 It's like a subscription that never expires.
52:50 Be sure to subscribe to the show.
52:52 Open your favorite podcatcher and search for Python.
52:55 We should be right at the top.
52:56 You can also find the iTunes feed at /itunes, the Google Play feed at /play,
53:01 and the direct RSS feed at /rss on talkpython.fm.
53:06 This is your host, Michael Kennedy.
53:07 Thanks so much for listening.
53:09 I really appreciate it.
53:10 Now get out there and write some Python code.
53:12 I'll see you next time.