
#407: pytest tips and tricks for better testing Transcript

Recorded on Monday, Feb 27, 2023.

00:00 If you're like most people, the simplicity and ease of getting started is a big part of pytest's appeal.

00:05 But beneath that simplicity, there's a lot of power and depth.

00:09 We have Brian Okken on this episode to dive into his latest pytest tips and tricks for beginners and power users.

00:15 This is Talk Python to Me, episode 407, recorded February 27th, 2023.

00:21 Welcome to Talk Python to Me, a weekly podcast on Python.

00:38 This is your host, Michael Kennedy.

00:40 Follow me on Mastodon, where I'm @mkennedy, and follow the podcast using @talkpython, both on fosstodon.org.

00:47 Be careful with impersonating accounts on other instances.

00:50 There are many.

00:51 Keep up with the show and listen to over seven years of past episodes at talkpython.fm.

00:56 We've started streaming most of our episodes live on YouTube.

01:00 Subscribe to our YouTube channel over at talkpython.fm/youtube to get notified about upcoming shows and be part of that episode.

01:07 This episode is brought to you by Microsoft for Startups Founders Hub.

01:11 Get early stage support for your startup without the requirement to be VC-backed or verified at talkpython.fm/foundershub.

01:20 It's also brought to you by Brilliant.org.

01:22 Stay on top of technology and raise your value to employers or just learn something fun in STEM at Brilliant.org.

01:30 Visit talkpython.fm/brilliant to get 20% off an annual premium subscription.

01:35 Brian, welcome back to Talk Python to Me.

01:38 Well, thank you.

01:39 It's good to be back.

01:40 It's really good to have you back.

01:42 I don't know if you've noticed, but over on Talk Python, I put up a guest page.

01:46 And out of the 440 guests we've had, you are the most frequent guest.

01:52 And you are now pulling further away from Brett Cannon and Anthony Shaw, who are hot on your tails there.

01:58 So I'm sure people know you and are familiar with your work.

02:02 But for those who don't, quick introduction.

02:05 People probably know me from Python Bytes, actually.

02:07 And also, I have a podcast called Test & Code that's kind of on pause right now.

02:13 But it'll start up again in a month or so.

02:16 Okay, so I am a software developer, mostly embedded.

02:19 So day job software, embedded C++ stuff.

02:23 I work with RF test equipment.

02:26 I got involved with Python and pytest around the testing part of that, the system-level testing.

02:33 And then I started writing about pytest and about test stuff.

02:37 And then I started podcasting about it.

02:38 And then I started writing books about it and all that sort of stuff.

02:41 So all of this came from me and my love for Python and pytest came from my embedded world.

02:47 I suspect a lot of people don't really think about it that often.

02:50 But Python is great for testing things that is not just other Python code.

02:55 Yeah.

02:56 I mean, there's a lot of people that, for instance, use it even to test websites that are not written in Python.

03:01 Not written in Python at all; they could be written in Go, whatever.

03:07 You can really test any website.

03:08 So if you can get access to it from Python, you can test it with pytest.

03:12 Right on.

03:13 And we're going to see a whole bunch of cool tips and tricks and ideas about how to do that with pytest and do it better.

03:21 Because as you said, you've been using it in your day job.

03:24 You've also been using it in a semi-non-standard way, right?

03:29 Like testing both C++ code and actual hardware, which is pretty awesome.

03:34 And you've got a lot of exposure through your book and other things.

03:37 So it's going to be really fun.

03:38 And I also wanted to say that early on, so getting started with podcasting is nerve-wracking.

03:43 It's a stressful thing.

03:44 And way back when, when I was just starting out on Test & Code, you were very encouraging and wanted me to be successful and keep going.

03:54 And that's meant a lot.

03:56 So thank you.

03:56 Thank you for saying that here on the show.

03:58 I really appreciate it.

03:59 And five, six years later, however long it's been, you're still going for sure.

04:04 I mean, tomorrow we're doing another show on Python Bytes.

04:07 Yeah.

04:07 It's been great.

04:08 Yeah.

04:09 Thanks.

04:09 And I guess I did want to give a shout out to Python Bytes.

04:12 I don't speak about it that often on Talk Python; every now and then I do.

04:15 But usually I'm speaking to guests who maybe are not familiar with it.

04:18 But that's the show you and I do.

04:20 And so I really want to encourage people who like the show to also check that one out.

04:24 Tell people what Python Bytes is real quick, Brian.

04:27 Well, it's Python news and headlines delivered directly to your earbuds.

04:32 That's right.

04:33 But the gist of it is we both pick a couple of topics per week, and we don't even really talk about them ahead of time.

04:40 We just pick a couple of Python related topics that we want to talk about.

04:44 It could be a new library, an old library, a tool, a blog post, or something happening in the news.

04:53 And then we just talk about it for a few minutes.

04:56 And then the other one asks questions.

04:58 And when we have guests on, they come in too.

05:00 One of the things I really like about it is it's always fresh.

05:03 And also, people have said: if there's a topic I'm not interested in, I just wait a few minutes and there'll be something else I can listen to.

05:12 So that's cool.

05:13 Exactly.

05:13 That's really fantastic that it's just, it's always something new.

05:16 And, you know, it's great for us.

05:18 We are always on top of things, always learning.

05:20 But I think it's a really cool way to stay on top of what's happening in the Python space.

05:25 Yeah.

05:25 And for me personally, it's an excuse to stay on top of things in the Python space.

05:29 So, yeah.

05:30 Not just an excuse, a requirement.

05:32 You've got to be on the microphone in two hours.

05:36 We're going to figure it out.

05:37 Yeah.

05:38 Awesome.

05:39 All right.

05:39 All right.

05:40 Now, also, I do want to point out that over on Talk Python Training, you did a really great course called Getting Started with pytest.

05:48 And people can check that out.

05:50 As well as for a couple years now, you've iterated on your book, Python Testing with pytest.

05:57 On to the second edition.

05:58 So those are two really good resources.

06:00 I feel like you're probably going to be drawing from for some of these tips, right?

06:04 Yeah, definitely.

06:05 And the first book opened up the ability to teach more people.

06:09 So I was able to teach other corporate people and individuals.

06:12 And then I took that learning from, like, how to teach people how to use pytest and used that to influence the second edition, complete rewrite.

06:24 And then took all of that and leveraged it into the Getting Started with pytest course.

06:28 It's really short, like three and a half hours.

06:32 I really kind of like what we've done with that.

06:34 You helped out a lot with that course as well.

06:36 Yeah.

06:38 I feel, I look back on my college career.

06:40 I don't know how you feel about this, Brian.

06:41 But I look back and think, you know, there's sort of the meme these days that a meeting could have been an email.

06:47 Like, I kind of feel that way about college.

06:50 Like, a lot of the stuff I did in college could have been a four-hour course that I spent a week on.

06:55 But no, I spent a whole semester and, you know.

06:58 Yeah, exactly.

06:59 Six hours.

06:59 Yeah.

07:00 You feel the same way looking back?

07:01 I do.

07:03 And, yeah, there's a lot of stuff where I'm like, wow, that could have been

07:07 even a half an hour course.

07:09 Yeah, exactly.

07:10 So, I feel like you've condensed it down really well here.

07:12 So, people can check out the course and they can check out the book as well.

07:15 So, a lot of kind comments in the audience as well I'm seeing for your book.

07:20 So, thank you, folks.

07:21 Nice.

07:21 Yeah.

07:22 Well, I guess I envisioned us to kind of talk about this as a podcast.

07:26 But you leveled this up a little bit here.

07:28 You took this and put it together as an article so people will be able to come back to it, right?

07:32 This whole idea of these tips and tricks.

07:34 Yes.

07:34 Because we were bouncing around ideas for what to do.

07:38 And you mentioned, like, maybe some tips and tricks from the course and pull them together as an episode.

07:44 And I'm like, you know what?

07:45 I haven't ever done that.

07:46 So, I pulled together a blog post called pytest Tips and Tricks.

07:49 And it is a blog post, but it's at pythontest.com.

07:53 But I do want to keep it going.

07:55 So, some of these topics I'll probably expand into full articles.

08:01 And some of them are just as good as is.

08:05 It's a living blog post.

08:07 Yes.

08:07 You started it.

08:09 You started a document that had some of the stuff you pulled out.

08:13 I don't know where you got all of it.

08:16 I went through your course.

08:17 Paid attention.

08:18 Nice.

08:19 It's good.

08:19 There's a lot there, you know, for a getting-started sort of story.

08:23 There's a lot of really good tips that I think are useful for a lot of folks.

08:27 All right.

08:28 Well, let's jump in.

08:29 You've broken this into different areas.

08:31 And maybe I'll kick it over to your screen for us to follow along.

08:37 Yeah.

08:37 But yeah, let's kick it off.

08:38 I want to start out with how simple it is to get started with pytest.

08:43 I think that's how a lot of people get into it.

08:46 It's really simple and easy to use.

08:48 To start with, you can just write a file called test_something.py and then stick a function in it called test_something.

08:57 And pytest will run that.

08:59 And whatever you can access from that file, you can test; that's enough to get started.

09:02 And I think it's a really cool feature of pytest that you can get started that easily.
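As a minimal sketch of what that looks like (the file and function names are just placeholders):

    # test_something.py
    def test_something():
        assert 1 + 1 == 2

Running pytest in that directory discovers the file and runs the function.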

09:08 But also, I don't think very many people like learn too much more.

09:12 Somebody will mention fixtures, so they'll look that up.

09:16 Or parameterization; they'll look that up.

09:19 Probably they'll say, how do you catch an exception?

09:20 This call is supposed to raise an exception.

09:22 Like, how do I do that?
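The idiomatic answer is pytest.raises; a minimal sketch:

    import pytest

    def test_divide_by_zero():
        # The test passes only if the exception is actually raised
        with pytest.raises(ZeroDivisionError):
            1 / 0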

09:24 And maybe.

09:24 Yeah.

09:25 That part, that aspect of it confused me a little bit when I first did pytest.

09:28 I'm like, well, how do I make it do pytest things?

09:32 It's just a file.

09:34 And sure, I can do Python asserts, but how do I tell pytest this is supposed to be greater than seven or not?

09:41 There's a lot of sort of implicit magic behind the scenes as part of that, right?

09:44 There is.

09:45 And I, like, for instance, just using normal asserts.

09:48 Because with unittest, you have to use a whole bunch of extra assert methods and helper methods and stuff.

09:54 It's often people coming from unittest or some other xUnit style that are confused by the simplicity.

10:01 Because people are used to deriving from a test class or something.

10:05 And then I actually saw the reverse of it.

10:09 I saw people that were not used to the xUnit style.

10:13 They just wanted to write some test code.

10:14 And there's this whole notion in unittest of deriving from a class.

10:20 When I tried to teach unittest, I saw a whole bunch of people say, oh, well, I'm going to have to go and learn about object-oriented programming.

10:30 And I'm like, oh, you don't.

10:32 It's just the structure.

10:33 That's all.

10:33 You don't need to know how to use it.
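A side-by-side sketch of the difference being described:

    # unittest style: derive from TestCase, use assert methods
    import unittest

    class TestAdd(unittest.TestCase):
        def test_add(self):
            self.assertEqual(1 + 2, 3)

    # pytest style: a plain function with a plain assert
    def test_add():
        assert 1 + 2 == 3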

10:35 But, so that's one of the nice things.

10:37 So I have on the screen just a simple test.

10:39 I've also had a lot of people ask me, well, can you give me a default template for a test?

10:46 And I'm like, well, it's just test_.

10:49 That's your template.

10:51 I mean, there's nothing to put in it.

10:53 I've said, okay, for my template: at the top of your test, you write a comment like getting ready or setup or something.

11:04 And then you set up all the stuff.

11:06 And then you do an action in the middle.

11:08 And then at the end, you assert that the action worked.

11:12 And there you go.

11:13 There's a template.
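A sketch of that template, with the three comment sections (the list example is arbitrary):

    def test_append():
        # Getting ready (arrange / given)
        items = []
        # Action (act / when)
        items.append("a")
        # Assert the action worked (assert / then)
        assert items == ["a"]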

11:15 There's two traditional styles of structuring this.

11:18 The early testing days were the three A's, the arrange, act, and assert, which is kind of a little bit like what you got here.

11:26 And then there's Given-When-Then, which feels a little more BDD world, maybe.

11:33 What do you prefer?

11:35 I learned Given-When-Then with some of the early writings around BDD.

11:38 And I just liked it.

11:40 I liked the notion of: given some state, when I do something, then something happens, and I can test that something.

11:50 Now, there wasn't any structure around it.

11:52 There wasn't any code.

11:53 So now BDD has these Gherkin syntaxes and stuff.

11:57 And for people that that works for, awesome.

11:59 It just doesn't work for me.

12:00 But the notion of behavior-driven development, not the syntax, but the notion of think about the different behaviors of the system and test for it, that I love.

12:10 But Given-When-Then and arrange, act, and assert are essentially the same thing.

12:14 Do you feel like the nomenclature of BDD maybe hampered its adoption? Like talking about Cucumber and Gherkin, and you go to your boss and say, we're working on the Gherkin.

12:26 And they say, you need to do real work.

12:28 I don't want to be a pessimist, but my guess about what happened is that it's such a simple concept that there was hardly anything to charge people for as a consultant.

12:41 Yeah. Or to give speeches and conference talks about or whatever, right?

12:44 But if you add like this extra Gherkin layer on top of it, now you have something you can train people about.

12:50 There you go.

12:51 Way to productize it.

12:54 Maybe.

12:56 So I think it's all good.

12:57 Just don't pay for it.

12:58 All right.

12:58 Before we move on from this topic, I think the arrange, act, assert, or the given when then is a really nice way to think about it.

13:06 But I think there's still a large set of folks who struggle to know what scale should this be.

13:13 Should I do 15 asserts?

13:16 I've done all the work to get them ready.

13:17 Should I do one assert?

13:19 If I do two, am I failing?

13:20 I was supposed to do one.

13:21 Now two is wrong, you know, two is too many.

13:23 How do you feel about what goes under those three comment sections?

13:28 I'm glad you brought that up.

13:30 Really, the one action should be like one function call or one method call or something, if you can get away with that.

13:37 And the getting ready stuff, it could be a lot, especially for the first time.

13:42 So one comment, dollar-driven development.

13:46 That's funny.

13:48 We're going to get to fixtures later, but I think it's okay if it's a whole bunch of setup.

13:52 So it could be like a ton of setup that you have.

13:54 Like most of your code of your function might be getting ready to test something.

13:58 And especially for the first iteration of the test, that's fine, I think.

14:02 And that's where it's good to have comments or a comment, even like a very visible comment block with a bunch of lines and stuff like that to separate the different parts.

14:10 As far as the asserts, there's a lot of people that think, like, you have to do just one assert.

14:14 And you're failing as a developer if you do more than one assert.

14:17 And I think that's not true.

14:20 There's problems with doing more than one assert.

14:22 But am I doing an action?

14:24 And if there's, like, several things I have to test about whether or not the action is right?

14:30 Like, in my world with RF systems, if I'm setting up a signal and I'm measuring a signal.

14:35 If I test that I got the right signal at the end, I might be testing the power level and the burst width and the burst length.

14:45 And I might be testing a whole bunch of stuff around it.

14:47 And yes, I could separate those all into different tests.

14:51 But if it's really just, it's really conceptually the same thing, I think it's fine to have multiple asserts.

14:57 But if you really don't want to, there's ways to get around not doing multiple asserts.

15:01 Do you use semicolons, a lot of ands?

15:03 Well, you can.

15:03 For Booleans, for instance, if you had six Booleans that you were testing, you could stick those in a list and compare the list to a list of expected Booleans.

15:16 Sure. Or you could, I mean, you're kind of, like, saying, like, how can we draw this out?

15:20 But there are legitimate ways, like, you could use the any iterator type of thing.

15:25 Or you could use all, to say all of these things have to pass.

15:29 Or, if none of them may fail, assert not any, something like that, right?
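A small sketch of those patterns (the check functions are hypothetical stand-ins for real measurements):

    def check_power():
        return True   # stand-in for a real check

    def check_width():
        return True   # stand-in for a real check

    def test_signal():
        results = [check_power(), check_width()]
        # Comparing whole lists reports every value on failure, not just the first
        assert results == [True, True]
        # all()/any() style works too
        assert all(results)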

15:34 And one of the problems of not doing that is that your test stops at the first failure.

15:40 So if that's a problem, and sometimes it is, you really do want to see the entire state of all of them, because it might help you debug it.

15:48 Like, oh, the return code of a web page was 400, and I expected it to be 200.

15:54 But if you could see the message too, you would be so much better off.

15:58 Yeah, if you could see more.

16:00 Or if you're testing the title, and that's wrong, but you, you know, you know what I mean?

16:04 If you see more than one bit of information, it helps.

16:07 This portion of Talk Python to Me is brought to you by Microsoft for Startups Founders Hub.

16:13 I'm here to tell you about a fantastic opportunity for startup founders, especially those of you interested in artificial intelligence.

16:20 With over six figures in benefits, this program is a game changer for startups.

16:24 You'll get $150,000 in Azure credits.

16:28 And Founders Hub is offering a unique chance to access OpenAI's APIs, as well as a new Azure OpenAI service, so you can easily infuse generative AI capabilities into your applications.

16:39 The team at Microsoft will also provide you with one-on-one technical advice to help you with architectural plans, scalability, implementation best practices, and security.

16:50 Plus, you'll have access to the network of mentors plugged into the startup world, which is a huge asset when building your network.

16:56 You'll get expert help with topics like products, fundraising, go-to-market, and more.

17:00 And the best part?

17:02 The program is open to everyone and has no funding requirements, whether you're in idea phase or further along.

17:08 It just takes five minutes to apply, and you'll get massive benefits immediately.

17:12 Harness the power of AI for your startup.

17:14 Sign up for Microsoft for Startups Founders Hub today at talkpython.fm/foundershub.

17:20 This is a no-brainer opportunity for startup founders, so don't miss out.

17:25 Oh, and one more thing.

17:26 Just to make a point of how powerful these offerings are, I used the same AI on offer above to completely write that ad you just heard.

17:34 It's incredibly powerful, and if you have the chance to incorporate OpenAI into your startup, you'd better not miss that chance.

17:40 Sign up and get access today at talkpython.fm/foundershub.

17:44 Thank you to Microsoft for sponsoring the show and to OpenAI for helping write this ad.

17:52 Now, I think that's also to be differentiated, as we'll get to later in some other tips, from, well, there's different cases.

17:58 Like, what if I pass it a zero and I pass it a 10 or something above 100?

18:03 All these different inputs, and then I want to test every...

18:06 That's a different story.

18:07 Oh, yeah.

18:08 Right, right, right.

18:08 So we'll get to that.

18:10 All right.

18:10 What's next?

18:11 Well, I wanted to...

18:12 Since we're talking about structuring a test function, I thought it'd be great to just remind people that you have a whole suite.

18:19 So it's good to think about how you're structuring your whole test suite.

18:24 And by test suite, I just mean a directory of stuff that has tests in it.

18:30 pytest allows you to have your tests interspersed with your code, but I haven't seen that for a long time.

18:35 A lot of people just have, really, a tests directory, and that's what I'm used to.

18:39 Anyway, or a couple directories.

18:42 Like, I often have an examples directory that I want to make sure all those examples still work, like for pytest plugins.

18:49 And I also want to have the tests themselves pass.

18:51 But anyway, structuring the test directory is good.

18:54 I like...

18:55 There's a bunch of ways to do it.

18:57 I like to separate based on, like, functionality.

18:59 So different, like, behaviors of a system and conceptual separations of the system into different functionality bits and separate those into directories.

19:10 You can also structure it based on, like, actual software subsystems in your software.

19:17 And then some people do, like, the same code structure.

19:19 So they have, like, the exact same directory structure in their tests as they do in their source code.
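A rough sketch of the layouts being contrasted (directory names invented for illustration):

    tests/
        login/           # grouped by behavior or functionality
        reporting/

    tests/
        api/             # or mirroring the source tree's subsystems
        data_access/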

19:24 But I think the point is just making sure you think about it.

19:29 Don't just assume there's one answer.

19:30 It's really how you want to run the tests.

19:32 And it helps me if I'm working on a behavior or a subsystem.

19:37 If you're normally working at the subsystem level, then structuring by subsystem makes sense.

19:43 Like, I'm working on my particular bit of code, and so I'm running the tests related to that.

19:51 How you're running the tests should drive how you structure them.

19:54 Depends how you think about it, right?

19:55 And how it's organized in your brain.

19:57 Also, there's reporting considerations.

20:00 If you're reporting to outside people that your tests are passing in different chunks, the reporting might be easier if you're structured one way versus another.

20:10 Structured like your software or subsystems, that's straightforward to me.

20:13 Like, I've got a data access layer.

20:15 So here's my test that tests the data access layer.

20:17 Presumably mocking out the database, but maybe not.

20:20 When you think about having it in sub-directories, you would just have maybe sub-directories of those.

20:27 And I guess another split you could think about is really, really slow integration-style tests

20:33 versus more unit-style, for lack of a better word, tests.

20:37 Right?

20:39 Like, these are ones I can run now.

20:41 These are ones that take an hour.

20:44 Let me run the ones I can run now.

20:45 Like, what's your style?

20:48 So I'm not really that great at throwing in unit tests, because I don't really see much value in unit tests.

20:53 I know that I have to have behavior tests, tests that test

20:58 the end-user functionality.

21:00 Yeah.

21:00 At the point where that is testing all of my code, then I don't feel the need to add unit tests.

21:06 But there are times where they are needed, which is great.

21:10 I do think it's great to separate those.

21:12 So at the top of the hierarchy, separate behavior tests versus unit tests into their own big chunks.

21:18 Yeah.

21:18 The main reason why I like that is because, if I'm going to do coverage, I really want to know if my behavior tests are covering all of my code.

21:28 It doesn't really help me much to know that my unit tests are covering all my code because that could lead to dumb tests.

21:36 There might be a corner case in my code that I can write a unit test for, but I can't reach otherwise.

21:44 So I think it's better just to delete the code.

21:46 We might get our pytest certifications taken away, but...

21:50 Or our Agile certifications taken away.

21:52 I don't really have one.

21:53 But I agree with you completely that some of these larger-scale tests that kind of, you know, test larger bits of code,

22:04 They might not be 100% on catching all the little things you could test,

22:07 but usually if I break something, many of those break.

22:12 You know, it's usually enough to catch the mistakes.

22:15 Yeah.

22:16 It's easier to write 20 big-scale tests than 1,000 small ones.

22:22 I've also never had a customer-reported issue that could be reproduced as a unit test.

22:27 Avaro out there says the pytest docs introduce...

22:31 Yeah, it talks about a slow mark just for that use case, which we will...

22:35 We will get to marks as well, but that's definitely a good recommendation.

22:39 So maybe not actually using the directory structure for slow, but using marks.

22:43 Coming back to that.

22:44 But let's carry on.

22:46 Yeah.

22:46 That was structuring a test suite, which is excellent.

22:49 Okay, so I don't know how the transition is here, but I picked...

22:54 Fixtures is the next thing.

22:56 I think one of the first things people need to get used to with pytest is fixtures,

23:00 because it is the big brain shift from any other test framework.

23:05 And they're pretty cool.

23:07 And it's really just a function that's split in half.

23:11 We've got a setup half and a teardown half.

23:13 And they're separated by a yield keyword.

23:16 That separates the setup and teardown.

23:19 And pytest will call that before your test and then finish it up afterwards.

23:24 That's about it.
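A minimal sketch of such a fixture (connect_to_database is a hypothetical helper standing in for any resource):

    import pytest

    @pytest.fixture()
    def db():
        conn = connect_to_database()  # hypothetical helper; the setup half
        yield conn                    # the test runs here
        conn.close()                  # the teardown half, run after the test

    def test_query(db):
        assert db is not None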

23:25 Well, I think part of the transition is you talked about the three A's, the arrange, act, and assert.

23:32 The arrange part, if that's going to get reused, well, fixtures are pretty well built for that.

23:39 And you could have more than one fixture in a test, right?

23:41 You could say this part arranges the database and this part arranges, I don't know, some set state of the app that you're going to make them collide and see what happens.

23:50 Yeah, or connections to resources and all sorts of stuff.

23:53 Yeah.

23:53 Or data.

23:54 You can have fixtures that generate data for you.

23:56 And there's many that do that.

23:58 But one of the things, I guess I should have this as one of the tips.

24:02 In writing a test, especially the first test for something, I recommend just writing the setup right in the test.

24:10 Now, when you go to write the second one, that's a good time to go, how much of this setup is shared?

24:16 And if most of the setup is shared, then maybe it makes sense to throw that into one or more fixtures.

24:23 I have seen people just copy it, just, like, take it and put it into a fixture and call the fixture setup.

24:29 You can do that.

24:31 It's a little dangerous, though, because what are you setting up?

24:34 I'd rather have it be maybe multiple different ones, like setup database or, you know, configure the network or something like that.

24:43 Have it be descriptive.

24:44 You've got a word there.

24:45 Setup just doesn't say much.

24:46 So say something, I guess.

24:49 Well, that goes back to the whole part of testing that is documentation of how the system works.

24:55 Yeah.

24:55 And part of that should be really good names for your tests.

24:59 You have the advantage that no one is ever going to try to use your test code as an API.

25:05 So it can be a ridiculous name.

25:07 Yeah.

25:08 It's like 15, 20, 30 characters because no one's going to be upset.

25:12 But the test runner doesn't care that the name is long.

25:15 And no one's going to use it.

25:16 The person that is going to look at it is either you or somebody else when something's broken and they're stressed out.

25:22 So when they're trying to get done with their day and the test doesn't work and they're looking at the code going, what are we doing here?

25:29 So yeah, being verbose there is fine.

25:31 Back in the RF world, let's say I'm setting up both a transmitter and a receiver before a test.

25:38 I might be tempted to throw both into one setup fixture.

25:46 Instead, have separate fixtures: setup_transmitter, setup_receiver, setup_measurement_system.

25:50 Have those separate because they're more reusable as parts later and stuff.

25:57 Right.

25:57 Maybe you need a receiver, not a transmitter for some particular reason somewhere.

26:02 Yeah.

26:02 Another thing is it's okay to not reuse fixtures and they can be in the same file.

26:07 If you have this huge setup and a little tiny do-something section, it's really nice to just throw the setup into a fixture.

26:17 There's lots of reasons to throw that in the fixture.

26:20 One of the great reasons is you can put asserts in the fixture. You don't want to sprinkle setup asserts through your test, because then when your test fails you're wondering: was it the setup that failed or not?

26:32 But pytest is awesome in that if the assert happens in the fixture, it doesn't report the test as a failure; it reports it as an error.

26:39 So fixture asserts are errors.

26:42 So you can tell the difference when it looks like your entire system is failing, but there's really only one failure and all the rest are errors.

26:50 It might be that like, you're just not connecting to the database or something like that.

26:56 So.

26:56 Yeah.

26:56 Interesting.

26:57 Out in the audience, Jeff says, one thing I missed on my first trials of pytest is the differentiation between error and fail.

27:03 Yeah.

27:03 Which sounds a lot like what you're talking about there.

27:06 Oh yeah.

27:07 And his comment around unittest, because unittest is a little different.

27:11 Unittest distinguishes, I think, AssertionError versus other exceptions.

27:15 So in unittest, if it's an AssertionError, it's a failure.

27:23 And if it's any other exception, it's an error.

27:26 pytest, completely different.

27:29 Any exception, assertion or otherwise that happens within the test itself is a failure.

27:34 And any exception that's uncaught in setup or in a fixture, that's an error.

27:40 Oh, that's cool.

27:41 I didn't realize that differentiation.
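A tiny sketch that shows the distinction (both exceptions are deliberate):

    import pytest

    @pytest.fixture()
    def broken_setup():
        raise RuntimeError("boom")   # uncaught in a fixture: reported as an ERROR

    def test_with_broken_setup(broken_setup):
        assert True                  # never runs; shows up as an error, not a failure

    def test_broken_body():
        raise RuntimeError("boom")   # raised inside the test: reported as FAILED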

27:42 Yeah.

27:43 Also a question from Thomas.

27:44 If you're just having the fixture there to provide data, is it necessary to use yield instead of just returning the value, the data?

27:52 I usually just return the value.

27:54 I only use yield if I have some work to do for teardown.
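For a data-only fixture, a plain return is enough; a minimal sketch:

    import pytest

    @pytest.fixture()
    def sample_record():
        # No teardown needed, so return works; yield is only for cleanup
        return {"id": 1, "name": "Ada"}

    def test_record_name(sample_record):
        assert sample_record["name"] == "Ada"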

27:58 I think it's also just kind of interesting, that yield, and what a clever use of generators, right?

28:05 It's very clever and also very nice, because you can have variables there that are needed for the cleanup, but you don't need to return them to anybody or save them in a global variable.

28:18 They can just be in the function and that's it.

28:20 So like, you know, database, connect to the database, keep a handle to the database so that you can close it at the end.

28:26 It's very, very clean.

28:28 Right.

28:28 Or start a transaction and roll it back.

28:30 So whatever you did, it's unaffected.

28:32 Yeah.

28:33 Oh yeah.

28:33 All right.

28:34 What's next?

28:34 That was fixtures.

28:35 Well.

28:35 Or unless you got more fixture.

28:37 You made this comment and I'm like, I've been doing pytest so long that I forgot about it.

28:42 Old-time pytest used addfinalizer, before we kind of settled on the yield style.

28:48 I would say avoid addfinalizer.

28:50 It's just going to confuse people.

28:52 So don't do that.

28:53 Also you can nest them.

28:55 So leverage scopes.

28:58 You can connect to a database with a session-scoped fixture, and then clean up the database with a function-scoped one, so that you save time.
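A sketch of that nesting (connect_to_database and empty_all_tables are hypothetical helpers):

    import pytest

    @pytest.fixture(scope="session")
    def db():
        conn = connect_to_database()   # expensive; done once per session
        yield conn
        conn.close()

    @pytest.fixture()                  # default function scope
    def clean_db(db):
        empty_all_tables(db)           # cheap; done before every test
        return db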

29:08 And then conftest.py files: if you want to share fixtures between test files, you can throw the fixture in a conftest.py file.

29:14 Yeah.

29:14 If I put a fixture at the top and then have a bunch of tests below in the same file, it's obvious I can just use it.

29:21 But then if I go to another file, I could be like, well, what about those fixtures over there?

29:24 I want to share them across these files.

29:26 So this conftest.py, that's what it's about, right?

29:28 Yeah.

29:29 And a lot of people think you can, or their first attempt is to throw them all into a different module and import the module.

29:35 You can't do that.

29:35 Don't do that.

29:36 And you never want to import the conftest file.

29:39 It's not an importable thing.

29:40 It's just, pytest deals with it.

29:42 Yeah.

29:43 Indeed.

29:43 Okay.

29:44 So on fixtures, there's a bunch of built-in ones that are super cool.

29:48 If you've used pytest for a while: we used to have, and we still do, a couple of fixtures called tmpdir and tmpdir_factory.

29:57 But there's newer ones.

30:00 They've been in for a while, but some people don't know about them: tmp_path and tmp_path_factory.

30:05 And they use pathlib.Path objects, which are awesome.

30:10 So use that if you can.

30:11 It took me a while to love the path class, but I love it now.

30:15 It's really nice.

30:16 I mean, the old one was just this py.path.local object, which was very undocumented.

30:22 So I don't recommend it.

30:24 Anyway, the temp file handling within pytest is great.

30:27 If you're generating a file or whatever, or you want to save some CSV stuff, it's good.

30:33 It sticks around too, which is kind of cool.

30:35 It sticks around for a little while.

30:37 So you can interrogate your temp files after a test run is done.

30:42 You can look at the, and if you're trying to debug the failures, those temp files will still be there.

30:47 They're not cleaned up directly after.

30:49 They're cleaned up in a future test run.

30:51 Oh, that's interesting.

30:52 Yeah.

30:52 They're like a N minus one or N plus one lifespan or something.

30:56 Yeah.

30:56 There's a bunch of built-in fixtures.

31:00 There's only a handful I use very much.

31:01 I use tmp_path and tmp_path_factory.

31:05 The factory version is needed because tmp_path is per test function.

31:12 Like every function, it gets generated.

31:15 So you can't use it if you've got a session scope fixture.

31:18 So the factory ones are session scoped.

31:22 So for anything larger than function scope, use the factory to generate a temp directory.
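A sketch of both (tmp_path is per test; the factory works at broader scopes):

    import pytest

    def test_write_file(tmp_path):       # fresh per-test directory, a pathlib.Path
        out = tmp_path / "out.csv"
        out.write_text("a,b,c")
        assert out.read_text() == "a,b,c"

    @pytest.fixture(scope="session")
    def shared_dir(tmp_path_factory):    # tmp_path won't work at session scope
        return tmp_path_factory.mktemp("shared")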

31:29 So use that.

31:30 capsys: if you're checking your output, capsys is good for checking the output of something, the standard out or standard error.

31:39 Because pytest captures and eats some of it, right?

31:42 Yeah.

31:42 By default, pytest will capture standard output and standard error, and only print them for failing tests.

31:50 It'll say, oh, here's the output for the test and it failed.

31:53 So that's helpful, but it's normally gone.

31:56 And you can also use capsys just to disable that capturing for temporary bits of your code.

32:02 If you want to throw a log out there all the time or something, you can use that.

32:06 But I usually use it just to look at the output.

32:09 Especially with pytest plugins: if I've modified the output, I want to see the output.

32:14 So I can use that to grab that.
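A minimal sketch of capsys, including the disabling trick (greet is a made-up function under test):

    def greet():
        print("hello")

    def test_greet_output(capsys):
        greet()
        captured = capsys.readouterr()    # grab what was printed so far
        assert captured.out == "hello\n"

        with capsys.disabled():
            print("this goes straight to the terminal")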

32:16 So there's monkeypatch as well.

32:18 You can use it for all sorts of stuff, but if I'm doing fancy things, I usually just use mock.

32:26 But for things like changing your environment, it's great.

32:31 So you can change environment variables or do quick patches; it works great.

32:37 The neat thing about these, versus just doing it yourself, is that it cleans up afterward.

32:42 If you patch like a system with like a dummy bit of system or something, after your test is done, it goes back to what it was before.

32:51 So that's pretty cool.
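A small sketch of that environment-variable case (API_TOKEN is an invented name):

    import os

    def test_reads_token(monkeypatch):
        # setenv is undone automatically when the test finishes
        monkeypatch.setenv("API_TOKEN", "dummy-value")
        assert os.environ["API_TOKEN"] == "dummy-value"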

32:52 Yeah, because otherwise you can end up with order-of-operations problems, where things are left in this half-patched state and something else depends on them, right?

33:01 pytestconfig is used for grabbing command-line flags a lot.

33:05 That's mostly what I use it for.

33:06 And then the only thing I usually use request for anymore is, from a fixture, to know what the test name was.

33:13 You can use request.node.name to grab the test name.

33:17 I don't think I use it for anything else anymore, except for parameters to grab the parameter values.
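A quick sketch of that request usage:

    import pytest

    @pytest.fixture()
    def labeled(request):
        # The fixture can see which test asked for it
        return f"resource for {request.node.name}"

    def test_labeled_resource(labeled):
        assert "test_labeled_resource" in labeled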

33:23 Yeah.

33:24 Anyway.

33:24 Nice.

33:25 All right.

33:25 I pointed out marks. Or the audience did.

33:28 They pointed out marks.

33:29 And here we are.

33:31 Mark.

33:31 Markers.

33:32 Markers.

33:33 pytest.mark.whatever.

33:34 You can use custom markers.

33:36 Markers are great.

33:37 But don't.

33:38 When I learned about markers, I put them everywhere.

33:40 And then I'm like, oh, that's just sort of, it ends up being messy.

33:44 So it can be.

33:45 But it's a great way to just add a tag to a test or a test case, so that you can select it when you run.

33:55 So you can say, I want to run all the tests that are marked like user interface.

33:59 You can run all the UI tests.

34:01 If you didn't separate them by directory.

34:04 Or like somebody said, you can mark all the slow ones and only run the slow ones, or avoid running the slow ones.

34:10 You can do a not in your execution.

34:12 You can say, run the things not marked slow.

34:15 Yeah.

34:15 You just say, well, it's -m.

34:17 I should throw that in there.

34:18 -m, like not slow.

34:21 Got it.

34:22 But it's two words.

34:23 So you have to put it in quotes, like -m "not slow".

34:27 It'll work.

34:27 And you can mark files with a magic keyword called pytestmark.

34:33 With no spaces.

34:34 If you throw that in your file, pytest will see it.
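A sketch of marking and selecting (the ui and slow marker names are just examples; custom markers are best registered in pytest.ini so pytest doesn't warn about them):

    import pytest

    pytestmark = pytest.mark.ui     # module-level magic name; marks every test in the file

    @pytest.mark.slow
    def test_full_report():
        assert True

    # On the command line:
    #   pytest -m ui            run everything marked ui
    #   pytest -m "not slow"    quotes needed, since it's two words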

34:37 There's a bunch of built-in ones.

34:38 The marks that I think are probably most common are skip, skipif, and xfail.

34:43 The x means you expect it to fail.

34:45 Like, I know it's failing, but that's okay.

34:47 Yeah.

34:47 So a lot of people might think, why would you ever expect a test to fail?

34:52 You should just fix it.

34:52 No, I know.

34:53 No, no, no, no, no.

34:54 It's Friday, three o'clock.

34:56 You got plans.

34:57 What?

34:58 You got to fix the build.

34:59 Yeah.

34:59 No, seriously though.

35:00 Why would you use this?

35:01 Believe it or not.

35:02 Some people are not responsible for all the code.

35:04 Yeah.

35:05 There's teams.

35:05 So one great reason to use xfail is around submitting a defect.

35:12 And then you say, I know this test is failing because of this issue.

35:16 You've submitted a defect.

35:17 And then you throw the defect number in the xfail reason string and move on.

35:23 Now your build is still working.

35:25 But just be careful.

35:28 I mean, xfail is a big thing.

35:30 Whether or not to use xfail, your entire software team

35:35 needs to understand it and agree on the process, because there needs to be a process around how

35:41 to utilize xfail, because it can just sort of hide failures.

35:45 And you don't want that.

35:46 Yeah.

35:46 That's one of the reasons why I really like xfail strict.

35:49 Without strict, if you mark a test as xfail and it passes, it'll just report as XPASS.

36:00 That means: I expected it to fail, but it passed.

36:06 But I like to just have that be a failure, so that somebody can look at it and go, oh yeah, we need to take the xfail off and close the defect or something like that.
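A sketch of both ideas together (the reason text and defect number are invented; generate_header stands in for buggy code under test):

    import pytest

    def generate_header():
        return "Rept"   # stand-in for the code with the known defect

    @pytest.mark.xfail(reason="header truncated, see defect #1234", strict=True)
    def test_report_header():
        assert generate_header() == "Report"

With strict=True, an unexpected pass shows up as a failure instead of XPASS.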

36:18 This portion of Talk Python to Me is brought to you by Brilliant.org.

36:21 You're a curious person who loves to learn about technology.

36:24 I know because you're listening to my show.

36:26 That's why you would also be interested in this episode's sponsor, Brilliant.org.

36:31 Brilliant.org is entertaining, engaging, and effective.

36:34 If you're like me and feel that binging yet another sitcom series is kind of missing out on life, then how about spending 30 minutes a day getting better at programming or deepening your knowledge and foundations of topics you've always wanted to learn better, like chemistry or biology over on Brilliant.

36:50 Brilliant has thousands of lessons, from foundational and advanced math to data science, algorithms, neural networks, and more, with new lessons added monthly.

37:00 When you sign up for a free trial, they ask a couple of questions about what you're interested in, as well as your background knowledge.

37:05 Then you're presented with a cool learning path to get you started right where you should be.

37:09 Personally, I'm going back to some science foundations.

37:12 I love chemistry and physics, but haven't touched them for 20 years.

37:16 So I'm looking forward to playing with PV equals NRT, you know, the ideal gas law, and all the other foundations of our world.

37:24 With Brilliant, you'll get hands-on on a whole universe of concepts in math, science, computer science.

37:30 And solve fun problems while growing your critical thinking skills.

37:33 Of course, you could just visit brilliant.org directly.

37:36 Its URL is right there in the name, isn't it?

37:38 But please use our link because you'll get something extra.

37:41 20% off an annual premium subscription.

37:44 So sign up today at talkpython.fm/brilliant and start a 7-day free trial.

37:49 That's talkpython.fm/brilliant.

37:51 The link is in your podcast player's show notes.

37:53 Thank you to brilliant.org for supporting the show.

37:58 The other thing that people should be aware of, that I don't think a lot of people know, is the --runxfail flag.

38:04 And this is especially useful like to just say, okay, screw it.

38:09 Ignore all the xfails and just run as if I hadn't marked them xfail.

38:13 Because maybe they're fixed and you don't know.

38:15 Maybe nobody took away the xfail.

38:17 Yeah, but they might.

38:19 Or, in a CI system, for instance.

38:22 Most CI systems don't understand all of the different variations of outputs from pytest.

38:30 They don't understand xpasses, xfails, and skips and all that sort of stuff.

38:35 So a lot of times, xfails and xpasses just show up as passes and fails.

38:42 So you don't want just to pass everything.

38:45 So, --runxfail.

38:45 If you just want to say, I want to just run everything.

38:49 And if there's any failure, I want to see it.

38:51 So that's good.

38:53 But anyway, just be careful with xfails.

38:55 I've seen it.

38:56 I've seen it confuse people.

38:58 Yeah, it makes sense.

38:59 What's the story with skip and skipif?

39:02 I guess it's the same.

39:03 I mean, like, why are you skipping something?

39:05 I guess you have to be careful.

39:07 So skip is just skip this test.

39:09 It doesn't run it at all.

39:11 And with skipif you can put logic in there.

39:15 A great example of skipif is if you've got operating-system-specific chunks of tests or chunks of code or something.

39:28 Skip if platform equals Darwin.

39:30 Yeah.

39:31 Skip the macOS ones.

39:32 Something like that.

39:33 You got no chance.

39:33 Yeah.

39:34 Or, coming back to coverage, for example: maybe you've got functionality that depends on Python 3.12.

39:43 But you also want to test on Python 3.7.

39:47 And so you know some code is only going to run.

39:51 You're running different code for the same functionality on two Pythons.

39:54 You might want to have two tests: one gets run on Python 3.12, and one gets run on all of the other versions.

40:03 And you can use skipif to gate those.
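A sketch of both gates (the feature names are invented):

    import sys
    import pytest

    @pytest.mark.skipif(sys.platform == "darwin", reason="not supported on macOS")
    def test_linux_only_feature():
        assert True

    @pytest.mark.skipif(sys.version_info < (3, 12), reason="needs 3.12-only code path")
    def test_new_code_path():
        assert True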

40:06 Interesting.

40:07 Yeah.

40:07 Okay.

40:07 Yeah, that's really cool.

40:08 Hey, before we move on, we've got an interesting question or idea out here from Jeff in the audience, who also is a hardware tester.

40:17 He says he'd like to distribute fixtures in some way to people as a Python package.

40:21 That's a great idea.

40:22 Yeah.

40:23 What do you think about that?

40:24 I think that's a plugin.

40:25 Okay.

40:26 Let's jump to plugins then.

40:27 Let's do it.

40:28 Did I have a plugin section?

40:29 I didn't.

40:30 Maybe I don't.

40:31 Hmm.

40:32 Let's go to the top.

40:33 Notes for a new section?

40:35 Yeah.

40:35 Plugins.

40:36 Totally.

40:36 It's a living blog post.

40:38 Yeah.

40:38 Yes.

40:40 I think it's important to be able to package them as plugins.

40:43 And we don't cover, plugins are kind of a little advanced thing.

40:47 We cover using plugins in the course, but in three and a half hours, I don't cover how to write a plugin.

40:54 Yeah.

40:55 There's a ton of plugins; you've got the pytest plugin list in the pytest docs, but you can also search for them, they're usually named pytest-something.

41:04 So you can search for that on PyPI as well and see a bunch of plugins.

41:07 Yep.

41:07 You even have some out there for yourself, right?

41:09 Quite a few, actually.

41:10 I mean, I'm scrolling and scrolling.

41:12 I'm still in the pytest-a entries.

41:14 There's a lot of content there.

41:17 So I guess one tip is people should just go scroll through that list and go, look at all these things they could just fixture into their code, right?

41:25 Or one option is to go to PyCascades this year and watch my talk, because I'm giving a talk at PyCascades about packaging pytest fixtures.

41:36 That's cool.

41:37 When is that?

41:37 It's in March.

41:38 I should look it up.

41:39 Nice.

41:39 Real time.

41:40 Yeah.

41:40 I'm pretty sure those videos will be online afterwards if people are not at the conference in Vancouver.

41:45 Although Vancouver is lovely.

41:47 Oh yeah, they'll be online.

41:48 And I'm also going to publish the slides.

41:50 I just got the slides done.

41:52 So it's March 18th through the 20th.

41:54 And I think mine's on the 19th.

41:56 All right.

41:57 Nice.

41:57 So anyway.

41:57 What section do you want to do next?

41:59 We got a little bit more time.

42:00 So we talked about markers and fixtures.

42:02 Parameterization is definitely something I think people should learn about.

42:07 Because I've seen a lot of test writing utilize copy, paste, modify.

42:14 And it should be a red flag for all software engineers.

42:17 But for some reason, it happens a lot in test code of copy, paste, modify.

42:23 You've got a bunch of tests that are kind of the same.

42:26 And you just take one that's similar to what you need and change it.

42:31 And you end up with a lot of test code that way.

42:34 And one way to fix it is to use parameterization.

42:38 Yeah.

42:38 Anytime you've got a lot of, you're like, this is happening over and over again in my code.

42:43 It should be, it's a code smell, right?

42:45 You should know there's some refactoring.

42:46 Or alternatively, Brian, you could get this fancy new Stack Overflow keyboard.

42:51 That's awesome.

42:54 Which has three keys.

42:56 Yes, exactly.

42:57 Go ahead.

42:58 Three keys.

42:58 One of them goes to Stack Overflow.

43:00 One of them is copy.

43:02 And one of them is C and V.

43:04 So copy and paste.

43:05 So that's awesome.

43:08 Power of copy and paste, indeed.

43:11 I assume you have to have a mouse connected to, you know, select the stuff.

43:15 Yeah, probably.

43:16 It really does happen a lot of people like copy another test, change what they need, and then run it.

43:23 Now, there's a bunch of problems with that.

43:26 One is people sometimes forget to change the test name.

43:29 Oh, yeah.

43:30 You can have two functions with the same name in Python, and it just runs the second one.

43:37 So that's one of the reasons why I'd like to also run coverage.

43:41 If I'm going to run coverage, I want coverage on my tests, too.

43:44 So I have to make sure I have 100% test code coverage.

43:47 So what happens when you run into that scenario in pytest?

43:50 Does it just pretend the first one wasn't there and it got overwritten before it got to it?

43:55 Yeah.

43:55 Just like in any other Python module, if you write the function name again, and even if you have different parameters.

44:01 It's so easy to do.

44:02 It doesn't care.

44:03 Python doesn't care.

44:04 So different web frameworks will handle this differently.

44:07 Flask will throw an error and say, you've tried to use this function before.

44:11 No, when you do an app.get or something on it with the decorator.

44:16 Yeah.

44:16 But for example, Pyramid, which I've used a lot, doesn't.

44:20 It just erases it.

44:21 So you just end up getting like 404s for whatever was there before.

44:24 You're like, well, it was just working.

44:26 Where did it go?

44:27 I didn't even touch that part of the program, and it's just gone.

44:30 It's like, I don't understand, you know?

44:32 And I can see that it's even less obvious with pytest.

44:37 Like, how much would you notice, when it goes dot, dot, dot, dot, dot, that it

44:42 didn't add a dot when you added a test?

44:44 Might not.

44:44 No.

44:46 Well, yeah, it's dangerous.

44:48 But OK, so you get around that.

44:50 The other thing is just thinking about it.

44:53 So say I write a test to begin with: really, I'm just testing a web page thing.

45:04 I just want to make sure this page gets a 200.

45:06 Is it 200, right?

45:07 Yeah.

45:08 And I want to make sure that gets 200 in the titles right or something like that.

45:12 Now, I might have just a list.

45:14 I mean, that would be an easy test just to make sure all my pages, normal pages are alive,

45:19 is to just go through and test all those.

45:22 Now, I could either just have a list of all the different pages I want to go to and

45:26 just ping through those.

45:27 That could be a loop within my test.

45:29 But then it's a loop within a test, and the assert inside the loop doesn't count as the one assert

45:34 at the bottom, because you're asserting through the whole thing.

45:36 May as well just make that a parameterization and go through all the different

45:42 pages you want to hit.

45:43 And for each of those pages, make sure it's a 200.

45:46 And then you could also like have the title in the parameterization to say, this is the page.

45:51 This is the title.

45:52 Now, for each of those, go through and test it.

45:55 And those are different tests.

45:56 And it's going to be almost as easy to write one test as it is to write now a bunch of test

46:03 cases with parameterization.
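A sketch of that page test as parameterization (client is a hypothetical test client, and the pages and titles are invented):

    import pytest

    @pytest.mark.parametrize(
        "page, title",
        [
            ("/", "Home"),
            ("/about", "About"),
            ("/contact", "Contact"),
        ],
    )
    def test_page_alive(page, title):
        resp = client.get(page)           # hypothetical test client
        assert resp.status_code == 200
        assert title in resp.text

Each row becomes its own test case, so a failure names the exact page.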

46:05 But pytest has a whole bunch of cool parameterization tricks.

46:09 You can do function parameterization.

46:11 You can parameterize a fixture.

46:12 You can even use pytest_generate_tests to do some fancy parameterization.
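Fixture parameterization, for reference, looks roughly like this:

    import pytest

    @pytest.fixture(params=["sqlite", "postgres"])
    def backend(request):
        # Every test using this fixture runs once per param value
        return request.param

    def test_connect(backend):
        assert backend in ("sqlite", "postgres")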

46:18 For the most part, if you're new to it, stick with function parameterization.

46:21 It's powerful.

46:23 And hopefully that's all you need.

46:24 Yeah.

46:24 If you've got all these different cases to test.

46:26 I mean, the value of testing often is to give it the good value and see the good value

46:32 comes out.

46:33 Yeah.

46:33 That's true.

46:33 But it's also really valuable to give it all those weird edge cases where you want to

46:38 check boundaries.

46:38 Like if I give it one less than it should have, it should tell me that's an error instead of

46:42 crash.

46:43 If I give it something, you know, like just all the little weird situations.

46:47 So testing all the failing cases and having those scenarios as a parameterized story is

46:53 nice.

46:54 And one of the comments, which I have seen before and I kind of agree with, is that my code

46:59 is DRY and my tests are WET.

47:00 What that means is, with DRY testing, people can go overboard with DRY to the point where

47:09 you can't understand what's going on.

47:12 And so for, especially for tests, you want tests to tell a story of I'm doing this thing

47:18 and I did this other action.

47:21 And then now I can tell that it works because of this.

47:24 And if you break that story up too much, then you don't know.

47:27 You don't know what the story is.

47:30 If you hide all of your asserts in a helper function that just says, like, check_stuff,

47:35 you don't know what you're checking and it hides it too much.

47:39 If you're going to do that, make sure that you like name it something that's meaningful.

47:44 And I like to have all of my assert helpers start with assert.

47:48 So I could say assert_200_and_correct_title, for instance; you could do that.

47:53 That'd be fine.
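A sketch of such a helper (assuming a response object with status_code and text, as in requests or httpx):

    def assert_200_and_correct_title(resp, title):
        # The assert_ prefix makes the intent obvious at the call site
        assert resp.status_code == 200
        assert title in resp.text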

47:54 But one of the reasons for parameterization isn't just to type less.

47:59 It's to be focused on what's failing.

48:01 So let's say in that case I had before my test failed with the loop.

48:06 And I could say, well, okay, so one page on my website isn't working.

48:12 Which one?

48:13 I have to go figure that out.

48:14 I have to look at the error message.

48:15 But if I had them parameterized on the page name, I could go, oh, my contact one isn't returning.

48:21 So there's something wrong with my contact page.

48:23 And I know exactly where to go.

48:25 Isolating the test failure is good.

48:27 I had a comment before about if you have multiple asserts, you might not see all of the errors, all the details about that.

48:34 And we talked a little bit about that, too.

48:35 And this helps show the status for the different parameters.

48:41 Instead of I just loop through all the options and make sure they all pass or there's an error.

48:45 Yeah.

48:45 And with a website, for instance, there might be two pages broken.

48:49 Whereas if you had them all in a loop, you'd only see the one.

48:52 You're like, oh, contact page is broken.

48:54 I'll go fix that.

48:55 And you come back.

48:56 Oh, something else is broken.

48:57 Whereas if you saw, like, seven of them failing, you'd be like, oh,

49:04 All of a sudden something else must be wrong.

49:06 Yeah.

49:06 Related, on that same note:

49:09 in my mind, parameterization taken to the maximum is things like Hypothesis, where you don't even say what the parameters are.

49:18 You vary some ideas and give them to the test.

49:22 What do you think about this?

49:23 Do you find this useful for you?

49:24 I do.

49:25 Hypothesis is an awesome tool.

49:27 It helps you think about a problem differently.

49:33 Because you can't test add by making sure that it returns four, since it's only going to return four in particular cases.

49:42 But you can say, hmm, maybe test a whole bunch of positive numbers.

49:48 And I want to make sure that the result is positive.

49:50 And so there's like these aspects of your system that you can test for.
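A minimal sketch of that kind of property-based test; the add function here is a stand-in for whatever is actually under test:

    from hypothesis import given, strategies as st

    def add(a, b):
        # Stand-in for the real function under test.
        return a + b

    @given(st.integers(min_value=1), st.integers(min_value=1))
    def test_add_positives_is_positive(a, b):
        # We can't predict the exact result, but we can check the
        # property: adding two positive integers gives a positive result.
        assert add(a, b) > 0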

49:55 But the other thing that hypothesis is awesome at isn't actually testing the output.

50:01 It's just making sure your code doesn't blow up.

50:04 So throwing Hypothesis at systems, I think the first awesome thing about it is just that it tests some corner cases that your code might not handle.

50:13 Right.

50:13 So anything that throws an exception is going to get dealt with: as you know, pytest is going to fail the test because an exception is hit.

50:21 So that helps.
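A sketch of that "just don't blow up" style of test; parse is a hypothetical function standing in for your own code:

    from hypothesis import given, strategies as st

    def parse(s):
        # Hypothetical function under test.
        return s.strip().split(",")

    @given(st.text())
    def test_parse_does_not_crash(s):
        # No assert on the output: if parse() raises on any generated
        # input, Hypothesis reports a minimal failing example.
        parse(s)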

50:23 Maybe not everyone knows Hypothesis.

50:25 Maybe just tell them a little bit how it works, and how it's like parameterization, but not exactly.

50:30 Well, with Hypothesis, you set up strategies and different things, and there are decorators you put on top of your test.

50:38 And then, like, you've got an example of, like, given a string that's text, and then you have s.

50:45 So somehow Hypothesis will fill in the variables that you put there.

50:51 Like normally, if a test had a parameter, it would either be a parameterization or a fixture.

50:56 But Hypothesis utilizes that also and fills it in with Hypothesis values.

51:02 And so if you say it's a string, it'll come up with a whole bunch of them, and it'll run your test a whole bunch of times based on that.

51:10 And I don't remember what the default is, but it's quite a few.

51:13 It also checks the time.

51:14 I think it makes sure it doesn't, like, run for hours or something like that.

51:19 But you can tell it how robust to be, and it just, like, makes up stuff.
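Roughly, the decorator mechanics look like this. By default, Hypothesis runs up to 100 examples per test, and the settings decorator adjusts that (it also enforces a per-example deadline so a test can't run forever):

    from hypothesis import given, settings, strategies as st

    @settings(max_examples=500)  # default is 100
    @given(st.text())
    def test_handles_any_string(s):
        # Hypothesis fills in s with generated strings, the same slot a
        # fixture or a parameterization would normally fill.
        assert isinstance(s, str)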

51:24 But the people behind hypothesis actually are pretty good at coming up with some decent test cases that break a lot of kinds of software.

51:31 So that bit that we think of as the old-style test engineer skill of, like, coming up with wacky values?

51:39 You don't need that anymore.

51:41 You can just have Hypothesis come up with wacky values for you.

51:44 Right.

51:45 Think of strategies as, like, well, here are the scenarios we should try to run through, and just have it be automatic.

51:50 Things that you don't know that are constraints on your system.

51:54 Like maybe your input system, hypothesis tells you, guess what?

51:59 It like breaks on all German names or something like that.

52:02 Yeah, yeah, yeah.

52:04 Or Unicode.

52:05 And you're like, oh, yeah, actually I don't.

52:07 That's neat, but I don't actually expect it to ever get called with Unicode.

52:11 So you can restrict the strategies and stuff.

52:15 Yeah.
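For example, one way to restrict a text strategy to printable ASCII, encoding the assumption that Unicode never reaches this code (the codepoint range here is just an illustration):

    from hypothesis import given, strategies as st

    ascii_text = st.text(
        alphabet=st.characters(min_codepoint=32, max_codepoint=126)
    )

    @given(ascii_text)
    def test_only_ascii_input(s):
        # The strategy now only generates printable ASCII characters.
        assert all(32 <= ord(c) <= 126 for c in s)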

52:24 Just maybe the last thing on hypothesis here is, you know, Jeff asks, how reproducible are tests with hypothesis?

52:29 I don't know.

52:30 They do say that it remembers the failing examples.

52:33 So into, like, a little local example database, the .hypothesis directory or something.

52:35 So maybe, maybe it'll replay that.

52:38 Potentially.

52:39 And it'll try the failing ones first, but I haven't played with it either.

52:42 So I think it reports, like, some seed or something, and you can reseed it to get the same run, or something like that.

52:49 There's a whole section on reproducing failures here.

52:51 And one of the things it does say you can do is provide examples: in addition to the random stuff it picks, please do these things.

52:58 And so I suppose you could take a failing one and put it in there.

53:01 Or if you always do it with the same seed, then its randomness becomes deterministic.

53:07 Which is kind of odd, but.

53:11 Pseudo random is part of CS.

53:14 Yes.

53:14 Yes, indeed.
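A sketch of both reproduction mechanisms mentioned here, the explicit example decorator and seeding; the test body is a hypothetical idempotence property:

    from hypothesis import example, given, seed, strategies as st

    @seed(12345)        # fixes the randomness, so runs are deterministic
    @given(st.text())
    @example("straße")  # always tried, in addition to random examples
    def test_casefold_is_idempotent(s):
        # Hypothetical property: casefolding twice equals casefolding once.
        assert s.casefold().casefold() == s.casefold()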

53:15 All right.

53:17 Well, Brian, we're pretty short on time.

53:18 What else do you want to throw out there real quick before we wrap up?

53:20 I think I want to circle back to the beginning and just say, pytest can do a whole bunch of cool stuff.

53:27 Don't do it all at once.

53:30 Gradually add bells and whistles, especially if you're working on a team, because it's a different mindset.

53:36 So make sure that the team is all up to speed.

53:39 You want to make sure that, like all software, you don't design a system so complex that you're not smart enough to debug it.

53:45 I love thinking about that.

53:46 That's a really good way to put it.

53:47 Because if you write the most clever code that you can, you're right at the limit of your ability to keep it in your mind and understand it.

53:55 And debugging code is harder than writing code.

53:57 You're not qualified to debug your own code.

53:59 You're not qualified.

54:00 You can't write code that your body can't cash the check for, or whatever.

54:05 Yeah.

54:05 I can't remember who said that first, but it's definitely very true.

54:09 It is indeed.

54:10 Awesome.

54:11 Well, thank you for putting this together.

54:12 Obviously, I'll link to this in the show notes.

54:15 People can check out your course.

54:16 They can check out your book.

54:17 And, yeah, all your other pytest things.

54:22 Yeah.

54:22 Looking forward to having Test and Code back.

54:24 And also, everybody that's listening here should be listening to Python Bytes.

54:28 I think you'll enjoy it.

54:29 I agree.

54:29 A lot of fun over there.

54:30 All right.

54:30 Thanks a lot, Michael.

54:31 Yeah.

54:32 Thank you for being here, Brian.

54:33 Thank you, everyone, for listening.

54:34 See y'all later.

54:36 This has been another episode of Talk Python to Me.

54:39 Thank you to our sponsors.

54:41 Be sure to check out what they're offering.

54:42 It really helps support the show.

54:44 Don't miss out on the opportunity to level up your startup game with Microsoft for Startups

54:48 Founders Hub.

54:49 Get over six figures in benefits, including Azure credits and access to OpenAI's APIs.

54:54 Apply now at talkpython.fm/foundershub.

54:57 Stay on top of technology and raise your value to employers or just learn something fun in

55:03 STEM at brilliant.org.

55:05 Visit talkpython.fm/brilliant to get 20% off an annual premium subscription.

55:11 Want to level up your Python?

55:13 We have one of the largest catalogs of Python video courses over at Talk Python.

55:17 Our content ranges from true beginners to deeply advanced topics like memory and async.

55:22 And best of all, there's not a subscription in sight.

55:25 Check it out for yourself at training.talkpython.fm.

55:28 Be sure to subscribe to the show, open your favorite podcast app, and search for Python.

55:33 We should be right at the top.

55:34 You can also find the iTunes feed at /itunes, the Google Play feed at /play,

55:39 and the direct RSS feed at /rss on talkpython.fm.

55:44 We're live streaming most of our recordings these days.

55:47 If you want to be part of the show and have your comments featured on the air,

55:50 be sure to subscribe to our YouTube channel at talkpython.fm/youtube.

55:54 This is your host, Michael Kennedy.

55:56 Thanks so much for listening.

55:58 I really appreciate it.

55:59 Now get out there and write some Python code.

56:01 We'll see you next time.
