
#407: pytest tips and tricks for better testing Transcript

Recorded on Monday, Feb 27, 2023.

00:00 If you're like most people, the simplicity and ease of getting started is a big part of pytest's appeal.

00:06 But beneath that simplicity, there's a lot of power and depth.

00:09 We have Brian Okken on this episode to dive into his latest pytest tips and tricks for beginners and power users.

00:16 This is Talk Python to Me, episode 407, recorded February 27th, 2023.

00:22 Welcome to Talk Python to Me, a weekly podcast on Python.

00:38 This is your host, Michael Kennedy.

00:40 Follow me on Mastodon, where I'm @mkennedy, and follow the podcast using @talkpython, both on fosstodon.org.

00:47 Be careful with impersonating accounts on other instances.

00:50 There are many.

00:51 Keep up with the show and listen to over seven years of past episodes at talkpython.fm.

00:57 We've started streaming most of our episodes live on YouTube.

01:00 Subscribe to our YouTube channel over at talkpython.fm/youtube to get notified about upcoming shows and be part of that episode.

01:09 This episode is brought to you by Microsoft for Startups Founders Hub.

01:12 Get early stage support for your startup without the requirement to be VC backed or verified at talkpython.fm/foundershub.

01:20 It's also brought to you by brilliant.org.

01:23 Stay on top of technology and raise your value to employers or just learn something fun in STEM at brilliant.org.

01:31 Visit talkpython.fm/brilliant to get 20% off an annual premium subscription.

01:36 Brian, welcome back to Talk Python to me.

01:39 - Well, thank you.

01:40 It's good to be back.

01:41 - It's really good to have you back.

01:42 I don't know if you've noticed, but over on Talk Python, I put up a guest page.

01:47 And who is out of the 440 guests we've had, you are the most common guest and you are now pulling further away from Brett Cannon and Anthony Shaw who are hot on your tails there.

01:59 So I'm sure people know you and are familiar with your work, but for those who don't, quick introduction.

02:05 People probably know me from Python Bytes actually.

02:07 And also I have a test podcast called Test & Code that's kind of on pause right now, but it'll start up again in a month or so.

02:16 Okay, so I am a software developer, mostly embedded; my day job is embedded software, C++ stuff. I work with RF test equipment. I got involved with Python and pytest around the testing part of that, the system-level testing.

02:33 And then I started writing about pytest and about test stuff, and then I started podcasting about it, and then I started writing books about it and all that sort of stuff.

02:41 So all of this, my love for Python and pytest, came from my embedded world.

02:47 I suspect a lot of people don't really think about it that often, but Python is great for testing things that are not just other Python code.

02:55 Yeah. I mean, there's a lot of people that, for instance, use it even to test websites that are not written in Python.

03:05 You can really test any website.

03:08 So if you can get access to it from Python, you can test it with pytest.

03:12 >> Right on. We're going to see a whole bunch of cool tips and tricks and ideas about how to do that with pytest and do it better.

03:21 Because as I said, you've been using it in your day job.

03:24 You've also been using it in a semi non-standard way, like testing both C++ code and actual hardware, which is pretty awesome.

03:34 And you've got a lot of exposure through your book and other things.

03:37 So it's going to be really fun.

03:38 I also wanted to say that early on, so getting started with podcasting is nerve-wracking.

03:43 It's a stressful thing.

03:45 And way back when, when I was just starting out on testing code, you were very encouraging and wanted me to be successful and keep going.

03:54 And that's meant a lot. So thank you.

03:56 No, thank you for saying that here on the show.

03:58 I really appreciate it.

03:59 And five, six years later, however long it's been, you're still going for sure.

04:04 I mean, tomorrow we're doing another show on Python Bytes.

04:07 - Yeah, it's been great.

04:08 - Yeah, thanks.

04:09 And I guess I did want to give a shout out to Python Bytes.

04:12 I don't speak about it that often on Talk Python.

04:14 Every now and then I do, but usually I'm speaking to guests who maybe are not familiar with it.

04:18 But that's the show you and I do.

04:20 And so I really want to encourage people who like the show to also check that one out.

04:25 Tell people what Python Bytes is real quick, Brian.

04:27 Well, it's Python news and headlines delivered directly to your earbuds.

04:32 But the gist of it is, we both pick a couple of Python-related topics per week, and we don't even really talk about it ahead of time.

04:44 And it could be a new library, an old library, a tool, a blog post, or something happening in the news. And then we just talk about it for a few minutes.

04:56 and then the other one asks questions.

04:58 And when we have guests on, they come in too.

05:01 One of the things I really like about it is it's always fresh.

05:04 And then also, people have said, if there's a topic I'm not interested in, I just wait a few minutes and there'll be something else I can listen to.

05:12 So that's cool.

05:13 Exactly. That's really fantastic that it's just, it's always something new.

05:16 And, you know, it's great for us.

05:18 We are always on top of things, always learning, and I think it's a really cool

05:22 way to stay on top of what's happening in the Python space.

05:25 Yeah, and for me personally, it's an excuse to stay on top of things in the Python space.

05:29 So, yeah.

05:30 Not just an excuse, a requirement.

05:33 You got to be on the microphone in two hours.

05:36 We're going to figure it out.

05:38 Yeah. Awesome.

05:39 All right.

05:40 Now, also, I do want to point out that over on Talk Python Training, you did a really great course called Getting Started with pytest.

05:48 And people can check that out. As well, for a couple years now, you've iterated on your book, Python Testing with pytest.

05:57 On to the second edition.

05:59 So those are two really good resources

06:00 that I feel like you're probably gonna be drawing from for some of these tips, right?

06:04 - Yeah, definitely.

06:05 And the first book launched me into being able to teach more people.

06:10 So I was able to teach other corporate people and individuals.

06:13 And then I took that learning about how to teach people to use pytest, used it to influence the second edition, a complete rewrite, and then took all of that and leveraged it into the Getting Started with pytest course, which is really short, like three and a half hours.

06:32 I really kind of like what we've done with that.

06:34 You helped out a lot with that course as well.

06:37 - Yeah, I look back on my college career, I don't know how you feel about this, Brian, but I look back and think, you know, there's sort of the meme these days, like that meeting could have been an email.

06:47 Like, I kind of feel that way about college.

06:50 Like a lot of stuff I did in college could have been, you know, four hours.

06:53 It could have been a four-hour course that I spent a week on, but no, I spent a whole semester, you know.


06:58 - Yeah, exactly.

06:59 - Six hours.

07:00 You feel the same way looking back?

07:01 (laughing)

07:03 - I do and yeah, there's a lot of stuff that I'm like, wow, that could have been like even a half an hour course.

07:09 - Yeah, exactly.

07:10 So I feel like you've condensed it down really well here.

07:12 So people can check out the course and they can check out the book as well.

07:16 I'm seeing a lot of kind comments in the audience as well

07:19 for your book, so thank you folks.

07:21 - Nice.

07:22 - Yeah, well, I guess I envisioned us to kind of talk about this as a podcast, but you leveled this up a little bit here.

07:28 You took this and put it together as an article, so people will be able to come back to it, right?

07:32 This whole idea of these tips and tricks.

07:34 - Yes, because we were bouncing around ideas for what to do, and you mentioned like maybe some tips and tricks from the course and pulling them together as an episode, and I'm like, you know what, I haven't ever done that.

07:46 So I pulled together a blog post called pytest Tips and Tricks.

07:49 And it is a blog post, but it's at pythontest.com.

07:53 But I do want to keep it going.

07:55 So some of these topics, I'll probably create full articles out of.

08:01 And some of them are just as good as is.

08:05 - It's a living blog post.

08:07 - Yes.

08:08 You started it.

08:09 You started a document that had some of the stuff that you pulled out.

08:14 I don't know where you got all this stuff that you started.

08:16 - I went through your course.

08:17 - I paid attention.

08:18 - Nice.

08:19 - It's good.

08:20 There's a lot, you know, for a getting started sort of story.

08:23 There's a lot of really good tips that I think are useful for a lot of folks.

08:27 All right, well, let's jump in.

08:29 You've broken this into different areas, and maybe I'll kick it over to your screen for you to follow along.

08:37 Yeah.

08:37 But yeah, let's kick it off.

08:38 - I want to start out really with one of the things about pytest, which is that it's so simple to get started.

08:43 And I think that's how a lot of people get into it.

08:46 So it's really simple and easy to use.

08:48 To start with, you can just write a file called test_something.py, then stick a function in it called test_something, and pytest will run that.

08:59 And whatever you can access from that file, you can test, so you can get started.
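
For reference, the whole getting-started story fits in one tiny file; something like this is already a complete, runnable test suite:

```python
# test_something.py -- pytest collects test_*.py files and test_* functions
def test_something():
    assert 1 + 1 == 2
```

Running pytest in that directory finds and runs it; no imports, no base classes.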

09:03 And I think it's a really cool feature of pytest that you can get started that easily, but also I don't think very many people learn too much more.

09:13 Somebody will mention fixtures, so they'll look that up, or parameterization, and they'll look that up.

09:19 - Probably they'll say, how do you catch an exception?

09:21 This is supposed to be an exception.

09:23 Like, how do I do that?

09:24 And maybe that part, that aspect of it confused me a little bit when I first did pytest.

09:29 I'm like, well, how do I make it do pytest things?

09:32 I just, it's just a file.

09:34 And sure, I can do Python asserts, but how do I tell pytest whether it's supposed to be greater than seven or not?

09:41 - Yeah.
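
As a minimal sketch of both of those ideas, with divide as an illustrative function under test:

```python
import pytest

def divide(a, b):
    return a / b

def test_result_value():
    # plain asserts are all pytest needs for value checks
    assert divide(16, 2) > 7

def test_divide_by_zero():
    # pytest.raises passes only if the block raises the given exception
    with pytest.raises(ZeroDivisionError):
        divide(1, 0)
```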

09:42 - There's a lot of sort of implicit magic behind the scenes as part of that, right?

09:45 - There is. Like, for instance, just using normal asserts, because with unittest you have a whole bunch of extra assert methods and helper methods, and you have to derive from a test class. It's often people coming from unittest or some other xUnit style who are confused by the simplicity, because they're used to deriving from a test class or something.

10:06 And then I actually saw the reverse of it.

10:10 I saw people that were not used to the xUnit style.

10:13 They just wanted to write some test code.

10:15 And with this whole notion in unittest of deriving from a class, I saw a whole bunch of people I tried to teach unittest to say, oh, well, I'm gonna have to go and learn about object-oriented programming.

10:31 And I'm like, oh, you don't, it's just the structure.

10:33 That's all, you don't need to know how to use it.

10:35 But so that's one of the nice things.

10:37 So I have on the screen, just a simple test.

10:40 I've also had a lot of people ask me, well, can you give me a template for what a default template for a unit test or a test?

10:46 And I'm like, well, it's just test underscore, that's your template.

10:50 I mean, there's nothing to put in it.

10:54 I've said, okay, for my template: at the top of your test, you write a comment like getting ready, for the setup or something.

11:04 And then you set up all the stuff.

11:06 And then you do an action in the middle.

11:09 And then at the end, you assert whether the action worked.

11:13 And there you go, there's a template.
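
A minimal sketch of that template, with illustrative names:

```python
def add_item(cart, item):
    # illustrative code under test
    cart.append(item)
    return cart

def test_add_item():
    # Getting ready: build the state the action needs
    cart = ["apple"]
    # Action: the one call under test
    result = add_item(cart, "banana")
    # Assert: check that the action worked
    assert result == ["apple", "banana"]
```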

11:15 - There's two traditional styles of structuring this.

11:19 The early testing days were the three A's, the arrange, act, and assert, which is kind of a little bit like what you got here.

11:27 And then there's the given, when, then, which I feel like a little more BDD world, maybe.

11:34 What do you prefer?

11:35 - I learned given, when, then with some of the early writings around BDD.

11:39 And I just liked it.

11:40 I liked the notion of like, oh, given some state, if I do that, when I do something, then something happens and I can test that something.

11:50 Now there wasn't any structure around it.

11:52 There wasn't any code.

11:53 So now BDD has these Gherkin syntaxes and stuff.

11:57 And for people that that works for, awesome.

12:00 It just doesn't work for me.

12:01 But the notion of behavior-driven development, not the syntax, but the notion of, think about the different behaviors of the system and test for it.

12:09 That I love.

12:11 But the given, when, then and the arrange, act, assert are essentially the same thing.

12:14 Do you feel like the nomenclature of BDD maybe hampered its adoption?

12:19 Like talking about cucumber and gherkin and--

12:24 you go to your boss and say, we're working on the Gherkin, and they say you need to do real work.

12:29 I don't want to be a pessimist, but I really think what happened--

12:32 my guess is that it's such a simple concept that there was hardly anything to charge people for as a consultant.

12:41 Yeah, or to give speeches and conference talks about or whatever.

12:44 But if you add this extra gherkin layer on top of it, now you have something you can train people about.

12:50 There you go. Way to productize it.

12:54 Maybe. So I think it's all good, just don't pay for it.

12:58 All right. Before we move on from this topic, I think the arrange, act, assert or the given, when, then is a really nice way to think about it.

13:06 I think there's still a large set of folks who struggle to know what scale should this be.

13:13 Should I do 15 asserts? I've done all the work to get them ready. Should I do one assert? If I do two, am I failing? Was I supposed to do one, and now two is too many?

13:23 How do you feel about what goes under those three comment sections?

13:28 I'm glad you brought that up. Really the one action should be like one function call or one method call or something, if you can get away with that.

13:37 And the getting ready stuff, it could be a lot, especially for the first time.

13:42 So, one comment, dollar driven development, that's funny.

13:47 We're going to get to fixtures later, but I think it's okay if it's a whole bunch of setup.

13:52 So it could be like a ton of setup that you have, like most of your code of your function might be getting ready to test something.

13:58 And especially for the first iteration of the test, that's fine, I think.

14:02 And that's where it's good to have comments or a comment, even like a very visible comment block with a bunch of lines and stuff like that to separate the different parts.

14:10 As far as the asserts, there's a lot of people that think, like, you have to do just one assert, and you're failing as a developer if you do more than one assert.

14:18 And I think that's not true.

14:20 There's problems with doing more than one assert.

14:22 But am I doing one action?

14:24 And there's, like, several things I have to test about whether or not the action is right.

14:30 Like in my world with the RF systems, if I'm setting up a signal and measuring a signal, and I test that I got the right signal at the end, I might be testing the power level and the burst width and the burst length, a whole bunch of stuff around it.

14:48 And yes, I could separate those all into different tests, but if it's really just, it's really conceptually the same thing, I think it's fine to have multiple asserts.

14:58 But if you really don't want to, there are ways to get around doing multiple asserts.

15:01 - So you use semicolons, a lot of ands?

15:03 - Well, you can. For Booleans, for instance, if you had like six Booleans that you were testing, you could stick those in a list and compare the list to a list of expected Booleans.

15:16 - Sure, you could, I mean, you're kind of like saying, like, how can we draw this out?

15:20 But there are legitimate ways. Like, you could use any or all: say all of these things have to pass, or assert not any of them failed, something like that.
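
A rough sketch of both tricks, with illustrative checks:

```python
def test_flags_as_list():
    # compare a list of booleans to the expected list; pytest's diff
    # shows every position that mismatched, not just the first
    results = [1 > 0, "a".isalpha(), 2 + 2 == 4]
    assert results == [True, True, True]

def test_flags_with_all():
    # all() collapses several conditions into a single assert
    codes = [200, 200, 200]
    assert all(code == 200 for code in codes)
```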

15:34 >> One of the problems of not doing that is that your test stops at the first failure.

15:40 If that's a problem, and sometimes it is, you really do want to see the entire state, because it might help you debug it.

15:48 Oh, the return code of a web page was like 400, and I expect it to be 200.

15:55 But if you could see the message, it would be so much better.

15:58 >> Yeah, if you could see more or if you're testing the title and that's wrong, but you know what I mean.

16:04 If you see more than one bit of information, it helps.

16:08 This portion of Talk Python to Me is brought to you by Microsoft for Startups Founders Hub.

16:13 I'm here to tell you about a fantastic opportunity for startup founders, especially those of you interested in artificial intelligence.

16:20 With over six figures in benefits, this program is a game changer for startups.

16:25 You'll get $150,000 in Azure credits.

16:27 And Founders Hub is offering a unique chance to access OpenAI's APIs as well as a new Azure OpenAI service, so you can easily infuse generative AI capabilities into your applications.

16:40 The team at Microsoft will also provide you with one-on-one technical advice to help you with architectural plans, scalability, implementation best practices, and security.

16:50 Plus, you'll have access to the network of mentors plugged into the startup world, which is a huge asset when building your network.

16:56 You'll get expert help with topics like products, fundraising, go to market, and more.

17:01 And the best part, the program is open to everyone and has no funding requirements, whether you're in idea phase or further along.

17:08 It just takes five minutes to apply and you'll get massive benefits immediately.

17:13 Harness the power of AI for your startup.

17:15 Sign up for Microsoft for Startups Founders Hub today at talkpython.fm/foundershub.

17:21 This is a no-brainer opportunity for startup founders, so don't miss out.

17:25 Oh, and one more thing, just to make a point of how powerful these offerings are, I use the same AI on offer above to completely write that ad you just heard.

17:34 It's incredibly powerful, and if you have the chance to incorporate OpenAI into your startup, you'd better not miss that chance.

17:41 Sign up and get access today at talkpython.fm/foundershub.

17:44 Thank you to Microsoft for sponsoring the show and to OpenAI for helping write this ad.

17:52 - Now, I think that's also to be differentiated, as we'll get to later in some other tips, from the case of different inputs, like what if I pass it a zero, or a 10, or something above 100, and I wanna test every one of those. That's a different story.

18:07 - Oh, yeah.

18:08 - Right, right, so we'll get to that.

18:10 All right, what's next?

18:11 - Well, since we're talking about structuring a test function, I thought it'd be great to just remind people that you have a whole suite, so it's good to think about how you're structuring your whole test suite.

18:24 And by test suite, I just mean a directory of stuff that has tests in it.

18:29 pytest allows you to have your tests interspersed with your code, but I haven't seen that for a long time.

18:35 A lot of people just have really a tests directory and that's what I'm used to.

18:40 Anyway, or a couple, a couple of directories.

18:42 Like I often have an examples directory that I want to make sure all those examples still work, like for pytest plugins.

18:49 And I also want to have the tests themselves pass.

18:52 But anyway, structuring the test directory is good.

18:55 I like, there's a bunch of ways to do it.

18:57 I like to separate based on like functionality.

19:00 So different like behaviors of a system and conceptual separations of the system into different functionality bits and separate those into directories.

19:11 You can also structure it based on like some actual software subsystems in your software.

19:17 And then some people do like the same code structure.

19:19 So they have like the exact same directory structure in their tests as they do in their source code.
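
As an illustrative sketch, with made-up names, those layouts might look something like:

```
tests/
    conftest.py          # shared fixtures
    behavior/            # split by behavior or functionality...
        test_login.py
        test_checkout.py
    subsystem/           # ...or by software subsystem
        test_database.py
        test_network.py
examples/
    test_examples.py     # e.g. example code for a plugin
```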

19:24 But I think the key is making sure you think about it, and don't just assume there's one answer.

19:30 It's really how you wanna run the tests.

19:33 And it helps me, like, if you're normally working on a subsystem at that level, then structuring by subsystem makes sense for the tests.

19:42 So you can say, I'm working on my particular bit of code, and so I'm running the tests related to that.

19:49 You know, how you're running the tests should drive how you structure them, anyway.

19:54 - Depends how you think about it, right?

19:55 And how it's organized in your brain.

19:57 - Also, there's reporting considerations.

19:59 So if you're reporting to outside people that your tests are passing in different chunks, the reporting might be easier if you're structured one way versus another.

20:10 - Structured like your software or subsystems, that's straightforward to me.

20:13 Like I've got a data access layer.

20:15 So here's my test that tests the data access layer, presumably mocking out the database, but maybe not.

20:21 When you think about having it in subdirectories, you would just have maybe subdirectories of those.

20:27 And I guess another split you could think about is really, really slow integration-style tests versus more unit tests, for lack of a better word.

20:38 Right, like these are ones I can run now.

20:41 These are the ones that take an hour.

20:44 Let me run the ones I could take now.

20:46 What's your style?

20:47 - So I'm not really that great at throwing in unit tests, 'cause I don't really see much value in them.

20:53 I know that I have to have behavior and tests that test the end user functionality.

21:00 At the point where that is testing all of my code, then I don't feel the need to add unit tests, but there are times where they are needed, which is great.

21:10 I do think it's great to separate those.

21:12 At the top of the hierarchy, separating behavior tests versus unit tests into big chunks.

21:18 >> Yeah.

21:18 >> The main reason why I like that is because I really want my behavior tests to be, if I'm going to do coverage, I really want to know if my behavior tests are covering all of my code.

21:28 It doesn't really help me much to know that my unit tests are covering all my code because that could lead to dumb tests.

21:37 There might be a corner case in my code that I can write a unit test for, but I can't reach otherwise, so I think it's better just to delete the code.

21:46 We might get our Pytest certifications taken away, or our Agile certifications taken away.

21:53 I don't really have one.

21:54 But I agree with you completely that some of these larger scale tests that test larger bits of code, they might not be 100% on catching all the little things you could test, but usually if I break something, many of those break.

22:13 You know, it's usually enough to catch the mistakes.

22:15 - Yeah.

22:16 - It's easier to write 20 big-scale tests than a thousand small ones.

22:22 - I've also never had a customer reported issue that could be reproduced as a unit test.

22:27 - Avaro out there says the pytest docs talk about a slow mark just for that use case. We will get to marks as well, but that's definitely a good recommendation.

22:39 So maybe not actually using the directory structure for slow, but using marks.

22:43 So coming back to that, but let's carry on.

22:46 - Yeah.

22:47 - That was structuring a test suite, which is excellent.

22:49 - Well, okay, so I don't know how the transition is here, but I picked fixtures as the next thing.

22:56 I think one of the first things people need to get used to with pytest is fixtures, because it is the big brain shift from any other test framework.

23:06 And it's, they're pretty cool.

23:07 It's really just a function that's split in half.

23:11 We've got a setup half and a teardown half.

23:14 And a yield keyword separates the setup and the teardown.

23:18 And pytest will run the setup before the test and then finish it up afterwards.

23:25 That's about it.
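
A minimal sketch of that split, using an in-memory SQLite connection as a stand-in resource:

```python
import sqlite3
import pytest

@pytest.fixture()
def db():
    conn = sqlite3.connect(":memory:")  # setup half: runs before the test
    yield conn                          # the test runs here
    conn.close()                        # teardown half: runs after, even on failure

def test_can_query(db):
    assert db.execute("SELECT 1").fetchone() == (1,)
```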

23:26 - Well, I think part of the transition is, you talked about the three A's, the arrange, act, and assert.

23:33 The arrange part, if that's gonna get reused, well, fixtures are pretty well built for that.

23:39 And you could have more than one fixture in a test, right?

23:41 You could say this part arranges the database and this part arranges, I don't know, some set state of the app that you're gonna, you know, make them collide and see what happens.

23:50 - Yeah, or connections to resources and all sorts of stuff, yeah.

23:54 Or data, you can have fixtures that generate data for you.

23:57 And there's many that do that.

23:58 But one of the things, I guess I should have this as one of the tips: when writing a test, especially the first test for something, just write the setup in the test. Now, when you go to write the second one, that's a good time to ask, how much of this setup is shared? And if mostly all of the setup is shared, then maybe it makes sense to throw that into one or more fixtures. I have seen people just copy it, take it and put it into a fixture, and call the fixture setup.

24:29 You can do that.

24:31 It's a little dangerous though, because what are you setting up?

24:34 I'd rather have it be maybe multiple different ones like setup database or configure the network or something like that.

24:43 Have it be descriptive.

24:44 You've got a word there, setup just doesn't say much, so say something, I guess.

24:49 Well, that goes back to the whole part of testing that is documentation of how the system works.

24:55 And part of that should be really good names for your tests.

24:59 You have the advantage that no one is ever going to try to use your test code as an API.

25:05 So it can be a ridiculous name.

25:07 It can be like 15, 20, 30 characters, because the test runner doesn't care that the name is long, and no one's going to use it.

25:16 The person that is going to look at it is either you or somebody else when something's broken and they're stressed out.

25:22 So when they're trying to get done with their day and the test doesn't work, and they're looking at the code going, "What are we doing here?" So yeah, being verbose there is fine.

25:31 Back in the RF world, let's say I'm setting up both a transmitter and receiver before a test.

25:38 I might be tempted to throw both of those in one fixture, and I have before, but I almost always end up splitting those up and have like setup transmitter, setup receiver, setup measurement system.

25:50 I have those separate because they're more reusable as parts later and stuff.

25:57 >> Right. Maybe you need a receiver, not a transmitter for some particular reason somewhere.

26:02 >> Another thing is, it's okay to not reuse fixtures, and they can be in the same file.

26:08 If you just have this huge setup and a little tiny do something section, it's really nice to just throw that into a fixture.

26:17 There's lots of reasons to throw that in the fixture.

26:20 One of the great reasons is you can put asserts in the fixture.

26:24 And you don't want to sprinkle asserts through your test because then your test fails and you're like, was the setup failure or not?

26:32 But pytest is awesome in that if the assert happens in the fixture, it doesn't report the test as a failure, it reports it as an error.

26:39 So fixture asserts are errors, and then you can separate things out when it looks like your entire system's failing, but there's really only one failure and all the rest of them are errors.

26:50 It might be that like you're just not connecting to the database or something like that.

26:56 Yeah, interesting.

26:57 Out in the audience, Jeff says, "One thing I missed on my first trials with pytest is the differentiation between error and fail." Yeah.

27:03 Which sounds a lot like what you're talking about there.

27:06 Oh yeah. And his comment around unit test, because unit test is a little different.

27:11 unittest makes the distinction, I think, between AssertionError versus other exceptions.

27:16 So I think that's the case in unit test, that if it's an assertion error, it's a failure.

27:23 And if it's any other exception, it's an error.

27:27 Py test, completely different.

27:29 Any exception, assertion or otherwise, that happens within the test itself is a failure.

27:35 And any exception that happens that's uncaught in a setup or in a fixture, that's an error.

27:40 - Oh, that's cool.

27:41 I didn't realize that differentiation.

27:43 Also a question from Thomas.

27:45 If you're just having the fixture there to provide data, is it necessary to use yield instead of just returning the value, the data?

27:52 - I usually just return the value.

27:54 I only use yield if I have some work to do for a teardown.
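
So a pure data fixture can be as simple as, for example:

```python
import pytest

@pytest.fixture()
def sample_record():
    # no teardown needed, so a plain return is fine
    return {"name": "alice", "id": 42}

def test_record_has_name(sample_record):
    assert sample_record["name"] == "alice"
```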

27:58 - I think also, it's just kind of interesting, just that yield.

28:02 Now what a clever use of generators, right?

28:05 - It's very clever and also very nice, because you can have variables there that are needed for the cleanup, but you don't need to return them to anybody or save them in a global variable.

28:18 They can just be in the function and that's it.

28:21 So like, you know, database, connect to the database, keep a handle to the database so that you can close it at the end.

28:27 It's very, very clean.

28:28 - Right, or start a transaction and roll it back.

28:30 So whatever you did to it, it's unaffected, yeah.

28:33 - Oh yeah.

28:33 - All right, what's next?

28:34 That was fixtures?

28:35 - Well--

28:36 - Or unless you got more fixture.

28:37 - You made this comment and I'm like, I've been doing pytest so long that I forgot about it.

28:42 - Old-time pytest stuff had addfinalizer, before we kind of settled on the yield system.

28:48 I would say avoid addfinalizer, it's just gonna confuse people, so don't do that.

28:53 Also, you can nest them, so leveraging, using scopes.

28:58 So you can have, like, connecting to a database be a session-scoped fixture, and then cleaning up the database be a function-scoped thing, so that you save time.
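
A sketch of that nesting, again with SQLite standing in for an expensive resource:

```python
import sqlite3
import pytest

@pytest.fixture(scope="session")
def conn():
    # the expensive connect happens once per test session
    c = sqlite3.connect(":memory:")
    c.execute("CREATE TABLE items (name TEXT)")
    yield c
    c.close()

@pytest.fixture()
def clean_db(conn):
    # the cheap per-test cleanup reuses the session-scoped connection
    conn.execute("DELETE FROM items")
    return conn

def test_starts_empty(clean_db):
    assert clean_db.execute("SELECT COUNT(*) FROM items").fetchone() == (0,)
```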

29:08 And then conftest.py files: if you wanna share fixtures between test files, you can throw the fixture in a conftest.py file.

29:14 >> Yeah, it's not necessarily obvious that if I put a fixture and then I have a bunch of tests below in the same file, it's obvious I can just use it.

29:21 But then if I go to another one, I could be like, well, what about those over there?

29:25 I want to share them across these files.

29:26 So this conftest.py, this is what that's about, right?

29:28 >> Yeah, and a lot of people think you can, or their first attempt is to throw them all into a different module and import the module.

29:35 You can't do that.

29:36 Don't do that.

29:36 And you never want to import the conftest file.

29:39 It's not an importable thing.

29:40 It's just pytest deals with it.

29:42 >> Yeah, indeed.
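
For illustration, the sharing looks something like this, with no import statement anywhere; the fixture name here is made up:

```python
# tests/conftest.py -- pytest discovers this file automatically
import pytest

@pytest.fixture()
def base_url():
    return "https://example.com"  # illustrative value

# tests/test_home.py -- any test here can request the fixture by name
def test_home_url(base_url):
    assert base_url.startswith("https")
```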

29:44 >> Okay. On fixtures, there's a bunch of built-in ones that are super cool.

29:48 If you've used pytest for a long time, we used to have, and we still do, a couple of fixtures called tmpdir and tmpdir_factory.

29:58 But there's newer ones, they've been in for a while, but some people don't know about them, called tmp_path and tmp_path_factory.

30:05 They use pathlib.Path objects, which are awesome. So use those if you can.

30:11 Took me a while to love path, the path class, but I love it now. It's really nice.

30:16 I mean, the old one was like just this py.path.local object, which was very undocumented.

30:23 So I don't recommend it.

30:24 The temp files within pytest are great. If you're generating a file or whatever, you want to save some CSV stuff, it's good.

30:33 It sticks around too, which is kind of cool.

30:35 It sticks around for a little while. So you can interrogate your temp files like after a test run is done.

30:42 You can look at the, and if you're trying to debug the failures, those temp files will still be there.

30:48 They're not cleaned up directly after, they're cleaned up in a future test run.

30:51 >> That's interesting.

30:52 >> Yeah.

30:52 >> They're like a N minus one or N plus one lifespan.

30:56 Yeah. There's a bunch of built-in fixtures.

31:00 There's only a handful I use very much.

31:01 I mostly use the temp path ones.

31:03 So there's tmp_path and tmp_path_factory.

31:06 The factory versions are there because tmp_path is per test; like, for every function, it gets generated.

31:15 You can't use it if you've got a session-scoped fixture.

31:20 The factory ones are session-scoped.

31:22 If you want anything larger than function scope, use the factory to generate a temp directory. Use that.
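
A quick sketch of both, with illustrative file contents:

```python
import pytest

def test_write_csv(tmp_path):
    # tmp_path is a fresh pathlib.Path directory for this test
    out = tmp_path / "data.csv"
    out.write_text("a,b\n1,2\n")
    assert out.read_text().startswith("a,b")

@pytest.fixture(scope="session")
def shared_dir(tmp_path_factory):
    # the factory version works in wider-than-function scopes
    return tmp_path_factory.mktemp("shared")

def test_shared_dir_exists(shared_dir):
    assert shared_dir.is_dir()
```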

31:30 >> Cool.

31:30 >> capsys, if you're checking your output, is good for checking the output of something, the standard out or standard error.

31:39 >> Because pytest captures and eats some of it, right?

31:42 >> Yeah, by default, pytest will always capture the errors and output, and it prints it out for failing tests.

31:50 It'll say, "Oh, here's the output for the test and it failed." That's helpful, but it's normally gone.

31:57 You can use capsys also to disable that capturing for temporary bits of your code.

32:02 If you want to throw a log out there all the time or something, you can use that. But I usually use it just to look at the output.

32:09 Especially with pytest plugins, I want to see if I've modified the output, I want to see the output, so I can use that to grab that.
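
A minimal capsys sketch:

```python
def greet():
    print("hello")

def test_greet_output(capsys):
    greet()
    captured = capsys.readouterr()  # grabs stdout/stderr captured so far
    assert captured.out == "hello\n"
```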

32:16 There's monkeypatch as well.

32:18 You can use this for all sorts of stuff, but if I'm doing fancy things, I usually actually just use unittest.mock.

32:28 But for things like changing your environment, it's great.

32:31 So you can change environmental variables or quick patches, it works great.

32:37 The neat thing about these, other than just doing it yourself, is that it cleans up afterward.

32:42 If you patch a system with a dummy bit of system or something, after your test is done, it goes back to what it was before. So that's pretty cool.

32:52 >> Yeah. Because otherwise, you can end up with a problem of the order of operations that's left in this half patched state where if something else depends upon it, right?
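
A small monkeypatch sketch for the environment-variable case; the USER variable is just an example:

```python
import os

def current_user():
    return os.environ.get("USER", "unknown")

def test_current_user(monkeypatch):
    # setenv is undone automatically when the test finishes
    monkeypatch.setenv("USER", "testuser")
    assert current_user() == "testuser"
```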

33:01 - pytestconfig is used for grabbing command-line flags a lot.

33:05 That's mostly what I use it for.

33:06 And then the only thing I usually use request for anymore is if from a fixture, I want to know what the test name was.

33:13 You can use request.node.name to grab the test name.

33:17 I don't think I use it for anything else anymore, except for parameters to grab the parameter values.

33:23 Yeah, anyway.

33:24 - Nice.

33:25 All right.

33:25 I pointed out marks, or the audience did.

33:28 They pointed out marks, and here we are.

33:31 Marks.

33:32 Markers.

33:33 pytest.mark.whatever.

33:35 You can use custom markers.

33:36 Markers are great, but don't overdo it.

33:37 When I learned about markers, I put them everywhere, and then I'm like, oh, that just ends up being messy.

33:44 It can be, but it's a great way; it's like just adding a tag to a test or a test case, and you can use it to select what to run.

33:55 So you can say, I want to run all the tests that are marked, like user interface, you can run all the UI tests.

34:01 If you didn't separate them by directory.

34:03 Or like somebody said, you can mark all the slow ones and only run the slow ones or avoid running the slow ones.

34:10 >> You can do a not in your execution.

34:13 You can say run the things not marked slow.

34:15 >> Yeah. You just say, well, it's -m, I should throw that in there, -m, like, not slow.

34:22 - Got it.

34:23 - But it's two words, so you have to put it in quotes, like -m, quote, not slow.

34:27 It'll work.

34:28 And you can mark files with a magic word, magic keyword called pytestmark, with no spaces.

34:34 If you throw that in your file, pytest will see it.
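
As a rough sketch of all of that, with made-up marker names (custom markers should also be registered under the markers setting in pytest.ini so pytest doesn't warn about them):

```python
# test_ui.py -- 'ui' and 'slow' are illustrative custom markers
import pytest

# pytestmark applies a mark to every test in this file
pytestmark = pytest.mark.ui

@pytest.mark.slow
def test_full_page_render():
    assert True  # stand-in body

def test_quick_check():
    assert True  # stand-in body
```

Then pytest -m ui selects both tests, and pytest -m "not slow" deselects the first one.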

34:37 There's a bunch of built-in ones.

34:38 Marks, the ones that I think are probably most common are skip, skipif, and xfail.

34:43 - X is you expect it to fail?

34:45 Like, I know it's failing, but that's okay?

34:47 - Yeah, so a lot of people might think, why would you ever expect a test to fail?

34:52 You should just fix it.

34:52 - No, I know, no, no, no, no.

34:54 It's Friday, three o'clock, you got plans.

34:57 What, you gotta fix the build.

34:59 - Yeah.

35:00 - No, seriously though, why would you use this?

35:01 - Believe it or not, some people are not responsible for all the code.

35:04 - Yeah.

35:05 - There's teams.

35:06 So one great reason to use XFail is to submit a defect, and then you say, I know this test is failing because of this issue.

35:16 You've submitted a defect, and then you throw the defect number in the xfail reason string and move on.

35:23 Now your build is still working.

35:26 And there's, but just be careful.

35:28 I mean, XFail is this big thing.

35:30 So whether or not you use xfail, your entire software team needs to understand it and agree on the process, because there needs to be a process around how to utilize xfail, because it can just sort of hide failures.

35:45 and you don't want that.

35:46 - Yeah.

35:47 - That's one of the reasons why I really like xfail strict.

35:49 It makes it so that, if you mark a test as xfail and it passes, without strict it'll just pass.

36:00 Well, it XPASSes.

36:04 XPASS, which means I expected it to fail, but it passed.

36:07 But I like to just have it be a failure, so that somebody can look at it and go, "Oh yeah, we need to take the xfail off and close the defect," or something like that.
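
A sketch of that workflow; the defect number and the helper are made up:

```python
import pytest

def get_status(path):
    # hypothetical stand-in for a real HTTP check
    return 404

@pytest.mark.xfail(reason="defect #1234: contact page returns 404", strict=True)
def test_contact_page():
    # with strict=True, an unexpected pass (XPASS) is reported as a failure
    assert get_status("/contact") == 200
```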

36:16 - This portion of Talk Python to Me is brought to you by Brilliant.org.

36:22 You are a curious person who loves to learn about technology.

36:25 I know because you're listening to my show.

36:27 That's why you would also be interested in this episode's sponsor, Brilliant.org.

36:31 Brilliant.org is entertaining, engaging, and effective.

36:35 If you're like me and feel that binging yet another sitcom series is kind of missing out on life, then how about spending 30 minutes a day getting better at programming or deepening your knowledge and foundations of topics you've always wanted to learn better, like chemistry or biology, over on Brilliant.

36:51 Brilliant has thousands of lessons, from foundational and advanced math to data science, algorithms, neural networks, and more, with new lessons added monthly.

37:00 When you sign up for a free trial, they ask a couple of questions about what you're interested in, as well as your background knowledge.

37:06 Then you're presented with a cool learning path to get you started right where you should be.

37:10 Personally, I'm going back to some science foundations.

37:13 I love chemistry and physics, but haven't touched them for 20 years.

37:16 So I'm looking forward to playing with PV equals NRT, you know, the ideal gas law, and all the other foundations of our world.

37:25 With Brilliant, you'll get hands-on on a whole universe of concepts in math, science, computer science, and solve fun problems while growing your critical thinking skills.

37:34 Of course, you could just visit brilliant.org directly.

37:36 Its URL is right there in the name, isn't it?

37:38 But please use our link because you'll get something extra, 20% off an annual premium subscription.

37:44 So sign up today at talkpython.fm/brilliant and start a seven day free trial.

37:49 That's talkpython.fm/brilliant.

37:51 The link is in your podcast player show notes.

37:54 Thank you to brilliant.org for supporting the show.

37:56 - The other thing that people should be aware of, that I don't think a lot of people know, is the --runxfail flag.

38:05 And this is especially useful to just say, okay, screw it, ignore all the xfails and just run as if I hadn't marked them xfail.

38:13 Because maybe they are fixed and you don't know.

38:15 Maybe they didn't take away the xfail.

38:17 Yeah, but they might. Or, like in a CI system for instance, most CI systems don't understand all of the different variations of outputs from pytest.

38:30 They don't understand XPASSes, xfails, and skips, and all that sort of stuff.

38:35 A lot of times, xfails and XPASSes just show up as plain passes and failures.

38:42 So you don't want it just to pass everything.

38:45 So --runxfail is for when you just want to say, I want to run everything, and if there's any failure, I want to see it.

38:51 So that's good.

38:53 But anyway, just be careful with xfails.

38:56 I've seen it confuse people.

38:58 - Yeah, it makes sense.

39:00 What's the story with skip and skipif?

39:02 - I guess it's the same.

39:04 I mean, like, why are you skipping something?

39:05 I guess you have to be careful.

39:07 So, skip is just skip this test.

39:10 It doesn't run it at all.

39:11 And with skipif, you can put logic in there. A great example of skipif is if you've got operating-system-specific chunks of tests or chunks of code or something.

39:28 - Skip if platform equals Darwin.

39:31 - Yeah.

39:31 - Skip the macOS ones.

39:32 - Something like that.

39:33 - You got no chance.

39:34 Or if we're talking coverage and unit tests again, for example, maybe you've got functionality that depends on Python 3.12, but you also wanna test on Python 3.7.

39:48 And so you know some code is only gonna run, you're running different code for the same functionality on two Pythons.

39:55 You might wanna have two tests: one of them gets run on Python 3.11 or 3.12, and one of them gets run on all of the other versions.

40:04 >> And you can use skipif to gate those.
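
For example, something like:

```python
import sys
import pytest

@pytest.mark.skipif(sys.platform == "darwin", reason="not supported on macOS")
def test_non_mac_feature():
    assert True  # stand-in body

@pytest.mark.skipif(sys.version_info < (3, 12), reason="needs the Python 3.12 code path")
def test_new_python_code_path():
    assert True  # stand-in body
```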

40:06 >> Interesting. >> Yeah.

40:07 >> Okay, yeah, that's really cool.

40:08 Hey, before we move on, we've got an interesting question or idea out here from Jeff in the audience, who also is a hardware tester.

40:16 Said, I'd like to distribute fixtures in some way to people as a Python package.

40:21 >> That's a great idea.

40:23 >> Yeah, what do you think about that?

40:24 >> I think that's a plugin.

40:25 >> Okay.

40:26 >> Let's jump to plugins then.

40:28 >> Let's do it. >> Do I have a plugin section?

40:30 I didn't, maybe I don't.

40:31 Let's go to the top.

40:33 - Notes for a new section?

40:35 - Yeah, plugins.

40:37 - It's a living blog post.

40:38 - Yeah.

40:39 Yes, I think it's important to be able to package them as plugins.

40:43 And plugins are kind of a little advanced thing.

40:47 We cover using plugins in the course, but in three and a half hours, I don't cover how to write a plugin.

40:55 There's a ton of plugins; you've got the pytest plugin list in the pytest docs.

41:00 But also you can search for them; they're usually named pytest-something.

41:04 So you can search for that on PyPI as well and see a bunch of plugins.

41:07 - Yep, you even have some out there for yourself, right?

41:09 - Quite a few, actually. - There's a lot.

41:10 I mean, I'm scrolling and scrolling, I'm still in the dash A.

41:14 (laughing)

41:16 That's a lot of content there.

41:18 So I guess one tip is people should just go scroll through that list and go, look at all these things they could just fixture into their code, right?

41:25 - Or, one option is to go to PyCascades this year and watch my talk, because I'm giving a talk at PyCascades about packaging pytest fixtures.

41:36 - That's cool.

41:37 When is that?

41:38 - It's in March.

41:39 I should look it up.

41:40 - Nice.

41:40 - Real time.

41:41 - Yeah, I'm pretty sure those videos will be online afterwards if people are not at the conference in Vancouver.

41:46 Although Vancouver is lovely.

41:47 - Oh yeah, they'll be online.

41:48 And I'm also gonna publish the slides.

41:51 I just got the slides done.

41:52 So it's March 18th through the 20th.

41:55 And I think mine's on the 19th.

41:56 - All right, nice.

41:57 - So anyway.

41:58 - What section do you want to do next?

41:59 we got a little bit more time.

42:01 - So we talked about markers and fixtures.

42:03 Parameterization is definitely something I think people should learn about.

42:07 Because I've seen a lot of test writing utilize copy, paste, modify.

42:14 It should be a red flag for all software engineers, but for some reason copy, paste, modify happens a lot in test code.

42:23 You got a bunch of tests that are kind of the same, and you just take one that's similar to what you need and change it.

42:31 And you end up with a lot of test code that way.

42:34 And one way to fix it is to use parameterization.

42:38 - Yeah, anytime you see this is happening over and over again in my code, it's a code smell, right?

42:45 You should know there's some refactoring.

42:46 Or alternatively, Brian, you could get this fancy new Stack Overflow keyboard.

42:52 (Brian laughs)

42:53 - That's awesome.

42:55 >> Which has three keys.

42:56 >> That's exactly. Go ahead.

42:58 >> Three keys, one of them goes to Stack Overflow, one of them is copy, and one of them is C and V, so copy and paste.

43:06 That's awesome.

43:08 >> Power of copy and paste, indeed.

43:10 >> I assume you have to have a mouse connected to select the stuff, too.

43:15 >> Yeah, probably.

43:16 >> It really does happen a lot; people copy another test, change what they need, and then run it.

43:23 Now there's a bunch of problems with that.

43:26 One is people sometimes forget to change the test name.

43:29 And you can have two functions with the same name in Python, and it just runs the second one.

43:37 So that's one of the reasons why I'd like to also run coverage.

43:41 If I'm going to run coverage, I want coverage on my tests too.

43:44 So I, and to make sure I have a hundred percent test code coverage.

43:47 So what happens when you run into that scenario on pytest?

43:50 Does it just pretend the first one wasn't there and it got overwritten before it got to it?

43:55 - Yeah, just like in any other Python module, if you write the function name again, and even if you have different parameters--

44:01 - It's so easy to do, it doesn't care.

44:03 - Python doesn't care.

44:04 - So different web frameworks will handle this differently.

44:07 Flask will throw an error and say, you've tried to use this function before, no, and you do an app.get or something on it with the decorator.

44:16 But for example, Pyramid, which I've used a lot, just erases it.

44:21 So you just end up getting like 404s for whatever was there before.

44:24 You're like, "Whoa, it was just working.

44:26 "Where did it go?

44:27 "I didn't even touch that part of the program "and it's just gone.

44:30 "It's like, I don't understand." You know, and it's, I can only see that it's even less obvious with pytest.

44:38 Like, how much would you notice, when it goes dot, dot, dot, dot, dot, that it didn't add a dot when you added a test?

44:44 Might not.

44:45 (laughing)

44:46 - No, well, yeah, it's dangerous.

44:49 But okay, so you get around that.

44:51 It's the other thing of just like thinking about it.

44:53 So if I write a test to begin with, and I think, okay, really I'm just making like a webpage thing.

45:04 I just wanna make sure this page gets a 200.

45:06 Is it 200, right?

45:07 For the good? - Yeah, yep.

45:09 Yep. - And I wanna make sure it gets a 200 and the title's right?

45:11 Or something like that.

45:13 Now, I might have just a list.

45:14 I mean, that would be an easy test just to make sure all my pages, normal pages are alive, is to just go through and test all those.

45:22 Now I could either just have a list of all the different pages I wanna go to and just ping through those.

45:27 That could be a loop within my test, but then the assert at the bottom doesn't count as one assert, because you're asserting through the whole loop.

45:37 Instead, just make that a parameterization and go through all the different pages you wanna hit.

45:43 And for each of those pages, make sure it's a 200.

45:46 And then you can also like have the title in the parameterization to say, this is the page, this is the title.

45:52 Now for each of those, go through and test it.

45:55 And those are different tests.

45:57 And it's gonna be almost as easy to write one test as it is to write now a bunch of test cases with parameterization.

46:05 But pytest has a whole bunch of cool parameterization tricks.

46:09 You can do function parameterization, you can parameterize a fixture, and you can even use pytest_generate_tests to do some fancy parameterization.

46:18 For the most part, if you're new to it, stick with function parameterization.

46:22 It's powerful and hopefully that's all you need.
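
A sketch of that page-checking idea; fake_get is a hypothetical stand-in for a real HTTP client call:

```python
import pytest

def fake_get(page):
    # hypothetical stand-in for requesting a real site
    titles = {"/": "Home", "/about": "About", "/contact": "Contact"}
    return 200, titles[page]

@pytest.mark.parametrize(
    "page, title",
    [("/", "Home"), ("/about", "About"), ("/contact", "Contact")],
)
def test_page_is_alive(page, title):
    # each (page, title) pair becomes its own test with its own result
    status, got_title = fake_get(page)
    assert status == 200
    assert got_title == title
```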

46:24 - Yeah, if you've got all these different cases to test.

46:27 I mean, the value of testing often is to give it the good value and see the good value comes out.

46:33 - Yeah. - That's true.

46:34 But it's also really valuable to give it all those weird edge cases where you wanna check boundaries.

46:38 Like if I give it one less than it should have, it should tell me that's an error instead of crash.

46:43 if I give it something, you know, like just all the little weird situations.

46:47 So testing all the failing cases and having those scenarios as a parameterized story is nice.

46:54 - And one of the comments, which I have seen before, and I kind of agree with, is that my code is DRY and my tests are WET.

47:00 What that means is, people can go overboard with DRY in tests to the point where you can't understand what's going on.

47:12 And so for, especially for tests, you want tests to tell a story of I'm doing this thing and I did this other action.

47:21 And then now I can tell that it works because of this.

47:24 And if you break that story up too much, then you don't know, you don't know what the story is.

47:30 If you hide all of your asserts in a helper function that just says like check stuff, you don't know what you're checking and it hides it too much.

47:39 If you're gonna do that, make sure that you like name it something that is meaningful.

47:44 And I like to have all of my assert helpers be start with assert.

47:48 So like I could say, assert 200 and correct title, for instance, you could do that.

47:53 That'd be fine.

47:54 But one of the reasons for parameterization isn't just to type less, it's to be focused on what's failing.

48:02 So let's say in that case I had before, my test failed with the loop and I could say, well, okay, So one page on my website isn't working.

48:12 Which one?

48:13 I have to go figure that out.

48:14 I have to look at the error message.

48:15 But if I had them iterated on the page name, I could go, oh, my contact one isn't returning.

48:21 So there's something wrong with my contact page.

48:23 And I know exactly where to go.

48:25 Isolating the test failure is good.

48:27 - Yeah, there was a comment before about how if you have multiple asserts, you might not see all of the errors, all the details about that.

48:34 And we talked a little bit about that too.

48:35 And this helps show the status for the different parameters.

48:40 Instead of I just loop through all the options and make sure they all pass or there's an error.

48:45 - Yeah, and like a website, for instance, there might be two pages failing.

48:50 Whereas if you had them all in a loop, you'd only see the one.

48:52 You're like, oh, the contact page was broken.

48:55 I'll go fix that.

48:56 And you come back, oh, something else is broken.

48:58 Whereas if it had like three failures, you'd be like, oh, like seven of them are failing?

49:04 all of a sudden something else must be wrong.

49:06 >> Yeah, related on that same side.

49:09 In my mind, parameterization taken to the maximum is things like Hypothesis, where you don't even tell it the parameters.

49:18 You're like, "Vary some ideas and give it to the test." What do you think about this? Do you find this useful for you?

49:24 >> I do. Hypothesis is an awesome tool.

49:27 It helps you think about a problem differently, because you can't test add by making sure that it returns four; it's only going to return four in particular cases.

49:43 But you can say, "Hmm, maybe test a whole bunch of positive numbers, and I want to make sure that the result is positive." There's like these aspects of your system that you can test for.

49:55 But the other thing that Hypothesis is awesome at isn't actually testing the output, it's just making sure your code doesn't blow up.

50:04 So throwing Hypothesis at systems, I think the first awesome thing about it is just that it tests some corner cases that your code might not handle right.

50:13 So anything that throws an exception is gonna get dealt with, as you know, pytest is gonna fail because an exception's hit.

50:22 So that helps.

50:23 - Maybe not everyone is familiar with Hypothesis.

50:25 Maybe just tell them a little bit about how it works, and how it's like parameterization, but not exactly.

50:30 - Well, with Hypothesis, you set up strategies and different things, and they're decorators you put on top of your test.

50:38 And then, like, you've got an example of, given a string from the text strategy, and then you have s as a test parameter.

50:45 So somehow hypothesis will fill in the variables that you put there.

50:52 Like normally if a test had a parameter, it would either be a parameterization or a fixture.

50:57 But hypothesis utilizes that also and fills it in with hypothesis values.

51:03 And so if you say it's a string, it'll come up with a whole bunch of them, and it'll run your test a whole bunch of times; I don't remember what the default is, but it's quite a few.

51:13 It also checks the time, I think.

51:16 It makes sure it doesn't run for hours or something like that, and you can tell it how thorough to be.

51:22 And it just like makes up stuff.
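
A minimal Hypothesis sketch of that shape, checking a round-trip property rather than one specific value:

```python
from hypothesis import given, strategies as st

def roundtrip(s):
    return s.encode("utf-8").decode("utf-8")

@given(st.text())
def test_roundtrip(s):
    # Hypothesis fills in s with many generated strings per run
    assert roundtrip(s) == s
```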

51:24 But the people behind Hypothesis actually are pretty good at coming up with some decent test cases that break a lot of kinds of software.

51:31 That bit of the old style, where as a test engineer you come up with wacky values, you don't need that anymore.

51:41 You can just have hypothesis come up with wacky values for you.

51:44 >> Right. Think of strategies of, well, these scenarios we should try to run through and just have it automatic.

51:50 >> Things that you don't know are constraints on your system. Like maybe Hypothesis tells you, guess what, your input system breaks on all German names or something like that, or Unicode.

52:05 And you're like, oh, yeah, actually I don't, that's neat, but I don't actually expect it to ever get called with Unicode.

52:11 So you can restrict the strategies and stuff.

52:15 - Yeah, last thing on this one.

52:17 Jeff asks, how reproducible are these?

52:18 And I see that hypothesis says, it'll remember failing tests, yeah.

52:23 - I can't hear you.

52:23 - Yeah, just maybe the last thing on hypothesis here is, you know, Jeff asks, "How reproducible are tests with hypothesis?" - I don't know.

52:30 - They do say that it remembers the failing examples, in like a SQLite database or something.

52:36 So maybe it'll replay that, essentially.

52:40 And it'll try the failing ones from before, but I haven't played with that either.

52:42 - I think it reports like some seed thing or something that you can reseed it to be the same run or something like that.

52:49 - There's a whole section on reproducing failures here.

52:51 And it does say you can provide examples: in addition to the random stuff it picks, please also do these things.

52:58 And so I suppose you could take a failing one and put it in there, or if you always do it with the same seed, its randomness becomes deterministic.

53:07 Which is kind of odd, but.

53:12 - Pseudo-random is part of CS, yes.

53:14 - Yes, indeed.
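
For reference, a sketch of pinning a known input alongside the generated ones, using Hypothesis's example decorator:

```python
from hypothesis import example, given, strategies as st

@given(st.text())
@example("ß")  # always also try this specific input
def test_roundtrip_with_pinned_case(s):
    assert s.encode("utf-8").decode("utf-8") == s
```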

53:15 All right, well, Brian, we're pretty short on time.

53:18 What else do you want to throw out there real quick before we--

53:21 I want to circle back to the beginning and just say, pytest can do a whole bunch of cool stuff.

53:27 Don't do it all at once.

53:29 Gradually add bells and whistles, especially if you're working on a team, because it's a different mindset.

53:36 So make sure that the team is all up to speed.

53:39 You want to make sure that, like all software, don't design a system so complex that you're not smart enough to debug it.

53:45 I love thinking about that.

53:46 That's a really good way to put it, because if you write the most clever code that you can, you're right at the limit of your ability to like keep it in your mind and understand it and debugging code is harder than writing code.

53:57 So you're not qualified to debug it; you can't write code that your brain can't pay the check for, or whatever.

54:05 Yeah, I can't remember who said that first, but it's definitely very true.

54:10 It is indeed.

54:11 Yeah.

54:12 Awesome.

54:13 Well, thank you for putting this together.

54:14 Obviously, I'll link to this in the show notes.

54:15 People can check out your course, they can check out your book.

54:18 And yeah, it's all your other pytest things.

54:21 I'm looking forward to having test and code back.

54:24 And also everybody that's listening here should be listening to Python Bytes.

54:28 I think you'll enjoy it.

54:29 I agree.

54:30 A lot of fun over there.

54:31 All right.

54:32 Thanks a lot, Michael.

54:33 Yeah.

54:34 Thank you for being here, Brian.

54:35 Thank you everyone for listening.

54:36 See y'all later.

54:37 This has been another episode of Talk Python to Me.

54:39 Thank you to our sponsors.

54:41 Be sure to check out what they're offering.

54:43 It really helps support the show.

54:45 Don't miss out on the opportunity to level up your startup game with Microsoft for Startups Founders Hub.

54:49 Get over six figures in benefits, including Azure credits and access to OpenAI's APIs.

54:55 Apply now at talkpython.fm/foundershub.

54:58 Stay on top of technology and raise your value to employers or just learn something fun in STEM at brilliant.org.

55:05 Visit talkpython.fm/brilliant to get 20% off an annual premium subscription.

55:11 Want to level up your Python?

55:14 We have one of the largest catalogs of Python video courses over at Talk Python.

55:18 Our content ranges from true beginners to deeply advanced topics like memory and async.

55:23 And best of all, there's not a subscription in sight.

55:25 Check it out for yourself at training.talkpython.fm.

55:28 Be sure to subscribe to the show, open your favorite podcast app, and search for Python.

55:33 We should be right at the top.

55:34 You can also find the iTunes feed at /itunes, the Google Play feed at /play, and the direct RSS feed at /rss on talkpython.fm. We're live streaming most of our recordings these days.

55:47 If you want to be part of the show and have your comments featured on the air, be sure to subscribe to our YouTube channel at talkpython.fm/youtube. This is your host, Michael Kennedy. Thanks so much for listening. I really appreciate it. Now get out there and write some Python code.

56:01 [MUSIC PLAYING]

