
#464: Seeing code flows and generating tests with Kolo Transcript

Recorded on Thursday, May 9, 2024.

00:00 Do you want to look inside your Django requests?

00:02 How about all of your requests in development and see where they overlap?

00:06 If that sounds useful, you should definitely check out Kolo.

00:10 It's a pretty incredible extension for your editor.

00:13 VS Code at the moment, more editors to come most likely.

00:15 We have Wilhelm Klopp on to tell us all about it.

00:19 This is Talk Python to Me episode 464 recorded May 9th, 2024.

00:28 You're listening to Michael Kennedy on Talk Python to Me.

00:31 Live from Portland, Oregon.

00:33 And this segment was made with Python.

00:35 Welcome to Talk Python to Me, a weekly podcast on Python.

00:41 This is your host, Michael Kennedy.

00:43 Follow me on Mastodon where I'm @mkennedy and follow the podcast using @talkpython, both on fosstodon.org.

00:50 Keep up with the show and listen to over seven years of past episodes at talkpython.fm.

00:56 We've started streaming most of our episodes live on YouTube.

00:59 Subscribe to our YouTube channel over at talkpython.fm/youtube to get notified

01:04 about upcoming shows and be part of that episode.

01:06 This episode is sponsored by Sentry.

01:09 Don't let those errors go unnoticed.

01:11 Use Sentry.

01:12 Get started at talkpython.fm/sentry.

01:15 And it's also brought to you by us over at Talk Python Training.

01:19 Did you know that we have over 250 hours of Python courses?

01:24 Yeah, that's right.

01:25 Check them out at talkpython.fm/courses.

01:28 Well, welcome to Talk Python to Me.

01:31 Hello.

01:32 Yeah.

01:32 Excited to be here, Michael.

01:33 I've been listening to Talk Python like for, I can't even remember how long, but

01:36 I'm pretty sure it was before I had my first Python job.

01:38 So yeah, a long, long time.

01:40 That's amazing.

01:42 Well, now you're helping create it.

01:44 Yeah, exactly.

01:44 We're going to talk about Kolo, your Visual Studio Code Django...

01:49 I don't even know what to call it.

01:50 It's pretty advanced, pretty in-depth; "extension" seems to be not quite enough.

01:54 In any case, it's a project people are going to really dig, especially people who use Django, and

01:59 we'll see what the future plans are,

02:01 if we can talk you into other ones, but for now, Django plus.

02:06 100%.

02:06 Yeah.

02:07 Yeah.

02:07 Django plus VS Code is going to be super interesting.

02:09 Before we get to that, of course, you know the drill.

02:12 Tell us a bit about yourself.

02:13 Yeah, for sure.

02:14 So my name is Will.

02:15 I've been using Django since, well, I guess I've been using Python since about

02:20 2013, I want to say.

02:22 So a little over 10 years and yeah, just kind of like fell in love with it.

02:26 Wanted to make websites, started using Django and yeah, I guess never really

02:31 looked back.

02:32 That was in school back then, but kind of always had a love for like tinkering and

02:37 building side projects.

02:38 I actually studied, I did a management degree in university, but I really loved

02:42 hanging out with all the computer science kids, all the computer science students.

02:44 And I think a part of me really wanted to impress them.

02:47 So I was always building side projects.

02:49 And one of them was actually a Slack app called SimplePoll.

02:52 And yeah, we were trying to like, you know, organize something in Slack and really

02:56 felt like the need for polls.

02:57 So built this little side project just like during university.

03:01 And then it became really, really popular.

03:02 And a few years later it became my full-time job.

03:06 So for the past like four years, I've been running SimplePoll as a Slack app,

03:11 building out the team up to like seven, eight of us.

03:13 And I had a great time doing that.

03:15 In the middle, I actually worked at GitHub for two years, working on Ruby and Rails.

03:19 And that was super fun, like a great company, great people, huge code base.

03:23 Learned a lot there.

03:24 That was really fun.

03:25 But yeah, I left after about two years to work full-time on SimplePoll.

03:29 So SimplePoll had been running as a side project kind of in the background.

03:31 And actually it's interesting, like, in kind of the order of events, thinking

03:35 back, Microsoft acquired GitHub while I was there.

03:39 And then suddenly all of my colleagues started talking about buying boats and

03:45 leaving the company.

03:45 And I thought, hmm, I don't quite have boat money, but how can I, what's an ace

03:53 I might have up my sleeve?

03:54 And it was SimplePoll, which had got like tons of users, but I never monetized it.

03:58 So I set out to monetize it.

04:00 And then a year later it was actually bringing in more revenue than my salary

04:04 at GitHub.

04:04 So I decided, decided to quit.

04:06 So that's kind of the SimplePoll backstory.

04:08 So SimplePoll is a Django app, reasonably sized now, a bunch of people working on

04:12 it.

04:12 And then, yeah, at some point in the journey of building SimplePoll, I kind of

04:16 started playing around with Kolo.

04:18 So Kolo also kind of, just like SimplePoll started as a side project, but now not to

04:22 make polls in Slack, but instead to improve my own developer experience building

04:27 SimplePoll.

04:27 So kind of built it as an, as my own tool for making Django, working with Django

04:33 more fun, give me more insight, give me access to some of the data that I felt

04:36 was so close, but that I had to just like manually get in there and print out.

04:41 So the reason Kolo started out as supporting just Django and VS Code is because

04:46 that's what I was using and it was an internal side project.

04:48 And now I've actually handed over SimplePoll to a new CEO.

04:53 I'm no longer involved day to day and I'm working full time on Kolo.

04:57 Man, congratulations on like multiple levels.

05:01 That's awesome.

05:01 Thank you.

05:02 Yeah.

05:02 I want to talk to you a bit about SimplePoll for just a minute, but before then you

05:07 pointed out like, look, I made this side project and how many hours are you

05:11 spending on it a week maybe?

05:12 Oh, it was interesting.

05:13 So honestly, like, this was right at the beginning, like when it

05:16 first started.

05:17 It's a good question.

05:19 I always joke that the best thing about my management degree was that I had

05:22 a lot of free time to build side projects.

05:25 Honestly, I think it could have been like 20, 30, 40 hours a week.

05:28 Yeah.

05:28 Yeah.

05:29 That was, yeah, I think, yeah, it definitely varied week to week.

05:32 And then later on?

05:33 Yeah.

05:33 And then while I was working, when I had a full-time job as a software engineer,

05:36 yeah, that was a lot tougher.

05:37 It was like nights and weekends rarely had energy during the week to work on it.

05:41 And then honestly, like, since it was a real project with real users, I ended up

05:45 spending a lot of the weekend doing like support, like support stuff.

05:49 Yeah, absolutely.

05:50 Support.

05:50 And then you charge, and now you have finance stuff and legal stuff to

05:54 do, so that wasn't super fun.

05:56 Like it really slows down the features and the creation of stuff.

06:00 Exactly.

06:01 Yeah.

06:01 I would say I probably spent fully 50% of my full-time job doing email support, that

06:08 kind of stuff, you know, just like there's tons of people taking courses and listening to

06:12 the podcast and they'll have questions and thoughts and you know, and it's, it's

06:16 awesome, but it also is really tricky.

06:18 So the reason I ask is I always find it fascinating.

06:21 You'll see like news articles.

06:24 I don't know.

06:24 They're always click baity or whatever.

06:25 This person makes three times their job working 10 hours a week on this other

06:30 thing, like you make three times what you make for your job.

06:32 What are you doing at your job?

06:34 Right.

06:36 The ability to make that step where you go from kind of tired at night,

06:41 extra time squeezed in on the weekends, to full-time, full energy.

06:44 Yeah.

06:44 If it's already doing well, you know, on like a very thin life support, like then

06:49 give it full energy, you know, full time and energy.

06:50 It's just, of course it's going to be better.

06:52 Right.

06:52 It's so interesting.

06:53 I actually have a lot of thoughts about this.

06:55 Maybe I should write something about this at some point, but yeah, I actually think

06:58 running like a bootstrap side project kind of business as you have a job can be really

07:03 good because it really forces you to prioritize and build the most important

07:07 things.

07:08 Yeah.

07:08 It's kind of like having kids.

07:09 Oh, nice.

07:10 Yeah.

07:10 Yeah.

07:10 I need to try that someday.

07:11 You'll be real tired.

07:13 I tell you, you'll, you'll love to prioritize your time.

07:15 Yeah.

07:16 I think it really forced you to prioritize.

07:17 So I actually sometimes recommend when folks ask me like for advice, like should I quit

07:21 my job to go all in or not?

07:23 I actually sometimes think there's a lot of nice stability that comes from

07:27 having a job.

07:28 Plus it's actually really nice to have coworkers.

07:30 It's nice to have structure.

07:31 Like you actually need to take all of that

07:33 on yourself, in a way.

07:36 Like, you know, you have to make your own structure if you're building

07:39 your own thing, and that can actually be a bit tricky.

07:41 Like I really struggled with that at the beginning.

07:42 So I think there's something to be said for, yeah.

07:45 For, for spending like a limited time on something basically and prioritizing just,

07:48 just the most interesting angle.

07:50 And I don't necessarily disagree with that.

07:52 No, that's interesting.

07:52 So for me, it was interesting, like in terms of like how much, like, you know, life

07:57 support energy you put in versus like full time energy.

08:00 It was growing decently.

08:02 Like while I was still at GitHub and I thought, okay, I'm going to go in on this

08:07 full time.

08:07 And if I go from like 10 hours a week or less to like 40 hours a week, that would

08:13 probably 4x the growth rate as well.

08:14 That's how it works.

08:15 Right.

08:16 And like, it totally didn't work.

08:19 In fact, like the month after I left, I had like my first down month where like the

08:25 revenue decreased.

08:26 And I was like, wait a minute, what's going on here?

08:28 How that doesn't make any sense.

08:29 That's not fair.

08:30 So I think that also points to the fact that, yeah, you can definitely spend more hours

08:34 on something and it can be like the wrong things or not doubling down on something

08:38 that's really working.

08:39 So, but overall, obviously you, at some point, like just being able to like test out

08:44 more ideas is like really valuable.

08:45 And for that, like, if you only have time to do support on your product that's really

08:50 working well,

08:50 and your full-time job is the rest of how you spend your week, then yeah, it feels

08:55 like you should give yourself some time to build features and maybe quit the job.

08:58 Yeah.

08:59 It's also an interesting point about the structure because not everyone is going to

09:05 get up at eight o'clock, sit at their desk and, and they're going to be like, you know,

09:08 I kind of could just do whatever.

09:10 And it's, it's a, it's its own discipline, its own learned skill.

09:14 A hundred percent.

09:14 Yeah.

09:14 I remember like one of the first weeks after I was full time on SimplePoll, I woke up in

09:20 the morning and said, well, the money's coming in.

09:22 I don't need to work.

09:23 I don't have a boss.

09:23 And I just sit in bed and watch YouTube videos all day.

09:26 And then I just felt miserable at the end of the day.

09:29 I was like, this is supposed to feel great.

09:31 All this freedom I've wanted and dreamt about for so long,

09:34 where, like, why does it not feel great?

09:36 Yeah.

09:38 Also, also feels like risk and more different kinds of responsibility.

09:42 All right.

09:42 So SimplePoll, the reason I said it'd be worth talking about a little bit is, you know,

09:47 Slack's a popular platform and this is based on Django, right?

09:50 So SimplePoll is a full on Django app.

09:51 Yeah.

09:52 And it's funny.

09:53 Sometimes people joke that, I don't know if you've gone through the official Django

09:57 tutorial, but in there you actually make a polls app in the browser.

10:00 Sometimes people joke, wait, did you just turn this into like a Slack app?

10:05 And then you productize it.

10:06 The getting started tutorial.

10:07 Yeah, exactly.

10:09 But yeah, like it turned out that like polls and then yeah, getting, you know, your team

10:14 more connected and Slack and more engaged are like things people really care about.

10:19 So SimplePoll joined the Slack platform like at the perfect time

10:23 and has just been growing super well since then.

10:27 Tell people a little bit about what it takes technically to make a Slack app.

10:32 I mean, Slack is not built in Python as far as I know.

10:35 It's probably JavaScript and Electron, mostly, that people interact with.

10:39 Right.

10:40 So what is the deal here?

10:41 It's actually super interesting.

10:42 So the way you build like a Slack app, it's actually all backend based.

10:47 So when a user interacts in Slack, Slack sends your app, your backend, like a JSON

10:51 payload saying like this user clicked this button, and then you can just send a JSON

10:56 payload back saying, all right, now show this message.

10:59 Now show this modal.

11:00 And they have their own JSON-based Block Kit framework where you can render different

11:05 types of content.

11:06 So you don't actually have to think about JavaScript or React or any of their stack at

11:10 all.

11:10 It's basically all sending JSON payloads around and calling various parts of the Slack

11:14 API.

11:15 So you can build a Slack app in your favorite language, any kind of exotic language if you

11:20 want to.

11:20 But yeah, I love Python.

11:23 So I decided to build it in Python and Django.

11:25 So yeah, actually building Slack apps is a really like pleasant experience.
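For readers following along, here is a minimal, hypothetical sketch of the flow Will describes: Slack POSTs form-encoded data about what the user did, and the Django backend replies with Block Kit JSON telling Slack what to render. The view name, URL wiring, and block contents are illustrative assumptions, not SimplePoll's actual code, and a real app should also verify Slack's request signature.

```python
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt


@csrf_exempt  # Slack cannot send a Django CSRF token; verify Slack's signing secret instead
def poll_slash_command(request):
    # Slash commands arrive as form-encoded fields; "text" is whatever the user typed.
    question = request.POST.get("text", "")
    user_id = request.POST.get("user_id", "")

    # Reply with Block Kit JSON; Slack renders these blocks as the response message.
    return JsonResponse({
        "response_type": "in_channel",
        "blocks": [
            {
                "type": "section",
                "text": {"type": "mrkdwn", "text": f"<@{user_id}> asks: *{question}*"},
            }
        ],
    })
```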

11:29 What does the deployment backend story look like?

11:32 Is it a PaaS sort of thing?

11:35 Serverless?

11:36 VMs?

11:37 At the time it was Heroku.

11:39 SimplePoll was running on Heroku.

11:40 And then I think a few years ago we migrated it to AWS.

11:45 So now it's running on AWS and ECS.

11:48 Nice.

11:49 Okay.

11:49 So Docker for the win.

11:51 Right on.

11:51 How does it work at Talk Python?

11:52 I'm curious.

11:52 What's... where are you deployed?

11:54 It's all DigitalOcean.

11:55 And then I have one big like eight CPU server running, I think, 16 different Django apps.

12:04 Not Django, sorry, Docker apps,

12:06 that are all doing like, you know, some of them share a database that's in

12:12 Docker and some of them do sort of have their own self-contained pair of like web app and

12:19 database and so on.

12:20 But it's all Docker on one big server, which is fairly new for me.

12:25 And it's glorious.

12:26 It's glorious.

12:27 That's awesome.

12:27 Very cool.

12:28 Yeah.

12:28 All right.

12:29 So again, congrats on this.

12:32 Very, very neat.

12:33 Let's talk Kolo.

12:35 Let's do it.

12:35 I first came across this, I've come across it independently twice.

12:40 Once when the Django chat guys recommended that I talk to you because they're like, Will's doing

12:48 cool stuff.

12:48 You should definitely talk to him.

12:50 They were saying a thing for VS Code is super cool.

12:52 But also I can't remember, there's somebody on your team whose social media profile I came across

12:58 and I saw this and I'm like, oh, this is, this is pretty neat.

13:01 I think we even covered it on the Python Bytes podcast.

13:04 Oh, no way.

13:04 Let's see.

13:05 Yeah, sure.

13:06 In January we did.

13:07 So that's what we talked about a little bit, but this just looks like such a neat thing.

13:11 And I encourage people who may be interested in this to visit kolo.app because

13:17 it's a super visual sort of experience of understanding your code.

13:21 Right?

13:21 Would you agree?

13:22 Yeah.

13:22 I mean, a hundred percent.

13:23 Yeah.

13:23 Funny thought.

13:24 I hadn't really thought that a podcast is going to be a hard way to describe the visual beauty

13:30 and magic that Kolo can bring to your code.

13:32 But yeah, a hundred percent.

13:33 Yeah.

13:33 So Kolo like very much started as like the idea of, Hey, like I should be able to see

13:38 like how my code actually flows.

13:40 I think like all of us, as we build software, as we write our Python code, we have this

13:45 kind of like mental model of how all the different functions like fit together.

13:49 How like a bit of data ends up from like the beginning, like to the end, like it passes

13:54 through maybe a bunch of functions, it passes through a bunch of like classes, a bunch of

13:58 loops, all the state gets like modified.

14:00 And we have this kind of like mental picture of all of that in our head.

14:04 And at the very beginning of Kolo, the question I asked myself was like, is there

14:09 a way we can just like visualize that?

14:11 Is there a way we can just actually print that out onto a screen?

14:15 So if you go to kolo.app, it kind of looks like this funny sun chart, like a

14:19 sunny tree chart with lots of nodes going from the center and like going

14:25 off into the distance, which I think is like, yeah, similar to like what folks kind of

14:29 might already have in their head about like how the code flows.

14:32 Maybe another way to describe it is imagine like you enable a debugger at the

14:40 beginning of every function and at the end of every function in your code and you print

14:45 out like what was the function name, what were the input arguments, what was the return

14:49 value? And then you arrange all of that in a graph that then shows which function called

14:54 which other function.

14:55 It almost looks like what you get out of profilers.

14:57 Right. You know, where you say like, OK, this function took 20 percent, but if you

15:01 expand it out, it'll say, well, it really spent 5 percent there, 10 percent there, and

15:05 then a bunch of others. And you kind of traverse that.

15:07 A hundred percent.

15:08 Yeah. I'm guessing you're not really interested in how long it took, although maybe you

15:12 can probably get that out of it.

15:13 The important thing is more, what is the dependency?

15:16 What are the variables being passed and like understanding individual behavior, right?

15:21 Or maybe. Yeah.

15:22 What do you think? Yeah, a hundred percent.

15:23 I think like it's interesting because Kolo actually uses under the hood like a bunch of

15:26 the Python profiling APIs and people often think of Kolo as a profiler.

15:31 We do actually have a traditional profiling based chart which puts the timing at the

15:35 center. But you're absolutely right that the focus of our like main chart, the one that

15:40 we're both looking at that has like this idea of the function overview and like which

15:46 function calls which, the idea there is like absolutely the hierarchy, and

15:50 giving yourself that same mental model that someone who's worked on a code base for

15:54 three months has in their head, immediately, just by looking at it.

15:58 This portion of Talk Python To Me is brought to you by Sentry.

16:03 Code breaks. It's a fact of life.

16:05 With Sentry, you can fix it faster.

16:07 As I've told you all before, we use Sentry on many of our apps and APIs here at

16:12 Talk Python. I recently used Sentry to help me track down one of the weirdest bugs I've

16:17 run into in a long time.

16:19 Here's what happened. When signing up for our mailing list, it would crash under a

16:23 non-common execution path, like situations where someone was already subscribed or

16:28 entered an invalid email address or something like this.

16:31 The bizarre part was that our logging of that unusual condition itself was crashing.

16:38 How is it possible for a log to crash?

16:41 It's basically a glorified print statement.

16:43 Well, Sentry to the rescue.

16:45 I'm looking at the crash report right now, and I see way more information than you'd

16:49 expect to find in any log statement.

16:51 And because it's production, debuggers are out of the question.

16:54 I see the traceback, of course, but also the browser version, client OS, server OS,

17:01 server OS version, whether it's production or Q&A, the email and name of the person

17:05 signing up. That's the person who actually experienced the crash.

17:08 Dictionaries of data on the call stack and so much more.

17:11 What was the problem?

17:12 I initialized the logger with the string "info" for the level rather than the

17:18 enumeration .INFO, which was an integer-based enum.

17:22 So the logging statement would crash saying that I could not use less than or equal to

17:27 between strings and ints.

17:28 Crazy town.

17:30 But with Sentry, I captured it, fixed it, and I even helped the user who experienced

17:35 that crash.

17:36 Don't fly blind.

17:37 Fix code faster with Sentry.

17:39 Create your Sentry account now at talkpython.fm/sentry.

17:43 And if you sign up with the code TALKPYTHON, all capital, no spaces, it's good for two

17:49 free months of Sentry's business plan, which will give you up to 20 times as many

17:53 monthly events as well as other features.

17:56 Usually in the way these charts turn out, you can notice that there's like points of

18:02 interest. Like there's one function that has a lot of children.

18:04 So that clearly is coordinating like a bunch of the work. Or you can see kind of

18:08 similarities in the structure of some of the subtrees.

18:12 So, you know, OK, maybe that's like a loop and it's the same thing happening a couple

18:15 times. So you essentially get this overview and then it's fully interactive and

18:21 you can dive in to like what exactly is happening.

18:23 Yeah. Is it interactive?

18:25 So I can like click on these pieces and it'll pull them up.

18:28 And this will be live by the time this podcast goes out:

18:32 we actually have a playground in the browser.

18:34 This is also super fun.

18:36 We can talk about this. Well, let me drop you a link real quick.

18:38 This will be at play.kolo.app.

18:40 So with this, yeah, this is super fun because this is fully Python just running in the

18:45 browser using Pyodide and like WebAssembly.

18:47 Nice. OK.

18:48 But yeah, so this is the fully visual version where you can.

18:51 Yeah. It defaults to loading like a simple Fibonacci algorithm.

18:55 Yeah. And you can see like what the visualization of Fibonacci looks like.

19:00 And you can actually edit the code and see how it changes with your edits and all of

19:03 that. We have a couple other examples.

19:05 Wow. The pandas one and the whack-a-mole one are pretty intense.

19:08 They're pretty wild pictures.

19:09 They look like sort of Japanese fans or whatever.

19:12 You know, those little gamer ones.

19:13 We once had a competition at a conference to see who could make like the most fun

19:17 looking algorithm and then visualize it with Kolo.

19:20 But yeah, like it's fun.

19:21 Like visualizing code is really great.

19:23 That's awesome. So this is super cool.

19:26 It's just all from scratch?

19:28 It's, besides Pyodide here,

19:30 not like VS Code in the browser or anything like that?

19:34 I think it's using Monaco in this case or CodeMirror.

19:37 But otherwise, this is all Pyodide and a little bit of React to like pull kind

19:41 of the data together.

19:43 Yeah. It's otherwise homemade.

19:46 This is kind of what Kolo has been for like the past two

19:51 years or so: this kind of side project for SimplePoll to help like just

19:55 visualize and understand code better.

19:57 The SimplePoll codebase, to be honest, has grown so large that like there's parts of

20:01 it that I wrote like five years ago that I don't understand anymore.

20:05 And it's like annoying to get back to that and having to spend like a day to

20:08 re-familiarize myself with everything.

20:10 It's a lot nicer to just see it. To actually kind of explain like end to end how it

20:14 works: in a Django project, you install Kolo as a middleware.

20:19 And then as you just browse and use your Django app and make requests, traces get

20:26 saved. So Kolo records these traces.

20:28 They actually get saved in a local SQLite database.

20:32 Then you can view the traces, which includes the visualization, but also like lots of

20:36 other data like you can actually see in the version you have there.

20:39 Like we show every single function call, like the inputs and outputs for each

20:43 function call. So that main idea of Kolo is to like really show you everything that

20:47 happened in your code.

20:48 So in a Django app, that would be like the request, the response, like all the headers,

20:53 every single function call, input and output, outbound requests, SQL queries as

20:57 well. So really the goal is to show you everything.
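The setup he is describing is roughly the following sketch; the exact middleware path here is an assumption based on Kolo's documentation, so check the current docs for your version.

```python
# settings.py of a Django project, after `pip install kolo`
DEBUG = True  # Kolo is aimed at development, where each request gets traced

MIDDLEWARE = [
    "kolo.middleware.KoloMiddleware",  # assumed dotted path; records traces to a local SQLite DB
    "django.middleware.security.SecurityMiddleware",
    "django.contrib.sessions.middleware.SessionMiddleware",
    # ... the rest of your existing middleware ...
]
```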

21:00 You can view these stored traces either through VS Code.

21:04 And this also will be live by the time this episode goes out: through like a web

21:08 middleware version, which is a bit similar to Django Debug Toolbar.

21:12 Not sure if you've played around much with Django Debug Toolbar.

21:14 Yeah, a little bit. Yeah. And those things are actually pretty impressive.

21:17 Right. I played around with that one and the Pyramid one.

21:20 And you can see more than I think you would reasonably expect from just a little thing

21:25 on the side of your web app.

21:26 Yeah, exactly. And that's very much our goal, to give like very kind of deep insight. In our

21:31 minds, this is almost a bit like old news.

21:34 Like we've been using this for like a few years, basically.

21:36 And then at some point, like last year, we started playing around with this idea of

21:41 like, OK, so we have this trace that has information about like pretty much everything

21:46 that happened in like a request.

21:48 Is there any way we could use that to solve this like reasonably large pain point for

21:52 us, which is like writing tests?

21:54 I'm actually curious. Do you enjoy writing tests?

21:56 I'll tell you what I used to actually.

21:58 I used to really enjoy writing tests.

22:01 I used to enjoy thinking a lot about it.

22:02 And then as the projects would get bigger, I'm like, you know, these tests don't

22:08 really cover what I need them to cover anymore.

22:10 And they're kind of dragging it down.

22:12 And then, you know, the thing that really kind of knocked it out for me is I'd have

22:16 like teammates and they wouldn't care about the test at all.

22:18 So they would break the tests or just write a bunch of code without tests.

22:23 And I felt kind of like like a parent cleaning up after kids.

22:27 You're like, why is it so?

22:28 Can we just pick up? Why are there dishes here?

22:30 And they're just going around.

22:32 I'm like, this is not what I want to do.

22:34 Like, I want to just write software.

22:35 And I understand the value of tests, of course.

22:38 A hundred percent.

22:39 Yeah. At the same time, I feel like maybe higher order integration tests often, for

22:45 me at least, serve more value because it's like I could write 20 little unit tests or I

22:50 could write two integration tests and it's probably going to work.

22:53 I'm actually completely with you on that.

22:54 OK, right on.

22:55 The bang for the buck of integration tests is great, like really, really useful.

23:00 You can almost think of tests as having like two purposes, one being like, well,

23:05 actually, I think this would be too simple an explanation.

23:08 Let me not make grand claims about all the uses of tests.

23:11 I think the use of it that most people are after is this idea of like, what I've

23:17 built isn't going to break by accident.

23:20 Yeah. Like you want confidence that any future change you make doesn't impact a bunch

23:25 of unrelated stuff that it's not supposed to impact.

23:27 I think that's what most people are after with tests.

23:32 And I think for that specific desired result, like integration tests are the way to go.

23:37 And there's some cool writing about this. I wrote a little blog post about Kolo's

23:41 test generation abilities.

23:43 And in there, I link to a post from Kent C.

23:46 Dodds from the JavaScript community, who has a great post, I think it's called

23:49 "Write tests. Not too many. Mostly integration.", kind of after this idea of, yeah, nice.

23:55 Yeah, yeah, yeah.

23:56 Eat not too much, mostly vegetables.

23:58 I think that's the.

23:59 Yeah, exactly.

24:00 Exactly.

24:00 Yeah.

24:01 I'm a big fan of that.

24:02 And actually, it's interesting.

24:03 I've been speaking to a bunch of folks over the past year about tests.

24:06 A lot of engineers think about writing tests as vegetables.

24:10 And obviously, some people love vegetables and some of us love writing tests.

24:14 But it seems like for a lot of folks, it's kind of like an obviously necessary part of

24:19 creating great software.

24:20 But it's maybe not like the most fun part of our job.

24:24 Or you pick up some project, you're a consultant, or you're taking over some open

24:29 source project, you're like, this has no tests, right?

24:31 It's kind of like running a linter.

24:34 And it says there's a thousand errors.

24:35 You're like, well, we're not going to do that.

24:36 Yeah, we're just not going to run the linter against it because it's just too messed up

24:41 at this point, right?

24:42 It's interesting you mentioned the picking up like a project with no tests, because I

24:45 think within the next three months, we're not quite there yet.

24:48 But I think in the next three months with Kolo's test generation abilities, we'll have

24:51 a thing where, yeah, all we need is a Python code base to get started.

24:55 And then we can bring that to like a really respectable level of code coverage just by

25:00 using Kolo.

25:01 Okay.

25:01 How?

25:02 I was kind of describing a second ago how, like, SimplePoll has tons of integration

25:08 tests.

25:08 SimplePoll actually is about 80,000 lines of application code, not including migrations

25:13 and like config files.

25:14 And then it's about 100,000 lines of tests.

25:17 And most of that is integration tests.

25:20 So SimplePoll is very well tested, with lots of tests, mostly integration tests.

25:24 But it is always a bit of a chore to like write them.

25:27 So we started thinking about like, hmm, this like Kolo tracing we're doing, can that help

25:33 us with making tests somehow?

25:34 And then we started experimenting with it.

25:36 And like to our surprise, it's actually, yeah, I'm still sometimes surprised that it

25:39 actually works.

25:40 But basically the idea is that if you have a trace that captures everything in

25:47 the request, you can kind of invert it to build an integration test.

25:53 So let me give an example of what that means.

25:55 The biggest challenge we found with creating integration tests is actually the test data

26:01 setup.

26:02 So getting your application into the right shape before you can send a request to it or

26:07 before you can call a certain function, that's like kind of the hardest part.

26:11 Writing the asserts is almost like easy or even like fun.

26:15 Right.

26:15 There's the three A's of unit testing: arrange, assert, and act... wait, arrange, act, and

26:20 assert.

26:21 Exactly.

26:21 Yeah.

26:21 Yeah.

26:21 The first and the third one that you kind of have data on, right?

26:25 Exactly.

26:25 Yeah.

26:25 So we're like, wait a second.

26:27 We actually can like kind of extract this, like the act.

26:31 So like the setting, sorry, the arrange, setting up the data; the act, like actually

26:36 making the HTTP request; and then the assert, like to ensure the status change or that the

26:42 request got a 200 or something.

26:43 We actually have the data for this.

26:44 It's reasonably straightforward.

26:46 Like if you capture, you know, just your normal... like imagine you have a local

26:50 to do app, like a kind of demo, simple to do app, and you browse to

26:55 the homepage, and the homepage maybe lists the to dos.

26:57 And if you've got Kolo enabled, then Kolo will have captured the request, right?

27:01 So like the request went to the homepage and it returned a 200.

27:05 So that's already like two things we can now turn into code in our integration test.

27:09 So first step being, well, I guess this is the act and the assert in the sense that the

27:14 assert is the 200 and then the act is firing off a request to the homepage.

27:19 Now the tricky bit, and this is where it gets the most fun, is the arrange.

27:23 So if we just put those two things into our imaginary test, there

27:29 wouldn't have been any to dos there.

27:30 Right.

27:30 So it's actually not an interesting test yet, but in your local version where the

27:33 trace was recorded, you actually had maybe like three to dos already in your

27:38 database.

27:39 Does that make sense so far?

27:40 Yeah.

27:41 Yeah, absolutely.

27:41 On the homepage, like your to do app might make a SQL query to like select all the to

27:47 dos or all the to dos for the currently logged in user.

27:50 And then Kolo would store that SQL query, would store that select and would also store

27:55 actually what data the database returned.

27:57 This is actually something where, yeah, Kolo goes beyond a lot of the existing kind of

28:01 like debugging tooling that might exist, like actually showing exactly what data the

28:05 database returned in a given SQL query.

28:07 But imagine we get like a single to do returned, right?

28:11 We now know that to replicate this like trace in our test, we need to start by seeding

28:18 that to do into the database.

28:20 That's where like the trace inversion comes in.

28:22 If like a request starts with a select of like the to do table, then the first thing

28:29 that needs to happen in the integration test is actually creating, like, an insert

28:33 into the database for that to do.

28:35 And now when you fire off the request to the homepage, it actually goes through your real

28:40 code path where like an actual to do gets loaded and gets printed out onto the page.

28:44 So that's like the most basic kind of example of like, how can you turn like a

28:49 locally captured trace of a request that like made a SQL query and returned a 200 into like

28:55 an integration test?
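As a rough illustration, a test produced by that inversion for the to-do homepage example might look something like the sketch below. The Todo model, URL, and page content are hypothetical stand-ins, not literal Kolo output.

```python
from django.test import TestCase

from todos.models import Todo  # hypothetical app and model for the to-do example


class TestHomepage(TestCase):
    def test_homepage_lists_todos(self):
        # Arrange: re-create the row that the captured SELECT returned in the trace.
        Todo.objects.create(title="Buy milk", done=False)

        # Act: replay the captured request through the full Django stack.
        response = self.client.get("/")

        # Assert: the trace recorded a 200, and the seeded to-do should be rendered.
        self.assertEqual(response.status_code, 200)
        self.assertContains(response, "Buy milk")
```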

28:56 Yeah, that's awesome.

28:57 One of the things that makes me want to write fewer unit tests or not write a unit test

29:02 in a certain case is, I can test, using mocking, that my, let's say, SQLAlchemy or

29:10 Beanie or whatever Django ORM model theoretically matches the database.

29:15 I can do some stuff, set some values and check that that's all good.

29:19 But in practice, if the shape, if the schema in the database doesn't match the

29:24 shape of my object, the system freaks out and crashes and says, well, that's not going

29:28 to work, right?

29:29 There's no way.

29:30 And so it doesn't matter how well I mock it out.

29:32 It has to go kind of end to end before I feel very good about it.

29:36 Oh, yeah.

29:37 OK, it's going to really, really work, right?

29:38 Exactly.

29:39 That's an interesting story.

29:40 Like you're saying to like, let's actually see if we can just create the data, but like

29:45 let it run all the way through.

29:46 Right.

29:46 I'm totally with you.

29:47 And I think I've often seen like unit tests pass and, I mean, there's like lots

29:51 of memes about this, right?

29:52 How like unit tests say everything is good, but the server is down.

29:55 Like, how is that possible?

29:56 I think in Django world, it's reasonably common to write integration tests like this, as in

30:01 like the actual database gets hit.

30:03 You have this idea of like the Django test client, which sends like a, you know,

30:08 real in air quotes, HTTP request through the entire Django stack as opposed to doing the

30:14 more unit test approach.

30:15 So it hits the routes.

30:16 It hits like all of that sort of stuff all the way.

30:19 Yeah.

30:20 And the template.

30:21 Yeah.

30:21 And then at the end, you can assert based on like the content of the response or you can

30:26 check.

30:26 Like, imagine if we go back to the to do example, if we're testing like the add to do

30:31 endpoint or form submission, then you could make a database query at the end.

30:36 And Kolo actually does this as well, because like, again, we know that you inserted

30:41 a to do in your request, so we can actually make an assert.

30:44 This is a different example of the trace inversion.

30:47 If there's an insert in your request that you captured, then we know at the end of the

30:52 integration test, we want to assert that this row now exists in the database.

30:56 So you can assert at the very end to say, does this row actually exist in the database

31:01 now?
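For that direction of the inversion, a generated test for an "add to do" form submission might look like this sketch; again the model, URL, and status code are hypothetical stand-ins.

```python
from django.test import TestCase

from todos.models import Todo  # hypothetical app and model


class TestAddTodo(TestCase):
    def test_add_todo_creates_row(self):
        # Act: replay the captured form submission.
        response = self.client.post("/todos/add/", {"title": "Water plants"})
        self.assertEqual(response.status_code, 302)  # e.g. a redirect back to the list

        # Assert: the INSERT captured in the trace is inverted into a database check.
        self.assertTrue(Todo.objects.filter(title="Water plants").exists())
```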

31:01 So it's a very nice kind of reasonably end to end, but still integration test.

31:05 It's not like a brittle click around in the browser and kind of hope for the best kind of

31:09 thing.

31:10 It's like, as we said at the beginning, I think like integration tests just get you

31:13 great bang for your buck.

31:14 They really do.

31:15 It's like the 80/20 rule of unit testing for sure.

31:20 Yeah.

31:20 Talk Python to me is partially supported by our training courses.

31:24 If you're a regular listener of the podcast, you surely heard about Talk Python's online

31:29 courses, but have you had a chance to try them out?

31:31 No matter the level you're looking for, we have a course for you.

31:34 Our Python for absolute beginners is like an introduction to Python plus that first

31:39 year computer science course that you never took.

31:41 Our data-driven web app courses build a full pypi.org clone along with you right on the

31:47 screen.

31:48 And we even have a few courses to dip your toe in with.

31:50 See what we have to offer at training.talkpython.fm or just click the link in your

31:55 podcast player.

31:56 So is this all algorithmic?

31:59 Yup.

31:59 Great question.

32:00 Is it LLMs?

32:01 Like how much VC funding are you looking for?

32:04 Like, you know, if you've got LLMs in there, the VCs come out of the woodwork.

32:07 No, I'm just kidding.

32:08 No, how do you, how does this happen?

32:10 It's actually all algorithmic and rule-based at the moment.

32:13 So this idea of a select becomes like an insert, and an insert becomes like a

32:20 select assert.

32:21 We were surprised how far we could get with just rules.

32:24 The benefit we have is that we kind of have this like full-sized SimplePoll Django code

32:29 base to play around with.

32:30 And yeah, like generating integration tests in SimplePoll just like fully works.

32:35 There's a bunch of tweaks we like had to make to, as soon as I guess you work in kind of

32:40 like outside of a demo example, you want like time mocking and HTTP mocking and you want

32:46 to use your factory_boy factories and maybe you have a custom unit test, like, base

32:52 class and all of this.

32:53 But yeah, it like, it actually works now.

32:55 I gave a talk at DjangoCon Europe last year.

32:58 There was kind of like a bit of a wow moment in the audience where, yeah, you just click

33:02 generate test and it generates you like a hundred line integration test and the test

33:06 actually passes.

33:07 So that was like, people started, just started clapping, which was a great feeling.

33:11 I'm still a bit surprised that it works, but yeah, no LLM at all.

33:15 I do think like LLMs could probably make these tests like even better.

33:19 Or you know how I was saying a second ago, like in three months we could go take a code

33:23 base from like zero test coverage to maybe like 60%, 80%.

33:28 I imagine if we made use of LLMs that would help make that happen.

33:33 Yeah.

33:33 Yeah.

33:34 You could talk to it about like, well, these things aren't covered.

33:37 Right.

33:38 What can we do to cover them?

33:39 Yeah.

33:39 I don't know if you maybe could do fully, fully automated, just push the button and

33:44 let it generate it.

33:45 But you know, it could also be like a conversational, not a conversation, sort of a

33:49 guided, let's-get-the-rest-of-the-tests thing, you know, like, okay, we've

33:53 got 80%, but the last bits are a little tricky, like which ones are missing?

33:57 All right.

33:57 So how do you think we could do this?

33:59 Is that about right? No, no, you need to... that's not really the kind of data we're going to

34:02 pass.

34:02 You know, I don't know.

34:03 It seems something like that.

34:04 Right.

34:04 I really liked that.

34:05 I had not thought about like a conversation as a way to generate tests, but that makes so

34:09 much sense.

34:10 Right.

34:10 It kind of brings the developer along with it where it's gotten too hard or something,

34:15 you know?

34:15 Yeah.

34:16 There's something cool about just clicking a button and seeing how much code coverage you

34:18 could get to, but also chatting to it.

34:21 I think also, honestly, like so far, like our test generation logic is a bit of a black

34:27 box.

34:27 It just kind of like works until the point where like it doesn't.

34:31 So we're actually kind of in the process of like shining a bit more of a light

34:35 into, like, essentially the internal data model that Kolo keeps track of to know

34:40 what the database state should be like in this arrange part of the integration test.

34:45 And yeah, we're actually in the process of, yeah,

34:49 talking to a bunch of users who are already using it and also finding like companies who

34:54 want to increase their test coverage or who have problems with their

34:58 testing and want to improve that and kind of working closely with them to make that

35:02 happen.

35:03 That's kind of a huge focus for us as we figure out like, how do we want to monetize

35:07 Kolo?

35:07 Like so far Kolo has just been kind of supported by SimplePoll as a side project,

35:11 but we're kind of making it real, making it its own business.

35:14 So, and we think the test generation is going to play a big part in that.

35:18 Right.

35:18 Like that could certainly be a premium team feature sort of thing.

35:21 Exactly.

35:22 Yeah.

35:22 Yeah.

35:22 Yeah.

35:23 Enterprise.

35:24 Enterprise version comes with auto testing.

35:26 Yeah, exactly.

35:28 Something like that.

35:28 Yeah.

35:29 If there's anyone listening and like they're keen to increase their code coverage, please

35:32 email me.

35:32 Maybe we can leave my email in the, in the notes or something like that.

35:35 Yeah.

35:35 I'll put your contact info in the show notes for sure.

35:37 It's actually really nice.

35:38 It's just w@kolo.app.

35:40 Oh, very nice.

35:41 So yeah.

35:41 If, if anyone's listening and wants to kind of like increase their code coverage or has a

35:45 lot of code bases that have zero coverage that would benefit from getting to like some

35:49 level of coverage, we'd love to help you and talk to you.

35:52 Even if the solution doesn't like involve using Kolo, just really, really keen to talk to

35:56 anyone about like Python tests and, and what can be done there.

35:59 So yeah, please hit me up.

36:01 Awesome.

36:01 Yeah.

36:02 I'll definitely put some details in the show notes for that.

36:04 I have some questions as well.

36:05 Please.

36:05 Yes.

36:06 Right here.

36:06 I'm looking at the webpage, and the angle-bracket title is Kolo for Django.

36:12 But in the, the playground thing you sent me, it was on plain Python code.

36:17 It was on algorithms.

36:19 It was on pandas, which I thought was pretty interesting how much you could see inside

36:22 pandas.

36:22 Makes me wonder, you know, if you look at the web frameworks, there's two or three more

36:27 that are pretty popular out there and they all support middleware.

36:29 Yeah.

36:30 A hundred percent.

36:30 So Kolo kind of started as like this, like, side project for our Django app.

36:35 And I think that that's why we kind of went there first.

36:37 It's kind of the, the audience we know best.

36:40 You dogfood it as well.

36:41 Yeah.

36:41 Exactly.

36:42 Dogfooded, yeah. Lily,

36:43 who's an engineer on the team and who's been building a lot of, yeah, a

36:48 lot of the Python side of Kolo, is like a core contributor to Django.

36:53 So Django is like really where we're at home.

36:55 And to be honest, I think when building a new product, it's kind of nice to keep the

36:59 audience somewhat small initially keep like building for very specific needs as

37:04 opposed to going like very wide, very early.

37:06 That was very much the intention, but there's no reason why Kolo

37:10 can't support Flask, FastAPI, the scientific Python stack, as you can see in

37:16 the playground, it does totally work on, on plain Python.

37:18 It's really just a matter of.

37:20 Honestly, like, FastAPI support would probably be like a 40-line config file,

37:26 exactly, in like our code.

37:28 And there's actually, yeah, we're thinking of ways to make that actually a bit more

37:33 pluggable as well.

37:34 There's only like so many things we can reasonably support

37:38 ourselves, though.

37:39 I was going to say if somebody else out there has an open source project, they want it to

37:42 have good support for this, right?

37:44 Like, Hey, exactly.

37:44 Yeah.

37:45 I run HTTPX or I run Litestar or whatever, and I want mine to look good here too.

37:50 Right.

37:50 Totally.

37:51 So the thing you can do already today is there's a little bit of config you can pass

37:55 in.

37:55 And actually, if you look back on the pandas example, you'll see this. By

37:59 default,

37:59 Kolo actually doesn't show you library code if you use it in your own code base, but you

38:04 can tell it, show me everything that happened, like literally everything.

38:08 And then it will, it will do that for you.

38:09 So in this example you're looking at, or if anyone's looking at the playground, if you

38:13 look at the pandas example, it'll say like include everything in pandas.

38:16 And that'll give you like a lot more, a lot more context.

38:19 The thinking there is that most people don't really need, like the issues you're going

38:23 to be looking at will be in your own code or in your own company's code base.

38:27 You don't really need to look at the abstractions, but you totally can.

38:30 But yeah, to answer the question, like we have this like internal version of a plugin

38:33 system where yeah, like anyone could add FastAPI support or like a great insight into

38:40 PyTorch or what have you.

38:42 The way it all works technically really is it's totally built on top of this Python API

38:47 called setprofile.

38:48 I'm not sure.

38:48 Have you used, have you come across this before?

38:51 It's a bit similar to settrace, actually.

38:52 Yeah, I think so.

38:53 I think I've done it for some cProfile things before.

38:58 I'm not totally sure.

38:59 Yeah.

38:59 It's a really neat API to be honest, because Python calls back to the callback

39:05 that you register on every function enter and exit.

39:08 And then Kolo essentially looks at all of these function enters and exits and decides

39:12 which ones are interesting.

39:13 So the matter of like supporting say FastAPI is basically just telling Kolo these are

39:19 the FastAPI functions that are interesting.

39:21 This is the FastAPI function for, for like an HTTP request that was served.

39:25 This is the HTTP response.

39:27 Or similarly for SQLAlchemy, this is the function where the query was actually executed

39:32 and sent to the database.

39:33 This is the variable which has the query result.

39:35 Like there's a little bit more to it and I'm definitely like, yeah, generalizing, but

39:40 it's kind of like in principle, it's as simple as that.

39:43 It's like telling Kolo, here's the bits of code in a given library that are interesting.

39:46 Now just kind of like display that and make that available for the, for the test generation.
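For reference, the hook he is describing is Python's sys.setprofile; a toy sketch of the mechanism (illustrative only, not Kolo's implementation) looks like this:

```python
import sys


def profiler(frame, event, arg):
    # Python invokes this on every "call" (function entry) and "return" (function exit);
    # a tool like Kolo decides here which frames are interesting enough to record.
    if event == "call":
        print(f"enter {frame.f_code.co_name} args={frame.f_locals}")
    elif event == "return":
        print(f"exit  {frame.f_code.co_name} -> {arg!r}")


def add(a, b):
    return a + b


sys.setprofile(profiler)  # register the callback
add(2, 3)
sys.setprofile(None)      # stop receiving call/return events
```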

39:50 Excellent.

39:51 Yeah.

39:51 I totally agree with you that getting focused probably gets you some more full

39:56 attention from the Django audience and the Django audience is quite a large and

40:01 influential group in the Python web space.

40:02 So that makes total sense, especially since you're using it.

40:05 By the way, it was Lily's Mastodon profile, I believe that I ran across that I first

40:10 discovered Kolo from.

40:11 So of all the places, yeah.

40:13 Or a, a, a post or something like that.

40:16 That's awesome.

40:17 Cool.

40:17 All right.

40:17 So let's talk about a couple other things here.

40:20 For people who haven't seen it yet, like you get quite a bit of information.

40:24 So if you see like the GET request, you actually see the JSON response that

40:29 was returned out of that request and it integrates kind of into your editor

40:34 directly, right?

40:35 If you've seen CodeLens before, it's kind of like CodeLens, right?

40:38 Yeah.

40:38 This is another thing which I think is, is pretty novel with, with Kolo.

40:41 Like I think it's reasonably common for existing debugging tools to show you like,

40:46 Oh yeah, this is the headers for the request, or this is like the response status

40:50 code.

40:50 But especially working with the Slack API in SimplePoll, you're constantly looking

40:55 at payloads and what were the values for things and what are you returning.

40:59 In production, you don't directly get to even make those or receive those requests,

41:03 right?

41:03 There's some like system in Slack who was like chatting with your thing.

41:07 You're like, well, what is happening here?

41:09 Right.

41:09 Not that you would actually run this in there, but you know.

41:12 I mean, it's funny you mentioned this because there is one experiment we want to

41:16 run of kind of actually enabling these extremely deep and detailed Kolo traces

41:20 in production.

41:21 We haven't explored this too much yet, and I think we're going to focus a little bit

41:25 more on the test generation, but you could imagine like a user who's on

41:30 the Talk Python site and they've got some incredibly niche error that no one

41:35 else is, is like encountering and you've tried to reproduce it, but you can't

41:40 reproduce it.

41:40 Maybe there's a little bit of information in like your logging system, but it's just

41:45 not enough.

41:45 And you keep adding more logging and you keep adding more logging and it's just not

41:48 helping.

41:49 Like imagine a world where you can say, just for that user, like, enable Kolo and

41:53 enable like these really deep traces.

41:55 And then you can see whenever the user next interacts, like the value for every

42:01 single variable for every single code path that executed for that user.

42:04 That's just like, yeah.

42:06 I think one of our users described it as like a debugger on steroids.

42:09 Yeah.

42:09 Yeah.

42:09 It's pretty interesting.

42:10 Sounds a little bit like, like what you get with Sentry and some of those things,

42:15 but maybe also a little bit different.

42:18 So, you know, you could do something like, here's a dear user with problem.

42:23 Here's a URL.

42:24 If you click this, it'll set a cookie in your browser and then all subsequent

42:28 behavior, it just hits on it.

42:30 You know what I mean?

42:31 It's like recording it.

42:32 Yeah.

42:32 That'd be pretty interesting.

42:33 Yeah.

42:33 I think it makes sense in the case.

42:35 Like if a user, it could even be an automated support thing, right?

42:39 Like if a couple of sites have this where you can like do like a debug dump before

42:43 you submit your support ticket, this is almost like, like that.

42:47 And then as an engineer who's tasked with digging into that user's bug, you don't

42:51 have to start with like piecing together.

42:53 What was this variable at this time when they made that request three days ago?

42:58 You like, you can just see it.

43:00 If a user ever encounters an exception on your site, you just set the cookie.

43:04 Right.

43:04 Everything else they do is now just recorded until you turn it off.

43:07 Oh my God.

43:08 You're giving me so many good ideas.

43:09 That'd be fun.

43:10 I'm going to start writing this stuff down.

43:11 Let's record it.

43:13 It'll be fine.

43:13 That's awesome.

43:14 Yeah.

43:14 There's a bunch of stuff that's, that's, that's interesting.

43:16 People can check it on the site.

43:17 It's, it's all good.

43:19 However, we talked a little bit about the production thing.

43:22 Like another thing you could do for production, and this requires a decent amount of traffic, though maybe you could actually pull this off on just a

43:30 single server, but you could do like, let's just run this for 1% of the traffic so that

43:35 you don't kill the system, but you get, you know, if you have

43:39 enough traffic, a statistically significant sampling of what people do

43:43 without actually recording a million requests a day or something insane.

43:48 A hundred percent.

43:48 I think there's really something there, or like, I could go on about this whole

43:52 idea of like runtime data and like improving software understanding for days.

43:55 Because I just think like, it's really this like missing layer, right?

43:58 Like all of us constantly imagine, like, we play computer looking at

44:03 our code, imagining what the values can be. But like, yeah, say you're looking at

44:07 some complex function in production and you want to understand how it works.

44:11 Like how useful would it be if you could see like the last 10 times it was

44:14 called, like what were the values going into it and what were the values

44:17 coming out of it?

44:18 Like, that would be, I just think like, why do we not have this already?

44:22 Like, why does your editor not show you for every single function in the code base,

44:26 give examples of like how it's actually used like in production.

44:30 Yeah.

44:30 And then use those to generate unit tests.

44:32 And if there's an error, use that to generate the edge case, like the negative

44:35 case, not the positive case unit test.

44:37 Right.

44:37 There you go.

44:38 Exactly.

44:38 It's all like kind of hanging together.

44:40 Like, yeah.

44:41 Yeah.

44:42 Once you have the data, you have interesting options.

44:44 Yeah.

44:44 Business model.

44:45 I maybe should have started sooner with this, but it's not

44:49 entirely open source. Maybe little bits and pieces of it, but in

44:53 general, it's not open source.

44:54 That's correct.

44:55 Yeah.

44:55 Not that I'm putting that out there as a negative, right?

44:57 This looks like a super powerful tool that people can use to write, write code

45:00 and that's fine.

45:01 Yeah.

45:02 I think the open source question is super interesting.

45:03 Like it's always been like something we've thought about or, or considered.

45:08 I think there is, yeah, with developer tools, I think business models are always super interesting and we want to make sure that we can have a

45:15 business model for Kolo and run it as a sustainable thing, as opposed to

45:19 it just being a Simple Poll-style side project indefinitely. It would be great

45:23 if Kolo could support itself and yeah, have a business model. I think that's how

45:27 it can like really fulfill its potential in a way, but that's not to say that like

45:30 Kolo won't ever be open source.

45:32 Like I think there's a lot to be said for open sourcing it.

45:35 I think especially like the, the capturing of the traces is maybe something like I

45:41 could see us open sourcing.

45:42 I think the open source community is fantastic.

45:44 I do also think it's not like a thing you get for free, right?

45:48 Like as soon as you say, Hey, we're open source, you open yourself up to

45:53 contributions, right.

45:54 And to like the community actually getting involved and that's great, but it also

45:58 takes time.

45:59 And I think like that's a path I would like to go down when we're a little bit

46:03 clearer on like what Kolo actually is and like where it's valuable.

46:08 If that makes sense.

46:09 Yeah, sure.

46:10 If it turns out that no one cares about, like, how to visualize code,

46:14 then like, that's a great like learning for us to have made, but I'd rather get

46:20 there without like a lot of work in the middle that we could have kind of avoided

46:24 if that makes sense.

46:25 So for sure, it feels like once we have a better sense of the shape of Kolo and what

46:29 the business model actually looks like, then we can be a bit more, yeah, we can

46:35 invest into open source a little bit more.

46:36 But to be honest, like based on how everything's looking right now, I would

46:40 not be surprised at all

46:41 if Kolo becomes open core or big chunks of it are

46:46 open source.

46:46 It makes sense to me.

46:47 It is fully free at the moment.

46:49 So I should, that's worth calling out.

46:50 Yeah.

46:51 There's no cost or anything.

46:52 You can also like, you know, you download the Python package and guess what?

46:55 You can look at all of the code.

46:57 Like it actually is all there.

46:59 It is all kind of visible.

47:01 That kind of leads into the next question: I've never used GitHub Copilot and a few

47:06 of those other things because it's like here, check this box to allow us to upload

47:11 all of your code and maybe your access keys and everything else that's interesting.

47:15 So we can one, train our models and two, you know, give you some answers.

47:19 And that just always felt a little bit off to me.

47:21 What's the story with the data?

47:24 At the moment, Kolo is like entirely like a local product, right?

47:27 So it's all local.

47:29 Like you don't have to, you can get like all of the visualization, everything just

47:33 by using local Kolo in, in VS Code.

47:36 We do have a way to like upload traces and share them like with a colleague.

47:40 This is actually also something, I've been kind of playing with the idea of

47:44 writing a little Kolo manifesto.

47:46 Like what are the things that we believe in?

47:47 One of them that I believe in, and this goes back to the whole like runtime layer on top

47:52 of code.

47:52 And like, there is this whole dimension, this like third dimension to code that we're

47:57 all simulating in our heads.

47:58 I think like it should totally be possible to not just like link to a snippet of code

48:03 like on GitHub, but it should be possible to have a like link, like a URL to a specific

48:09 execution of code, like a specific function and actually talk about that.

48:13 It's kind of wild to me that we don't have this at the moment.

48:15 Like you can't send a link to a colleague saying, Hey, look at this execution.

48:20 That looks a bit weird.

48:21 We ran this in continuous integration and it crashed, but I understand.

48:25 Let's look at the exact.

48:26 Right.

48:27 The whole deal.

48:28 You can link to like CI runs.

48:29 You can link to like Sentry errors, but like if you're just seeing something slightly weird

48:33 locally or like even something slightly weird in production where there's no error,

48:37 you can't really link to that.

48:40 Anyway, like this is kind of a roundabout way of me saying that, like, I think that

48:44 totally should be a thing.

48:45 Like you should be able to link generically to an execution of a function or an

48:49 execution of a request.

48:50 And like that would totally have to live somewhere.

48:54 Right.

48:54 So this is where some idea of like a Kolo cloud comes in, and this is where you

48:58 could connect your repository.

49:01 And then Kolo would, as part of that, you know, just like GitHub does, have

49:05 access to your code and show you the code in the Kolo cloud.

49:08 So I think there's definitely like useful things that are possible there, but at the

49:13 moment it's a fully local experience.

49:15 Like your code doesn't ever leave your system.

49:19 You can, if you want to like upload traces and then Kolo stores the like trace data,

49:24 not, not the code, just the trace data.

49:26 But yeah, very local experience right now.

49:28 Yeah.

49:29 A little SQLite database.

49:30 Exactly.

49:30 Yep.

49:31 Yeah.

49:31 SQLite's pretty awesome.

49:32 It's a formidable piece of software.

49:34 Yeah, it really, really is.
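
Since the traces live in a local SQLite file, you can poke at it with nothing but the standard library. This sketch just lists tables and row counts rather than assuming any particular schema; the file name is a placeholder, not necessarily the path Kolo uses.

```python
import sqlite3

# Path is an assumption: point this at wherever the trace database actually lives.
conn = sqlite3.connect("db.sqlite3")

tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
).fetchall()

for (name,) in tables:
    # Quote the table name since we don't control what the tool calls its tables.
    count = conn.execute(f'SELECT COUNT(*) FROM "{name}"').fetchone()[0]
    print(f"{name}: {count} rows")

conn.close()
```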

49:35 Let's close out our conversation here with a little bit of a request from Michael.

49:39 Right now it's VS Code only.

49:41 Any chance for some PyCharm in there?

49:43 This is our top request, like PyCharm support.

49:45 Yeah.

49:46 And we've decided, as a super small team, like we want to kind of support everyone, but

49:50 we've been working very heavily actually the past few months on a web-based version,

49:55 which is, I'm happy to say like very much nearing completion.

49:58 And there's a few bits and pieces where like, it's really nice to be integrated

50:02 super deeply into the editor, like the code lenses and, and all of that.

50:06 And I think there's a chance we'll have that for PyCharm eventually as well.

50:09 But we actually found that like building out this web version, there's a few things

50:13 that are actually much nicer when you have the full control over the UI in terms of

50:17 like browsing around a trace, highlighting little bits of code.

50:20 So for example, in Kolo, we call a given function call a frame, and you

50:25 can look at a given frame, both in VS Code, but also in the web version and see the

50:29 code and see all of the data that passed through the code.

50:32 But something we can do in the web version we can't do in VS Code is actually show

50:36 where the current function was called from and actually show like a preview of that

50:40 code.

50:40 In VS Code, you can't really show that; you can't layer multiple files together or

50:45 different views.

50:45 Yeah, exactly.

50:46 Yeah.

50:46 There's actually a lot of, like, I was surprised by how many different novel,

50:50 like kind of ways we had in the web that we just never even considered with like a

50:56 direct editor integration in terms of displaying this runtime data.

50:59 So like, long story short, like you, you know, want a PyCharm integration.

51:03 Let me give you something even better.

51:05 Yeah.

51:05 Web version.

51:06 So would that work like you run a certain command or something when you run your web

51:12 app and then it just generates the SQLite file and then you could just explore it

51:16 with a web view, or what are you thinking?

51:17 Yeah, it's actually kind of cooler than that.

51:19 So if you're using Django or in the future, like other things with a typical

51:23 middleware, you would just go to

51:27 localhost:8000/hello.

51:28 Yeah, yeah, yeah.

51:29 Kind of like you do for OpenAPI docs.

51:33 Yep.

51:33 And then the whole experience is just there.

51:35 If you're not using a middleware, we'll have a command like kolo serve or something

51:39 like that.

51:40 And that'll yeah.

51:41 Host the same experience for you.

51:43 Just make sure it's off by default, or it only responds on localhost or

51:48 something like that.

51:49 You know, like, yeah, exactly.

51:50 Don't let people ship it on accident.

51:52 That would be bad news.

51:54 No production use of this.

51:55 Yeah.

51:55 I mean, people already know about the Django debug settings, but right.

52:00 I guess you could sort of layer onto that, right?

52:02 Probably.

52:02 Yeah.

52:03 I think we actually do that at the moment.

52:04 But yeah, it's worth, worth remembering.
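
In that spirit, a settings.py sketch that only wires the tracing middleware up in development might look like this. The dotted path below is a guess based on typical Django middleware naming, so check Kolo's docs for the real one; the point is gating it on DEBUG so it can't ship to production by accident.

```python
# settings.py (sketch)
DEBUG = True  # never ship with this on

MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "django.contrib.sessions.middleware.SessionMiddleware",
    "django.middleware.common.CommonMiddleware",
    # ... the rest of your stack ...
]

if DEBUG:
    # Assumed dotted path; put the tracer first so it sees the whole request/response cycle.
    MIDDLEWARE.insert(0, "kolo.middleware.KoloMiddleware")
```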

52:06 No, I just think, you know, like, well, this is really cool.

52:10 It makes for a hundred percent.

52:12 CNN.com is awesome.

52:13 Look what it's doing.

52:14 Look at all these requests and all this.

52:16 Yeah, exactly.

52:18 A hundred percent.

52:19 Yeah.

52:19 Oh, and the API key is so interesting.

52:21 Anyway, that's a bit of a side, a side conversation.

52:25 So let's just, let's wrap it up with the final call to action.

52:28 People are interested.

52:29 What do they do?

52:30 Yeah.

52:30 kolo.app, and check it out.

52:32 We have a playground link there.

52:33 play.kolo.app.

52:35 Easiest way to kind of see what Kolo is and what Kolo does, but I'd say the

52:39 most powerful way to actually see Kolo in action is to use it on your own

52:43 code base.

52:43 So seeing the visualization and the test generation capabilities is just like,

52:48 yeah, the most useful when you use it on your code base.

52:51 So hopefully the playground can entice that a little bit.

52:53 And yeah, really the main, most important thing for us right now is, yeah.

52:57 Chatting to folks who want to increase their test coverage, want to like build

53:01 automated testing as part of their workflow.

53:03 And yeah, work very closely with you to make that happen.

53:05 So if that's you, please email me at w@kolo.app.

53:09 You need that pause for the W that's right.

53:12 Or the two, the two at's.

53:13 Awesome.

53:14 Will, thanks for being on the show.

53:16 Congrats on both of your projects.

53:18 They look really neat.

53:19 Thanks so much for having me.

53:19 Yeah.

53:20 So excited to have been on.

53:21 Yeah, you bet.

53:21 Bye.

53:22 Bye.

53:22 This has been another episode of Talk Python to Me.

53:26 Thank you to our sponsors.

53:28 Be sure to check out what they're offering.

53:30 It really helps support the show.

53:32 Take some stress out of your life.

53:33 Get notified immediately about errors and performance issues in your web

53:38 or mobile applications with Sentry.

53:39 Just visit talkpython.fm/sentry and get started for free.

53:44 And be sure to use the promo code talkpython, all one word.

53:48 Want to level up your Python?

53:50 We have one of the largest catalogs of Python video courses over at Talk Python.

53:54 Our content ranges from true beginners to deeply advanced topics like memory and async.

53:59 And best of all, there's not a subscription in sight.

54:02 Check it out for yourself at training.talkpython.fm.

54:04 Be sure to subscribe to the show.

54:06 Open your favorite podcast app and search for Python.

54:09 We should be right at the top.

54:10 You can also find the iTunes feed at /itunes, the Google Play feed at /play,

54:16 and the direct RSS feed at /rss on talkpython.fm.

54:20 We're live streaming most of our recordings these days.

54:23 If you want to be part of the show and have your comments featured on the air,

54:26 be sure to subscribe to our YouTube channel at talkpython.fm/youtube.

54:31 This is your host, Michael Kennedy.

54:33 Thanks so much for listening.

54:34 I really appreciate it.

54:35 Now get out there and write some Python code.

54:38 [MUSIC]
