#464: Seeing code flows and generating tests with Kolo Transcript
00:00 Do you want to look inside your Django requests?
00:02 How about all of your requests in development and see where they overlap?
00:06 If that sounds useful, you should definitely check out Kolo.
00:10 It's a pretty incredible extension for your editor.
00:13 VS Code at the moment, more editors to come most likely.
00:16 We have Wilhelm Klopp on to tell us all about it.
00:19 This is Talk Python To Me, episode 464, recorded May 9th, 2024.
00:25 Are you ready for your host, here he is!
00:28 You're listening to Michael Kennedy on Talk Python to Me.
00:31 Live from Portland, Oregon, and this segment was made with Python.
00:35 Welcome to Talk Python to Me, a weekly podcast on Python.
00:41 This is your host, Michael Kennedy.
00:43 Follow me on Mastodon, where I'm @mkennedy, and follow the podcast using @talkpython, both on fosstodon.org.
00:50 Keep up with the show and listen to over seven years of past episodes at talkpython.fm.
00:56 We've started streaming most of our episodes live on YouTube.
00:59 Subscribe to our YouTube channel over at talkpython.fm/youtube to get notified about upcoming shows and be part of that episode.
01:06 This episode is sponsored by Sentry.
01:09 Don't let those errors go unnoticed.
01:11 Use Sentry.
01:12 Get started at talkpython.fm/sentry.
01:16 And it's also brought to you by us over at Talk Python Training.
01:20 Did you know that we have over 250 hours of Python courses?
01:24 Yeah, that's right.
01:25 Check them out at talkpython.fm/courses.
01:28 Well, welcome to Talk Python to Me.
01:31 Hello.
01:32 Yeah, excited to be here, Michael.
01:33 I've been listening to Talk Python like for, I can't even remember how long,
01:36 but I'm pretty sure it was before I had a first Python job.
01:38 So, yeah, a long, long time.
01:41 That's amazing.
01:42 Well, now you're helping create it.
01:44 Yeah, exactly.
01:44 We're going to talk about Kolo, your Visual Studio Code, Django.
01:49 I don't even know what to call it.
01:50 It's pretty advanced, pretty in-depth.
01:52 Extension seems to be not quite enough.
01:54 So, what any project?
01:56 People are going to really dig that, people who Django.
01:58 And we'll see what the future plans are if we can talk you into other ones.
02:04 But for now, Django plus...
02:06 100%.
02:07 Yeah, Django plus VS Code is going to be super interesting.
02:09 When we get to that, of course, you must know the drill.
02:12 Tell us a bit about yourself.
02:13 Yeah, for sure.
02:14 So, my name is Will.
02:15 I've been using Django since...
02:18 Well, I guess I've been using Python since about 2013, I want to say.
02:22 So, a little over 10 years.
02:23 And, yeah, I just kind of fell in love with it.
02:26 Wanted to make websites.
02:28 Started using Django.
02:29 And, yeah, I guess never really looked back.
02:32 That was in school back then.
02:34 But kind of always had a love for tinkering and building side projects.
02:38 I actually studied...
02:39 I did a management degree in university.
02:41 But I really loved hanging out with all the computer science kids, all the computer science
02:44 students.
02:44 And I think a part of me really wanted to impress them.
02:47 So, I was always building side projects.
02:49 And one of them was actually a Slack app called Simple Poll.
02:53 And, yeah, we were trying to, like, organize something in Slack and really felt like the
02:57 need for polls.
02:57 So, built this little side project just, like, during university.
03:00 And then it became really, really popular.
03:02 And a few years later, it became my full-time job.
03:06 So, for the past, like...
03:08 Awesome.
03:08 Yeah.
03:08 Four years, I've been running Simple Poll, a Slack app, building out the team up to,
03:12 like, seven, eight of us.
03:13 And I had a great time doing that.
03:16 In the middle, I actually worked at GitHub for two years, working on Ruby and Rails.
03:20 And that was super fun.
03:20 Like, a great company, great people, huge code base.
03:23 Learned a lot there.
03:24 That was really fun.
03:25 But, yeah, I left after about two years to work full-time on Simple Poll.
03:29 So, Simple Poll had been running as a side project kind of in the background.
03:32 And actually, it's interesting, like, I think kind of the order of events, thinking back,
03:35 Microsoft was...
03:38 Acquired GitHub while I was there.
03:39 And then suddenly, all of my colleagues started talking about buying boats and leaving the company.
03:45 And I thought, hmm, I don't quite have boat money.
03:50 But how can I...
03:52 What's an ace I might have up my sleeve?
03:54 And it was Simple Poll, which had got, like, tons of users, but I never monetized it.
03:58 So, I set out to monetize it.
04:00 And then a year later, it was actually bringing in more revenue than my salary at GitHub.
04:04 So, I decided to quit.
04:06 So, that's kind of the Simple Poll backstories.
04:08 So, Simple Poll is a Django app, reasonably sized now, a bunch of people working on it.
04:12 And then, yeah, at some point in the journey of building Simple Poll, I kind of started playing around with Kolo.
04:18 So, Kolo also kind of, just like Simple Poll, started as a side project.
04:21 But now, not to make polls in Slack, but instead to improve my own developer experience building Simple Poll.
04:28 So, kind of built it as my own tool for making Django, working with Django more fun, give me more insight, give me access to some of the data that I felt was so close.
04:37 But that I had to just, like, manually get in there and print out.
04:41 So, the reason Kolo started out as supporting just Django and VS Code is because that's what I was using.
04:47 And it was an internal side project.
04:48 And now, actually handed over Simple Poll to a new CEO.
04:53 I'm no longer involved day to day, and I'm working full-time on Kolo.
04:58 Man, congratulations on, like, multiple levels.
05:01 That's awesome.
05:01 Thank you.
05:02 Yeah.
05:02 I want to talk to you a bit about Simple Poll for just a minute.
05:05 But before then, you pointed out, like, look, I made this side project.
05:09 And how many hours were you spending on it?
05:11 A week, maybe?
05:12 Oh, it was interesting.
05:13 So, honestly, like, so this was, like, right at the beginning, like, when it was first started.
05:16 Yeah, it's a good time.
05:18 It's a good question.
05:19 I always joke that the best thing about my management degree was that I had a lot of free time to, like, do, build side projects.
05:25 Honestly, I think it could have been, like, 20, 30, 40 hours a week.
05:28 Yeah.
05:29 Yeah.
05:29 That was, yeah.
05:30 I think, yeah, it definitely varied week to week.
05:32 And then later on?
05:33 Yeah.
05:33 And then while I was working, when I had a full-time job as a software engineer, yeah, that was a lot tougher.
05:37 It was, like, nights and weekends.
05:39 Rarely had energy during the week to work on it.
05:41 And then, honestly, like, since it was a real project with real users, I ended up spending a lot of the weekend doing, like, support.
05:48 Like, support stuff.
05:49 Yeah, absolutely.
05:50 Support.
05:50 And, like, then you charge, and then now you have finance stuff and, like, legal stuff to do.
05:55 So that wasn't super fun.
05:56 It really slows down the features and the creation of stuff, doesn't it?
06:01 Yeah.
06:01 I would say I probably spent fully 50% of my full-time job doing email support, that kind of stuff.
06:09 You know, just, like, there's tons of people taking courses and listening to podcasts.
06:12 Yeah, yeah, yeah.
06:13 And they'll have questions and thoughts.
06:15 And, you know, it's awesome.
06:16 But it also is really tricky.
06:18 So the reason I ask is I always find it fascinating.
06:21 You'll see, like, news articles.
06:24 I don't know.
06:24 They're always click-baity or whatever.
06:26 This person makes three times their job working 10 hours a week on this other thing.
06:31 Like, you make three times what you make for your job.
06:33 What are you doing at your job?
06:34 You know what I mean?
06:35 Right?
06:36 The ability to say you can make that step where you go from kind of tired at night's
06:41 extra time and squeezing on the weekends to full-time, full energy.
06:44 Yeah.
06:45 If it's already doing well, you know, on, like, a very thin life support, like, then give it
06:49 full-time and energy.
06:51 It's just, of course, it's going to be better, right?
06:53 It's so interesting.
06:53 I actually have a lot of thoughts about this.
06:55 Maybe I should write something about this at some point.
06:57 But yeah, I actually think running, like, a bootstrap side project kind of business as
07:01 you have a job can be really good because it really forces you to prioritize and build the
07:07 most important things.
07:08 Yeah.
07:08 It's kind of like having kids.
07:09 Oh, nice.
07:10 Yeah, yeah.
07:10 I need to try that someday.
07:11 Oh, you'll be real tired.
07:13 I tell you, you'll love to prioritize your time.
07:15 Yeah, I think it really forces you to prioritize.
07:17 So I actually sometimes recommend when folks ask me, like, for advice, like, should I quit my
07:21 job to go all in or not?
07:23 I actually sometimes think there's a lot of nice stability and that comes from having a
07:28 job.
07:28 Plus, it's actually really nice to have coworkers.
07:30 It's nice to have structure.
07:32 Like, you actually have to take all of that on yourself, in a way.
07:36 Like, it's, you know, if you have to make your own structure, like, if you're building your
07:39 own thing, and that can actually be a bit tricky.
07:41 Like, I really struggled with that at the beginning.
07:43 So I think there's something to be said for, yeah, for spending, like, limited time on something,
07:47 basically, and prioritizing just the most important.
07:49 That's an interesting angle.
07:50 And I don't necessarily disagree with that.
07:52 That's interesting.
07:52 So for me, it was interesting, like, in terms of like, how much, like, you know, life support
07:57 energy you put in versus like, full time energy.
08:00 It was growing decently, like while I was still at GitHub.
08:04 And I thought, okay, I'm gonna go in on this full time.
08:07 And if I go from like, 10 hours a week, or less to like 40 hours a week, that would probably
08:13 4x the growth rate as well.
08:14 That's how it works.
08:16 Right?
08:16 And like, totally didn't.
08:17 Like, it totally didn't work.
08:19 In fact, like, the month after I left, I had like my first down month, where like the revenue
08:25 decreased.
08:26 And I was like, wait a minute, what's going on here?
08:28 How that doesn't make any sense.
08:29 That's not fair.
08:30 So I think that also points that like, there, yeah, you can definitely spend more hours on
08:34 something.
08:35 And it can be like the wrong things or not doubling down on something that's really working.
08:40 So, but overall, obviously, you at some point, like just being able to like test out more
08:44 ideas is like really valuable.
08:45 And for that, like, if you only have time to do support on your project, that's really working
08:50 well, and your full time job is the rest of the how you spend your week, then yeah, feels
08:55 like you should give yourself some time to build features and maybe quit the job.
08:59 Yeah.
08:59 It's also an interesting point about the structure, because not everyone is gonna get up at eight
09:06 o'clock, sit at their desk, and they're gonna be like, you know, I kind of can just
09:09 do whatever.
09:10 And it's, it's a, it's its own discipline, its own learned skill.
09:14 100%.
09:14 Yeah.
09:14 I remember like one of the first weeks after I was full time on Simple Poll, I woke up in
09:20 the morning and said, well, the money's coming in.
09:22 I don't need to work.
09:23 I don't have a boss.
09:24 And I just sat in bed and watched YouTube videos all day.
09:26 And then I just felt miserable at the end of the day.
09:29 Like, I was like, this is supposed to feel great.
09:31 All this freedom I've wanted and dreamt about for so long,
09:34 like, why does it not feel great?
09:36 Yeah.
09:38 It also, also feels like risk and more different kinds of responsibility.
09:42 All right.
09:42 So simple poll.
09:43 The reason I said it'd be worth talking about a little bit is, you know, Slack's a popular
09:48 platform and this is based on Django, right?
09:50 So Simple Poll is a full-on Django app.
09:52 Yeah.
09:52 And it's funny.
09:53 Sometimes people joke that, I don't know if you've gone through the official Django tutorial,
09:58 but in there you actually make a polls app in the browser.
10:00 Sometimes people joke, wait, did you just turn this into like a Slack app?
10:05 And then you productize the getting started tutorial.
10:07 Yeah.
10:08 Exactly.
10:09 But yeah, like it turned out that like polls and then, yeah, getting, you know, your team
10:14 more connected in Slack and more engaged are like things people really care about.
10:18 So Simple Poll joined the Slack platform like at the perfect time and
10:24 has just been growing super well since then.
10:27 Tell people a little bit about what it takes technically to make a Slack app.
10:32 I mean, Slack is not built in Python as far as I know.
10:35 And it's probably JavaScript and Electron, mostly the people interact with, right?
10:40 So what is the deal here?
10:41 It's actually super interesting.
10:42 So the way you build like a Slack app, it's actually all backend based.
10:47 So when a user interacts in Slack, Slack sends your app, your backend, like a JSON payload
10:52 saying like this user clicked this button.
10:54 And then you can just send a JSON payload back saying, all right, now show this message.
10:59 Now show this modal.
11:00 And they have their own JSON based block kit framework where you can render different types
11:05 of content.
11:06 So you don't actually have to think about JavaScript or React or any of their stack at all.
11:10 It's basically all sending JSON payloads around and calling various parts of the Slack API.
11:15 So you can build a Slack app in your favorite language, any kind of exotic language if you wanted
11:20 to.
11:20 But yeah, I love Python.
11:23 So I decided to build it in Python and Django.
11:25 So yeah, actually building Slack apps is a really like pleasant experience.
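To make that concrete, here is a minimal sketch of the shape of a Slack interaction handler in Django, as described above. The view name, URL wiring, and the payload fields beyond a basic Block Kit section block are illustrative assumptions, not Simple Poll's actual code.

```python
import json

from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt


@csrf_exempt
def slack_interaction(request):
    """Hypothetical endpoint that Slack posts interaction payloads to."""
    payload = json.loads(request.body)           # e.g. which button a user clicked
    user_id = payload.get("user", {}).get("id")  # illustrative field access

    # Respond with Block Kit JSON; Slack renders this back to the user.
    return JsonResponse({
        "blocks": [
            {
                "type": "section",
                "text": {"type": "mrkdwn", "text": f"Thanks <@{user_id}>, vote recorded!"},
            }
        ]
    })
```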
11:29 What's the deployment backend story look like?
11:33 Is it a PaaS sort of thing?
11:35 Serverless?
11:36 VMs?
11:37 At the time, it was Heroku.
11:39 Simple Poll was running on Heroku.
11:41 And then I think a few years ago, we migrated it to AWS.
11:45 So now it's running on AWS and ECS.
11:48 Nice.
11:49 Okay.
11:49 So Docker for the win.
11:51 Right on.
11:51 How does it work at Talk Python?
11:52 I'm curious.
11:53 How, what's, where are you deployed?
11:54 It's all DigitalOcean.
11:55 And then I have one big, like eight, eight CPU server running, I think, 16 different Django
12:03 apps.
12:04 Not Django, sorry.
12:05 Docker apps.
12:05 No, sorry.
12:06 Docker apps that are all, all doing like, you know, some of them share database that's in
12:12 Docker and some of them do sort of have their own self-contained pair of like web app and
12:19 database and so on.
12:20 But it's all, it's all Docker on one big server, which is fairly new for me.
12:25 And it's, it's glorious.
12:26 It's glorious.
12:27 That's awesome.
12:27 Very cool.
12:28 Yeah.
12:29 All right.
12:30 So again, congrats on this.
12:32 Very, very neat.
12:33 Let's talk Kolo.
12:35 Let's do it.
12:35 I first came across this, I've come across it independently twice.
12:42 Once when the Django chat guys recommended that I talk to you because they're like,
12:47 Will's doing cool stuff.
12:49 You should definitely talk to him.
12:50 This Django thing for VS Code is super cool.
12:53 But also I can't remember there's somebody on your team whose social media profile I came
12:58 across and I saw this and I'm like, oh, this is, this is pretty neat.
13:01 I think we even covered it on the Python Bytes podcast.
13:04 Oh, no way.
13:05 Let's see.
13:05 Yeah, sure.
13:06 In January we did.
13:07 So that's what we talked about a little bit, but this just looks like such a neat thing.
13:12 And it's, I encourage people to, who may be interested in this, to visit kolo.app because
13:17 it's a super visual sort of experience of understanding your code, right?
13:21 Would you agree?
13:22 Yeah.
13:22 I mean, a hundred percent.
13:23 Yeah.
13:23 A funny thought.
13:24 I hadn't really thought that a podcast is going to be a hard way to describe the visual
13:29 beauty and magic that Kolo can bring to your code.
13:32 But yeah, a hundred percent.
13:33 Yeah.
13:33 So Kolo like very much started as like the idea of, Hey, like I should be able to see
13:38 like how my code actually flows.
13:40 I think like all of us, as we build software, as we write our Python code, we have this kind
13:45 of like mental model of how all the different functions like fit together.
13:49 How like a bit of data ends up from like the beginning, like to the end, like it passes through
13:54 maybe a bunch of functions.
13:55 It passes through a bunch of like classes, a bunch of loops.
13:58 All the state gets like modified and we have this kind of like mental picture of all of
14:03 that in our head.
14:04 And the kind of very beginning of Kolo, the question I asked myself was like, is there a
14:09 way we can just like visualize that?
14:11 Is there a way we can just actually print that out onto a screen?
14:15 So if you go to kolo.app, it kind of looks like this funny sun chart with like lots of kind
14:20 of a sunny tree chart with lots of nodes going from the center and like going off into the
14:26 distance, which I think is like, yeah, similar to like what folks kind of might already have
14:30 in their head about like how the code flows.
14:32 Maybe another way to describe it is imagine like you enable a debugger at the beginning
14:40 of every function and at the end of every function in your code and you print out like what was
14:46 the function name?
14:47 What were the input arguments?
14:48 What was the return value?
14:49 And then you arrange all of that in a graph that then shows which function called which other
14:54 function.
14:55 It almost looks like what you get out of profilers.
14:57 Right.
14:57 You know, where you say like, okay, this function took 20%, but if you expand it out, I'll say,
15:02 well, really spent 5% there, 10% there, and then a bunch of it.
15:06 And you can kind of traverse that.
15:08 100%.
15:08 Yeah.
15:09 I'm guessing you're not really interested in how long it took, although maybe you can probably
15:12 get that out of it.
15:13 It's the important is more what is the dependency?
15:16 What are the variables being passed and like understanding individual behavior, right?
15:21 Or maybe.
15:22 Yeah.
15:22 What do you think?
15:23 Yeah, 100%.
15:23 I think like, it's interesting because Kolo actually uses under the hood, like a bunch
15:26 of the Python profiling APIs.
15:28 And I think people often think of Kolo as a profiler.
15:31 We do actually have a traditional profiling based chart, which puts the timing at the center.
15:36 But you're absolutely right that the focus of our like main chart, the one that we're both
15:41 looking at that has like this idea of the function overview and like which function calls which.
15:46 The idea there is absolutely the hierarchy, and like giving yourself that same mental
15:52 model that someone who's worked on a code base for three months has in their head, immediately,
15:56 just by looking at it.
16:00 This portion of Talk Python to me is brought to you by Sentry.
16:03 Code breaks.
16:04 It's a fact of life.
16:05 With Sentry, you can fix it faster.
16:07 As I've told you all before, we use Sentry on many of our apps and APIs here at Talk Python.
16:13 I recently used Sentry to help me track down one of the weirdest bugs I've run into in a long
16:18 time.
16:19 Here's what happened.
16:20 When signing up for our mailing list, it would crash under a non-common execution path, like
16:26 situations where someone was already subscribed or entered an invalid email address or something
16:31 like this.
16:32 The bizarre part was that our logging of that unusual condition itself was crashing.
16:38 How is it possible for our log to crash?
16:41 It's basically a glorified print statement.
16:43 Well, Sentry to the rescue.
16:45 I'm looking in the crash report right now, and I see way more information than you'd expect
16:50 to find in any log statement.
16:51 And because it's production, debuggers are out of the question.
16:54 I see the traceback, of course, but also the browser version, client OS, server OS, server
17:01 OS version, whether it's production or Q&A, the email and name of the person signing up.
17:06 That's the person who actually experienced the crash.
17:08 Dictionaries of data on the call stack and so much more.
17:11 What was the problem?
17:12 I initialized the logger with the string 'info' for the level rather than the enumeration value .INFO,
17:20 which was an integer-based enum.
17:22 So the logging statement would crash saying that I could not use less than or equal to between
17:27 strings and ints.
17:28 Crazy town.
17:30 But with Sentry, I captured it, fixed it.
17:33 And I even helped the user who experienced that crash.
17:36 Don't fly blind.
17:38 Fix code faster with Sentry.
17:39 Create your Sentry account now at talkpython.fm/sentry.
17:43 And if you sign up with the code talkpython, all capital, no spaces, it's good for two free
17:50 months of Sentry's business plan, which will give you up to 20 times as many monthly events
17:54 as well as other features.
17:57 Usually in the way these charts turn out, you can notice that there's like points of interest.
18:02 Like there's one function that has a lot of children.
18:04 So that clearly is coordinating like a bunch of the work where you can see kind of similarities
18:09 in the structure of some of the subtrees.
18:12 So you know, oh, okay, maybe that's like a loop and it's the same thing happening a couple
18:15 times.
18:16 So you can essentially, I get this overview and then it's fully interactive and you can
18:21 dive in to like what exactly is happening.
18:23 Yeah.
18:24 Is it interactive?
18:25 So I can like click on these pieces and it'll pull them up.
18:28 We actually, and this will be live by the time this podcast goes live,
18:32 we actually have a playground in the browser.
18:34 This is also super fun.
18:36 We can talk about this.
18:36 Let me drop you a link real quick.
18:38 This will be at play.kolo.app.
18:40 So with this, yeah, this is super fun because this is fully Python just running in the browser
18:45 using Pyodide and like WebAssembly.
18:47 Nice.
18:48 Okay.
18:48 But yeah, so this is the fully visual version where you can, yeah, it defaults to loading
18:53 like a simple Fibonacci algorithm.
18:55 Yeah.
18:55 And you can see like what the Kolo visualization of Fibonacci looks like.
19:00 And you can actually edit the code and see how it changes with your edits and all of that.
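For reference, the kind of tiny program the playground starts with is naive recursive Fibonacci; every recursive call becomes a node in the call-tree visualization, which is why even a few lines of code fan out into a large chart. This is a generic version, not necessarily the playground's exact default code.

```python
def fib(n):
    # Naive recursion: each call spawns two more, producing a branching call tree.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)


fib(6)  # tracing this single call yields dozens of nested function calls
```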
19:03 We have a couple other examples.
19:05 Wow.
19:05 The pandas one and the whack-a-mole one are pretty intense.
19:08 They're pretty wild pictures.
19:09 They look like sort of Japanese fans or whatever.
19:12 You know, those little paper ones.
19:13 We once had a competition at a conference to see who could make like the most fun looking
19:18 algorithm and visualize it with Kolo.
19:20 But yeah, like it's fun.
19:22 Like visualizing code is really great.
19:23 That's awesome.
19:24 So this is super cool.
19:26 It's just all from scratch.
19:28 It's besides Pyodide here.
19:31 Not like VS Code in the browser or anything like that.
19:34 I think it's using Monaco in this case or CodeMirror.
19:37 But otherwise, this is all is Pyodide and a little bit of React to like pull kind of the
19:41 data together.
19:42 Uh-huh.
19:43 But yeah, we're really, yeah.
19:44 Wow.
19:44 It's otherwise homemade.
19:46 This is kind of what Kolo has been for like the past two years or so: a side project for Simple Poll, to help like just visualize and understand code better.
19:57 The Simple Poll code base, to be honest, has grown so large that like there's parts of it that I wrote like five years ago that I don't understand anymore.
20:04 And it's like annoying to get back to that and having to spend like a day to re-familiarize myself with everything.
20:10 It's a lot nicer to just like to actually kind of explain like end to end how it works.
20:15 You install like in a Django project, you install Kolo as a middleware.
20:19 And then as you just browse and use your Django app and make requests, traces get saved.
20:27 So Kolo records these traces.
20:28 They actually get saved in a local SQLite database.
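For anyone following along, the install step described here is roughly: pip install the package and add the middleware to your Django settings. The dotted path below is from memory of Kolo's documentation, so double-check it against the current docs before relying on it.

```python
# settings.py (development): a sketch of enabling Kolo's middleware.
MIDDLEWARE = [
    "kolo.middleware.KoloMiddleware",  # capture a trace of each request locally
    "django.middleware.security.SecurityMiddleware",
    "django.contrib.sessions.middleware.SessionMiddleware",
    # ... the rest of your existing middleware ...
]
```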
20:32 Then you can view the traces, which includes the visualization, but also like lots of other data.
20:37 Like you can actually see in the version you have there.
20:39 Like we show every single function call, like the inputs and outputs for each function call.
20:43 So that main idea of Kolo is to like really show you everything that happened in your code.
20:48 So in a Django app, that would be like the request, the response, like all the headers, every single function call, input and output, outbound request, SQL query.
20:57 So really the goal is to show you everything.
21:00 You can view these stored traces either through VS Code.
21:04 And this also will be live by the time this episode goes live, through like a web middleware version, which is a bit similar to Django debug toolbar.
21:12 Not sure if you've played around much with Django debug toolbar.
21:15 Yeah, a little bit.
21:15 Yeah.
21:16 And those things are actually pretty impressive.
21:17 Right.
21:18 I played out with that one in the pyramid one.
21:20 And yeah, you can see more than I think you would reasonably expect from just a little thing on the side of your web app.
21:27 Yeah, yeah, exactly.
21:27 And that's very much our goal to like very kind of deep insight.
21:31 In our minds, this is almost like a bit like old news.
21:34 Like we've been using this for like a few years, basically.
21:36 And then at some point, like last year, we started playing around with this idea of like,
21:41 OK, so we have this trace that has information about like pretty much everything that happened in like a request.
21:48 Is there any way we could use that to solve this like reasonably large pain point for us, which is like writing tests?
21:54 I'm actually curious.
21:55 Do you enjoy writing tests?
21:56 I'll tell you what I used to actually.
21:58 I used to really enjoy writing tests.
22:01 I used to enjoy thinking a lot about it.
22:03 And then as the projects would get bigger, I'm like, you know, this is these tests don't really cover what I need them to cover anymore.
22:10 And they're kind of dragging it down.
22:12 And then, you know, the thing that really kind of knocked it out for me is I'd have like teammates and they wouldn't care about the tests at all.
22:18 So they would break the tests or just write a bunch of code without tests.
22:23 And I felt kind of like like a parent cleaning up after kids.
22:27 You're like, why is it so?
22:28 Can we just pick up?
22:29 Like, why are there dishes here?
22:30 You know, just going around.
22:32 I'm like, this is not what I want to do.
22:34 Like, I want to just write software.
22:35 And like, I understand the value of tests, of course.
22:38 A hundred percent.
22:39 At the same time, I feel like maybe higher order integration tests often, for me at least, serve more value.
22:47 Because it's like, I could write 20 little unit tests or I could write two integration tests.
22:52 And it's probably going to work.
22:53 I'm actually completely with you on that.
22:54 Okay.
22:55 Right on.
22:55 The bang for the buck of integration tests are like great.
22:59 Like really, really useful.
23:00 You can almost think of tests as having like two purposes.
23:04 One being like, well, actually, I think this would be too simple in explanation.
23:08 Let me not make grand claims about all the uses of tests.
23:11 I think the use of it that most people are after is this idea of like, what I've built isn't going to break by accident.
23:20 Yeah.
23:20 Like you want confidence that any future change you make doesn't impact a bunch of unrelated stuff that it's not supposed to impact.
23:27 I think that's what most people are after with tests.
23:32 And I think for that specific desired result, like integration tests are the way to go.
23:37 And there's some cool writing about this from, I wrote a little blog post about Kolo's test generation abilities.
23:43 And in there, I linked to a post from Kent C. Dodds from the JavaScript community who has a great post about, I think it's called write tests, not too many, mostly integration.
23:53 Kind of after this idea of like, Nice.
23:55 Yeah, yeah, yeah.
23:56 Eat not too much, mostly vegetables.
23:58 I think that's the, yeah, exactly.
24:00 Exactly.
24:00 Yeah.
24:01 I'm a big fan of that.
24:02 And actually it's interesting.
24:03 I've speaking to a bunch of folks over the past like year about tests.
24:06 A lot of engineers think about writing tests as vegetables.
24:10 And obviously some people love vegetables and some of us love writing tests, but it seems like for a lot of folks, it's kind of like a obviously necessary part of creating great software, but it's maybe not like the most fun part of our job.
24:24 Or you pick up some project, you're a consultant, or you're taking over some open source project.
24:30 You're like, this has no tests.
24:31 Right.
24:31 It's kind of like running a linter and it says there's a thousand errors.
24:35 You're like, well, we're not going to do that.
24:36 Yeah.
24:37 We're just not going to run the linter against it because it's just too messed up at this point.
24:42 Right.
24:42 It's interesting.
24:42 You mentioned the picking up like a project with no tests, because I think within the next three months, we're not quite there yet.
24:48 But I think in the next three months with Kolo's test generation abilities, we'll have a thing where, yeah, all we need is a Python code base to get started.
24:55 And then we can bring that to like a really respectable level of code coverage just by using Kolo.
25:01 Okay.
25:01 How?
25:03 I was kind of describing a second ago how like we, Simple Poll has tons of integration tests.
25:08 Simple Poll actually is about 80,000 lines of application code, not including migrations and like config files.
25:15 And then it's about 100,000 lines of tests.
25:17 And most of that is integration tests.
25:20 So Simple Poll is very, very well tested.
25:22 Lots of, you know, really mostly integration tests.
25:24 But it is always a bit of a chore to like write them.
25:27 So we started thinking about like, hmm, this like Kolo tracing we're doing, can that help us with making tests somehow?
25:34 And then we started experimenting with it.
25:36 And like to our surprise, it's actually, yeah, I'm still sometimes surprised that it actually works.
25:40 But basically the idea is that if you have a trace that has, that captures everything in the request, you can kind of invert it to build a integration test.
25:53 So let me give an example of what that means.
25:55 The biggest challenge we found with creating integration tests is actually the test data setup.
26:02 So getting your application into the right shape before you can send a request to it or before you can call a certain function.
26:09 That's like kind of the hardest part.
26:11 Writing the asserts is almost like easy or even like fun.
26:15 Right.
26:15 There's the three A's of unit testing.
26:17 Range, assert, and act.
26:19 Wait, arrange, act, and assert.
26:21 Exactly.
26:21 The first and the third one that you kind of have data on, right?
26:25 Exactly.
26:25 Yeah.
26:25 So we're like, wait a second.
26:28 We actually can like kind of extract this, like the act.
26:32 So like the setting, sorry, the arrange, setting up the data, the act, like actually making the HTTP request.
26:38 And then the assert, like to ensure the status change or that the request go to 200 or something.
26:43 We actually have the data for this.
26:44 It's reasonably straightforward.
26:46 Like if you capture in, you know, just your like normal, like imagine you have a local to-do app and you browse like a to-do kind of demo, simple to-do app.
26:54 And you browse to the homepage and the homepage maybe lists the to-dos.
26:57 And if you've got Kolo enabled, then Kolo will have captured the request, right?
27:01 So like the request went to the homepage and it returned a 200.
27:05 So that's already like two things we can now turn into code in our integration test.
27:09 So first step being, well, I guess this is the act and the assert in the sense that the assert is the 200.
27:15 And then the act is firing off a request to the homepage.
27:19 Now the tricky bit, and this is where it gets the most fun, is the arrange.
27:23 So if we just put those two things into our test, in our to-imaginary test, there wouldn't have been any to-dos there, right?
27:30 So it's actually not an interesting test yet.
27:32 But in your local version where the trace was recorded, you actually had maybe like three to-dos already in your database.
27:39 Does that make sense so far?
27:40 Yeah, yeah, absolutely.
27:41 On the homepage, like your to-do app might make a SQL query to like select all the to-dos or all the to-dos for the currently logged in user.
27:50 And then Kolo would store that SQL query, would store that select, and would also store actually what data the database returned.
27:57 This is actually something where, yeah, Kolo goes beyond a lot of the existing kind of like debugging tooling that might exist.
28:03 Like actually showing exactly what data the database returned in a given SQL query.
28:07 But imagine we get like a single to-do returned, right?
28:11 We now know that to replicate this like trace in our test, we need to start by seeding that to-do into the database.
28:20 That's where like the trace inversion comes in.
28:22 If like a request starts with a select of like the to-do table, then the first thing that needs to happen in the integration test is actually a like creating like an insert into the database for that to-do.
28:35 And now when you fire off the request to the homepage, it actually goes through your real code path where like an actual to-do gets loaded and gets printed out onto the page.
28:44 So that's like the most basic kind of example of like how can you turn like a locally captured trace of a request that like made a SQL query and return 200 into like an integration test.
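As a rough illustration of the kind of test that inversion produces, here is a hand-written sketch for the to-do homepage example. The Todo model, URL name, and assertions are hypothetical stand-ins; Kolo's actual generated output will differ in its details.

```python
from django.test import TestCase
from django.urls import reverse

from todos.models import Todo  # hypothetical app and model


class HomepageTest(TestCase):
    def test_homepage_lists_todos(self):
        # Arrange: the captured trace showed a SELECT returning one to-do,
        # so seed an equivalent row before replaying the request.
        todo = Todo.objects.create(title="Buy milk")

        # Act: replay the captured request through the full Django stack.
        response = self.client.get(reverse("home"))

        # Assert: the trace recorded a 200 response containing the to-do.
        self.assertEqual(response.status_code, 200)
        self.assertContains(response, todo.title)
```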
28:56 Yeah, that's awesome.
28:57 One of the things that makes me want to write fewer unit tests or not write a unit test in a certain case is that I can test, using mocking, assuming my, let's say, SQLAlchemy or Beanie or whatever Django ORM model theoretically matches the database.
29:15 I can do some stuff, set some values and check that that's all good.
29:19 But in practice, if the shape, if the schema in the database doesn't match the shape of my object, the system freaks out and crashes and says, well, that's not going to work, right?
29:29 There's no way.
29:30 And so it doesn't matter how good I mock it out.
29:32 It has to go kind of end to end before I feel very good about it.
29:36 Oh, yeah.
29:37 Okay.
29:37 It's going to really, really work, right?
29:39 Exactly.
29:39 That's an interesting story.
29:40 Like you're saying to like, let's actually see if we can just create the data, but like let it run all the way through, right?
29:46 I'm totally with you.
29:47 And I think I've often seen like unit tests pass and say, I mean, there's like lots of memes about this, right?
29:52 How like unit tests say everything is good, but the server is down.
29:55 Like, how is that possible?
29:56 I think in Django world, it's reasonably common to write integration tests like this, as in like the actual database gets hit.
30:03 You have this idea of like the Django test client, which sends like a, you know, real in air quotes, HTTP request through the entire Django stack, as opposed to doing the more unit test approach.
30:15 So it hits the routes.
30:16 It hits like all of the, that sort of stuff all the way.
30:19 Yeah.
30:20 And the template.
30:21 Yeah.
30:21 Yeah.
30:21 And then at the end, you can assert based on like the content of the response, or you can check, like, imagine if we go back to the to-do example, if we're testing like the add to-do endpoint or form submission, then you could make a database query at the end.
30:36 And Kolo actually does this as well, because like, again, we know like that you inserted a to-do in your request.
30:42 So we can actually make an assert.
30:44 This is a different example of the trace inversion.
30:47 If there's an insert in your request that you've captured, then we know at the end of the integration test, we want to assert that this row now exists in the database.
30:56 So you can assert at the very end to say, does this row actually exist in the database now?
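And sketching that other direction of the inversion, again with hypothetical names: the captured request contained an INSERT, so the generated test ends by asserting the row exists.

```python
from django.test import TestCase
from django.urls import reverse

from todos.models import Todo  # hypothetical app and model


class AddTodoTest(TestCase):
    def test_add_todo_creates_row(self):
        # Act: replay the captured form submission.
        response = self.client.post(reverse("add-todo"), {"title": "Water plants"})

        # Assert: the trace showed an INSERT, so the row should now exist,
        # along with whatever response the original request produced.
        self.assertEqual(response.status_code, 302)  # e.g. redirect after POST; adjust to the trace
        self.assertTrue(Todo.objects.filter(title="Water plants").exists())
```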
31:01 So it's a very nice kind of reasonably end to end, but still integration test.
31:05 It's not like a brittle click around in the browser and kind of hope for the best kind of thing.
31:10 It's like, as we said at the beginning, I think like integration tests just get you great bang for your buck.
31:14 They really do.
31:16 It's like the 80-20 rule of unit testing for sure.
31:20 Yeah.
31:20 Talk Python to me is partially supported by our training courses.
31:24 If you're a regular listener of the podcast, you surely heard about Talk Python's online courses.
31:29 But have you had a chance to try them out?
31:31 No matter the level you're looking for, we have a course for you.
31:34 Our Python for Absolute Beginners is like an introduction to Python, plus that first year computer science course that you never took.
31:41 Our data-driven web app courses build a full PyPI.org clone along with you right on the screen.
31:48 And we even have a few courses to dip your toe in with.
31:51 See what we have to offer at training.talkpython.fm or just click the link in your podcast player.
31:56 So is this all algorithmic?
31:59 Yep.
31:59 Great question.
32:00 Is it LLMs?
32:01 Like how much VC funding are you looking for?
32:04 Like, you know, like if you've got LLMs in there, like they're coming out of the woodwork.
32:07 No, I'm just kidding.
32:08 No, how does this happen?
32:10 It's actually all algorithmic and rule-based at the moment.
32:14 So this idea of a select becomes like an insert and an insert becomes like a select assert.
32:21 We were surprised how far we could get with just rules.
32:24 The benefit we have is that we kind of have this like full-sized SimplePol Django code base to play around with.
32:30 And yeah, like generating integration tests in SimplePol just like fully works.
32:36 There's a bunch of tweaks we like had to make to as soon as I guess you work in kind of like outside of a demo example.
32:42 You want like time mocking and HTTP mocking and you want to use your like factory boy factories.
32:49 And maybe you have a custom unit test like base class and all of this.
32:53 But yeah, it like it actually works now.
32:55 I gave a talk at DjangoCon Europe last year.
32:58 It's kind of like a bit of a wow moment in the audience where, yeah, you just click generate test and it generates you like a hundred line integration test and the test actually passes.
33:07 So that was like people started just started clapping, which was a great feeling.
33:11 I'm still a bit surprised that it works on it.
33:14 But yeah, no LLM at all.
33:15 I do think like LLMs could probably make these tests like even better.
33:19 Or you know how I was saying a second ago, like in three months, we could go take a code base from like zero test coverage to maybe like 60%, 80%.
33:28 I imagine if we made use of LLMs, that would help make that happen.
33:33 Yeah.
33:33 Yeah.
33:34 You could talk to it about like, well, these things aren't covered.
33:37 Right.
33:38 What can we do to cover them?
33:39 Yeah.
33:39 I don't know if you maybe could do fully, fully automated.
33:43 Just push the button and let it generate it.
33:45 But, you know, it could also be like a conversational, not a conversation, sort of a guided.
33:49 Let's get the rest of the test.
33:51 You know, like, okay, we're down to 80.
33:53 We've got 80%, but there's the last bit are a little tricky.
33:56 Like what ones are missing?
33:57 Right.
33:57 So how do you think we could do this?
33:59 Is that, no, no, you need to, that's not really the kind of data we're going to pass.
34:02 You know, I don't know.
34:03 It seems something like that.
34:04 Right.
34:04 I really like that.
34:05 I had not thought about like a conversation as a way to generate tests, but that makes so much sense.
34:10 Right.
34:10 It kind of bringing the developer along with them where it's gotten too hard or something, you know?
34:15 Yeah.
34:16 There's something cool about just clicking a button and see how much code coverage you could get to.
34:19 But chatting to it.
34:21 I think also, honestly, like so far, like our test generation logic is a bit of a black box.
34:27 It just kind of like works.
34:29 Yeah.
34:29 Until the point where like it doesn't.
34:31 So we're actually kind of in the process of like shining a bit more of a light into like,
34:35 like essentially the like internal data model that Kolo keeps track of to know what the database state should be like in this arrange part of the integration test.
34:46 And yeah, we're actually like in the process of like, yeah, talking to a bunch of users who are already using it and also finding like companies who want to increase their test coverage or who have problems with their testing and want to improve that.
34:59 And kind of working closely with them to make that happen.
35:03 That's kind of a huge focus for us as we figure out, like, how do we want to monetize Kolo?
35:07 Like so far, Kolo has just been kind of supported by Simple Poll as a side project, but we're kind of making it real, making it its own business.
35:14 So and we think the test generation is going to play a big part in that.
35:18 Right.
35:18 Like that could be a certainly a premium team feature sort of thing.
35:21 Exactly.
35:22 Yeah.
35:22 Yeah.
35:22 Yeah.
35:23 Enterprise.
35:24 Enterprise version comes with auto testing.
35:26 Yeah, exactly.
35:28 Something like that.
35:28 Yeah.
35:29 Yeah.
35:29 If there's anyone listening and like they're keen to increase their code coverage, please email me.
35:33 Maybe we can leave my email in the notes or something like that.
35:35 Yeah, I'll put your contact info in the show notes for sure.
35:37 It's actually really nice.
35:38 It's just w@kolo.app.
35:40 Oh, very nice.
35:41 So yeah, if anyone's listening and wants to kind of like increase their code coverage or has a lot of code bases that have zero coverage that would benefit from getting to like some level of coverage, we'd love to help you and talk to you.
35:52 Even if the solution doesn't like involve using Kolo, just really, really keen to talk to anyone about like Python tests and what can be done there.
35:59 So yeah, please hit me up.
36:01 Awesome.
36:01 Yeah, I'll definitely put some details in the show notes for that.
36:04 I have some questions as well.
36:05 Please.
36:05 Yes.
36:06 Right here.
36:06 I'm looking at the web page and the angle bracket title is Kolo for Django.
36:12 But in the playground thing you sent me, it was on plain Python code.
36:17 It was on algorithms.
36:19 It was on pandas, which I thought was pretty interesting how much you could see inside pandas.
36:23 Makes me wonder, you know, if you look at the web frameworks, there's two or three more that are pretty popular out there and they all support middleware.
36:30 Yeah, 100%.
36:30 So Colo kind of started as like this like side project for our Django app.
36:35 And I think that that's why we kind of went there first.
36:37 It's kind of the audience we know best.
36:40 You can do dogfood as well.
36:41 Yeah.
36:41 Exactly.
36:42 Dogfooded.
36:43 Lily, who's an engineer on the team, and who's been building a lot of the Python side of Kolo, is like a core contributor to Django.
36:53 So Django is like really where we're home.
36:55 And to be honest, I think when building a new product, it's kind of nice to keep the audience somewhat small initially.
37:01 Keep like building for very specific needs as opposed to going like very wide, very early.
37:06 That was kind of very much, very much the intention.
37:08 But there's no reason why Kolo can't support Flask, FastAPI, the scientific Python stack.
37:15 As you can see in the playground, it does totally work on plain Python.
37:19 It's really just a matter of honestly, like FastAPI support would probably be like a 40 line config file in our code.
37:28 And there's actually, yeah, we're thinking of ways to make that actually a bit more pluggable as well.
37:34 There's only like so many things we can reasonably support well ourselves.
37:39 I was going to say, if somebody else out there has an open source project, they want it to have good support for this, right?
37:44 Like, hey, I run HTTPX or I run Litestar or whatever.
37:49 And I want mine to look good here too, right?
37:50 Totally.
37:51 So the thing you can do already today is there's a little bit of config you can pass in.
37:55 And actually, if you look back on the pandas example, you'll see this.
37:58 By default, Kolo actually doesn't show you library code if you use it in your own code base.
38:03 But you can tell it, show me everything that happened, like literally everything.
38:07 And then it will do that for you.
38:09 So in this example you're looking at, or if anyone's looking at the playground, if you look at the pandas example, it'll say like include everything in pandas.
38:16 And that'll give you like a lot more context.
38:19 The thinking there is that most people don't really need, like the issues you're going to be looking at will be in your own code or in your own company's code base.
38:27 You don't really need to look at the abstractions, but you totally can.
38:30 But yeah, to answer the question, like we have this like internal version of a plugin system where, yeah, like anyone could add FastAPI support or like a great insight into PyTorch or what have you.
38:42 The way it all works technically really is it's totally built on top of this Python API called setprofile.
38:48 I'm not sure.
38:49 Have you used, have you come across this before?
38:51 It's a bit similar to settrace, actually.
38:52 Yeah, I think so.
38:53 I think I've done it for some C profile things before.
38:58 I'm not totally sure.
38:59 Yeah.
38:59 Yeah.
39:00 It's a really neat API to be honest, because Python calls back to your, like the callback that you register on every function, enter and exit.
39:08 And then Kolo essentially looks at all of these function enters and exits and decides which ones are interesting.
39:13 So the matter of like supporting, say, FastAPI is basically just telling Kolo, these are the FastAPI functions that are interesting.
39:21 This is the FastAPI function for, for like an HTTP request that was served.
39:25 This is the HTTP response or similarly for SQLAlchemy.
39:28 This is the function where the query was actually executed and sent to the database.
39:33 This is the variable, which has the query result.
39:35 Like there's a little bit more to it.
39:37 And I'm definitely like, yeah, generalizing, but it's kind of like in principle, it's as simple as that.
39:43 It's like telling Kolo, here's the bits of code in a given library that are interesting.
39:46 Now just kind of like display that and make that available for the test generation.
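A tiny standalone illustration of that sys.setprofile mechanism (this is not Kolo's code, just the underlying API): Python invokes the registered callback on every function entry and exit, so a tool can record names, arguments, and return values and then decide which frames are interesting.

```python
import sys


def profiler(frame, event, arg):
    # Called by the interpreter for each Python function call and return.
    if event == "call":
        print(f"-> {frame.f_code.co_name}({frame.f_locals})")  # arguments at entry
    elif event == "return":
        print(f"<- {frame.f_code.co_name} returned {arg!r}")   # return value at exit


def add(a, b):
    return a + b


sys.setprofile(profiler)
add(2, 3)
sys.setprofile(None)  # stop profiling
```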
39:50 Excellent.
39:51 Yeah, I totally agree with you that getting focused, it probably gets you some more full attention from the Django audience.
39:58 And the Django audience is quite a large and influential group in the Python web space.
40:03 So that makes a ton of sense, especially since you're using it.
40:05 By the way, it was Lily's Mastodon profile, I believe, that I ran across that I first discovered Kolo from.
40:12 So of all the places, yeah.
40:13 Or a post from her or something like that.
40:16 That's awesome.
40:17 Cool.
40:17 All right.
40:17 So let's talk about a couple other things here.
40:20 Let's do it.
40:20 For people who haven't seen it yet, like you get quite a bit of information.
40:24 So if you see like the get request, you actually see the JSON response that was returned out of that request.
40:31 And it integrates kind of into your editor directly, right?
40:35 If you've seen CodeLens before, it's kind of like CodeLens, right?
40:38 Yeah, this is another thing which I think is pretty novel with Kolo.
40:41 Like I think it's reasonably common for existing debugging tools to show you like, oh yeah, this is the headers for the request.
40:48 Or this is like the response status code.
40:50 But especially working with the Slack API in Simple Poll, you're constantly looking at payloads and what were the values for things and what are you returning.
40:59 In production, you don't directly get to even make those or receive those requests, right?
41:03 There's some like system in Slack who was like chatting with your thing.
41:07 You're like, well, what is happening here, right?
41:09 Not that you would actually run this in there, but you know.
41:12 I mean, it's funny you mentioned this because there is one experiment we want to run of kind of actually enabling these extremely deep and detailed Kolo traces in production.
41:21 We haven't explored this too much yet.
41:23 And I think we're going to focus a little bit more on the test generation.
41:26 But you could imagine like a user who's using, who's on the Talk Python site and they've got some incredibly niche error that no one else is like encountering.
41:38 And you've tried to reproduce it, but you can't reproduce it.
41:41 Maybe there's a little bit of information in like your logging system, but it's just not enough.
41:45 And you keep adding more logging and you keep adding more logging and it's just not helping.
41:49 Like imagine a world where you can say just for that user, like enable Kolo and enable like these really deep traces.
41:55 And then you can see whenever the user next interacts, like the value for every single variable, for every single code path that executed for that user.
42:05 That's just like, yeah.
42:06 I think one of our users described it as like a debugger on steroids.
42:09 Yeah, yeah.
42:10 It's pretty interesting.
42:10 Sounds a little bit like what you get with Sentry and some of those things, but maybe also a little bit different.
42:18 So, you know, you could do something like, here's a dear user with problem.
42:23 Here's a URL.
42:24 If you click this, it'll set a cookie in your browser and then all subsequent behavior, it just, it's on it.
42:30 You know what I mean?
42:31 It's like recording it.
42:32 Yeah.
42:32 That'd be pretty interesting.
42:33 Yeah.
42:34 I think it makes sense in the case, like if a user, it could even be an automated support thing, right?
42:39 Like if a couple of sites have this where you can like do like a debug dump before you submit your support ticket.
42:45 This is almost like that.
42:47 And then as an engineer who's tasked with digging into that user's bug, you don't have to start with like piecing together.
42:54 What was this variable at this time when they made that request three days ago?
42:58 You like, you can just see it.
43:00 If a user ever encounters an exception on your site, you just set the cookie.
43:04 Right.
43:04 Everything else they do is now just recorded until you turn it off on them.
43:07 Oh my gosh.
43:08 You're giving me so many good ideas.
43:09 That'd be fun, right?
43:10 Start writing this stuff down.
43:11 Hey, let's record it.
43:13 It'll be fine.
43:13 That's awesome.
43:14 Yeah.
43:14 There's a bunch of stuff that's interesting.
43:16 People can check it on the site.
43:18 It's all good.
43:19 However, we talked a little bit about the production thing.
43:22 Like another thing you could do for production, this requires both a decent amount of traffic and maybe you could actually pull this off on just a single server.
43:30 But you could do like, let's just run this for 1% of the traffic so that you don't kill the system.
43:37 But you get, you know, if that's why you have enough traffic is like statistically significant sampling of what people do without actually recording a million requests a day or something insane.
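Purely as a sketch of that sampling idea (not an existing Kolo feature): a thin wrapper middleware could flag a random one percent of requests for deep tracing and pass the rest through untouched.

```python
import random


class SampledTracingMiddleware:
    """Illustrative Django-style middleware that marks ~1% of requests for tracing."""

    def __init__(self, get_response, sample_rate=0.01):
        self.get_response = get_response
        self.sample_rate = sample_rate

    def __call__(self, request):
        # Downstream tracing code would check this flag and only record when it is set.
        request.deep_trace = random.random() < self.sample_rate
        return self.get_response(request)
```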
43:48 A hundred percent.
43:49 I think there's really something there.
43:50 Or like I could go on about this whole idea of like runtime data and like improving software understanding for days because I just think like it's really this like missing layer, right?
43:59 Like all of us constantly imagine like what is like we play computer looking at our code, imagining what the values can be.
44:05 But like, yeah, say you're looking at some complex function in production and you want to understand how it works.
44:10 Like how useful would it be if you could see like the 10, the last 10 times it was called, like what were the values going into it and what were the values coming out of it?
44:18 Like that would be, I just think like, why do we not have this already?
44:22 Like why does your editor not show you for every single function in the code base?
44:26 Give examples of like how it's actually used like in production.
44:30 Yeah.
44:30 And then use those to generate unit tests.
44:32 And if there's an error, use that to generate the edge case, like the negative case, not the positive case, unit test, right?
44:38 There you go.
44:38 Exactly.
44:38 It's all like kind of hanging together.
44:40 Like, yeah.
44:41 Yeah.
44:42 Once you have the data, you have interesting options.
44:44 Yeah.
44:44 Business model.
44:45 This is not, this, I maybe should have started sooner with this, but it's not entirely open source.
44:50 It may be a little, little bits and pieces of it, but in general, it's not open source.
44:54 That's correct.
44:55 Yeah.
44:55 Yeah.
44:56 Not that I'm putting that out there as a negative, right?
44:57 This looks like a super powerful tool that people can use to write code.
45:00 And that's fine.
45:01 Yeah.
45:02 I think the open source question is super interesting.
45:03 Like it's always been like something we've thought about or, or considered.
45:08 I think there is, yeah, with, with developer tools, I think business models are always super interesting and we want to make sure that we can have a business model for Colo and like run it as like a sustainable thing, as opposed to it just being like a simple pole side project kind of indefinitely.
45:22 Be great if Colo could like support itself and yeah, have a business model.
45:26 I think that's how it can like really fulfill its potential in a way, but that's not to say that like Colo won't ever be open source.
45:32 Like I think there's a lot to be said for open sourcing it.
45:35 I think especially like the, the capturing of the traces is maybe something like I could see us open sourcing.
45:42 I think the open source community is fantastic.
45:44 I do also think it's not like a thing you get for free, right?
45:48 Like as soon as you say, Hey, we're open source, you open yourself up to contributions, right?
45:54 And to like the community actually getting involved and that's great, but it also takes time.
45:59 And I think like, that's a path I would like to go down when we're a little bit clearer on like what Colo actually is and like where it's valuable.
46:08 If that makes sense.
46:09 Yeah, sure.
46:10 If it turns out that no one cares about like what, how, like how to visualize code, then like, that's a great, like learning for us to have made, but I'd rather get there without like a lot of work in the middle that we could have kind of avoided, if that makes sense.
46:25 So for sure.
46:25 It feels like once we have a better sense of the shape of Colo and what the business model actually looks like, then we can be a bit more.
46:34 Yeah.
46:34 We can invest into open source a little bit more. But to be honest, based on how everything's looking right now,
46:39 I would not be surprised at all
46:41 if Kolo becomes open core or big chunks of it end up open source.
46:46 It makes sense to me.
46:47 It is fully free at the moment.
46:49 So that's worth calling out.
46:51 There's no cost or anything.
46:52 You also, you know, download the Python package, and guess what?
46:55 You can look at all of the code.
46:57 It actually is all there.
46:59 It is all kind of visible.
47:01 That kind of leads into the next question. I've never used GitHub Copilot and a few of those other things, because it's like, here, check this box to allow us to upload all of your code, and maybe your access keys and everything else.
47:15 That's interesting.
47:15 So we can, one, train our models and, two, you know, give you some answers.
47:19 And that just always felt a little bit off to me.
47:21 What's the story with the data?
47:24 At the moment, Kolo is entirely a local product, right?
47:27 So it's all local.
47:29 You can get all of the visualization, everything, just by using local Kolo in VS Code.
47:36 We do have a way to upload traces and share them with a colleague.
47:40 This is actually also something I've been kind of playing with, the idea of writing a little Kolo manifesto.
47:46 What are the things that we believe in?
47:47 One thing I believe in, and this goes back to the whole runtime layer on top of code,
47:52 this whole third dimension to code that we're all simulating in our heads:
47:58 I think it should totally be possible to not just link to a snippet of code on GitHub, but to have a URL to a specific execution of code, a specific function call, and actually talk about that.
48:13 It's kind of wild to me that we don't have this at the moment.
48:16 Like you can't send a link to a colleague saying, hey, look at this execution.
48:20 That looks a bit weird.
48:22 We ran this in continuous integration and it crashed, but I don't understand.
48:25 Let's look at the exact.
48:26 Right.
48:27 The whole deal.
48:28 You can link to CI runs.
48:29 You can link to Sentry errors.
48:31 But if you're just seeing something slightly weird locally, or even something slightly weird in production where there's no error, you can't really link to that.
48:40 Anyway, this is kind of a roundabout way of me saying that I think that totally should be a thing.
48:45 You should be able to link generically to an execution of a function or an execution of a request.
48:51 That would totally have to live somewhere.
48:54 Right.
48:54 So this is where the idea of Kolo Cloud comes in, where you could connect your repository.
49:00 And then Kolo would, as part of that, have access to your code, just like GitHub does, and show you the code in Kolo Cloud.
49:08 So I think there are definitely useful things possible there.
49:13 But at the moment, it's a fully local experience.
49:15 Your code doesn't ever leave your system.
49:19 You can, if you want to, upload traces, and then Kolo stores the trace data, not the code, just the trace data.
49:26 But yeah, very local experience right now.
49:28 Yeah.
49:29 A little SQLite database.
49:30 Exactly.
49:31 Yep.
49:31 Yeah.
49:31 SQLite's pretty awesome.
49:32 It's an incredible piece of software.
49:34 Yeah, it really, really is.
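Because the traces live in a plain local SQLite file, you can poke at them with nothing but the standard library. A minimal sketch, assuming a hypothetical .kolo/db.sqlite3 path rather than Kolo's documented location or schema:

```python
import sqlite3
from pathlib import Path

# Hypothetical path for illustration; check Kolo's docs for where it
# actually keeps its trace database in your project.
db_path = Path(".kolo") / "db.sqlite3"

conn = sqlite3.connect(str(db_path))
try:
    # List whatever tables the trace database contains, without assuming
    # anything about Kolo's schema.
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    print(tables)
finally:
    conn.close()
```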
49:35 Let's close out our conversation here with a little bit of a request from Michael.
49:39 Right now it's VS Code only.
49:41 Any chance for some PyCharm in there?
49:43 This is our top request, like PyCharm support.
49:46 Yeah.
49:46 We're a super small team, but we want to kind of support everyone.
49:50 So we've actually been working very heavily the past few months on a web-based version, which, I'm happy to say, is very much nearing completion.
49:58 There are a few bits and pieces where it's really nice to be integrated super deeply into the editor, like the code lenses and all of that.
50:06 And I think there's a chance we'll have that for PyCharm eventually as well.
50:09 But building out this web version, we actually found there are a few things that are much nicer when you have full control over the UI, in terms of browsing around a trace and highlighting little bits of code.
50:20 So for example, in Kolo, we call a given function call a frame.
50:24 You can look at a given frame, both in VS Code and in the web version, and see the code and all of the data that passed through it.
50:33 But something we can do in the web version that we can't do in VS Code is show where the current function was called from and show a preview of that code.
50:40 In VS Code, you can't really show that; you can link to it.
50:43 Yeah, you can layer multiple files together or a difference.
50:45 Yeah, exactly.
50:46 Yeah.
50:47 I was actually surprised by how many novel ways of displaying this runtime data we found in the web version that we just never even considered with a direct editor integration.
50:59 So long story short, you know, you want a PyCharm integration.
51:03 Let me give you something even better.
51:05 Yeah.
51:05 A web version.
51:06 So how would that work? Do you run a certain command or something when you run your web app, and then it just generates the SQLite file and you explore it with a web view, or what?
51:18 Yeah, it's actually kind of cooler than that.
51:19 So if you're using Django, or in the future other frameworks, with the typical middleware, you would just go to localhost:8000/kolo.
51:29 Yeah, yeah, yeah.
51:29 Kind of like you do for open API docs.
51:33 Yep.
51:33 And then the whole experience is just there.
51:35 If you're not using the middleware, we'll have a command like kolo serve or something like that.
51:40 And that'll host the same experience for you.
51:43 Just make sure it's off by default or it only responds on localhost or something like that.
51:49 Yeah, exactly.
51:50 Don't let people ship it on accident.
51:52 That would be bad news.
51:54 No production use of this.
51:55 Yeah.
51:55 I mean, people already know about the Django debug settings, but I guess you could sort of layer onto that, right?
52:02 Probably.
52:03 Yeah, I think we actually do that at the moment.
52:05 But yeah, it's worth remembering.
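For the middleware route, the wiring in a Django project looks roughly like this. A sketch of a DEBUG-gated settings.py; the kolo.middleware.KoloMiddleware dotted path matches Kolo's install instructions, but double-check the current docs before relying on it.

```python
# settings.py (sketch) -- only enable request tracing in local development.
DEBUG = True  # tooling like this should never run in production

MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "django.contrib.sessions.middleware.SessionMiddleware",
    # ... the rest of your middleware ...
]

if DEBUG:
    # Dotted path from Kolo's install docs; verify against the current
    # documentation. Putting it first lets it see the whole request.
    MIDDLEWARE.insert(0, "kolo.middleware.KoloMiddleware")
```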
52:06 I know.
52:07 I'm just thinking of like, oh, this is really cool to explore.
52:11 A hundred percent.
52:12 CNN.com is awesome.
52:14 Look what it's doing.
52:14 Look at all these requests and all this.
52:16 Yeah, exactly.
52:18 A hundred percent.
52:19 Yeah.
52:19 Yeah.
52:19 Oh, and the API key is so interesting.
52:21 Anyway, that's a bit of a side conversation.
52:25 So let's just wrap it up with the final call to action.
52:28 People are interested.
52:29 What do they do?
52:30 Yeah, go to kolo.app and check it out.
52:32 We have the Playground link there.
52:33 play.kolo.app.
52:34 It's the easiest way to see what Kolo is and what Kolo does.
52:37 But I'd say the most powerful way to actually see Kolo in action is to use it on your own code base.
52:44 Seeing the visualization and the test generation capabilities is most useful when you use it on your own code base.
52:51 So hopefully the Playground can entice that a little bit.
52:54 Really the most important thing for us right now is chatting to folks who want to increase their test coverage and build automated testing into their workflow.
53:02 And working very closely with you to make that happen.
53:05 So if that's you, please email me at w@kolo.app.
53:10 You need that pause for the W. That's right.
53:12 The two ads.
53:13 Awesome.
53:14 Will, thanks for being on the show.
53:16 Congrats on both of your projects.
53:18 They look really neat.
53:19 Thanks so much for having me.
53:19 Yeah.
53:20 So excited to have been on.
53:21 Yeah, you bet.
53:21 Bye.
53:22 Bye.
53:22 This has been another episode of Talk Python to Me.
53:26 Thank you to our sponsors.
53:28 Be sure to check out what they're offering.
53:30 It really helps support the show.
53:31 Take some stress out of your life.
53:33 Get notified immediately about errors and performance issues in your web or mobile applications with Sentry.
53:39 Just visit talkpython.fm/sentry and get started for free.
53:44 And be sure to use the promo code talkpython, all one word.
53:48 Want to level up your Python?
53:49 We have one of the largest catalogs of Python video courses over at Talk Python.
53:54 Our content ranges from true beginners to deeply advanced topics like memory and async.
53:59 And best of all, there's not a subscription in sight.
54:01 Check it out for yourself at training.talkpython.fm.
54:04 Be sure to subscribe to the show.
54:06 Open your favorite podcast app and search for Python.
54:09 We should be right at the top.
54:10 You can also find the iTunes feed at /itunes, the Google Play feed at /play,
54:16 and the direct RSS feed at /rss on talkpython.fm.
54:20 We're live streaming most of our recordings these days.
54:23 If you want to be part of the show and have your comments featured on the air,
54:26 be sure to subscribe to our YouTube channel at talkpython.fm/youtube.
54:31 This is your host, Michael Kennedy.
54:33 Thanks so much for listening.
54:34 I really appreciate it.
54:35 Now get out there and write some Python code.
54:37 I'll see you next time.