#440: Talking to Notebooks with Jupyter AI Transcript

Recorded on Monday, Oct 30, 2023.

00:00 We all know that LLMs and generative AI have been working their way into many products.

00:05 Well, it's Jupyter's turn to get a really awesome integration.

00:08 We have David Qiu here to tell us about Jupyter AI.

00:12 Jupyter AI provides a user-friendly and powerful way to apply generative AI to your notebooks.

00:18 It lets you choose from many different LLM providers and models to get just the help that you're looking for.

00:24 And it does way more than just add a chat pane in the UI.

00:28 Listen in to find out.

00:30 This is Talk Python to Me, episode 440, recorded October 30th, 2023.

00:49 Welcome to Talk Python to Me, a weekly podcast on Python.

00:52 This is your host, Michael Kennedy.

00:54 Follow me on Mastodon, where I'm @mkennedy, and follow the podcast using @talkpython, both on fosstodon.org.

01:02 Keep up with the show and listen to over seven years of past episodes at talkpython.fm.

01:07 We've started streaming most of our episodes live on YouTube.

01:11 Subscribe to our YouTube channel over at talkpython.fm/youtube to get notified about upcoming shows and be part of that episode.

01:19 This episode is sponsored by Posit Connect from the makers of Shiny.

01:24 Just share and deploy all of your data projects that you're creating using Python.

01:28 Streamlit, Dash, Shiny, Bokeh, FastAPI, Flask, Quarto, reports, dashboards, and APIs.

01:35 Posit Connect supports all of them.

01:37 Try Posit Connect for free by going to talkpython.fm/posit, P-O-S-I-T.

01:43 And it's also brought to you by us over at Talk Python Training.

01:47 Did you know that we have over 250 hours of Python courses?

01:51 Yeah, that's right.

01:52 Check them out at talkpython.fm/courses.

01:56 David, welcome to Talk Python to Me.

01:59 Awesome to have you.

02:00 Yeah, thank you, Michael.

02:01 I'm really excited to see what the AIs have to say today.

02:04 The AIs, yeah, language models, sure.

02:07 Yes, exactly.

02:08 Exactly.

02:09 Now, you've built a really cool extension for Jupyter that plugs in large language models for people and it's looking super interesting.

02:17 So I'm excited to talk to you about it.

02:19 Yeah, I'm excited to talk about Jupyter AI too.

02:22 I've actually presented this twice, well, actually three times.

02:26 I did a short demo at like this like tech meetup thing in Seattle.

02:30 That was actually the first time Jupyter AI was shown to the public.

02:34 And then I presented at PyData Seattle at Microsoft's Redmond campus.

02:39 And then I got to present again at JupyterCon in Paris this May.

02:43 It was a really wonderful experience.

02:45 But yeah.

02:46 Wow.

02:47 Yeah, you're making the rounds.

02:48 Yeah.

02:49 I love to talk about Jupyter AI.

02:50 It happens to get me some plane tickets.

02:53 Just joking.

02:56 Honestly, that's like half the bonus of conferences: the awesome places you get to go.

03:00 The other half probably is the people you meet.

03:02 You know, it's really cool.

03:03 Oh, for me, it's like almost like all the people.

03:05 Like the people are just so great, especially JupyterCon.

03:08 Yeah.

03:09 And the JupyterCon videos are now out for JupyterCon 2023.

03:12 And there's a ton of good looking talks there.

03:14 So yeah, lots of really smart people.

03:17 I mean, like I was chatting to a few folks there and that's like the only place where you're going to find like people who work at these hedge funds and trading firms just lounging so idly and casually.

03:29 Right.

03:30 Like, yeah.

03:31 The market's opening and they're chilling.

03:33 It's all fine.

03:34 Yeah.

03:36 There's a lot of smart people there.

03:37 Yeah.

03:38 Jupyter, more than a lot of programming technologies, brings people from all sorts of different places and backgrounds together.

03:45 There's like a huge.

03:46 So yeah, there's like a lot of reasons behind that.

03:49 But long story short, Jupyter is pretty awesome.

03:51 And that's kind of why I work to contribute to it.

03:53 Awesome.

03:54 Well, let's start this whole conversation with a bit of background about yourself.

03:57 And for people who didn't see your talk and don't know you yet, tell them a bit about you.

04:03 I didn't really give much of an intro there either, but sure.

04:05 Yeah.

04:06 So I work for AWS as a software engineer, specifically in the AI/ML organization at AWS.

04:15 I've been with them for almost two years now.

04:18 Right now, my manager is actually Brian Granger, who's the co-founder of Project Jupyter.

04:23 He also works for AWS.

04:25 Yeah.

04:26 So he's been offering some technical and product guidance for the things that we're building.

04:32 And he's a fantastic gentleman to work with.

04:40 Oh, that's really neat, to have him available as a resource, you know, as a colleague.

04:40 Yeah.

04:41 You know, it's funny.

04:42 Like I actually, yeah, I met him internally.

04:45 So when I first joined, I wasn't working for him, but at tech companies, you can do this internal transfer thing.

04:51 And basically my old team, the team I joined right when I started, sort of started to dissolve a little, because they had just launched a product at re:Invent, which happens in like December.

05:03 And then, so I joined in December and it's like, oh, hi.

05:08 So then I, yeah.

05:09 And then, after I joined, I just saw Brian Granger's name somehow.

05:14 I messaged him, and I didn't even know that he was the co-founder of Project Jupyter.

05:18 I just wanted to work for him because I used it before.

05:21 And yeah, it's a pretty funny story.

05:24 Indeed.

05:25 I imagine that project has really... you know, this Jupyter AI is a great example, but just thinking of being, say, a founder of Jupyter or something like that, these things take on a life of their own.

05:36 And he's probably in awe of all the stuff happening and all the things going on.

05:41 And there's probably a lot of stuff in Jupyter he doesn't even know about, right?

05:44 It's like that's happening.

05:45 Yeah.

05:46 It's huge.

05:47 And yeah.

05:48 And like the leadership structure has changed to accommodate that.

05:50 So Brian is no longer the benevolent dictator for life.

05:54 Project Jupyter is now governed by a committee, decentralized and democratized, just to allow it to scale.

06:00 Yeah, of course.

06:02 Let's start by talking about a bit of the role of AI in data science.

06:07 I don't know how you feel about it.

06:09 You must be somewhat of an advocate putting this much time and energy into bringing it to Jupyter.

06:15 Wow.

06:16 However, personally, when I want to know something and I don't think there's a great specific search result for it, I go straight to ChatGPT or friends.

06:27 I think there's such a wealth of information there, from "I need to take this paragraph and clean it up and make it sound better" to "I have this program I want to convert to another language," or "I have this data on this website."

06:41 How do I get it?

06:42 You know, like just, you can ask so many open-ended questions and really get great answers.

06:47 So it seems to me, especially for people coming from, like I mentioned before, those diverse backgrounds, people not necessarily super deep in programming, maybe they're deep in finance but they do programming, that having this AI capability to ask, "Hey, I know I can, but how?" is really valuable. What do you think for data science in particular?

07:05 This is an interesting topic, right?

07:08 Because I think the whole power of language models stems from their ubiquity and versatility and how they can sort of be very generally applicable.

07:16 So like the thing about language models is that they're basically statistical models that have been trained on a very, very, very large corpus of data.

07:26 And that's really it: the computer doesn't really understand English.

07:31 It doesn't really understand natural language.

07:33 When it's trained, it basically has knowledge of the distribution of information and how information sort of interacts with other information.

07:45 Because of that, it has a very general applicability, right?

07:49 And I don't think that utility is limited to data science.

07:53 Now if we're talking about the field of data science specifically, I think language models have extraordinary utility in explanatory natural language tasks, which I think everybody is aware of now that ChatGPT has been out for almost a year.

08:09 But I think in the field of data science and other like deep technical fields, they're especially applicable because of how complicated some of the work is.

08:19 Chat AI can also help analyze and debug code, which Jupyter AI also allows you to do.

08:24 I know it's statistics, but when you look at it, it seems like it understands, right?

08:29 It seems like it understands my question.

08:32 And I think one of the really interesting parts is the fact that these LLMs have a context.

08:38 I would like you to write a program in Python.

08:40 Okay, great.

08:41 Tell it to me.

08:42 I want a program that does X, Y, Z, and then it writes it in Python, which sounds so simple.

08:46 But up until then, things like Siri and all the other voice assistants, they seemed so disjointed and so not understanding.

08:56 I just asked you about the weather.

08:57 And when I ask you how hot it is, how do you not understand that that applies to the weather?

09:03 The fact that you converse with them over a series of interactions is pretty special.

09:09 The context, yeah, it's basically implemented just by passing the history, the whole history appended to your prompt.

09:16 So yeah, it's not like any super special magic or whatever, but it's still very interesting how far you can take it.
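To make that concrete, here's a minimal sketch of the pattern, using the OpenAI Python client purely as an example; any chat API works the same way, and the model name is illustrative:

    # Conversation "memory" is just the accumulated message list,
    # resent in full with every request.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    history = [{"role": "system", "content": "You are a helpful assistant."}]

    def ask(prompt: str) -> str:
        history.append({"role": "user", "content": prompt})
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo", messages=history
        )
        text = reply.choices[0].message.content
        # Append the reply too, so the next question keeps the context.
        history.append({"role": "assistant", "content": text})
        return text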

09:25 And yeah, definitely like the context allows you to interact with the AI a lot more conversationally and humanly.

09:32 Like you don't have to pretend like you're talking to an AI.

09:36 You can actually just kind of treat it like a human and it still answers questions very well.

09:41 Yeah, it was even at Google, there was that engineer who said they thought it had become sentient and there was that whole drama around that, right?

09:49 This is such a crazy coincidence, but my roommate is actually friends with that gentleman.

09:54 Oh really?

09:55 I know.

09:56 Wow.

09:57 Absolutely crazy coincidence.

09:58 Like, I just thought it was really funny you bringing it up.

10:02 It was a Cajun gentleman, a senior engineer at Google, right?

10:05 Yeah.

10:07 Very funny, mate.

10:08 It's been a little while.

10:09 I don't remember all the details, but yeah, I mean, it's pretty wild and pretty powerful.

10:12 I think I've recently read, I'm trying to quick look it up, but I didn't find it.

10:16 I think they just used LLMs to discover like a new protein folding.

10:21 And it's that kind of stuff that makes me think like, okay, how interesting that knowledge wasn't out there in the world necessarily.

10:29 I have a lot to say on that subject.

10:31 So personally, I don't believe that language models are actually intelligent.

10:36 I think that people are conflating, well, they're certainly not conscious, right?

10:41 Absolutely.

10:42 Yeah.

10:43 As to whether they're intelligent, I don't think they are.

10:44 I think that intelligence has a, like intelligence as we know it, as humans know it, has some different characteristics that language models don't really exhibit.

10:53 They're best thought of as like really, really, really good statistical models.

10:58 Like are you familiar with the mirror test?

11:00 Maybe but I don't think so.

11:01 Yeah.

11:02 So it's like this idea in animal psychology, but like if a cat sees a mirror, it thinks it's another cat because it doesn't recognize its own reflection.

11:10 Right.

11:11 So it's like, oh, they get all big and, like, act big and tough to chase it off.

11:15 And it's just them.

11:16 Yeah.

11:17 Language models are kind of like that, but for humans, right?

11:19 Like if something mimics human-like qualities closely enough, it's very tempting to think of it as human.

11:25 Yeah.

11:26 We see faces on Mars when it's really just erosion, stuff like that.

11:29 Exactly.

11:30 I could talk about this like for a full hour, but yeah, we should totally move on to another topic before I go on a tirade.

11:36 Well, when you're saying it's not that intelligent, I'm wondering if maybe you've misnamed this project.

11:42 Maybe it should just be Jupyter A, like, drop the I.

11:45 Do you got to drop the I?

11:46 I don't know.

11:47 We're just following convention, right?

11:48 So I still use the term AI.

11:51 Of course, you got to talk to people.

11:52 Yeah.

11:53 Emphasize the artificial, huh?

11:55 Before we get to Jupyter AI, what came before?

11:57 How did people work with things like ChatGPT and other LLMs in Jupyter before stuff like Jupyter AI came along?

12:03 I think initially it was a combination.

12:07 So the initial motivation for this project came through a combination of sort of a demo put together by Fernando Perez, who is another, I believe.

12:16 I think he's another co-founder.

12:18 Another co-founder of Project Jupyter.

12:20 And he put together this demo called Jupytee, which is spelled like Jupyter, except the last letter is an E. And it's a pun on, like, ChatGPT, right?

12:30 Ju-py-tee.

12:31 So it was a combination of that demo project by Fernando and some motivation from my manager, Brian, who, you know, as a leader in the AWS AI organization, is always trying to think of fancy-schmancy new ideas, right?

12:51 And this is a pretty fun idea to work out.

12:54 So I put together, I think this was sometime in early January, I put together the first demo, it was private, and I showed it off to the team and they were like, wow, this has a lot of potential.

13:05 Let's see if we can grow it a bit more.

13:08 And then as we worked on it for the next few months, it became clear like, oh, wow, this is actually, it's actually really significant.

13:15 Let's keep working on this.

13:17 So it's definitely been a collaborative effort to bring Jupyter AI to where it is today.

13:23 Sounds like it.

13:24 Definitely a lot of contributors over on the GitHub listing.

13:27 Let's get into it.

13:28 What is Jupyter AI?

13:29 I mean, people can guess, but it's also different in ways than maybe just plug it in a chat window.

13:34 Jupyter AI is actually, right now it's two packages, but it's best thought of as just a set of packages that bring generative AI to Project Jupyter as a whole.

13:44 So not just JupyterLab, but also Jupyter Notebook and IPython.

13:48 Even the shell, how do you, I guess you invoke it by doing like the magic there as well.

13:53 It's the IPython shell, which is not the same as like a bash shell for instance, in a terminal.

13:59 Can you dive into a little bit more detail on what these two packages are?

14:03 So we have the base Jupyter AI package, which is spelled exactly as you might imagine it.

14:09 It's Jupyter-AI.

14:10 That is a JupyterLab extension that brings a UI to JupyterLab, which is the screenshot that you're showing on your screen. But for viewers without a screen, it basically is the package that adds that chat panel to the left-hand side and allows you to speak conversationally with an AI.

14:30 And then the second package is jupyter-ai-magics, which is spelled the same, except at the end, it's spelled hyphen-magics.

14:38 And that is actually the base library that implements some of the AI providers we use and brings things called magic commands to the IPython shell.

14:48 And magic commands basically let you invoke the library code, like using it to call language models, for instance.

14:57 And that allows you to do it inside an IPython context.

15:00 So what's crazy is that if you run IPython in your terminal shell, you can actually run Jupyter AI from your terminal, which is pretty cool.
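As a rough sketch of what that looks like in practice (the chatgpt alias is one of the provider shortcuts the magics package registers; the prompt, and the matching API key in your environment, are assumptions of this example):

    # In any IPython session, terminal or notebook:
    %load_ext jupyter_ai_magics

    %%ai chatgpt
    Write a Python function that checks whether a string is a palindrome.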

15:09 Yeah, I didn't realize that.

15:11 I mean, it makes sense, of course, but I hadn't really thought about it.

15:14 Yeah, I thought of this more as kind of a GUI type of thing that sat alongside what you were doing.

15:19 Yeah.

15:20 We try to make it flexible, and there are reasons for the magic commands, which I can talk about later though.

15:26 Sure.

15:27 This portion of Talk Python to Me is brought to you by Posit, the makers of Shiny, formerly RStudio, and especially Shiny for Python.

15:35 Let me ask you a question.

15:36 Are you building awesome things?

15:38 Of course you are.

15:39 You're a developer or data scientist.

15:41 That's what we do.

15:42 And you should check out Posit Connect.

15:44 Posit Connect is a way for you to publish, share, and deploy all the data products that you're building using Python.

15:51 People ask me the same question all the time.

15:54 Michael, I have some cool data science project or notebook that I built.

15:57 How do I share it with my users, stakeholders, teammates?

16:00 Do I need to learn FastAPI or Flask or maybe Vue or React.js?

16:06 Hold on now.

16:07 Those are cool technologies and I'm sure you'd benefit from them, but maybe stay focused on the data project?

16:11 Let Posit Connect handle that side of things?

16:14 With Posit Connect, you can rapidly and securely deploy the things you build in Python.

16:25 Streamlit, Dash, Shiny, Bokeh, FastAPI, Flask, Quarto, reports, dashboards, and APIs.

16:25 Posit Connect supports all of them.

16:27 And Posit Connect comes with all the bells and whistles to satisfy IT and other enterprise requirements.

16:33 Make deployment the easiest step in your workflow with Posit Connect.

16:37 For a limited time, you can try Posit Connect for free for three months by going to talkpython.fm/posit.

16:44 That's talkpython.fm/POSIT.

16:47 The link is in your podcast player show notes.

16:49 Thank you to the team at Posit for supporting Talk Python.

16:55 Now one thing, kinda, you said this, but I want to emphasize it a little bit: this will run anywhere the IPython kernel runs, which is JupyterLab and Notebook, but also Google Colab, VS Code, other places as well.

17:09 So pretty much it comes to you wherever your Jupyter type of stuff is.

17:13 Yeah.

17:14 And the same goes for your lab extension.

17:15 So the great thing about lab extensions is that they work anywhere where the product is just sort of built on top of JupyterLab, right?

17:23 So Google Colab, well, I obviously can't attest to what they're actually doing, but most likely it's like a set of extensions or CSS themes built on top of JupyterLab; the underlying code is still mostly JupyterLab.

17:40 It's still mostly JupyterLab.

17:41 So you can actually just install extensions and they work just fine, which is another reason why JupyterLab is just pretty awesome.

17:49 Yeah, it sure is.

17:51 Yeah.

17:52 JupyterLab itself is basically a preselected, pre-configured set of extensions, right?

17:57 That's pretty cool.

17:58 That is true.

17:59 Yeah.

18:00 Giving preference to, or maybe showing, just what I play with mostly, which is ChatGPT.

18:04 There's actually a lot of language models that you can work with, right?

18:08 One of the big things, and this is something I'll circle back to later, is that Jupyter AI is meant to be model agnostic, meaning that we don't discriminate on the choice of model or model provider, because as an open source project, it's imperative that we maintain the trust of our users, right?

18:24 Like, users have to be sure that this isn't just some product that exists to force or pigeonhole them into using a certain model provider like OpenAI or Anthropic.

18:36 The product is unopinionated here.

18:38 We simply try to support everything as best as we can.

18:42 And we've written a lot of code to make sure that all of these models just play nicely together, essentially.

18:50 Like every model provider, like let's say from Anthropic or AI21 or Cohere, every one of these APIs kind of has its own quirks.

19:00 Every one of its Python SDKs has its own quirks.

19:03 And we work very hard to basically iron out the surface and make everything have the same interface.

19:09 We can talk about that later though.

19:11 - Sure, and it also makes it pretty easy to try them out, right?

19:14 If you switch from one to the other, you're like, I wonder how Hugging Face versus OpenAI would do to solve this problem, right?

19:21 - Absolutely.

19:22 Like, that's kind of one of the ideas. Certain model providers might offer a UI if they're very well funded by their investors, for example, OpenAI has a UI, but that UI only allows you to compare between different models from OpenAI. As an independent third party looking to use an AI service, that information is obviously a little biased, right?

19:49 You want to see what other providers have to offer.

19:52 What does AI21 have?

19:54 What does Anthropic have?

19:55 And right now, there really is no cross model provider UI or interface in general.

20:05 But that's kind of one of the use cases that Jupyter AI was intended to fit.

20:09 - Yeah, provides a standard way to interact with all these and sort of compare them.

20:13 And it's also a UI for them if you're using JupyterLab, yeah.

20:16 I'm not familiar with all of these different models and companies.

20:20 Do any of those run locally, like things like GPT4All, where it's a local model versus some kind of cloud?

20:27 Where's your key?

20:28 Where's your billing details and all that?

20:30 - We actually recently just merged a PR that adds GPT4All support.

20:36 That's included in the release.

20:38 However, back when we first implemented this a few months ago, I had a few issues with the platform compatibility.

20:45 So like some of the binaries that we downloaded from GPT4ALL didn't seem to work well on my M1 Mac, for instance.

20:53 I'd say, yes, we do have local model support, but it's a bit experimental right now.

20:58 We're still like ironing out the edges and testing, like seeing how we can make the experience better.

21:04 Like, does it sometimes give bad output just because we forgot to install the shared library?

21:09 Those are the type of questions that our team is wrangling with right now.

21:12 - I see.

21:13 So maybe, I just jumped right into it, but maybe tell people what GPT4All is just real quickly.

21:18 - GPT4All offers a few local models.

21:21 Actually they offer several, I believe, not just a few.

21:23 - I think they offer maybe like 10 to 15.

21:26 It's the numbers getting quite large.

21:28 - Yeah.

21:29 - To the point where I don't know what the right choice is.

21:30 I'm like, which one do I download?

21:32 They say they're all good.

21:33 Of course they're going to say they're good.

21:34 - I'll be frank.

21:35 I don't actually have that much experience with GPT4ALL.

21:39 We mainly use them as sort of a provider for like these free and open source language models.

21:44 I think they offer a UI as well for multiple platforms.

21:48 - I've only played with them a little bit, just started checking it out, but it's basically a local, you download the model, you run it locally, you don't pay anything because it's just running on your machine, right?

21:58 As opposed to, say, OpenAI and others, where you've at least got rate limiting and a certain amount of queries before you have to pay, and maybe potentially access to better models like GPT-4 versus GPT-3.5 and so on.

22:12 - That's also taking the characteristics of these service providers for granted, right?

22:18 So yes, definitely, while it does hurt the wallet to pay for usage credits, right?

22:26 It's also pretty remarkable how small the latency has gotten with some of these APIs.

22:31 I've gotten like sub 500 millisecond latency on some of these APIs.

22:36 That's really incredible because when I was using GPT4ALL, the latency was a little bit high, right?

22:42 When running locally with limited computer resources.

22:45 It's really remarkable like how fast these APIs are.

22:48 - It is pretty insane.

22:50 Sometimes it drives me crazy.

22:51 I'm just only again, referring to ChatGPT because I don't have the experience with the others to the degree, but it drives me crazy how it artificially limits the response based on the speed of the response.

23:02 So it looks like it's chatting with you.

23:04 I'm like, no, I have four pages of stuff because you just get it out.

23:09 Say I gave you a five-page program.

23:14 Let's call it X.

23:15 If you say, what is X?

23:17 It'll just start printing it slowly line by line.

23:22 You know, you're just echoing it back.

23:22 Just get it out.

23:23 I want to ask you the next question.

23:24 You know what I mean?

23:25 - In that case, that's actually a feature request that we've gotten because it doesn't actually slow down.

23:30 Like, it's not just like a pointless animation.

23:33 Yeah.

23:34 The servers are streaming essentially token by token, right?

23:37 As the language model generates output.

23:39 So it's kind of more like a progress indicator than a superfluous animation.

23:44 Yeah.

23:45 - Yeah, of course.

23:46 But if you've got large blocks of text you're working with, it can be a drag.

23:51 All right.

23:52 I wanted to kind of touch on some of the different features that I pulled out that I thought were cool.

23:56 I mean, obviously it goes without saying that Jupyter AI is on GitHub.

24:01 I mean, because it's software.

24:04 So it's open source, which I don't know if we said that, but obviously free open source on GitHub, BSD-3 license.

24:12 But it's also noteworthy that it's officially under the JupyterLab organization, not under a personal David account.

24:19 You know what I mean?

24:21 - It's officially part of the JupyterLab sub project.

24:23 And yep, as you pointed out, we're under the JupyterLab GitHub org as well.

24:28 - Yeah, that's awesome.

24:29 Let's talk about some of the different things you can do with it.

24:32 Some of them will be straightforward, like just like, how do I write a function, Jupyter AI?

24:37 And others I think are going to be a little more interesting.

24:40 So let's start with asking something about your notebook.

24:43 Tell us what people can, can do here.

24:46 - Asking about your notebook basically means like you can actually teach Jupyter AI about certain files, right?

24:52 So the way you do this is via a slash command in the chat UI: you type /learn and then a file path.

24:59 And that essentially teaches Jupyter AI about that file.

25:04 Now it works best with files that are written in natural language, right?

25:08 So like text files, or markup, Markdown rather.

25:11 Yeah.

25:12 So like those, especially like developer documentation as well, right?

25:17 It works really well with those; it works best with those kinds of files.

25:21 And after Jupyter AI learns about these files, you can then ask questions about the files it's learned, by prefixing your question with /ask.
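In the chat panel, that flow looks roughly like this; the path and the question are just illustrations:

    /learn docs/
    /ask How do I configure logging in this project?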

25:32 - That is so cool.

25:33 - It is pretty cool, I know.

25:34 - It's so cool because what I've done a lot of times, if I want ChatGPT to help me, it's like, I'm like, all right, well, let me copy some code.

25:42 - Right.

25:43 - Then I'm going to have a conversation about it.

25:44 But a lot of the context of, well, it's actually referencing this other function and what does that do or just a broader understanding of what am I actually working on is missing, right?

25:55 Because I've only copied it.

25:57 You can't paste, you know, 20 files into ChatGPT and start talking about it.

26:01 But with this, you can, right?

26:02 You can say: learn about different things, right?

26:06 You can say learn about your notebook, but you can also probably tell it, like: learn about my documentation, or learn about my dataset.

26:13 And now let me talk to you about it.

26:15 What's interesting is that right now, while it works best for natural language documents, we are working on improving the experience for code.

26:23 From our testing, the capabilities of Jupyter AI after learning code are right now mostly limited to explaining what the code does, and it sort of explains it from the docstrings.

26:37 So we're working on a way to format the code in a manner that is more interpretable to a language model.

26:44 We're working on ways to improve the experience for code, but yeah, definitely the long-term vision is to have Jupyter AI literally be able to learn from a whole directory of files or possibly even a URL, like a remote URL to like a documentation page.

27:01 We have some big ideas there.

27:02 We're still working on them.

27:03 I want to work with a new package XYZ.

27:06 Like I don't know what XYZ is.

27:08 You know what?

27:09 Here's where you can learn about it.

27:10 Go and figure it out.

27:11 Like query the PyPI API, get the docs page from the metadata, and then go to that URL, scrape it.

27:18 Like lots of things we're exploring.

27:20 It's still kind of early days for this, right?

27:22 You've been at it about a year or so?

27:23 It's been out for a while.

27:25 Recently I've had to work on a few other things as well, besides Jupyter AI.

27:30 Unfortunately I cannot give my entire life to Jupyter AI.

27:33 So I've been working on a few other things these past few months, but yeah, there are a lot of things that I envision for Jupyter AI.

27:41 I have a much bigger vision for what I want this project to be and what it can be capable of.

27:47 Exciting.

27:48 So this screenshot that you've got here, in the section that I'll link to in the show notes, is cool because you can select a portion, not even a whole cell, but a portion of code in a cell.

27:59 And then you can ask, what does this code do?

28:01 We have an integration with the JupyterLab editor APIs.

28:06 So you can select a block of code and then include that in your prompt.

28:10 And it will be appended to your prompt below, right?

28:13 It's appended to prompt, sorry.

28:15 You can basically ask, so you can select a block of code.

28:18 So in this screenshot right here, there's this block of code that computes the least common multiple of two integers, right?

28:26 And you can select that and then click include selection and then ask Jupyter AI, what does this do?

28:32 Which is pretty awesome.

28:33 Another checkbox is replace selection.

28:35 I'm guessing that is like, help me rewrite this code to be more efficient.

28:39 Or if there's any bugs, fix it.

28:41 So the replace selection checkbox is totally independent.

28:44 So both, so you can actually use both at the same time.

28:47 And one of the use cases for this is refactoring.

28:50 And I've actually applied this in practice a few times, where you can basically select a block of code, and then click both include and replace selection.

29:00 And then you can write out your prompt to say: refactor this block of code, do not include any additional help or text.

29:06 And when you send that prompt over, it will actually refactor the code for you in your notebook, which is, yeah, pretty great.

29:15 That's pretty awesome.

29:16 You know, you could do things like: refactor this to use guard clauses.

29:20 So it's less nested, less arrow code or whatever, right?

29:24 Yeah.

29:25 Or like add a docstring, right?

29:27 Summarize the purpose of this function, and then enclose that in a docstring and add it to the function.

29:33 Right.

29:34 Or: this code is pandas code, but I'd like to use Polars.

29:36 Please rewrite it for Polars, which is not a super compatible API.

29:40 It's not like Dask to pandas, where it's basically the same.

29:42 Yeah.

29:43 And this kind of circles back to that question that you had asked earlier.

29:46 I think I went on a tangent there and didn't fully answer, but like, what is like the utility of Jupyter AI to like data practitioners, right?

29:53 So we're talking data scientists, machine learning engineers. With the include-selection feature, we've heard great feedback about how helpful it is to actually explain a data set.

30:03 So sometimes you're working with a test set and it's not immediately clear what the features of this test set are, or what this even does, because sometimes it's high-dimensional data. And they can literally select it, then click include selection and tell Jupyter AI: explain to me what this does.

30:21 Just like, what, what is this like data frame stuff?

30:24 Like, whoa, we got data frames in data frames.

30:26 Like what's going on here?

30:27 Like, what even is the structure?

30:28 That's awesome.

30:29 And I think it's super valuable.

30:30 And this is like a little bit I was getting to before one of the features that I think is cool.

30:35 Whereas if you just go with straight ChatGPT, you copy your code, you paste it into the chat.

30:40 Hopefully it doesn't say it's too much text and then you can talk about it.

30:43 But then when you get an answer, you've got to grab it, move it back over.

30:47 And this just, this fluid back and forth is really nice.

30:50 Yeah.

30:51 And that's actually one of the design principles that we worked out when first starting this project officially.

30:57 It was the idea that Jupyter AI should be human-centered, as in you shouldn't be expected to be a developer to know how to use this tool.

31:06 Like, this tool is for humans, not for any specific persona, just for humans in general.

31:11 That's awesome.

31:12 Yeah.

31:13 So in this case, you select the function that does the least common multiple bit and you ask it what it does.

31:17 It says the code will print out the least common multiple of two numbers passed to it.

31:23 Super simple, very concise.

31:24 Okay, great.

31:26 Now we can go on to the next thing, right?

31:28 Yeah.

31:29 There's this LCM function that we're kind of talking about here.

31:32 This example is recursive, and I think recursion is pretty insane, right?

31:39 As a, just a concept for people to get their head around.

31:42 This is the iterative version.

31:43 So this is after they, yeah, this is the iterative.

31:46 Oh, this is after.

31:47 Yeah.

31:48 So if we go back up... oh, one of the things you can ask it, and the example is: rewrite this function to be iterative, not recursive.

31:54 Right.

31:55 That's really, really awesome.

31:56 Right.

31:57 You're like, this is breaking my brain.

31:59 Let's, let's see if we can not do that anymore.

32:04 This portion of Talk Python to me is brought to you by us.

32:07 Have you heard that Python is not good for concurrent programming problems?

32:11 Whoever told you that is living in the past because it's prime time for Python's asynchronous features with the widespread adoption of async methods and the async and await keywords.

32:21 Python's ecosystem has a ton of new and exciting frameworks based on async and await.

32:26 That's why we created a course for anyone who wants to learn all of Python's async capabilities, async techniques and examples in Python.

32:34 Just visit talkpython.fm/async and watch the intro video to see if this course is for you.

32:40 It's only $49 and you own it forever.

32:42 No subscriptions.

32:43 And there are discounts for teams as well.

32:48 Another thing I wanted to talk about, and you talked a fair amount about this in your presentations that you did.

32:54 I can't remember if it was the JupyterCon or the PyData one that I saw, but one of those two, you talked about generating new notebooks and how it's, it's actually quite a tricky process.

33:07 You've got to break it down into little steps.

33:08 Cause if you ask too much from the AI, it kind of doesn't give you a lot of great answers.

33:12 Tell us about making new notebooks.

33:13 Like, why would I even use this? I can go to JupyterLab and say File, New, and it'll make that for me.

33:18 What's this about?

33:19 The generate capability is great because it generates a file that is essentially a tutorial, one that can be used to teach you about new subjects.

33:28 Right?

33:29 So you could, for example, submit a prompt like: /generate a notebook about asyncio, or a demonstration of how to use Matplotlib.

33:38 So this will take a bit of time, but eventually Jupyter AI is done thinking and generates a notebook. It names the file, and the notebook has a title, it has sections, a table of contents, and each of the cells within it is tied to some topic that is determined to be helpful and to answer the user's question.
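For example, a prompt along these lines (the topic is just an illustration):

    /generate A notebook demonstrating the basics of asyncio in Python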

34:02 Awesome.

34:03 Could I do something like I have weather data in this format from the US weather service.

34:11 Could you generate me a notebook to plot this XYZ and help me answer these questions?

34:15 Or like, could I ask it something like that?

34:17 Not at the moment.

34:19 So like that would best be done.

34:21 That would be done best if since the data is already like, I'm assuming that the data is already available, right.

34:27 And some kind of like format.

34:29 So like in a notebook, you could use the chat UI to like select that entire, select that selection and then tell it, tell Jupyter AI to generate code to plot that data set.

34:40 So right now generate only takes a natural language prompt as its only argument.

34:46 So it's kind of like stateless in that regard.

34:48 So in this case, you can say: /generate a demonstration of how to use Matplotlib.

34:53 And then the response is great.

34:54 I'll start working on your notebook.

34:56 It'll take a few minutes, but I'll reply when it's ready.

34:58 In the meantime, let's keep talking.

35:01 So what happens behind the scenes that takes a few minutes here?

35:05 This is a bit interesting.

35:06 It does kind of dive deep into the technical details, which, I'm not sure, do you want to just dive in?

35:13 Yeah, let's go tell us how it works.

35:14 Probably a good chance to explore that.

35:16 Yeah.

35:18 So slash generate.

35:19 The first thing is that the prompt is first expanded into essentially a table of contents.

35:24 So basically we tell the language model generate us a table of contents conforming to this JSON schema.

35:30 And when you include a JSON schema in your prompt, the language model will have a much higher likelihood of returning exclusively a JSON object that matches the JSON schema that you initially provided.

35:46 So in our case, we generate a table of contents and then we take that table of contents.

35:51 And then we say for each section, we do this in parallel, like generate us some code cells that are appropriate for teaching this section of the document.

36:00 So for example, for Matplotlib, right?

36:03 Like maybe the first section is generating your first plot, plotting 3D functions.

36:08 And the next one is like plotting complex functions with phase, or something like that.

36:13 And then with each of these sections, we then send another prompt template to the language model for each of these sections, asking it to generate the code.

36:21 And then at the end, we join it all together and then we save it to disk and then emit that message and say, we're done.
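Here's a simplified sketch of that pipeline in Python. The schema and prompt wording are illustrative stand-ins, not Jupyter AI's actual templates, and llm stands for any callable that sends a prompt and returns text:

    import json
    from concurrent.futures import ThreadPoolExecutor

    OUTLINE_SCHEMA = {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "sections": {"type": "array", "items": {"type": "string"}},
        },
    }

    def generate_notebook(topic: str, llm) -> list[str]:
        # Stage 1: expand the prompt into a table of contents that
        # must conform to the JSON schema included in the prompt.
        outline = json.loads(llm(
            f"Create a table of contents for a notebook about {topic}. "
            f"Respond only with JSON matching this schema: {json.dumps(OUTLINE_SCHEMA)}"
        ))
        # Stage 2: generate cells for each section, in parallel.
        with ThreadPoolExecutor() as pool:
            cells = pool.map(
                lambda section: llm(f"Write notebook cells that teach: {section}"),
                outline["sections"],
            )
        # The real flow then joins the cells and saves an .ipynb to disk.
        return list(cells)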

36:27 Maybe the English literature equivalent would be, instead of just saying, write me a story about a person who goes on an adventure and gets lost.

36:36 It's like, I want, give me an outline, bullet points of interesting things that would make up a story of how somebody goes on an adventure and gets lost.

36:44 And then for each one of those, you're like, now tell this part of the story, now tell this part.

36:48 And somehow that makes it more focused and accurate, right?

36:51 The main limitation is that because we're model agnostic, language models are limited in how much output they can generate, right?

36:59 The issue we were running into when we were trying to do the whole thing all at once, like generate me a whole notebook, is that some language models just couldn't do it.

37:07 In an effort to, you know, sort of stay model agnostic, we deliberately broke this process down into smaller subtasks, each with its own prompt template, in order to accommodate models that may lack the same token window sizes that other models have.

37:26 I think even for ones that have large token windows, the more specific you can be, the more likely you're going to get a focused result instead of a wandering, vague result.

37:38 Teach me about math, versus teach me how to, you know, integrate this differential equation, or solve this series of differential equations, or physics.

37:47 Like you're going to get a really different answer to those two questions.

37:50 Yeah.

37:51 So that goes back to a topic I did want to call out, but I don't think we hit on it: the chat UI actually does support rendering in both Markdown and LaTeX, which is a markup language for math.

38:04 So you can ask it both complex engineering and mathematical questions, like asking it to explain you like, yeah, so like there might be a demo here.

38:13 I'm not sure if it's on this page though.

38:16 So if I had a Fourier, fast Fourier transform in LaTeX and I put it in there and say, what is this, it'll say it's a fast Fourier transform or something like that.

38:25 Yes.

38:26 And it also works the other way around.

38:27 You can also use it to say like, Hey, explain to me what the 2D Laplace equation is, or explain to me like what, what does this do?

38:35 Right.

38:36 And it will actually generate and format the equation inside the chat UI, which is really remarkable.

38:43 I love that feature.

38:44 It's actually really awesome.

38:45 And it's also really appropriate for a scientific oriented thing like Jupiter, right?

38:50 The remarkable thing is that because ChatGPT and other such language models, like the ones from Anthropic and AI21, are founded on the premise that their functionality comes from having such a large corpus of data, they know a remarkable amount of information.

39:08 So we've tried some example notebooks on quantum computing, and it explained those really well.

39:14 I tried one on the Black-Scholes options pricing model, which is used in financial engineering.

39:20 And it's really remarkable, like the utility that it offers just by being there in the side panel.

39:26 Like you essentially have like a math wizard available to you in JupyterLab all the time.

39:30 It's probably better than a lot of math professors in terms of not necessarily in the depth of one area, but you know, if you ask somebody who does like abstract algebra about real analysis, they're like, I don't really do that part.

39:43 Or if you ask somebody who does real analysis about number theory, it's like, I don't really do that. But it can hit on all the areas, at least like a generalist professor sort of thing.

39:50 We talked about the slash learn command.

39:53 That's pretty excellent already and where that's going.

39:56 So I'm pretty excited about that.

39:58 Yeah, it actually does have a lot of interesting technical tidbits to it, like the implementation.

40:04 Okay.

40:05 Yeah, actually, this is one of the really challenging things with these chat bots and things.

40:10 For example, I've tried to ask ChatGPT, if I gave it one, just one of the transcripts from the show, I want to have a conversation about it.

40:17 It's too much.

40:18 I can't do it.

40:19 It's just one show, and in your documentation, there might be a lot of files in there, right?

40:25 More than just one transcript levels worth.

40:28 So that alone, I think is kind of interesting just how to ingest that much data into it.

40:32 Yeah.

40:33 You know, this is a very interesting subject and it actually is a bit complex.

40:38 I'm sure it is.

40:39 I think there are some other features you want to discuss.

40:41 Let's dive into this for just a minute.

40:43 Because I think it is interesting.

40:44 How do you make, because this makes it yours, right?

40:46 It's one thing to ask vague, like, tell me about the Laplace equation and how does it apply to heat transfer?

40:51 Like, okay, great.

40:52 I have a specific problem with a specific library and I want to solve it.

40:56 And you don't seem to understand about enough of it.

40:58 So it really limits the usefulness if it doesn't, if it's not a little closer to what you're actually doing.

41:04 And I think this brings it closer.

41:05 So yeah, tell us about it.

41:06 Language models aren't just governed by like their intelligence, however you measure that, right?

41:12 They're governed by how much context they can take.

41:14 So one of the reasons ChatGPT was so remarkable is that it had a great way of managing context through conversation history.

41:21 And that seemingly small leap, that seemingly small feature, is what made ChatGPT so remarkably disruptive to this industry: that additional context.

41:35 And we think about like extending that idea, like how do we give an AI more context, make it like even more human-like and personal?

41:42 Well, the idea is similar.

41:44 We add more context and that's what learning does, right?

41:47 And so the way learning works is that we're actually using another set of models called embedding models.

42:01 And embedding models are very, very underrated in the AI modeling space, right?

42:01 These are really remarkable things.

42:02 I'll only cover the most important characteristic of embedding models, which is that they take syntax and map it to a high-dimensional vector space called a semantic space.

42:17 And inside of the semantic space, nearby vectors indicate semantic similarity.

42:23 I know that's like a lot of words, I'm going to break that idea down, right?

42:26 So like canine and dog, let's take these two words as an example, right?

42:30 These two words are completely different.

42:32 They don't even share a single character; they don't have a single letter in common with one another.

42:39 And yet we know as humans that these two words mean the same thing.

42:45 They refer to a dog.

42:46 So like they have different syntax, but the same semantics, the same semantic meaning.

42:51 So their vectors would be-

42:52 Would be mapped close.

42:53 Would be very close by whatever metric you're using, yeah.

42:56 If you extend this idea and like you imagine, okay, what if you split a document?

43:01 What if you split a file into like one to two sentence chunks?

43:05 Let's just say sentences, for example. So we split a document into sentences, and then we take each of those sentences, use an embedding model to compute their embedding, and then store them inside of a vector store.

43:19 Basically a local database that just stores all of these vectors in, like, a file or something, right?

43:26 Now imagine what happens if we then take a prompt, like a question that we might have, and turn that into an embedding.

43:33 And then we say to the vector store, okay, for this prompt embedding, find me all of the other embeddings that are close to this.

43:42 Well, what you've just done in this process is called a semantic search.

43:46 So it's kind of like syntax search, except instead of searching based off of keywords or tokens or other syntactic traits, you are searching based off the actual natural language meaning of the word.

43:59 This is much more applicable when it comes to like natural language prompts and natural language, like corpuses of data, because this is like the actual information that's being stored.

44:10 Now we don't care about the characters.

44:11 We care about the information that they represent, right?

44:14 The essence of it.

44:15 Yeah.
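As a toy illustration of that whole loop (sentence-transformers is one real embedding library; the model name is just a common default, not necessarily what Jupyter AI uses):

    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")
    chunks = [
        "A canine was barking in the yard.",
        "My dog loves going to the park.",
        "The stock market fell sharply today.",
    ]
    vectors = model.encode(chunks)  # one embedding vector per chunk

    def semantic_search(query: str) -> str:
        q = model.encode([query])[0]
        # Cosine similarity: nearby vectors mean similar meaning,
        # not similar spelling.
        sims = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
        return chunks[int(np.argmax(sims))]

    print(semantic_search("dog"))  # the "canine" chunk ranks high here too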

44:16 And these vectors are computed by the larger language model?

44:18 The vectors, the embeddings are computed by an embedding model and they're actually a separate category of model that we have our own special APIs for.

44:27 So in our settings panel, you can change the language model.

44:31 And I think we already discussed that, right?

44:33 Yeah.

44:46 But what's interesting is that underneath that, you'll also see a section that says embedding model, and you can change the embedding model too; we offer that same principle of model agnosticism there.

44:46 Yeah.

44:47 This is very interesting.

44:48 Very interesting.

44:49 Let's talk a little bit about the format.

44:50 You said obviously that you can do LaTeX, which you say in, you say math, right?

44:54 You tell us, give me math.

44:55 Yeah.

44:56 Which is, yeah, pretty interesting, but you can do images, markdown, code, HTML, JSON, text.

45:03 There are a lot of different formats you can get the answer back in.

45:05 When you use the AI magics, we can pass the output to a renderer first before we show it to the user.

45:13 Yeah.

45:14 And with the AI magic, the percent-percent-ai in the cell, that's where you put the format, potentially, but you can also specify the model and the service, I guess, the provider.

45:25 The IPython magics are basically stateless in the sense that you always have to specify the model explicitly.

45:33 They don't operate off the premise that you are using JupyterLab.

45:36 They don't run off the premise that you have JupyterLab installed or are using the lab extension that we offer.

45:42 Because of that, like the model is stated explicitly every time.

45:46 That's by design.
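So a magics invocation names the provider and model every time, roughly like this; the provider ID, model name, and format flag follow jupyter-ai-magics conventions, but which ones are available depends on your installation:

    %%ai openai-chat:gpt-3.5-turbo --format code
    A function that computes the nth Fibonacci number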

45:47 When you sat down, what is your Jupyter AI provider set to?

45:54 What's your favorite?

45:55 As a developer, I like literally pick a random one to give the most test coverage at all times.

46:01 And that's actually a great way of finding bugs.

46:03 So yeah, I don't have a favorite one.

46:05 My favorite one is the one that works and hopefully that should be all of them.

46:11 You can also tell it to forget what I was talking about.

46:14 We're going to start over.

46:16 That's pretty interesting that you can do that along the way because you maybe had a bunch of conversations and we talked about the benefit of it, like knowing the history of that conversation, but you're like, all right, new idea.

46:26 Switching topics.

46:27 Chat.

46:28 I think the last one I wanted to talk about specifically was interpolating prompts.

46:32 Kind of almost like f-strings where you can put in the prompt text, you can put a variable and then other parts of your notebook or program can set that value.

46:43 Yeah.

46:44 Tell us about this.

46:45 You can define a variable in your IPython kernel, right?

46:52 And that's done just like how you define any other variable.

46:52 But what's interesting is that IPython is actually aware of the variables that you are defining.

46:58 So we can programmatically access that when we implement the magic, right?

47:03 Basically if you define any variable at a top level scope, like let's say a poet equals Walt Whitman, right?

47:09 So we have a name variable called poet.

47:11 And then you can send a prompt over like write a poem in the style of curly braces, poet and curly braces.

47:19 And when that is run, the variable, the value of that variable is interpolated and substitutes around the curly braces.

47:27 So the final prompt becomes write a poem in the style of Walt Whitman.

47:31 And when that prompt is sent, well, you can imagine it generates a poem of Walt Whitman.

47:36 Yeah.

47:37 The variable interpolation that we offer in IPython is very useful for like very quick like debugging.

47:42 So you can actually reference a cell in the notebook directly.

47:47 I think a lot of people don't know this, but like in a Jupyter notebook, there's like the in and out indicators to the left of each cell.

47:55 So it'll say like In[1], Out[1], right?

47:58 So those are actual variables and you can use them here too.

48:01 So you can say like: debug, tell me why this code is failing, curly brace, In[1], curly brace.

48:08 That's what you've got on the screen there, just scrolled to it.

48:11 Imagine: what does this do, In[11]? Or: what went wrong in Out[11]? Yeah, something like this.
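So a quick debugging prompt might look roughly like this; the cell number is just an illustration:

    %%ai chatgpt
    Explain why the code in {In[11]} is failing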

48:19 Right.

48:20 It's fine as long as you don't go and rerun that cell.

48:22 Yeah.

48:23 But like you said, this is not for long-term.

48:25 You can make it independent of the order of execution just by assigning whatever content that is to a named variable.

48:35 That way, no matter what order you run them in.

48:38 People might be thinking when you describe this interpolation thing, just bracket, bracket, curly brace, curly brace.

48:44 They're like, we already have that in Python.

48:46 You just put an F in front of the string and so on.

48:49 But this is in the message that goes to the AI cell magic, not straight Python, right?

48:55 That's the relevance.

48:56 That's why this is interesting, right?

48:57 Yes.

48:58 I guess it really comes down to the different models that you select.

49:00 So you opt into this a little bit, or maybe you need to understand it that way, but talk to us a bit about privacy.

49:07 If I select something and say, what does this do?

49:11 What happens?

49:12 Something important to emphasize here is that whenever you use a language model that's hosted by a third party, so like it's running on their servers, right?

49:21 Regardless of whether this model is free or not, like the fact that you're sending data to a third party over the internet, like that's where the privacy and security concerns happen, right?

49:33 So that happens whenever you're sending data across the wire over the internet.

49:37 But we have some special safeguards in place here, specifically to assuage the concerns over privacy and security that a lot of open source users have.

49:49 And one of the important ideas here is that Jupyter AI is both transparent and traceable.

49:55 So when we send a prompt to a language model, that's always captured in the server logs by default.

50:02 So that's always being logged.

50:03 That's always being captured.

50:05 So it's always going to be traceable.

50:07 There's no secret back channel.

50:10 You tell people this is happening.

50:11 Okay.

50:12 Yeah.

50:13 So if an operator needs to audit, like, oh, dang, let me check just to make sure nothing scary was sent over to OpenAI.

50:21 Well, the operator can review the server logs and make sure that all usage is compliant with whatever privacy policy their company has.

50:31 And Jupyter AI is also exclusively user driven, meaning that we will never by default send data to a language model, even if you selected one, right?

50:43 Like, we will never send data to that language model until explicit action is done by the user.

50:49 So in this case, like clicking the send button, or hitting Shift-Enter and running the cell.

50:54 Nothing is sent to language model or embedding model until that happens.

50:57 That's really all you can do.

50:58 That sounds great.

50:59 Because you don't control what happens once it hits OpenAI or Anthropic or whatever.

51:04 That's why the transparency is so important.

51:06 Right.

51:07 And oh, I forgot to touch on traceability.

51:09 So like with these AI generated cells, right?

51:12 So in the output cell's metadata, we indicate the model that was used to generate that output cell, if it comes from the Jupyter AI magic.

51:21 So that way it's also traceable, not just in the logs, but in the actual file's metadata as well.

51:28 That's cool.

51:29 So it'd be real easy to say, have the cell magic, and then, you know, notebooks store their last output.

51:35 Like if you upload them to GitHub, they'll keep their last output, unless you say clear all cell outputs.

51:39 And depending on which model you have selected, you might not get the same output, not even close to the same output.

51:46 So you might want to know like, well, how did you get that picture?

51:49 Oh, I had Anthropic selected and not the other, right?

51:52 Like that is really nice.
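
If you ever want to audit that yourself, a notebook file is just JSON, so a few lines of Python can surface the tags. A rough sketch: the filename is a placeholder, and the exact metadata key ("jupyter_ai" below) is an assumption to check against what your version actually writes.

    import nbformat

    # Parse the .ipynb (plain JSON under the hood) into Python objects.
    nb = nbformat.read("analysis.ipynb", as_version=4)

    for i, cell in enumerate(nb.cells):
        for out in cell.get("outputs", []):
            # Assumption: Jupyter AI stamps generated outputs under a "jupyter_ai" key.
            ai_meta = out.get("metadata", {}).get("jupyter_ai")
            if ai_meta:
                print(f"cell {i}: {ai_meta}")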

51:53 I've actually used the server logs myself to debug like prompt templates, for instance.

51:58 Right.

51:59 Because what we show in the logs is the full prompt, after applying our template, after applying the edits, that's what's actually shown.

52:08 So that's also really helpful for developers who need to debug what's happening.

52:12 Yeah, of course.

52:13 Or if you're a scientist and you're looking for reproducibility. I doubt there's much guaranteed reproducibility, even across versions of the same model, but at least you're in the same ballpark, you know, at least you know what model it came from.

52:25 You can set the temperature to zero, but it won't generate a very fun output.

52:29 That is a workaround if you truly need reproducibility.

52:33 I suppose.

52:34 Yeah.

52:35 The temperature being the ability to tell it how creative do you want to be or how focused do you want the answer to be, right?

52:42 It's a hyperparameter that basically governs the randomness, like how far away from the mean it's willing to deviate.

52:49 Kind of, yeah, vaguely describable as creativity.

52:52 Yeah.

52:53 Yeah, I suppose.
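
Since Jupyter AI is built on LangChain, you can see what temperature does at that layer directly. A sketch against the late-2023 LangChain API, assuming an OPENAI_API_KEY in the environment; how parameters thread through Jupyter AI's own configuration may look different.

    from langchain.chat_models import ChatOpenAI

    # temperature=0 makes sampling nearly greedy: focused, repeatable answers.
    focused = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.0)

    # Higher values flatten the token distribution, so replies vary run to run.
    creative = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=1.0)

    print(focused.predict("Name one use for a Jupyter notebook."))
    print(creative.predict("Name one use for a Jupyter notebook."))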

52:54 And if you're looking for privacy, GPT4All might be a good option.

52:58 Oh yeah, absolutely.

52:59 For that, right?

53:00 Because that's not going anywhere.

53:01 Yeah.

53:02 However, some of them do have license restrictions, and that's also why we have taken it slow when it comes to adding more GPT4All support: different models are licensed differently, and that's another consideration we have to keep in mind.

53:17 Yeah.

53:18 Yeah, of course.

53:19 You're playing in a crazy space, with many different companies and evolving licenses.

53:23 Yeah.
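
For the curious, the local, nothing-leaves-your-machine setup David is describing looks roughly like this through LangChain, the layer Jupyter AI's GPT4All support wraps. The weights path is hypothetical; you download a model file yourself first, and mind its license.

    from langchain.llms import GPT4All

    # Inference runs entirely on the local CPU; no prompt crosses the network.
    llm = GPT4All(model="/home/me/models/ggml-gpt4all-j-v1.3-groovy.bin")  # hypothetical path

    print(llm("Summarize what a Jupyter kernel does."))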

53:24 Let's close it out with one more thing here.

53:26 Maybe two more things.

53:27 One is, there's a lot of interest in these agents, and it sounds like, for example, your "create a notebook that does this" sort of thing, like "teach me about Matplotlib," is a little bit agent-driven, thinking of LangChain and stuff like that, or even GPT Engineer.

53:44 What's the story?

53:45 Do you have any integrations with that?

53:46 Any of those types of things?

53:47 We're working on one where basically you won't need to use slash commands anymore.

53:52 So this is like, again, like we're just kind of playing around with this, seeing how well it behaves.

53:58 But we are trying to use agents to, for example, remove the need to use slash commands.

54:02 So when you say like generate a notebook, like it will just generate one.

54:07 You don't have to know the slash command for that.

54:09 Like it will just go.

54:10 So like, yeah.

54:11 However, we don't use agents at the present moment.

54:16 And the reason for that is that they're not very model-agnostic, at least the ones from our research; they only work well for a specific model and a specific prompt template, but beyond that, it's hard.
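
To make that brittleness concrete, here's the classic LangChain ReAct-style agent loop (late-2023 API), shown as an illustration rather than Jupyter AI's code: the built-in prompt template expects the model to emit an exact Thought/Action format, which is why swapping in a different model so often breaks it.

    from langchain.agents import AgentType, initialize_agent, load_tools
    from langchain.chat_models import ChatOpenAI

    llm = ChatOpenAI(temperature=0)

    # The agent plans with the LLM and decides when to call each tool;
    # the hidden prompt only works well for models that follow its format.
    tools = load_tools(["llm-math"], llm=llm)
    agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)

    print(agent.run("What is 7 to the power of 0.5, rounded to three decimals?"))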

54:29 All right.

54:30 Last question running out of time here.

54:31 What's your favorite thing you've done with Jupyter AI?

54:34 Not like a feature, but what have you made it do to help you that you really like?

54:38 Teach Jupyter AI about Jupyter AI.

54:41 That's an easy one.

54:42 So we have the documentation.

54:44 What did it learn?

54:45 Well, it learned about itself.

54:47 So I was at a conference, this was at PyData, and I actually got enough questions to the point where... I had the documentation already available in my home directory because of some previous work, the reStructuredText of them.

55:02 Yeah.

55:03 And the markdown source, the markdown source.

55:04 Right.

55:05 And then I just used slash learn.

55:07 I just learned those docs, and then I just had that laptop on the side and told people, like, if you have any questions, try it.

55:14 Just ask it.

55:15 Yeah.

55:16 In case you don't want to wait in line.

55:17 So yeah, it's pretty remarkable.

55:21 Yeah.

55:22 The learn feature by far is definitely my favorite and it's the one I want to spend the most time developing and pushing.
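
The conference-demo workflow he describes is just two chat commands in the Jupyternaut panel; the docs path below is a placeholder for wherever your files actually live.

    /learn docs/source
    /ask How do I switch Jupyter AI to a different model provider?

The /learn command splits and embeds the files into a local vector store, and /ask then answers questions grounded in what was indexed.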

55:28 I think there's massive possibility there, because it deeply understands what you're working on in a multifaceted way.

55:36 What's the documentation?

55:37 What's the code?

55:38 What's the data?

55:39 What's the network topology and the servers it can work with?

55:41 Like all of that kind of stuff, or pick your specialty.

55:44 Yup.

55:45 And that's how we make AI more human right before it takes over.

55:48 So no problem there.

55:49 Oh yeah.

55:50 Just kidding.

55:52 Let's wrap it up.

55:53 I think we're out of time, David.

55:54 So final question, some notable PyPI package, maybe something awesome you discovered to help you write Jupyter AI.

56:01 Anything you want to give a shout out to?

56:02 We definitely use LangChain a lot; it has some pretty fantastic integrations, and we're actually built on top of LangChain, really.

56:11 But also Dask.

56:12 Dask is really nice.

56:13 Yeah.

56:14 It has some great visualization capabilities for when you're doing parallel compute.

56:18 Like it has a great dashboard that's also available in JupyterLab.

56:22 Yeah.

56:23 Seeing that dashboard showing it running and distributing the work in JupyterLab is amazing.

56:26 And one of the contributors also reached out once he had heard I was integrating Dask into Jupyter AI.

56:33 He actually reached out to help us and offer like direct one-on-one guidance with using Dask.

56:39 And yeah, it's just been a fantastic experience using Dask.

56:42 Yeah.

56:43 I have no complaints.

56:44 It's just pretty awesome that somebody has finally made parallel and distributed compute better in Python.

56:51 For sure.

56:52 Yeah.

56:54 Dask is cool.

56:55 Dask is very cool.
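
A tiny taste of that, assuming only that dask and its distributed scheduler are installed: Client() starts a local cluster and exposes a dashboard link, and the JupyterLab extension can embed those same dashboard panes next to your notebook.

    import dask.array as da
    from dask.distributed import Client

    client = Client()                # local cluster of worker processes
    print(client.dashboard_link)     # live task-stream/progress dashboard

    # Something chunky to watch in the dashboard while it computes in parallel.
    x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))
    print(x.mean().compute())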

56:56 All right.

56:57 Well, final call to action.

56:58 People want to get started with Jupyter AI.

56:59 What do you tell them?

57:00 pip install jupyter-ai, or Conda.

57:01 Conda install works too.

57:02 It's on both.
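
Concretely, per the Jupyter AI docs at the time of recording:

    pip install jupyter-ai
    # or, from conda-forge:
    conda install -c conda-forge jupyter-ai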

57:03 Awesome.

57:04 Well, really good work.

57:05 This is super interesting and I think a lot of people are going to find value in it.

57:07 So I can see some nice comments in the audience that people are excited as well.

57:11 So thanks for being here.

57:12 Yeah.

57:13 Thank you so much.

57:14 Yeah.

57:15 See you later.

57:16 Bye bye.

57:17 Bye.

57:18 This has been another episode of Talk Python to Me.

57:19 Thank you to our sponsors.

57:20 Be sure to check out what they're offering.

57:22 It really helps support the show.

57:25 This episode is sponsored by Posit Connect from the makers of Shiny.

57:29 Publish, share and deploy all of your data projects that you're creating using Python.

57:34 Streamlit, Dash, Shiny, Bokeh, FastAPI, Flask, Quarto, Reports, Dashboards and APIs.

57:40 Posit Connect supports all of them.

57:42 Try Posit Connect for free by going to talkpython.fm/posit.

57:46 P-O-S-I-T.

57:47 Want to level up your Python?

57:51 We have one of the largest catalogs of Python video courses over at Talk Python.

57:55 Our content ranges from true beginners to deeply advanced topics like memory and async.

58:00 And best of all, there's not a subscription in sight.

58:03 Check it out for yourself at training.talkpython.fm.

58:06 Be sure to subscribe to the show.

58:08 Open your favorite podcast app and search for Python.

58:11 We should be right at the top.

58:12 You can also find the iTunes feed at /iTunes, the Google Play feed at /play, and the Direct RSS feed at /rss on talkpython.fm.

58:22 We're live streaming most of our recordings these days.

58:24 If you want to be part of the show and have your comments featured on the air, be sure to subscribe to our YouTube channel at talkpython.fm/youtube.

58:33 This is your host, Michael Kennedy.

58:34 Thanks so much for listening.

58:35 I really appreciate it.

58:36 Now get out there and write some Python code.

