
#430: Delightful Machine Learning Apps with Gradio Transcript

Recorded on Thursday, Aug 10, 2023.

00:00 You've got this amazing machine learning model you created, and you want to share it and let your colleagues and users experiment with it on the web.

00:07 How do you get started?

00:08 Learning Flask or Django?

00:10 Great frameworks, but you might consider Gradio, which is a rapid development UI framework for ML models.

00:16 On this episode, we have Freddy Boulton to introduce us all to Gradio.

00:20 This is "Talk Python to Me," episode 430, recorded August 10th, 2023.

00:25 (upbeat music)

00:28 Welcome to Talk Python To Me, a weekly podcast on Python.

00:41 This is your host, Michael Kennedy.

00:43 Follow me on Mastodon, where I'm @mkennedy, and follow the podcast using @talkpython, both on fosstodon.org.

00:50 Be careful with impersonating accounts on other instances.

00:53 There are many.

00:54 Keep up with the show and listen to over seven years of past episodes at talkpython.fm.

00:59 We've started streaming most of our episodes live on YouTube.

01:03 Subscribe to our YouTube channel over at talkpython.fm/youtube to get notified about upcoming shows and be part of that episode.

01:11 This episode is brought to you by JetBrains, who encourage you to get work done with PyCharm.

01:17 Get your free trial of PyCharm Professional at talkpython.fm/done-with-pycharm.

01:23 And it's brought to you by Sentry. Don't let those errors go unnoticed. Use Sentry.

01:29 Get started at talkpython.fm/sentry. Freddy, welcome to Talk Python to Me.

01:35 Thanks for having me, Michael.

01:36 Yeah, it's great to have you here. I think people are going to learn a lot about some machine learning on this episode. And you've got this really cool visual way, this visual tool with working with machine learning projects.

01:47 And oftentimes people ask me, I'm not really a web developer, but I have some machine learning stuff, or I'm a data scientist, and I want to share with people, how do I do that?

01:56 So your project might be a good answer to that for some folks, right?

01:59 - Yeah, absolutely.

02:00 I think that, yeah, Gradio is built for that use case.

02:03 I think you can build lots of complex stuff with Gradio, full web apps and so on, but it's optimized for the ML use case.

02:11 Like, how do you get an ML workflow on the web and share it with people as quickly as possible?

02:16 That's kind of what Gradio is built for.

02:17 Awesome.

02:18 I've got it running on my computer.

02:19 How do I take it from a notebook to something that other people who are not programmers can use, right?

02:23 Yeah, exactly.

02:24 That's kind of it.

02:24 And with Gradio, it's one line.

02:26 You can get a shareable link directly from your Colab notebook, Jupyter notebook, SageMaker, local, whatever.

02:32 Yeah.

02:33 So it's really easy to share with people.

02:35 Excellent.

02:35 All right.

02:36 Before we dive into that, let's start with you.

02:38 A quick story, quick introduction about you and how you got into programming Python.

02:42 Yeah, absolutely.

02:42 So, all the way from the beginning: I graduated with a degree in statistics and my first job was working as a data scientist in Chicago.

02:50 And that was doing more like bread and butter data sciency stuff.

02:53 Like you pull data from like database and like you train a model and then you like try to communicate the results with someone.

02:59 And then at the time, yeah, I mean, it wasn't that long ago, but to me, it feels like it was like a millennia ago.

03:05 Technologically speaking, it's so different.

03:07 Yeah.

03:07 Yeah.

03:07 It was like a really long time ago.

03:09 Yeah.

03:09 But then basically what happened to me a lot was just, okay, we're training this model.

03:15 Like, how do we share it with the relevant stakeholders, right?

03:18 There's a PM or someone that's interested in this. How do you make them care?

03:23 And then there really wasn't a good answer.

03:24 Like, you would have to compute some metrics and then try to explain what they mean.

03:29 And you'd draw a bar plot or something.

03:31 And it just wasn't really that useful.

03:33 Right.

03:33 And I think it really was to be fair, it was like a skill gap.

03:36 I didn't even know how to build like an interactive website to share with people.

03:40 Cause at the end of the day, what really, what people really care about or how to make someone really care about it is that they can play with it, right?

03:46 Cause if you show them these plots and these metrics, it's like a machine learning models, like very abstract, right?

03:52 It's like just this thing that's somewhere else.

03:54 And then this is like the output of it, but it's not really even like the actual output, it's just like some sort of summary statistics of it.

04:00 Right.

04:00 But if you give someone, like, this is what the model is, and you let them manually play with the inputs and see how the outputs change, they get a sense of what the model is and how it works and stuff like that.

04:12 Right.

04:12 And I think that's where I learned about this problem and why this problem is important.

04:17 And then ever since then, I've been sort of devoting myself to working on open source tools to make data science more efficient.

04:24 And like my, my, the latest product that I worked on is called Gradio, which kind of does this.

04:29 It basically lets you turn a machine learning function into a web app in one line of code.

04:33 And from there you can jump off and build as complex a web app as you want.

04:38 But from the beginning, you can get up and running with this, like basically two lines of code.
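
As a rough illustration of what those couple of lines look like, here is a minimal sketch; the greet function and its names are placeholders, not something from the episode.

```python
import gradio as gr

# Any plain Python function: its inputs and outputs map to UI widgets.
def greet(name: str) -> str:
    return f"Hello, {name}!"

# One line to wrap the function, one line to serve it as a local web page.
demo = gr.Interface(fn=greet, inputs="text", outputs="text")
demo.launch()
```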

04:43 So, yeah, it's kind of a little bit of a bummer.

04:45 You talked about letting people play with the machine learning model and you change the inputs and stuff.

04:50 It just completely changes the round trip speed because the alternative might be, I'll make, I'll get you a PDF and you can read the report and then what if we change this, all right, at tomorrow's meeting, I'll bring the new PDF.

05:02 No, no, no, no, no, no, no. Just empower people and give them these tools.

05:05 And yet at the same time, data scientists are not web developers, certainly not in the super dynamic, front-end, Ajax callback way of programming, right.

05:14 That's a different skill to be sure.

05:15 And so it's not like their data science skills by default make them.

05:20 Able to build these.

05:21 And even if you could, is it a good use of your time?

05:23 Right?

05:23 Yeah, absolutely.

05:24 And yeah, so it's like the PDF report is definitely like one, one way of doing it.

05:28 The other way that we tried sometimes is we'll just hand it off to someone and we'll have them build the web app, but then that just takes longer.

05:34 You have to like explain everything to them.

05:36 Right.

05:36 I think what makes it really impactful is if the person who made the model can convey it themselves; they can just create the web app, the demo, immediately.

05:44 a lot of these, a lot of data scientists are in Python, right?

05:46 So that kind of means you need like a Python based tool to get you up and running really quickly.

05:51 And a lot of them, yeah, like you said, don't know about web programming.

05:53 So you've got to abstract that away as much as you can, as much as it makes sense, so that they're not daunted. It's not like, okay, I'm really good at PyTorch and training and all these things, but now I need to learn about servers and all that stuff.

06:07 It's like, it's like almost a different skill set for a lot of people.

06:09 It really is.

06:10 And the handing it off to somebody else.

06:12 It's slow, but it also only works for certain situations, right?

06:16 Like a lot of data scientists, I suspect, don't have a whole software team supporting them as they need.

06:22 Right.

06:22 They're the sole person at their company.

06:25 So you said you had gone into statistics and found your way over to this side of that world.

06:29 You feel like it's a golden age for statisticians now, because when I went to college, it's like, well, you could be an actuary or you could work at an insurance company, or maybe some other company might be interested in hiring somebody who does stats.

06:43 You work at the US bureau of statistics.

06:46 Yeah, exactly.

06:47 And now with the kind of blending into data science, they're just so in demand.

06:53 The world is so open for that now.

06:55 It definitely became a much sexier career.

06:58 And I think I got lucky that I got into it like right before, right.

07:02 As it was starting to take off, like at that point, like data science wasn't really like a term yet.

07:07 So not that old, but yeah, at that time, I think research scientist was more the term, right.

07:15 Which still sounds a little bit dry, but then it got rebranded as data science.

07:18 Yeah.

07:19 I mean, I think it's a group.

07:20 I mean, I would say it's always a good time to study statistics.

07:22 I think like a lot of people, if someone were to come back to me and say, I have no idea what I want to major in, but I have some aptitude for math.

07:28 I would say like major in statistics, I think it's like really useful.

07:31 It has like a lot of applications.

07:33 But I think now also in terms of tech, I feel like definitely like it's sort of like the, it's the era of the Renaissance person, right?

07:39 Like you have to know a little bit of everything now, right?

07:42 Because it's like stats, programming, math, all these things are starting to blend together.

07:48 And yeah, I think it's like a lot of people I really respect pull from all these disciplines like seamlessly, right?

07:53 And I think it's, yeah, I think that's where we are now.

07:56 Yeah.

07:56 It's a super fun time.

07:58 If you're excited about always learning new things and sort of bettering yourself and bringing in this thing and mixing it that way, if you'd rather just be done learning, maybe not so much.

08:07 Absolutely.

08:07 Can't just show up for 20 years.

08:09 It'd be okay.

08:10 I guess if you were maintaining COBOL code, it would be okay, but not in the machine learning space.

08:16 And machine learning also is just crazy.

08:18 We've got large language models just running loose everywhere now.

08:22 What do you think about all that?

08:23 It's definitely like a very exciting time to be alive.

08:26 I think pretty crazy that when I got started in ML six years ago, it was, it definitely was like very niche, right.

08:32 And like the tools that people use and like the language about it definitely did not penetrate the mainstream.

08:37 But now it seems like, like the technologies and the algorithms, the models, the data sets are all things that people talk about now.

08:43 Right.

08:43 And I think we've all had an older relative ask us about ChatGPT or the latest trend, or Stable Diffusion, the AI image generation.

08:53 I think it has penetrated every part of this.

08:55 I think like part of the reason why that is, is one, because one, like the technology is like way more impressive now, right?

09:00 Like these algorithms are able to do things that were unimaginable, like 2017, right, when I graduated college.

09:06 But also it's just, these models are much easier to share and use now.

09:10 Right.

09:10 And I think part of the reason why ChatGPT took off so quickly is that the interface is so intuitive.

09:16 It is.

09:16 We've been chatting with each other for like decades, like over the internet.

09:20 Right.

09:20 And the user interface is so simple and it fits our mental models so quickly, so easily.

09:27 But you know, under the hood, it's like this incredibly complex process.

09:30 Right.

09:31 Yeah.

09:32 Right.

09:32 And I think that's where tools like Gradio come into play.

09:35 Right.

09:35 It's just, there's a bunch of like incredible, like amazing research happening, but unless other people can use it, play with it, evaluate it, like it's almost as if it doesn't exist.

09:45 Right.

09:45 And I think Gradio really helps you create a demo, an app that other people can use and play with and evaluate your model.

09:52 And then, and then just like that, anyone can use it, right?

09:55 Like, you no longer have to be a technical person, and you don't have to run some Python script or something, right?

10:01 You can just go to a website, right?

10:03 You can just send someone a link and then they can play with the state of the art.

10:05 It's pretty, pretty cool.

10:06 Can you control a combo box and a button?

10:09 Yeah.

10:09 Something like that.

10:10 Right.

10:10 Yeah.

10:10 Then you're qualified.

10:11 It's wild.

10:12 One of the things that surprises me is, for such insane technology that leverages so many servers, with ChatGPT and friends, the user interface for it is so mundane. I don't mean that as a derogatory term, but it's just like, well, you talk to it in this text box and it talks back. There's not some crazy new UI where you put on 3D glasses; it's just a chat box.

10:34 But what it does is incredible.

10:36 Similar for Midjourney and other things, you just /imagine.

10:40 Just chat with it.

10:41 But so there's this sort of weird paradox of this incredible, simple way to interact with it.

10:47 And yet what it does is I guess it's a natural way to interact with it, which is what's surprising.

10:52 Yeah.

10:52 Part of the reason I think it's just like the natural language.

10:55 Like interface.

10:56 I think a lot of people like resonate with that.

10:57 I don't think you have to explain that, right?

10:59 Like you just type something and then it'll respond, right?

11:01 Like it won't, it won't error.

11:03 Right.

11:03 And I think even like stuff that isn't just purely chat based, I think like stable diffusion, like the web UI, right?

11:09 I think it's, it has a lot of controls.

11:11 Right.

11:11 But at the end of the day, it's like a Photoshop-esque interface, right?

11:15 Where it's like someone who's used to that kind of software, like it's what they expect, right?

11:20 You upload an image and then you can get a tool to blur something out, and then you can inpaint it or outpaint it and stuff like that.

11:28 So I forget who said it, but I think some, someone said that MLs, it's not really like the product.

11:33 It's like the, like in the background, right?

11:35 Like the most successful ML products, they don't really feel like they're ML.

11:39 Right.

11:40 The ML is abstracted away and it just makes your experience that much better.

11:44 Right.

11:44 And I think that's what all these different tools are showing.

11:47 Amazing.

11:47 All right.

11:48 So let's talk about Gradio and I'm going to ask you to do something a little bit funky to kick this off, but let's talk about what just, what other apps are like Gradio.

11:57 So things that come to mind for me are like StreamYard, for example.

12:01 Streamlit.

12:01 Sorry, that's what I mean.

12:02 Stream, yeah.

12:03 Streamlit.

12:03 I'm reading the words of our app that we're using.

12:06 Streamlet, not Streamyard.

12:07 Streamlit and other, just give people a sense of what are the categories of apps that's in the same space so they can get a mental model for what Gradio is.

12:14 For sure.

12:15 Yeah.

12:15 So I think like Streamlit is a good comparison.

12:18 Like Plotly, Dash, I think is also in the same ecosystem.

12:22 Shiny, I think.

12:24 Yeah.

12:25 From the R programming language, I think.

12:26 Yeah.

12:27 I just had Joe on to talk about Shiny for Python recently.

12:30 Yeah.

12:30 They're all definitely in the same ecosystem.

12:32 If you go to the Gradio homepage, Gradio.app, you can see some of the apps that you can build with Gradio really quickly.

12:39 Absolutely.

12:40 Yeah.

12:40 But that doesn't necessarily limit, like what you see on the landing page is not all that you can build with Gradio.

12:45 I think those are just like the eye-catching quick examples, just because.

12:49 Like I said, like Gradio is built to get these kinds of examples up and running really quickly, but you can do like lots of complex stuff with Gradio.

12:56 Excellent.

12:57 Yeah.

12:57 So we'll let's dive into it.

12:59 So you've already given it a bit of a introduction for us.

13:02 Maybe we could work, just start by discussing how you might take, you've got some different types of problems you can solve on your homepage and it shows you the code, the entire code and then the UI that comes out of it.

13:14 So maybe we can, you could just talk us through the sketch recognition.

13:16 It's one of the types of UIs you could build here.

13:19 So with the sketch recognition, you just, what you draw, it's definitely a bird I drew there or a mountain.

13:24 I'm not sure.

13:25 How do you think about Gradio?

13:25 Right.

13:26 So Gradio is a, the Python library, right?

13:29 Python is the main language used to interface or to build Gradio apps.

13:32 pip install Gradio.

13:33 Yeah.

13:33 You pip install Gradio, right?

13:35 And then what does Gradio do at the highest level?

13:37 Gradio turns a Python function, any Python function into a interactive web app.

13:43 Right.

13:43 So when you think of function, right, function has inputs and outputs.

13:47 So these inputs correspond, these inputs and outputs correspond to things that will be drawn on the page, right?

13:51 And, and Gradio comes with a standard set of inputs, right?

13:55 There's like text boxes, dropdowns, number fields, data frames, plots, anything like that, but also like drawing tools, like a sketch pad.

14:04 And then the output, it can be, it can be any of these other components, but that could also be like a label, right.

14:09 To show like a machine learning prediction.

14:11 So all you need to do is write a plain Python function that takes in a drawing and returns a set of probabilities or confidences, and then Gradio can wrap that in one line of code and turn it into an interactive web app, like we see here. If you're on the YouTube stream, you can see what Michael is doing.

14:28 There's like a sketch pad area and then he can scribble on it and then immediately he'll get a prediction out.

14:34 Yeah.

14:34 Let's see if I can draw an owl.

14:36 Maybe.

14:37 Right.

14:37 So it's like, okay, let's see how I do.

14:39 I don't know.

14:39 Yeah.

14:40 Syringe.

14:40 We've got to make the model better.

14:42 Right.

14:42 So it could be a cat.

14:44 It definitely could be a cat.

14:45 I can see cat.

14:46 I can see it.

14:47 I think I got to make my drawing better.

14:48 But so the idea is you have a regular function that takes the inputs and outputs.

14:53 There's no UI whatsoever.

14:54 And there's also no reactive programming.

14:57 You're not like hooking events where I redraw and it just reruns.

15:00 Right.

15:01 That's not part of my code.

15:02 I write as a Python person.

15:04 - Right. - And then you just say, gr.interface, give it the function, and then you say the inputs are, in this case you say it's just a sketchpad.

15:11 And so I get this UI that I can draw on, that I've been attempting to draw an owl on.

15:16 It's probably missing the eyes.

15:17 I think it's the eyes that are missing.

15:19 - The latter. (laughs)

15:21 - It's not about testing the underlying model, is it?

15:25 And then you say the outputs that are labeled.

15:26 Now a lot of UIs, people might think a label is just a non-interactive piece of text.

15:32 But here there's more of a machine learning label, right?

15:34 You've got like a cool horizontal bar graph that has percentages and talks about its guesses.

15:39 So it's like a machine learning labeled response report.

15:43 Just a machine learning person would, when they see label, they think that's right.

15:48 They don't think of the standard web, like just like a text box.

15:52 Right.

15:52 So yeah, like label four type of thing.

15:54 And then HTML.

15:55 Yeah.

15:55 Right.

15:55 So that's one of the things that are one of the ways that kind of Gradio is built for that kind of audience.

16:00 Right.

16:00 it's like the high level parameters match that mental model.
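
As a rough sketch of the sketch-recognition demo being described, something like the following would work; the classifier here is a stand-in that returns random confidences rather than a real model, and the class names are made up.

```python
import random
import gradio as gr

CLASSES = ["owl", "cat", "mountain", "syringe"]

def recognize(sketch):
    # A real demo would run the drawing through a trained model here;
    # fake per-class confidences keep the example self-contained.
    scores = [random.random() for _ in CLASSES]
    total = sum(scores)
    return {label: score / total for label, score in zip(CLASSES, scores)}

# "sketchpad" gives a drawing canvas; "label" renders the confidence bars.
demo = gr.Interface(fn=recognize, inputs="sketchpad", outputs="label")
demo.launch()
```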

16:03 - This portion of Talk Python to Me is brought to you by JetBrains and PyCharm.

16:11 Are you a data scientist or a web developer looking to take your projects to the next level?

16:16 Well, I have the perfect tool for you, PyCharm.

16:19 PyCharm is a powerful integrated development environment that empowers developers and data scientists like us to write clean and efficient code with ease.

16:28 Whether you're analyzing complex data sets or building dynamic web applications, PyCharm has got you covered.

16:34 With its intuitive interface and robust features, you can boost your productivity and bring your ideas to life faster than ever before.

16:42 For data scientists, PyCharm offers seamless integration with popular libraries like NumPy, Pandas, and Matplotlib.

16:48 You can explore, visualize, and manipulate data effortlessly, unlocking valuable insights with just a few lines of code.

16:55 And for us web developers, PyCharm provides a rich set of tools to streamline your workflow.

17:00 From intelligent code completion to advanced debugging capabilities, PyCharm helps you write clean, scalable code that powers stunning web applications.

17:09 Plus, PyCharm's support for popular frameworks like Django, FastAPI, and React make it a breeze to build and deploy your web projects.

17:17 It's time to say goodbye to tedious configuration and hello to rapid development.

17:22 But wait, there's more.

17:23 With PyCharm, you get even more advanced features like remote development, database integration, and version control, ensuring your projects stay organized and secure.

17:32 So whether you're diving into data science or shaping the future of the web, PyCharm is your go-to tool.

17:37 Join me and try PyCharm today.

17:39 Just visit talkpython.fm/done-with-pycharm, links in your show notes, and experience the power of PyCharm firsthand for three months free.

17:51 PyCharm, it's how I get work done.

17:54 - One thing you mentioned earlier that there's no like explicit reactivity that you as a programmer have to write.

17:59 That's definitely true in the GR .interface case.

18:01 GR.interface, like abstracts all that away from you, but Gradio also offers.

18:05 So this is your simple case.

18:07 Like I just want.

18:08 Just run this function with these inputs and outputs, and I want a real basic variant.

18:12 Okay.

18:13 Gradio also offers like a lower level API, where you can explicitly control the layout.

18:18 Right.

18:18 So right now everything is like side by side.

18:19 You can put them horizontally across columns, rows.

18:23 You can add components, right.

18:24 And then you can also be more explicit and saying, okay, when this input changes, run this function and then that will populate this and stuff like that.

18:32 And you can change these things together.

18:34 So.

18:34 That makes sense.

18:35 It may be, it's expensive to, it's expensive to generate some portion.

18:39 Do you want to cache it as much as possible?

18:41 Yeah.

18:41 You just want to be, have like more control over like exactly what happens when things change, right?

18:47 You could, Gradio gives you that, that control, but for a lot of use cases, you can get, you can get really far with GR.interface.

18:54 And then the other companion piece, which isn't on the landing page.

18:56 Cause we just released it maybe two weeks ago, it's Gr.chat interface, right?

19:01 So you could build like .

19:02 Oh, interesting to an LLM or something.

19:04 Yeah.

19:04 You can build a chat UI for yeah.

19:06 Like an LLM just in one line of code.

19:09 And I think I can try to maybe find an example of that real quick.

19:13 Yeah.

19:13 While you're looking, do you offer any guidance or any opinionated stuff on which LLM to choose?

19:19 Or do you just say it's just a chat interface and you write the code to make it happen?

19:23 You just write the code to, yeah, just given the message, what's what should the response be?

19:29 And then that's the interface.

19:30 Yeah.

19:31 And then there's some interesting options people might want to pick.

19:34 Obviously you could pick open AI as and use their API, but there's things like private GPT, which allows you to ask questions about your documents, but a hundred percent private, right?

19:44 You could just give it hundreds of docs and say, learn these and we're going to talk to you about it or something along those lines.

19:51 There's the LangChain.

19:52 Right.

19:53 Yeah.

19:53 Which is a pretty interesting option for building these things.

19:56 Llama.

19:57 Like the new Llama 2.

19:59 Yeah.

20:00 So I think in the chat, I just posted like a Gradio Llama 2 UI that we can show.

20:04 It's on Hugging Face.

20:05 So I think we can talk about the hosting on Hugging Face as well.

20:08 Okay.

20:09 Yeah.

20:09 So this is the chat UI.

20:10 So it's if you were to scroll down a little bit.

20:13 The UI says chat bot.

20:14 You can type a message.

20:15 Yeah.

20:15 I'll ask it what the podcast says.

20:17 Hey, I'm here to help you.

20:18 Talk Python is a podcast and community dedicated to helping developers improve their skills, interviews and experts in the field.

20:24 That's you, Freddie.

20:25 Resources.

20:26 Yeah.

20:26 What do you want to know?

20:27 Under the hood, this is using Llama 2, the 70 billion parameter LLM.

20:31 Nice.

20:31 I can ask you what the latest episode is.

20:33 So it gives me a sense how far back it goes.

20:35 That's about two years old.

20:36 So, okay.

20:36 Interesting.

20:37 There you go.

20:37 Yeah.

20:37 Makes sense.

20:38 Oh wait, no, this is not so sure about that.

20:41 It says, yeah, I think there's a little bit of a mismatch, but this is really cool.

20:44 And so you basically plug in whatever LLM you want into this, and here they've put in Llama 2.

20:52 If you scroll up a little bit in the website, if you, when you see those like three bars, yeah, the hamburger deal.

20:58 No, sorry.

20:59 That's not it.

21:00 Other hamburger.

21:01 Where is it?

21:01 There are three dots.

21:02 Maybe.

21:02 I guess maybe because you don't have an account.

21:04 You can't see the file.

21:05 Oh yeah.

21:05 If you go to files there, if you go to files, I'm going to app.py.

21:08 This is the source code of the, oh, interesting.

21:11 Okay.

21:11 If you scroll down past the helper tags, this is the actual prediction function. It's about 20 lines of code. But if you scroll down, you see the chat interface code: with gr.Blocks as demo, create a tab, a Batch tab.

21:22 Okay, the important thing is that gr.ChatInterface is just a one-line way to create a chatbot. It works similarly to the Interface case, right? There's just a function, in this case it's an LLM, that handles responding to each user message. And then you just call that, and then you can call launch, and then you get a UI. Nice. You get like a ChatGPT-style UI. And in this case, it's a hundred lines of code, but yeah, pretty simple.
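
Stripped of the model details, the one-line chat UI being described looks roughly like this; the echo reply is a stand-in for the Llama 2 call in the actual Space.

```python
import gradio as gr

def respond(message, history):
    # The real demo calls an LLM here with the message and chat history;
    # an echo reply keeps the sketch self-contained.
    return f"You said: {message}"

# One line to get a full chat UI wrapped around the function.
gr.ChatInterface(respond).launch()
```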

21:48 That is simple.

21:48 And one of the things that's pretty cool here is chat section.

21:51 You hook it to the type you set up is a streaming type versus the place I type as a batch.

21:58 And then the function you give it is a generator with yield keywords.

22:01 So it just, as you go through it, it makes choices and sends them back.

22:05 Pretty advanced interaction for the UI to be, to run in like that.

22:09 That's cool.

22:09 In order to get streaming, there's no special syntax.

22:12 You can just use the normal Python yield and then Gradio knows how to feed that iteratively, feed that to the front end.

22:18 And then you get like this responsive streaming UI.

22:20 That's a really good call out.

22:21 It's just like the, Gradio tries to use like the core Python syntax and the core Python data types as much as possible.

22:27 Just so to make it easy for people to get up and running.
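
For the streaming case, a hedged sketch of the generator pattern being discussed; the word-by-word loop and sleep are stand-ins for real token-by-token model output.

```python
import time
import gradio as gr

def stream_reply(message, history):
    reply = f"Echoing back: {message}"
    partial = ""
    # Yielding progressively longer strings is all it takes;
    # Gradio streams each yield to the front end as it arrives.
    for token in reply.split():
        partial += token + " "
        time.sleep(0.1)  # stand-in for per-token model latency
        yield partial

gr.ChatInterface(stream_reply).launch()
```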

22:30 I just blew through this really quickly here, but basically from what I can tell, the amount of code here to actually implement this, that is not just the details of "given this text, make the LLM do the thing," is

22:42 Five lines of code.

22:43 Yeah.

22:43 Yeah.

22:44 Definitely true.

22:44 Yeah.

22:44 That should make people pretty excited about, Hey, I can write five lines of code, especially with an example to work from exactly.

22:50 Yeah.

22:50 So yeah, definitely.

22:51 We need to get chat interface up on the landing page, but yeah, I think it's super easy to get complex demos running.

22:57 I think it's just a handful of lines of code.

22:59 We mentioned shiny a little bit.

23:01 Umar asks, how does it compare to shiny?

23:03 How familiar are you with Shiny?

23:05 Yeah, not super familiar.

23:06 I'm not going to say not familiar.

23:08 I'm familiar with shiny, but not well enough to compare it directly to Gradio either.

23:11 I mean, they live in the same general world of trying to create a UI.

23:15 That's you don't have to write web apps for, but I don't think they're, I don't think they're totally the same, but they're similar.

23:20 Okay.

23:21 So we talked about setting up pip install.

23:24 That's easy.

23:24 You say you can choose from a variety of interface types.

23:27 These are the widgets that you're talking about, right?

23:29 You could like, in terms of the inputs and the outputs, we call them components.

23:33 But yeah, there's, if you go to the docs pages about, we have about 30 something components.

23:37 Wow.

23:38 Okay.

23:38 Yeah.

23:38 So code, buttons, data frames, plots, files, pretty much like you name it.

23:44 And we're adding components all the time.

23:47 And we're also, one of the things that we're going to work on is letting the community create their own components.

23:51 So if you have your own particular demo, your own particular web app, and you want this new component that we don't support, we're working to make it easy for you to do that without having to merge something into Gradio upstream, right?

24:03 And then other people can play with it.

24:04 So we're working on that as well.

24:05 But for the time being, it's yeah, about these like 30 something components.

24:09 And then, yeah, you can mix and match them however you want.

24:12 You've got quite a bit of them.

24:13 Many of my people expect people would imagine.

24:16 So you've got button.

24:17 Yeah.

24:17 Let's see, you've got data frame, which is pretty cool.

24:20 And then the gallery image, the plots, like the line plots, scatter plot.

24:24 Those are all pretty cool, but you've also got things like audio.

24:27 What's the story with audio?

24:28 Yeah, you can upload an MP3 or a WAV file directly, maybe for transcription or sentiment analysis or something.

24:37 Exactly.

24:38 Like whisper essentially audio to transcription or also just a synthetic audio.

24:43 Right.

24:44 So there's Bark, and there are all these machine learning demos that go text to speech, basically.

24:50 And they're really advanced.

24:51 So if you want it to display that, right.

24:52 Like you ingest text, come out with audio.

24:55 Like you can use like an audio output component and then you get, yeah, you get like a, you can play the audio directly in the browser.

25:01 It's just like an audio tag in HTML.
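
A sketch of the audio-in, text-out case being described, in the spirit of a Whisper-style transcription demo; the transcribe function is a placeholder rather than a real speech-to-text model, and type="filepath" is one way to receive the upload.

```python
import gradio as gr

def transcribe(audio_path):
    # A real demo would load the file and run a speech-to-text model here;
    # we just report what was uploaded to keep the sketch self-contained.
    if audio_path is None:
        return "No audio provided."
    return f"Pretend transcription of: {audio_path}"

# The Audio component handles upload and playback in the browser;
# type="filepath" hands the function a path to the uploaded file.
demo = gr.Interface(
    fn=transcribe,
    inputs=gr.Audio(type="filepath"),
    outputs="text",
)
demo.launch()
```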

25:03 Obviously it does more in the UI, I'm sure. I guess you would just use File if you really wanted to drop an MP3, but if you wanted to generate audio and let people hear the results, then this Audio component would be the way to go.

25:16 Yeah.

25:16 And then you could also, the audio could also be like the input, right?

25:19 You could just drag a drop, but you know, if you click on that box, it'll let you upload an audio if you have it and then you can play it.

25:25 I have some audio.

25:26 Yeah.

25:27 I know that I have some, but let's see if I can find something to upload here.

25:31 Here I'll upload a sponsor.

25:32 Yeah, this is super short, so I can upload.

25:35 Yeah.

25:35 Look at that.

25:36 And it just becomes a player.

25:37 Excellent.

25:37 You can play it and then you can also edit it as well.

25:40 So you see that little pencil and you could like trim it.

25:43 I like trim, trim it to make it shorter.

25:45 Okay.

25:45 Yeah.

25:46 So yeah, lots of cool components like that, that we have.

25:50 Yeah.

25:50 All the standard form stuff like sliders and dropdowns and yeah.

25:53 Highlighted text.

25:54 And yeah, so the standard like form stuff, but then there's also complex, not complex, but more maybe domain specific ML stuff.

26:01 So highlighted text, for example, is really big in part-of-speech tagging, in NLP, right.

26:06 So you can get a highlighted.

26:08 it's like, depending on the tag that you apply to each word in the text, you'd get like different coloring and stuff like that.
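
A toy sketch of how that output works; the tagger here is a made-up rule, not an NLP model, and the NAME label is arbitrary. HighlightedText takes (span, label) pairs and colors each span by its label.

```python
import gradio as gr

def tag_words(text):
    # A real demo would run a POS or NER model; this toy rule just
    # labels capitalized words so the coloring has something to show.
    return [
        (word + " ", "NAME" if word[:1].isupper() else None)
        for word in text.split()
    ]

demo = gr.Interface(
    fn=tag_words,
    inputs="text",
    outputs=gr.HighlightedText(label="Tagged"),
)
demo.launch()
```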

26:14 Yeah.

26:14 So it's for NLP. Then, like, Model3D.

26:16 So there's a lot of ML demos that come out that you can generate model 3d assets directly.

26:21 So this lets you display them as well.

26:24 So yeah, we have everything from the most basic, general web app stuff to domain-specific machine learning components as well.

26:33 Yeah.

26:33 Let's see what else jumps out here.

26:35 We have video as well, which is pretty cool.

26:38 JSON.

26:39 Yeah.

26:39 There's a lot of what you can type in JSON.

26:41 I guess it probably validates it and auto formats it something rather than just plain text.

26:46 Yeah.

26:46 When you return the JSON, it highlights it for you.

26:48 You can copy it directly as well.

26:50 So, okay.

26:50 So when we were talking about the Gradio.interface, it had, well, here's an input and here's an output.

26:57 plural.

26:58 So could you have, I say there's a sketch pad and a text box as the input and then the outputs are, I don't know, three other things.

27:06 And yeah, that's a really good observation.

27:08 Yeah.

27:08 So you can have more than one input, more than one output for sure.

27:11 Right.

27:12 So if you go to the, I think the time series forecasting demo, I think that one has two, that's also on the homepage.

27:17 Yeah.

27:17 Also on the homepage.

27:19 Right.

27:19 I see it's like a toy example, forecasting pip installs, but this one has two inputs, right?

27:26 Like the time horizon and the library itself, both of them are, are dropdowns.

27:30 Right.

27:31 And then when either updates the plot updates, right.

27:34 So this also shows how you can do plotting in Gradio.

27:36 And then also this is interesting.

27:38 This is, this demo is built with the lower level API.

27:41 You could build it with interface if you wanted to, just as an example, it's built with the lower level.

27:46 API.

27:46 Yeah.

27:47 So I guess there's probably a library and time span, two arguments to the function that you write.

27:52 And then just as you interact with these widgets, it just recalls it with whatever the values are.

27:56 Exactly.

27:57 And then the function itself returns a plot.

27:59 So in this case, it's a plot, we plot, right?

28:01 So by default, or, you know, we ship with support for a matplotlib, plotly, Bokeh, and Altair.

28:07 So if I would create like a matplotlib object, do all the stuff to it that I would do in a notebook, instead of calling show, I just return it from my function and then it becomes part of the UI.
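
A sketch in the spirit of that time-series demo: two dropdowns in, a matplotlib figure out. The "forecast" is made-up data, not Prophet, and the choices are illustrative.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; Gradio only needs the Figure object
import matplotlib.pyplot as plt
import gradio as gr

def forecast(library, months):
    # Stand-in data; a real demo would query download counts and fit a model.
    months = int(months)
    values = [(i + 1) ** 1.5 for i in range(months)]
    fig, ax = plt.subplots()
    ax.plot(range(months), values)
    ax.set_title(f"Fake forecast for {library}")
    ax.set_xlabel("Month")
    ax.set_ylabel("Downloads")
    # Return the figure instead of calling plt.show(); Gradio renders it.
    return fig

demo = gr.Interface(
    fn=forecast,
    inputs=[
        gr.Dropdown(["gradio", "pandas", "numpy"], label="Library"),
        gr.Dropdown(["6", "12", "24"], label="Months ahead"),
    ],
    outputs=gr.Plot(),
)
demo.launch()
```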

28:17 Okay.

28:17 Yeah.

28:18 That seems pretty straightforward.

28:18 Yeah.

28:19 This one happens to be done with Prophet.

28:21 A time series library.

28:23 Yeah.

28:23 You've got integration with a bunch of cool machine learning libraries here as well.

28:26 Yeah.

28:27 So the cool thing is pretty much if you can write a Python function for it, like it'll work with Gradio.

28:31 We as a development team don't really need to build that many integrations to get anything that you're working with to work with Gradio, pretty much.

28:40 If you can call a Python function to do it, and we have supported output types and stuff like that, you can display it with Gradio.

28:47 Yeah.

28:47 Nice.

28:47 Yeah.

28:47 So yeah, there's a couple of demos, for example, like connecting to like databases and stuff.

28:52 You can connect to S3 if you wanted to, right?

28:54 Like you're not, I don't know exactly where they are now, but yeah.

28:57 It's gotta be one.

28:58 It just looks like it might be one potentially.

29:00 I'll click, find some S3 stuff.

29:03 Here's just gotta be something here somewhere.

29:04 Yeah.

29:05 Very cool.

29:05 So one of the things people may be wondering, and the fact that I don't see a pricing up at the top, it might be a big hint here.

29:12 What's the business model?

29:14 What's the story with this?

29:15 Is this just straight open source?

29:17 Is it a open core?

29:18 What's the story around your project here?

29:20 Gradio is completely open source and you can host it anywhere.

29:24 So you're not tied into any platform.

29:26 Gradio did get acquired by Hugging Face maybe like almost two years ago.

29:30 Right.

29:30 So Gradio integrates really tightly with the Hugging Face ecosystem, but okay.

29:36 I see.

29:36 Those integrations are normally free.

29:38 Right.

29:39 So for example, you could host radio demos on Hugging Face spaces or something.

29:44 Yeah.

29:44 On Hugging Face spaces.

29:46 Right.

29:46 And then if you, if your demo needs special like hardware or stuff like that, Like you, you could pay Hugging Face to provision that for you, but you're not paying for the Gradio, you could use whatever you want on Hugging Face spaces.

29:56 Now, right.

29:57 So it doesn't have a gradio.

29:58 Yeah.

29:58 So it's freely available.

29:59 So it's completely open source with a kind of a Gradio as a service via Hugging Space.

30:05 Hugging Face.

30:06 Hugging Face.

30:06 Yeah.

30:07 Yeah.

30:07 On their spaces.

30:08 Say that fast bunch of times.

30:10 So yeah, really cool.

30:11 One of the things people might not know if they haven't heard of Gradio before is you go to your GitHub repo for it.

30:17 20, almost 21,000 GitHub stars, a serious bit of attention that it's gotten.

30:22 We've seen a lot of growth in the last, yeah, about a year and a half.

30:25 Like ever since the hugging face acquisition, that's really helped us put the library in front of a new audience.

30:31 Yeah.

30:31 The recent advances in ML, like a lot of people want to build demos for ML models now, right?

30:36 So I think that's definitely helping Gradio as well.

30:38 Yeah.

30:38 Trying to give people a sense of scale, right?

30:41 This is like a third of FastAPI, a third of Flask, like that's a lot of people using this.

30:45 So the reason I'm bringing that up is it's not some brand new thing that you came up with that maybe people could try, but it's got a lot of users, right?

30:52 Month to month.

30:53 We're seeing like hundreds of thousands of people building these Gradio demos repeatedly.

30:58 So yeah, definitely a lot of growth and yeah, Gradio is about five years old now.

31:01 So it's not brand new. Awesome.

31:02 Congrats.

31:02 That's, that's really cool.

31:03 Yeah.

31:04 3.9, almost 4 million monthly downloads.

31:07 That's a decent chunk.

31:08 This portion of talk Python to me is brought to you by Sentry.

31:13 You know that Sentry captures the errors that would otherwise go unnoticed.

31:17 Of course, they have incredible support for basically any Python framework.

31:21 They have direct integrations with Flask, Django, FastAPI, and even things like AWS Lambda and Celery.

31:29 But did you know they also have native integrations with mobile app frameworks?

31:33 Whether you're building an Android or iOS app or both, you can gain complete visibility into your application's correctness, both on the mobile side and server side. We just completely rewrote Talk Python's mobile apps for taking our courses. And we massively benefited from having Sentry integration right from the start. We use Flutter for our native mobile framework. And with Sentry, it was literally just two lines of code to start capturing errors as soon as they happen.

32:01 Of course, we don't love errors, but we do love making our users happy.

32:05 Solving problems as soon as possible with Sentry on the mobile Flutter code and the Python server-side code together made understanding error reports a breeze.

32:15 So whether you're building Python server-side apps or mobile apps or both, give Sentry a try to get a complete view of your app's correctness.

32:25 Thank you to Sentry for sponsoring the show and helping us ship more reliable mobile apps to all of you.

32:30 What do we think about, I don't want to do an image in one.

32:35 Other demos you've got here: you've got time series forecasting.

32:38 We talked about that as the multiple inputs one. XGBoost with explainability.

32:43 Want to tell us about this a little bit?

32:44 Yeah.

32:45 This one also, I think it has like, this one has 12 inputs, right?

32:48 And the idea is it's one of these like kind of Kaggle-esque things where you like predict income based on a slew of predictors, right?

32:55 And then the cool thing is that this isn't explicitly built into Gradio, but you can hook into SHAP really easily.

33:03 Right.

33:03 So if you hit explain, it'll try to explain the prediction of the model and display it in a plot for you.

33:09 Wow.

33:09 Okay.

33:10 Right.

33:10 So for those of you don't know, Shap is like this algorithm for explaining the predictions of any machine learning model.

33:15 I see.

33:16 It's hooking into XGBoost.

33:17 Right.

33:18 But in this demo, there isn't an explicit Gradio feature that's being used.

33:22 It's just calling SHAP directly from this Python function and then displaying the results as a plot.

33:27 The thing does is it's got a bunch of different sliders and dropdowns.

33:30 It says given an age, your education level, years of school, whether you're married or not, all those male, female, how many hours a week you work.

33:40 And then it predicts what is this?

33:42 Yeah.

33:42 Predicts your, your yearly income.

33:44 And then the thing you're talking about is, with that model, you can ask it, okay, of all these different things we put into it, what features, what aspects are more important and what are less important, right?

33:54 Right.

33:55 Okay.

33:55 The use cases, let's say like you are a data scientist that is charged with building this kind of model.

34:06 The first question someone might have after seeing the prediction is, why is it predicting this?

34:06 Right.

34:07 And then you ideally want to be able to explain exactly what element of the predictors contributed to the prediction the most.

34:13 And there's a lot of tools that you can use for that, right?

34:16 Shab is I think the most well known to my understanding.

34:19 And then, yeah.

34:20 And then you can just with Gradio really easily just call that algorithm and then just display it in a plot.

34:25 Right.

34:25 And then in this example, like one of the inputs is like the capital gain.

34:29 So like how much you make on your investments, right.

34:31 So, and I think in this particular case, like the capital gain is like really big, right.

34:34 So obviously because the capital gain is so big in this particular case, we predict that the income will be, will be really big, right.

34:40 Cause capital gain is pretty much synonymous with income really.

34:43 So yeah.

34:44 Yeah.

34:44 So that's what this is showing.
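
A hedged sketch of what "call SHAP from the Gradio function and show it as a plot" can look like. The model, features, and toy training data here are all stand-ins for the income demo, not the actual Space code, and assume the shap and xgboost packages are installed.

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np
import gradio as gr
import shap
import xgboost

# Toy model standing in for the real income classifier in the demo.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] * 2 + X[:, 1] > 0).astype(int)
model = xgboost.XGBClassifier(n_estimators=20).fit(X, y)
explainer = shap.TreeExplainer(model)
FEATURES = ["age", "hours_per_week", "capital_gain"]

def predict_and_explain(age, hours, gain):
    row = np.array([[age, hours, gain]])
    pred = float(model.predict_proba(row)[0, 1])
    # SHAP values say how much each input pushed this prediction up or down.
    contributions = explainer.shap_values(row)[0]
    fig, ax = plt.subplots()
    ax.barh(FEATURES, contributions)
    ax.set_xlabel("Contribution to prediction")
    return pred, fig

demo = gr.Interface(
    fn=predict_and_explain,
    inputs=[
        gr.Slider(18, 90, label="age"),
        gr.Slider(1, 80, label="hours_per_week"),
        gr.Number(label="capital_gain"),
    ],
    outputs=[gr.Number(label="P(high income)"), gr.Plot()],
)
demo.launch()
```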

34:45 Yeah.

34:46 And I suspect this is important for a lot of reasons.

34:48 If you were, you're building this for your company or for some kind of project, people want to know, well, we have all these different inputs.

34:55 What ones actually matter in making a prediction?

34:57 Maybe only the top three are the ones that really matter.

35:00 And you can throw out things like marital status.

35:02 Like it actually doesn't make much of a difference.

35:04 Right.

35:04 Or if you're a policy person and you're this model actually matches real data.

35:09 You could say, well, we're trying to improve the policy for a certain group of people.

35:13 We could focus on any of these aspects, which one or two would make the biggest return for our effort to make a change.

35:19 Right.

35:20 A lot.

35:20 There's a lot of cool stuff that comes out of this, I think.

35:22 Absolutely.

35:23 And then you as a developer, I think it's, as like the data scientist, right.

35:27 It's really easy to make this kind of thing, right.

35:29 This is like a gr.Interface, I believe.

35:31 Right.

35:31 So this is just one line of code to build this.

35:34 So yeah, that's okay.

35:36 Not gr.Interface, actually.

35:37 The other API that we can talk about now is called blocks.

35:40 Yeah.

35:40 Tell us about that.

35:41 It's, it's cool.

35:42 Yeah.

35:42 The, the way that it works is that you declaratively define your UI, right?

35:46 So it's like this input is going to go in this column and say, well, this input is like a dropdown for example.

35:51 Right.

35:52 So in this example, there's lots of dropdown components, lots of sliders for the age and stuff like that.

35:57 And then you define all these components and then you can define the reactivity separately.

36:02 So if you were to scroll down, there should be like a button dot click, right?

36:05 So whenever the predict button gets clicked, yeah.

36:08 So you're called this function with these inputs and then return this one thing.

36:12 Yeah.

36:12 So that that's like the model, right?

36:14 Like right now it looks like a lot of code just because there's a lot of like inputs and stuff, but at the end of the day, it's like pretty simple.

36:19 you're just defining a UI and then you define like what happens when, and then Gradio handles the rest.

36:24 Yeah, it's pretty straightforward.

36:25 So for people listening, basically the UI for the more advanced version is you use context managers, with gr.Blocks.

36:32 So then you'd say, here's something that goes first, and with another row, put some columns in there, with another row, and then that's how you build it up.

36:40 So it's pretty straightforward.
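
A hedged sketch of that layout style, with made-up components standing in for the income demo's inputs: the indentation of the context managers mirrors the layout of the page, and the reactivity is declared separately.

```python
import gradio as gr

def predict(age, education):
    # Placeholder for the real model call.
    return f"Predicted income bracket for age {age}, education {education}"

with gr.Blocks() as demo:
    with gr.Row():
        with gr.Column():
            age = gr.Slider(18, 90, label="Age")
            education = gr.Dropdown(["HS", "Bachelors", "Masters"], label="Education")
            btn = gr.Button("Predict")
        with gr.Column():
            result = gr.Textbox(label="Prediction")
    # Event wiring lives outside the layout blocks.
    btn.click(fn=predict, inputs=[age, education], outputs=result)

demo.launch()
```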

36:41 What it reminds me of a little bit is it reminds me a little of Flutter.

36:44 Are you familiar with how Flutter looks?

36:47 No.

36:47 And the code.

36:48 It's, I don't know if I can find a quick example about an example Flutter.

36:52 Come on.

36:52 It's really sort of hierarchical.

36:54 So that the thing that I think is interesting is the, the code hierarchy matches the sort of UI hierarchy, right?

37:02 So it's a code-driven UI where, as it gets more indented, that says, okay, well that's a row, and then you pop off, and so on. It's real similar in that sense: it's all right there in the code, there's not a designer or a markup language or something like that.

37:16 But yeah, pretty cool.

37:17 Yeah, exactly.

37:18 Yeah.

37:18 So like the UI, it's all declarative, right?

37:19 So you, yeah, like you said, you just say this is this row and then yeah, there's ways to control like the relative width of each of these columns, for example.

37:27 So if you wanted that, you could, and then.

37:29 Another thing I saw, I can't remember what you demoed it.

37:32 So I'm not going to pull it up, but I saw that there's, there's a way to pass like CSS and styling over as well.

37:37 Is that right?

37:38 That was maybe the very first thing.

37:39 There's a Python API for like defining the theme, right?

37:43 So like every UI element has certain CSS variables and you can control their value via like the values of this Python class that you then pass to your Gradio instance, but at the same time, there's like a top level CSS parameter that you could do whatever you want in that case, right?

37:56 You don't have to use like the Python API.

37:58 If you don't want to, if there's something different that you want to change, you can change the CSS variables.

38:04 You're saying I could do something in Python.

38:06 I could say, well, the style is button.

38:08 Border width is three and the color of the borders is blue.

38:12 But if I just want to have arbitrary CSS, I can just go, here's your arbitrary CSS string, go with that.

38:17 You could pass it a file and then it'll, we'll read that file and then use that CSS in the demo.

38:22 Yeah.

38:22 And then with that, you can also add IDs to each of the UI elements, and then you could write your CSS to target the IDs that you add, right?

38:30 So let's say you only wanted to modify one button, you could do it that way.
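
A small sketch of that, assuming the css parameter on Blocks and elem_id on components; the id name and styling here are made up for illustration.

```python
import gradio as gr

# Arbitrary CSS, targeting one element by the id assigned below.
CSS = """
#predict-btn { border-width: 3px; border-color: blue; }
"""

with gr.Blocks(css=CSS) as demo:
    name = gr.Textbox(label="Name")
    btn = gr.Button("Greet", elem_id="predict-btn")
    out = gr.Textbox(label="Greeting")
    btn.click(lambda n: f"Hello, {n}!", inputs=name, outputs=out)

demo.launch()
```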

38:33 Right.

38:33 You just want to control one of the plots or something.

38:35 I guess if you're writing arbitrary code to return things like matplotlib plots, you could do things like the XKCD matplotlib style.

38:43 Oh yeah, for sure.

38:44 Right.

38:44 Like you could control joking, but it's also awesome.

38:47 There's an XKCD Gradio theme, right?

38:50 So let me show you this.

38:51 There is.

38:52 Yes.

38:53 Okay.

38:53 Well that takes it to another level.

38:55 That's pretty excellent.

38:56 That's the cool thing about the theming is that it's shareable, right?

38:59 So someone built this XKCD theme.

39:02 Wow.

39:02 It's amazing.

39:03 Anyone can use this in their Gradio demo, right?

39:05 All you have to do is pass theme equals Gstaff/XKCD and then your demo will look like the XKCD theme.
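
Roughly, assuming the theme parameter and that the shared theme is named gstaff/xkcd on the Hub, that looks like this; the function itself is just a placeholder.

```python
import gradio as gr

# Passing a "username/theme-name" string pulls a shared theme from the Hub.
demo = gr.Interface(
    fn=lambda name: f"Hello, {name}!",
    inputs="text",
    outputs="text",
    theme="gstaff/xkcd",
)
demo.launch()
```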

39:12 It's so good.

39:13 I love it.

39:13 Oh my gosh, this is really.

39:16 - Yeah, completely community driven.

39:18 - Yeah, well done to whoever did this one.

39:19 That's really cool.

39:20 - It goes beyond the plot, right?

39:22 You can for sure return a plot in the XKCD theme, but you could also have the whole demo in the XKCD theme.

39:27 - I often pull this example up, this theme, this XKCD thing for Matplotlib, 'cause it's fun, but also I think there's genuine value in putting together something that looks like this.

39:37 Because if you show this to decision makers, bosses, manager types, and they look at something that looks like it's working, they're like, oh, well, we're done then.

39:47 No, we have two months more work.

39:49 We're not done.

39:50 But I click the button and it's giving me answers.

39:52 We're really not done.

39:53 It's not scalable.

39:55 It's not this, it's not that right.

39:56 It's only an estimate, just an XKCD front end on it.

39:59 You're like, look, you see, it's not done.

40:00 It's just, it's got squiggly lines.

40:02 It's hand drawn.

40:03 It's clearly a prototype.

40:04 You're like, Oh yeah.

40:05 Okay.

40:05 But I can see where this is going.

40:06 I think actually psychologically it may have a big impact, even though it's silly.

40:11 Yeah.

40:11 That's a super interesting point.

40:12 I never thought about it that way, but yeah, I mean, I think it definitely gives it a little bit more sketch vibe.

40:16 Like this is like in the.

40:18 Yeah.

40:18 Like a wireframe vibe.

40:19 Yeah.

40:20 Yeah.

40:20 Wireframe like straight from the workshop.

40:22 Exactly.

40:23 Yeah.

40:23 That's what I was thinking.

40:24 Cause I presented projects to various stakeholders when I used to do that kind of stuff and they'd be like, Oh, well that looks like it's done.

40:30 No, we're going to need some time.

40:31 Cause it's really not done.

40:32 I know it looks good, but it's not.

40:34 Yeah.

40:35 Yeah.

40:35 You made it look too good.

40:36 Basically.

40:37 Yeah, exactly.

40:37 That was a serious mistake.

40:39 Yeah.

40:39 Okay.

40:40 So we've got a little bit more time to talk about a couple of things.

40:43 I want to talk about how people actually share this.

40:45 Like we're still talking about a thing.

40:47 I pip install locally and it has a UI, but what do I do?

40:50 I still don't want to set up a Linux machine and Nginx and domains and all that.

40:54 So what are the options?

40:55 But before we get to that, tell us a bit about the internals.

40:58 Like when you guys work on Gradio and I pip install it, like what's running.

41:02 What is this project?

41:03 The backend is a FastAPI server.

41:06 So what Gradio will do, it'll spin up a server for you.

41:09 And then that server will serve like the front end assets.

41:12 The front end is built in Svelte.

41:14 Basically whenever you, whenever these reactivity events happen, what that'll happen is that, or what will happen is that the front end will just call the backend API and then run your function.

41:24 And then make sure that all the necessary processing that needs to happen to get your data ready happens.

41:29 But at the end of the day, it's a simple model in that sense, right?

41:34 Obviously there's some more complications with like the streaming, for example.

41:37 So that's like a whole kind of different code path almost.

41:40 But at the end of the day, it's like a REST server that's talking with a JavaScript client.

41:46 So it's like the standard developer tools story for Python people: some of it is Python, but you probably end up writing a lot of JavaScript or TypeScript to build this tool for other people.

41:58 Right.

41:58 So they don't have to, I'm not a huge Svelte expert.

42:01 Thankfully, some of the people I work with are really good at that, really knowledgeable about that stuff.

42:06 And yeah, like the front end code, I think it's, I think there's more Svelte code than Python code.

42:11 Actually, I'm curious.

42:11 I put the.

42:12 What's the code breakdown.

42:13 Let's break it down.

42:14 Yeah.

42:15 65% Python, 16% Svelte, 13% TypeScript.

42:19 Well, so I think the reason might be that we have a lot of like demos and stuff.

42:23 Yeah.

42:23 I think there might be some stuff like that.

42:24 Yeah.

42:25 The demos are in there.

42:26 Yeah.

42:26 There's a lot of demos.

42:27 You know what feature GitHub needs as you navigate the source tree, right?

42:31 When I click on like client or demo or Gradio, it would be awesome if those stats would also be repeated.

42:36 But just for that section of code, wouldn't that be great?

42:39 Like how much of the demos are Python?

42:40 I don't know.

42:41 Maybe I just want to know that, but that'd be cool.

42:42 Anyway.

42:43 Yeah.

42:43 So I suspect that is.

42:44 There's probably a lot of code, and you've got a lot of notebooks and stuff in there too, and that probably makes a big difference there.

42:50 A lot of the code is actually, yeah.

42:52 Yeah.

42:52 JavaScript and Svelte, right.

42:54 You take one for the team.

42:55 So the rest of us don't have to write JavaScript.

42:57 Exactly.

42:58 Yeah.

42:59 Cool.

42:59 Interesting.

43:00 Very nice.

43:00 And it says it can be embedded in a notebook, which is interesting, or it can be presented as a webpage.

43:07 Tell us about this part.

43:08 If you were to run this on any notebook, like Google Colab, for example, I think this might be an example, right?

43:13 So if you call like the way that Gradio works, right.

43:16 Once you create your Gradio Interface or Blocks and you call launch, that's how you start up the server.

43:22 That's like, you've kicked off the whole process of serving this.

43:24 That'll create the server locally, right?

43:26 So no data is like leaving your machine.

43:27 Right.

43:28 And then if you call launch in like a Jupyter notebook, Colab, SageMaker, the UI will display right in the notebook.

43:34 Right.

43:35 And then if you're running it locally as well, you can go to the localhost URL and reach the server that way.

43:42 And then the really cool thing is that yeah, there you go.

43:44 There's a UI.

43:45 Okay.

43:45 That's what we mean that it's embedded locally.

43:47 It's a little, it feels good.

43:48 Like a little bit like the ipywidgets sort of thing.

43:51 It's similar to that, right?

43:53 Like it'll display right underneath the cell.

43:55 Right.

43:55 And then if you run the cell again, you'll get a new server basically, right? So you can iteratively build these things.
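
A rough sketch of that notebook workflow, with a placeholder function just for illustration: launch() renders the UI inline under the cell, and close() is a handy way to shut down the previous server before you re-run the cell.

    import gradio as gr

    def shout(text: str) -> str:
        return text.upper()

    demo = gr.Interface(fn=shout, inputs="text", outputs="text")

    # In Jupyter, Colab, or SageMaker this renders the UI right under the cell;
    # locally you also get a http://127.0.0.1:7860-style URL you can open.
    demo.launch()

    # Re-running the cell starts a fresh server; calling close() first
    # shuts the old one down and frees the port.
    # demo.close()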

44:01 Right.

44:01 Is it running FastAPI somewhere in the background when you do that?

44:03 Yeah. Crazy. Yeah. That's pretty, pretty nuts. Turtles all the way down.

44:07 And then that's what we mean by it being embedded in a notebook. And then you could also host it anywhere, right? So if your machine is exposed to the internet, right, you have like a fixed IP address, you could just give people that URL. You could also share it another way, right? So every Gradio interface has a launch method. That's what kicks off the server. And that takes a parameter called share, right?

44:27 So if you set share equals true, that'll create like a temporary link for 72 hours that you could share with someone, right?

44:33 So you don't have to, you can host it right on your laptop if you want it to.
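
The share flag is literally one argument on launch(); a minimal sketch, with a throwaway function standing in for a real model:

    import gradio as gr

    demo = gr.Interface(fn=lambda name: f"Hi, {name}!", inputs="text", outputs="text")

    # share=True tunnels the local server and prints a temporary *.gradio.live URL;
    # the app itself keeps running on your own machine.
    demo.launch(share=True)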

44:36 As long as your laptop, as long as you leave it on.

44:38 Yeah.

44:38 As long as you leave it on, it's like not sleeping and stuff.

44:41 Like people can access here.

44:42 If you go back to that Colab notebook, I think we might be able to like demo that.

44:45 Oh, interesting.

44:46 If I just say here and say, share equals true and rerun it.

44:50 Equals true.

44:50 See what we get.

44:51 So you see gradio.live, right?

44:53 So if you click that, it totally works.

44:55 Yeah, you can send that to whoever you want.

44:57 They can just use this.

44:58 Right.

44:58 So that, yeah, no install needed, right.

45:01 If you're sharing this with your collaborator, your PM, your manager, your friend or whatever, you could just give them this link, right?

45:08 So you don't have to do anything.

45:10 I guess it's probably worth emphasizing.

45:12 You should never try to host like production over this.

45:15 It sounds like, 'cause it's only for a limited time and it's going to go away; it's just a good-looking temporary URL.

45:20 But so often you'll be in meetings over zoom or something else.

45:25 And they'll be like, Hey, what have you done?

45:26 Can you show me?

45:27 And then you're like, all right, well, let me do screen share.

45:28 Oh, I don't have, I'm not a host.

45:30 Can you make me a host now?

45:31 Can I, you're sharing, can I share?

45:33 And then finally you get it up and it's blocky and they're like, Oh, zoom in.

45:36 It's too small.

45:37 I'm on my phone this way.

45:38 You just take that and you give it to them in the meeting.

45:40 Right.

45:41 And they, they have a full fidelity thing they can play with, which is awesome.

45:44 They have the demo itself that's running on your machine.

45:46 Right.

45:46 So they don't have to, yeah, like no, you don't have to install anything.

45:50 Right.

45:50 Just point your browser at this URL and then, yeah, it'll work for that quick demo.

45:54 Yeah, exactly.

45:55 Use case as well.

45:56 Yeah.

45:56 Definitely don't use it for production.

45:58 Yeah.

45:58 If you want to use it for production, I think the easiest, the absolute easiest way is to use Hugging Face Spaces.

46:04 So if you go to Hugging Face Spaces, it's basically like a drag and drop, right?

46:07 Like all you have to do is just drag your Gradio script into their UI and that'll upload it, and then Gradio will already be installed, the server will start, and then you have your permanent hosting.

46:19 And then it also has like a git interface, right?

46:21 So if your demo has several files, like a directory, some assets, some images that you want to upload as well, you could just git push to your Hugging Face Space.

46:30 And then you'll, you can do it like that as well.
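
For the drag-and-drop route, a Gradio Space conventionally runs an app.py entry point, with a requirements.txt alongside it for any extra packages. Here's a hedged sketch of such a file; the transformers sentiment pipeline is purely an example model, not something from the episode.

    # app.py for a Gradio Space (requirements.txt would list: transformers, torch)
    import gradio as gr
    from transformers import pipeline

    # Example model only: a small off-the-shelf sentiment classifier.
    classifier = pipeline("sentiment-analysis")

    def classify(text: str) -> dict:
        result = classifier(text)[0]
        # Return a {label: confidence} dict so gr.Label can render it.
        return {result["label"]: float(result["score"])}

    demo = gr.Interface(fn=classify, inputs="text", outputs=gr.Label())
    demo.launch()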

46:33 Okay.

46:33 So you add it as an origin or something and then just push to it.

46:36 Yeah.

46:36 I can try to show it.

46:37 I don't know if you can have a, can I share my screen?

46:40 I wonder.

46:40 Yeah, sure.

46:41 Click at the bottom and share.

46:43 It's easier if you share an app.

46:44 Yeah.

46:45 If I go to my Hugging Face account and go here and then new space, and then this is talk Python demo, then MIT.

46:55 Oh yeah.

46:55 We can do whatever we want with this.

46:57 Right.

46:57 So you could host Streamlit, Gradio, Docker, yeah, anything you want.

47:02 Right.

47:02 So for free, very generous free tier, you have two CPUs.

47:06 That is a generous free tier, two CPUs, 16 gigs.

47:09 Yeah, that is good.

47:10 The only caveat is that this will go to sleep after 72 hours if no one uses it.

47:14 Right.

47:14 So, but you could also upgrade it.

47:16 If you have a machine learning model that needs a GPU, you pay for the GPU per hour.

47:19 And then yeah, you can set public or private and then you just create space.

47:22 And then yeah, this is how the Git interface works, right?

47:26 So you could just Git clone this and then add your code and then just Git push.

47:31 Or you could just copy this.

47:32 - Copy the code and just paste it into a file, yeah.

47:35 - Add file, create new file.

47:37 - It does feel very Git, right?

47:39 It even has a similar look and feel to when you go to GitHub and you say add new file.

47:43 Yeah.

47:43 True.

47:43 A perfect commit message.

47:45 No, no comment.

47:47 Just blank.

47:47 I love it.

47:48 Yeah.

47:49 And then, so over here, do they run in Docker containers or Kubernetes or something like that?

47:53 The Docker container.

47:54 Right.

47:54 So what this is doing, it has like a preconfigured Gradio Docker image, right.

47:59 That comes with, there we go.

48:01 It's already built, but it comes with Gradio and a bunch of standard data science libraries, and then it adds your code to the container and then it starts the container, right.

48:11 But you could also just use your own Dockerfile if you wanted to.

48:14 Okay.

48:15 You can host whatever you want.

48:16 Right.

48:16 So here, yeah, you just put your name in and press the button.

48:18 Wow.

48:19 Look at that.

48:19 A little Michael.

48:20 Yeah.

48:20 And in the time that we've been talking about this, you've created a space, created a new UI, and hosted it.

48:25 That's, that's pretty ridiculous.

48:27 Yeah.

48:27 And then you can just share this, share the URL or whatever.

48:30 Yeah.

48:31 Yeah.

48:31 You can just share the URL there.

48:32 And then they have like a machine learning app that they can share.

48:35 They can share it with anyone and it'll stay up.

48:38 It's just git, right?

48:39 So if you have git locally, if you know how to use git, you can very seamlessly push to the Hugging Face platform.

48:45 There's no, no special magic.
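
If you'd rather script the upload than use git or the web UI, the huggingface_hub client can create a Space and push files for you. This is a rough sketch under the assumption that you're already logged in (for example via huggingface-cli login); the repo id here is hypothetical, so check the current huggingface_hub docs for details.

    from huggingface_hub import HfApi

    api = HfApi()
    repo_id = "your-username/talk-python-demo"  # hypothetical Space name

    # space_sdk="gradio" tells Hugging Face to treat this repo as a Gradio Space.
    api.create_repo(repo_id=repo_id, repo_type="space", space_sdk="gradio")

    # Upload the entry-point file the Space will run.
    api.upload_file(
        path_or_fileobj="app.py",
        path_in_repo="app.py",
        repo_id=repo_id,
        repo_type="space",
    )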

48:46 What if I'm not a hugger?

48:48 What if I, for some reason, don't want to use Hugging Face?

48:50 Can I host this behind Nginx or somewhere?

48:53 If I like infrastructure, I'd like to do my infrastructure as a service.

48:57 It's that app.py file, python app.py from your cloud machine.

49:01 And then just make sure that the port is accessible from the internet.

49:05 And then you just give that to anyone, or front that with Nginx and put some...

49:08 Yeah, exactly.

49:09 Let's Encrypt on it and then just point it over to that URL and let it go.

49:12 It'd be pretty straightforward.

49:13 You could host it wherever you want.

49:14 That's just, it's all open source stuff under the hood, right?

49:16 It's just FastAPI, Svelte, and then some Python libraries, right?

49:19 There's no, there's no lock in anywhere.

49:21 Yeah.

49:22 Cool.

49:22 So it's probably running Uvicorn, I would guess, as the server, which is production-ish. I guess if you did really large scale, you might want to do Gunicorn with the Uvicorn workers rather than just Uvicorn itself, but you know, for the failover and whatnot.

49:37 But that sounds like, if these words sound familiar to you, it should sound really familiar.

49:41 If they don't, then don't worry about it.

49:43 Exactly.

49:43 That's the standard Python web infrastructure stack type of stuff.
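
One way to self-host along those lines, sketched under the assumption that you want the demo inside your own ASGI app (the path and port are arbitrary choices): gr.mount_gradio_app attaches a Gradio app to a FastAPI app, which you can then run with Uvicorn, or Gunicorn with Uvicorn workers, behind Nginx.

    import gradio as gr
    import uvicorn
    from fastapi import FastAPI

    app = FastAPI()
    demo = gr.Interface(fn=lambda name: f"Hello, {name}!", inputs="text", outputs="text")

    # Mount the Gradio UI under /gradio on our own FastAPI app.
    app = gr.mount_gradio_app(app, demo, path="/gradio")

    if __name__ == "__main__":
        # Bind to all interfaces so a reverse proxy like Nginx can reach it.
        uvicorn.run(app, host="0.0.0.0", port=7860)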

49:47 And in that model, it's completely free, right?

49:49 It's open source.

49:50 I can do, I can just run it there, right?

49:51 Yeah.

49:51 Just run it wherever you want.

49:53 Very nice.

49:53 Well, Freddy, let's wrap up our conversation, since we're a bit short on time, with just where things are going.

49:58 We'll talk about where we are, where it came from, where are we going?

50:01 Thanks for that.

50:01 So I think like where we are.

50:03 So I think we're trying to get Gradio into like as many platforms as possible.

50:08 Right.

50:09 So, and like as many kind of like deployment modes as possible.

50:12 So one of the cool projects that we're working on is Gradio Wasm.

50:16 Right.

50:16 So like running Gradio entirely in the browser.

50:18 Wow.

50:19 Okay.

50:19 So yeah, so that's, it's not ready.

50:21 It's not released yet, but it's something that we're actively working on.

50:24 Right.

50:25 So you can, yeah.

50:26 So if you want to just build your machine learning demo, running everything directly in the browser, right?

50:31 There's like the ML for the web space is growing a lot.

50:34 It's advancing really quickly.

50:35 Like we're getting ready.

50:37 We're getting ready for that.

50:38 - So what's that look like in terms of foundations?

50:41 Is that Pyodide?

50:42 Is that PyScript?

50:43 Is that something else?

50:44 - It's using Pyodide right now.

50:46 - Okay.

50:47 - Yeah, so yeah, that's how.

50:49 - Yeah, that's a pretty good choice, because one of the selling points of Pyodide is not just that it has Python in the browser, but that it has a bunch of the machine learning libraries either available or compiled over to Wasm, WebAssembly.

51:02 And so you can actually do machine learning stuff, not just like, hi, my name is plus name, you know what I mean?

51:07 That's one of the cool projects we're working on this year.

51:09 The other cool project that we're working on is, yeah, like the custom components.

51:12 Right.

51:13 So let's say that you wanted to build your, your own custom machine learning demo, your own custom web app, right.

51:18 But you need something that we don't have.

51:20 We're giving you the API to build that component locally and then just hook it into the app without having to merge anything into Gradio upstream.

51:27 We're working on that actively.

51:28 So that, that'd be really exciting.

51:30 And then, yeah, really excited just 'cause it'll enable a lot of people in the Gradio community to collaborate with each other and build really impressive stuff, kind of on their own.

51:40 Right.

51:40 Like they don't need, like, they don't need like the core development team necessarily.

51:43 Sure.

51:44 Like pytest plugins rather than trying to change pytest.

51:46 So that, yeah.

51:47 Really excited about that.

51:48 And then, yeah, the other cool thing, something we didn't talk about that I would want to talk about if we have time, is that all these demos that we've built are sort of available via API, right?

52:00 So if you click on any of these demos, like if you click on that first one, if you scroll to the bottom and you see it says use via API, right?

52:06 So this gives you like a little code snippet as to how you can call this demo from Python or JavaScript.

52:11 Okay.

52:12 What does that mean?

52:12 Right.

52:13 That means that basically all these ML apps that are available on Hugging Face or just anywhere on the internet, like they now become like building blocks that you can use in your own workflow.

52:21 Right.

52:21 So, and actually this demo itself, which I'm familiar with,

52:25 is actually really cool because it's calling other Gradio demos via API.

52:29 So this is an example of someone building their Gradio app by calling other Gradio apps.
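
Calling one of those hosted demos from your own code looks roughly like this, assuming the gradio_client package and a hypothetical Space name; the endpoint name and arguments depend on the specific demo, so treat this as a sketch rather than a copy-paste recipe.

    from gradio_client import Client

    # Hypothetical Space id; a full URL to any public Gradio app also works.
    client = Client("some-user/some-gradio-space")

    # predict() hits the app's API endpoint, just like the web UI would.
    result = client.predict("Hello from the API", api_name="/predict")
    print(result)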

52:33 - Wow, okay.

52:34 - We're creating this ecosystem where--

52:36 - It's like Gradio microservices.

52:38 - Exactly, right?

52:39 So it's like all these Gradio apps or building blocks that you can then connect together via API.

52:44 And that's really cool, right?

52:45 'Cause it basically means that machine learning is available.

52:49 You don't need to use the GUI to get state of the art machine learning, right?

52:53 Like, use an API, and that means that you could put these models pretty much anywhere, right? So one of the cool things that we launched two weeks ago, I believe, or a week and a half ago, is that you can deploy a Gradio chatbot to Discord with just one line of code, right? So let's say you have a Gradio app that talks with OpenAI, like GPT-3, or LLaMA, or any of these open source LLMs. If you can build a Gradio app for it, you can seamlessly hook it into your Discord server, right? And that's all built via this API functionality.

53:23 Right.

53:24 So this is something like, okay.

53:25 Yeah.

53:25 Cool.

53:25 I'm personally super excited about it.

53:27 Like we want to push this further just because Gradio historically has been built for the UI, but it can also be used to get these machine learning models into more places.

53:37 One of the things that I'm really excited about in the coming years is making this a little bit more visible. And yeah, you could integrate some really cool LLMs and other types of chat into your...

53:47 Into your Discord, right.

53:48 I imagine you could probably do it with a Slack as well.

53:50 And if somebody asks in your company, how do I do whatever, it could go, hey, I'm a private GPT.

53:58 I've already ingested all of our docs.

53:59 So you want me to take a shot at answering that?

54:01 Like, sure.

54:02 Why not?

54:02 That's one of the pet projects I want to do, just do that for the Gradio Discord.

54:07 Right.

54:07 So there's a Gradio Discord where you have the Gradio community, and people ask questions in it, but it'd be really cool if we had a Gradio chatbot that knew a lot about Gradio that you could just ask.

54:16 Exactly.

54:17 Chatbot, /gradio chat, like, how do I, whatever, how do I just put a plot right in the middle?

54:23 I'll tell you, just return a

54:24 Matplotlib figure.
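
That answer maps to something like the following sketch; the function and inputs are illustrative. A Gradio function can return a Matplotlib figure, and a gr.Plot output component will render it right in the UI.

    import gradio as gr
    import matplotlib.pyplot as plt

    def squares_plot(n: float):
        xs = list(range(int(n)))
        fig, ax = plt.subplots()
        ax.plot(xs, [x * x for x in xs])
        ax.set_title("x squared")
        # Returning the Figure lets the gr.Plot output render it.
        return fig

    demo = gr.Interface(fn=squares_plot, inputs=gr.Number(value=10), outputs=gr.Plot())
    demo.launch()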

54:25 People could think, well, why don't I just use ChatGPT or something, but this is the thing that you've taught deeply about it: you've given it all the docs and you say, study this.

54:34 And then I want to ask you about it.

54:35 Right.

54:36 And a lot of times the docs and other things go beyond the token limit

54:39 That the standard models can take.

54:41 Like I've tried to get ChatGPT to tell me about transcripts on Talk Python and it can't even ingest like one transcript before it runs out of space.

54:49 Like I can't quite load all that.

54:51 Well, that's, I wanted to ask you about all of them.

54:52 You can't even do one.

54:53 So this is not working for me.

54:55 Yeah.

54:55 So you could fine tune like an open source LLM and then host it wherever you want, right?

54:58 So yes, exactly.

54:59 More control.

55:00 Yeah, that's cool.

55:00 So you could teach it all about Gradio.

55:02 Real quick question.

55:03 Mr.

55:03 Magnetic in the audience asks, what about a Hugging Face desktop app instead of the browser app?

55:07 Yeah.

55:08 So that's something that there's an open issue for that.

55:10 It's something that we've been kicking around as well.

55:13 It's just like, how do we get a Gradio desktop app as well?

55:16 So yes, stay tuned.

55:18 I think, let me try to find that issue and then comment in the YouTube.

55:21 But yeah, I would love your thoughts on that.

55:22 If anyone has thoughts on that... but yeah, it's something we're thinking about. I don't think it'll happen in the next month or two, but maybe before the end of the year or next year, it could happen.

55:31 Excellent.

55:31 All right.

55:32 Well, I think that pretty well covers it.

55:34 It's a super exciting project.

55:36 So good luck with it.

55:37 I mean, already you've had a lot of luck with it, so you don't need my wishes, but further good luck on that.

55:42 And yeah, before we get out of here, let me ask you a final question here.

55:46 I always like to ask the guests about some cool PyPI project they've run across.

55:50 That's been really awesome.

55:51 Maybe it's not super popular, but it has made a difference, or you've gone,

55:54 Wow.

55:55 How did I not know about this?

55:56 Any come to mind for you?

55:57 Python project?

55:58 Yeah.

55:58 Or something I can pip install, like FastAPI, but not FastAPI.

56:01 Cause everyone knows that.

56:02 I think when I was just starting out, I think I, I was like a really big noob.

56:06 And like, I always ran into like environment issues.

56:09 And then a friend of mine showed me pipdeptree.

56:12 It shows you exactly like why things get installed and yeah, I think it's really, I think it's really magical, honestly.

56:18 Yeah.

56:19 I think it's really helpful just to like figure out, like, especially like when someone files an issue and like, we don't know what's wrong with them.

56:26 Like sometimes I'll just like, where did this thing even come from?

56:29 And then just use pipdeptree.

56:30 I think that's, it's really cool.

56:31 It's like really simple.

56:32 But yeah, I think it definitely has saved me a couple hours of time.

56:35 So.

56:35 It's cool.

56:36 I've used it for my own stuff.

56:37 I hadn't thought about using it for tech support, but yeah, of course, because people run into problems because their environments are screwed up, and they say they have a thing or they don't, or they say they have a version of a thing, but they don't. And with this, you can just say, run this one command, and it'll give you a really cool view: they have all these things installed, and this is installed because it's required by that.

56:55 Yeah, exactly.

56:56 It's really nice.

56:56 Yeah.

56:56 Cool.

56:57 Excellent recommendation.

56:58 All right.

56:58 Final call to action.

56:59 People want to get started with Gradio.

57:01 What'd you tell them?

57:02 Pip install Gradio and then go to gradio.app and just see the demos there.

57:07 On our website, there's a link to our Discord server.

57:09 So yeah.

57:09 Join the discord and say hi.

57:12 And then yeah, there's lots of people there.

57:13 We're willing to help.

57:14 And then I, yeah, never hesitate to file an issue.

57:17 What's really cool about this is seeing the demos that people build. Like people build stuff that, frankly, pushes the limits of what I thought people could build with Gradio.

57:25 And it's really cool seeing that.

57:26 Yeah, that's awesome.

57:27 Don't be afraid to, or don't hesitate to, build really cool stuff with Gradio, and we're really good about amplifying that.

57:32 So if you have something really cool, just tag the Gradio Twitter account or reach out to us on Discord or something.

57:38 We'll amplify it for you.

57:39 Well, excellent project.

57:41 And thank you for being on the show.

57:43 Thank you for having me, Michael.

57:44 I had a lot of fun.

57:44 Yeah, same.

57:45 This has been another episode of talk Python to me.

57:49 Thank you to our sponsors.

57:51 Be sure to check out what they're offering.

57:53 It really helps support the show.

57:54 The folks over at JetBrains encourage you to get work done with PyCharm.

57:59 PyCharm professional understands complex projects across multiple languages and technologies so you can stay productive while you're writing Python code and other code like HTML or SQL. Download your free trial at talkpython.fm/donewithpycharm.

58:15 Take some stress out of your life. Get notified immediately about errors and performance issues in your web or mobile applications with Sentry. Just visit talkpython.fm/sentry and get started for free. And be sure to use the promo code "talkpython" all one word.

58:32 Want to level up your Python? We have one of the largest catalogs of Python video courses over at Talk Python. Our content ranges from true beginners to deeply advanced topics like memory and async. And best of all, there's not a subscription in sight. Check it out for yourself at training.talkpython.fm.

58:49 Be sure to subscribe to the show, open your favorite podcast app, and search for Python.

58:54 We should be right at the top.

58:55 You can also find the iTunes feed at /iTunes, the Google Play feed at /play, and the Direct RSS feed at /rss on talkpython.fm.

59:04 We're live streaming most of our recordings these days.

59:08 If you want to be part of the show and have your comments featured on the air, be sure to subscribe to our YouTube channel at talkpython.fm/youtube.

59:16 This is your host, Michael Kennedy.

59:17 Thanks so much for listening.

59:18 I really appreciate it.

59:20 Now get out there and write some Python code.

59:22 (upbeat music)

59:25 [Music]

59:38 (upbeat music)

