#430: Delightful Machine Learning Apps with Gradio Transcript
00:00 You've got this amazing machine learning model you created, and you want to share it and let your colleagues and users experiment with it on the web.
00:06 How do you get started? Learning Flask or Django?
00:09 Great frameworks, but you might consider Gradio, which is a rapid development UI framework for ML models.
00:16 On this episode, we have Freddy Boulton to introduce us all to Gradio.
00:20 This is Talk Python to Me, episode 430, recorded August 10th, 2023.
00:24 Welcome to Talk Python to Me, a weekly podcast on Python.
00:40 This is your host, Michael Kennedy.
00:42 Follow me on Mastodon, where I'm @mkennedy, and follow the podcast using @talkpython, both on fosstodon.org.
00:50 Be careful with impersonating accounts on other instances. There are many.
00:53 Keep up with the show and listen to over seven years of past episodes at talkpython.fm.
00:58 We've started streaming most of our episodes live on YouTube.
01:02 Subscribe to our YouTube channel over at talkpython.fm/youtube to get notified about upcoming shows and be part of that episode.
01:10 This episode is brought to you by JetBrains, who encourage you to get work done with PyCharm.
01:16 Download your free trial of PyCharm Professional at talkpython.fm/done-with-pycharm.
01:24 And it's brought to you by Sentry.
01:26 Don't let those errors go unnoticed.
01:28 Use Sentry.
01:29 Get started at talkpython.fm/sentry.
01:32 Freddy, welcome to Talk Python to Me.
01:35 Thanks for having me, Michael.
01:36 Yeah, it's great to have you here.
01:37 I think people are going to learn a lot about some machine learning on this episode.
01:41 And you've got this really cool visual way, this visual tool of working with machine learning projects.
01:47 And oftentimes people ask me, I'm not really a web developer, but I have some machine learning stuff or I'm a data scientist and I want to share with people.
01:55 How do I do that?
01:56 So your project might be a good answer to that for some folks, right?
01:59 Yeah, absolutely.
02:00 I think, yeah, Gradio is built for that use case.
02:03 I think you can build lots of complex stuff with Gradio, web apps and so on, but it's optimized for the ML use case.
02:11 Like how do you get like an ML workflow, like on the web and share it with people as quickly as possible?
02:16 That's kind of what Gradio is built for.
02:17 I've got it running on my computer.
02:19 How do I take it from a notebook to something that other people who are not programmers can use, right?
02:23 Yeah, exactly.
02:24 That's kind of it.
02:25 And it's with Gradio, it's one line.
02:26 You can get a shareable link directly from your Colab notebook, Jupyter notebook, SageMaker, local, whatever.
02:32 Yeah, so it's really easy to share with people.
02:35 Excellent.
02:35 All right, before we dive into that, let's start with you.
02:38 A quick story, quick introduction about you and how you got into programming Python.
02:42 Yeah, absolutely.
02:42 So all the way from the beginning, I graduated with a degree in statistics and my first job was working as a data scientist in Chicago.
02:50 And that was doing more like bread and butter data science-y stuff.
02:54 Like you pull data from like database and like you train a model and then you like try to communicate the results with someone.
02:59 And then at the time, yeah, I mean, it wasn't that long ago, but to me, it feels like it was like a millennia ago.
03:05 Technologically speaking, it's so different.
03:07 Yeah.
03:07 Yeah, it was like a really long time ago.
03:09 Yeah.
03:09 But then basically what happened to me a lot was just like, okay, we're training this model.
03:15 Like how do we share it with the relevant stakeholders, right?
03:18 There's like a PM or like someone that's interested in this.
03:21 Like how do you make them care?
03:23 And then there really wasn't a good answer.
03:24 Like you'd compute some metrics and then try to explain what they mean.
03:29 And you like draw like a bar plot or something.
03:31 And it just wasn't really that useful.
03:33 And I think it really was, to be fair, it was like a skill gap.
03:36 I didn't even know how to build like an interactive website to share with people.
03:40 Because at the end of the day, what really, what people really care about or how to make someone really care about it is they can play with it.
03:46 Right.
03:46 Because if you show them these plots and these metrics, it's like a machine learning model is like very abstract.
03:51 Right.
03:52 It's like just this thing that's somewhere else.
03:54 And then this is like the output of it, but it's not really even like the actual output.
03:57 It's just like some sort of summary statistics of it.
04:00 Right.
04:00 But if you give someone like this is what the model is and you can let them like manually play with the inputs and see how the outputs change.
04:06 They get a sense of, well, like what the model is and how it works and stuff like that.
04:12 Right.
04:12 And I think that's where I learned about this problem and why this problem is important.
04:17 And then ever since then, I've been sort of devoting myself to working on open source tools to make data science more efficient.
04:24 And the latest product that I worked on is called Gradio, which kind of does this.
04:29 It basically lets you turn a machine learning function into a web app in one line of code.
04:33 And from there you can jump off and build like as complex of a web app as you want.
04:38 But from the beginning, you can get up and running with just like basically two lines of code.
04:43 So yeah.
04:43 Excellent.
04:44 A little bit about me.
04:45 You talked about letting people play with the machine learning model and changing the inputs and stuff.
04:50 It just completely changes the round trip speed, because the alternative might be, I'll get you a PDF and you can read the report.
04:58 And then what if we change this?
04:59 All right.
05:00 At tomorrow's meeting, I'll bring the new PDF.
05:02 Just empower people and give them these tools.
05:05 And yet at the same time, data scientists are not web developers, certainly not in the super dynamic front end Ajax callback way of programming.
05:13 Right.
05:14 That's a different skill to be sure.
05:16 And so it's not like their data science skills by default make them able to build these.
05:21 And even if you could, is it a good use of your time?
05:23 Right.
05:23 Yeah, absolutely.
05:24 And yeah, so the PDF report is definitely one way of doing it.
05:28 The other way that we tried sometimes is we'll just hand it off to someone and we'll have them build the web app.
05:33 But then that just takes longer.
05:34 You have to like explain everything to them.
05:36 Right.
05:36 I think what makes it really impactful is if the person who made the model can themselves just create the web app, the demo, like, immediately.
05:44 A lot of these, a lot of data scientists are in Python, right?
05:46 So that kind of means you need like a Python based tool to get you up and running really quickly.
05:51 And a lot of them, yeah, like you said, don't know about web programming.
05:53 So you've got to like abstract that away as much as you can, like as much as it makes sense.
05:57 So that they're not like daunted or it's not like, okay, now I'm like, I'm really good at PyTorch and training or like all these things.
06:03 And like now I need to learn about whatever, like servers and all that stuff.
06:07 It's like, it's like almost a different skill set for a lot of people.
06:10 It really is.
06:10 And handing it off to somebody else, it's slow, but also it only works for certain situations, right?
06:16 Like a lot of data scientists, I suspect don't have a whole software team supporting them as they need, right?
06:22 They're the sole person at their company.
06:25 So you said you had gone into statistics and found your way over to this side of that world.
06:29 You feel like it's a golden age for statisticians now, because when I went to college, it's like, well, you could be an actuary or you could work at an insurance company.
06:39 Or maybe some other company might be interested in hiring somebody who does stats.
06:43 You could work at the U.S. Bureau of Statistics.
06:46 Yeah, exactly.
06:47 And now with the kind of blending into data science, they're just so in demand.
06:53 The world is so open for that now.
06:55 It definitely became a much sexier career.
06:58 And I think I got lucky that I got into it like right before, right as it was starting to take off.
07:03 Like at that point, like data science wasn't really like a term yet.
07:07 I'm not that old, but yeah, at that time it wasn't really.
07:09 I think at that point, research scientist was more like the term they were called, right?
07:15 Which still sounds a little bit dry, but then it got rebranded as data science.
07:18 Yeah.
07:19 I mean, I would say it's always a good time to study statistics.
07:22 I think like a lot of people, if someone were to come back to me and say, I have no idea what I want to major in, but I have some aptitude for math.
07:28 I would say like major in statistics.
07:29 I think it's like really useful.
07:31 It has like a lot of applications.
07:32 But I think now also in terms of tech, I feel like it's definitely sort of the era of the Renaissance person, right?
07:39 Like you have to know a little bit of everything now.
07:41 Like, right.
07:42 Cause it's like stats, programming, math.
07:45 Like all these things are starting to blend together.
07:48 And yeah, I think it's like a lot of people I really respect pulled from all these disciplines, like seamlessly.
07:53 Right.
07:54 And I think it's, yeah, I think that's kind of where we are now.
07:56 Yeah.
07:57 It's a super fun time.
07:58 If you're excited about always learning new things and sort of bettering yourself and bringing in this thing and mixing it that way.
08:04 If you'd rather just be done learning, maybe not so much.
08:07 Absolutely.
08:08 You can't just show up and do the same thing for 20 years and it'd be okay.
08:10 I guess if you were, like, maintaining COBOL code, it would be okay.
08:14 But not in the machine learning space and machine learning also is just crazy.
08:18 We've got large language models just running loose everywhere now.
08:22 What do you think about all that?
08:23 It's definitely like a very exciting time to be alive.
08:26 I think it's pretty crazy that when I got started in ML six years ago, it was, it definitely was like very niche.
08:32 Right.
08:32 And like the tools that people use and like the language about it definitely did not penetrate the mainstream.
08:37 But now it seems like, like the technologies and the algorithms, the models, the datasets are all things that people talk about now.
08:43 Right.
08:44 And I think we've all had an older relative ask us about ChatGPT or the latest trend, or Stable Diffusion with the AI image generation.
08:52 I think it has penetrated every part of this.
08:55 I think part of the reason why that is, is, one, the technology is way more impressive now.
09:00 Right.
09:00 Like these algorithms are able to do things that were unimaginable, like 2017, right.
09:05 When I graduated college.
09:06 But also it's just, these models are much easier to share and use now.
09:10 Right.
09:10 And I think part of the reason why ChatGPT took off so quickly is that the interface is so intuitive.
09:16 It is.
09:16 We've been chatting with each other for like decades, like over the internet.
09:20 Right.
09:20 And the user interface is so simple and it works.
09:23 It fits our mental model so quickly, or so easily.
09:27 But, you know, under the hood, it's like this incredibly complex process.
09:30 Right.
09:30 Right.
09:31 Right.
09:31 Yeah.
09:32 Right.
09:32 And I think that's like where tools like Gradio come into play.
09:35 Right.
09:35 It's just, there's a bunch of like incredible, like amazing research happening, but unless other people can use it, play with it, evaluate it, like it's almost as if it doesn't exist.
09:45 Right.
09:45 And I think Gradio really helps you create like a demo, an app that other people can use and play with and evaluate your model.
09:52 And then, and then just like that, anyone can use it.
09:54 Right.
09:55 Like, you no longer have to be a technical person, and you don't have to type out some script or something.
10:01 Right.
10:01 You can just go to a website, right.
10:03 You can just send someone a link and then they can play with the state of the art.
10:05 It's pretty, pretty cool.
10:06 Can you control a combo box and a button?
10:09 Yeah.
10:09 Something like that.
10:10 Right.
10:10 Yeah.
10:10 Then you're qualified.
10:11 It's wild.
10:11 One of the things that surprises me is, for such insane technology that leverages so many servers, for ChatGPT and friends, the user interface for it is so mundane.
10:23 I don't mean that as a derogatory term, but it's just like, well, you just talk to it in this text box and it just talks.
10:28 There's not like a crazy new UI where you put on 3D glasses.
10:32 It's just a chat box.
10:34 But what it does is incredible.
10:36 Similar for Midjourney and other things.
10:38 You just slash imagine, just chat with it.
10:41 But so there's this sort of weird paradox of this incredible, simple way to interact with it.
10:47 And yet what it does is, I guess it's a natural way to interact with it, which is what's surprising.
10:52 Yeah.
10:52 Part of the reason, I think it's just like the natural language interface.
10:55 I think a lot of people resonate with that.
10:57 I think you don't have to explain that.
10:59 Right.
10:59 Like you just type something and then it'll respond.
11:01 Right.
11:01 Like it won't error.
11:02 Right.
11:03 And I think even like stuff that isn't just purely chat based, I think like stable diffusion, like the web UI.
11:09 Right.
11:09 I think it's, it has a lot of controls.
11:11 Right.
11:11 But at the end of the day, it's like a Photoshop-esque interface.
11:15 Right.
11:15 Where it's like someone who's used to that kind of software, like it's what they expect, right?
11:20 You upload an image and then you can get a tool to blur something out, and then you can inpaint it or outpaint it and stuff like that.
11:28 So I forget who said it, but I think someone said that ML is not really the product.
11:33 It's like in the background, right?
11:35 Like the most successful ML products, they don't really feel like they're ML.
11:39 Right.
11:40 It's like the ML is abstracted away and it just makes your experience that much better.
11:44 Right.
11:44 And I think that's what all these different tools are showing.
11:47 Amazing.
11:47 All right.
11:48 So let's talk about Gradio, and I'm going to ask you something a little bit funky to kick this off: let's talk about just what other apps are like Gradio.
11:57 So things that come to mind for me are like StreamYard, for example.
12:01 Streamlit?
12:01 Sorry, that's what I mean.
12:02 Yeah.
12:03 StreamYard, I'm reading the name of the app that we're using.
12:06 Streamlit, not StreamYard.
12:07 Streamlit and other, just give people a sense of what are the categories of apps that's in the same space so they can get a mental model for what
12:14 Gradio is.
12:14 For sure.
12:15 Yeah.
12:15 So I think like Streamlit is a good comparison.
12:17 Like Plotly, Dash, I think is also in the same ecosystem.
12:22 Shiny, for R, I think.
12:24 Yeah.
12:25 I think like the first programming language.
12:26 Yeah.
12:27 I just had Joe on to talk about Shiny for Python.
12:29 Yeah.
12:30 Recently, yeah.
12:30 They're all definitely in the same ecosystem.
12:32 If you go to the Gradio homepage, Gradio.app, you can see some of the apps that you can build with Gradio really quickly.
12:39 Absolutely.
12:40 Yeah.
12:40 But that doesn't necessarily limit.
12:42 Like what you see on the landing page is not all that you can build with Gradio.
12:45 I think those are just like the eye-catching quick examples just because, like I said, like Gradio is built to get these kind of examples up and running really quickly.
12:53 But you can do like lots of complex stuff with Gradio.
12:56 Excellent.
12:57 Yeah.
12:57 So let's dive into it.
12:59 So you've already given a bit of an introduction for us.
13:02 Maybe we could just start by discussing how you might take... you've got some different types of problems you can solve on your homepage.
13:08 And it shows you the code, the entire code.
13:11 Yeah.
13:12 Then the UI that comes out of it.
13:14 So maybe you could just talk us through the sketch recognition.
13:16 It's one of the types of UIs you could build here.
13:19 So with the sketch recognition, you just, when you draw, it's definitely a bird I drew there.
13:23 Or a mountain.
13:24 I'm not sure.
13:24 How do you think about Gradio?
13:25 So Gradio is a Python library, right?
13:29 Python is the main language used to interface or to build Gradio apps.
13:32 pip install Gradio.
13:33 Yeah.
13:33 You pip install Gradio, right?
13:35 And then what does Gradio do?
13:36 At the highest level, Gradio turns a Python function, any Python function into a interactive web app, right?
13:43 So when you think of function, right?
13:45 Function has inputs and outputs.
13:46 So these inputs and outputs correspond to things that will be drawn on the page, right?
13:51 And Gradio comes with a standard set of inputs, right?
13:55 There's like text boxes, drop downs, number fields, data frames, plots, anything like that.
14:01 But also like drawing tools like a sketch pad.
14:04 And then the output, it can be any of these other components, but it could also be like a label, right?
14:09 To show like a machine learning prediction.
14:11 So all you need to do is write a plain Python function that takes in a drawing and returns a set of probabilities or confidences.
14:18 And then Gradio can wrap that in one line of code and turn it into an interactive web app.
14:24 Like we see here, if you're on the YouTube stream, you can see what Michael is doing.
14:27 There's like a sketch pad area and then he can scribble on it and then immediately he'll get a prediction out.
14:34 Yeah.
14:34 Let's see if I can draw an owl, maybe.
14:37 Right.
14:37 So it's like updating.
14:38 Let's see how I do.
14:39 I don't know.
14:39 Yeah.
14:40 Syringe.
14:40 We've got to make the model better, right?
14:42 It could be a cat.
14:44 It definitely could be a cat.
14:45 I can see cat.
14:46 I can see.
14:47 I think I've got to make my drawing better.
14:49 But so the idea is you have a regular function that takes the inputs and outputs.
14:53 There's no UI whatsoever.
14:54 And there's also no reactive programming.
14:57 You're not like hooking events where I redraw and it just reruns, right?
15:01 That's not part of my code I write as a Python person.
15:04 Right.
15:04 And then you just say gr.interface, give it the function.
15:07 And then you say the inputs are, in this case, you say it's just a sketch pad.
15:11 And so I get this UI that I can draw on that I've been attempting to draw an owl on.
15:16 It's probably missing the eyes.
15:17 I think it's the eyes that are missing.
15:19 The latter.
15:19 It's not about testing the underlying model, is it?
15:24 And then you say the outputs in our label.
15:26 Now, a lot of UIs, people might think of label is just a non-interactive piece of text.
15:32 But here, there's more of a machine learning label, right?
15:34 You've got like a cool horizontal bar graph that has percentages and talks about its guesses.
15:39 So it's like a machine learning labeled response report.
15:43 A machine learning person would, when they see label, they think that, they don't think of the standard web, like just like a text box, right?
15:52 Yeah.
15:52 Like label four type of thing in HTML.
15:55 Yeah.
15:55 Right.
15:55 So that's one of the things that, or one of the ways that kind of Gradio is built for that
15:59 kind of audience, right?
16:00 It's like the high-level primitives match that mental model.
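To make the pattern just described concrete, here is a minimal sketch of that kind of sketch-recognition interface. The classify function and its class probabilities are invented for illustration, and the string shortcuts for components can vary between Gradio versions:

```python
import gradio as gr

def classify(sketch):
    # A real demo would run the drawing through a model here and return
    # {class_name: confidence} pairs; these numbers are placeholders.
    return {"owl": 0.7, "cat": 0.2, "syringe": 0.1}

# One line wraps the function: a sketchpad input and a label output.
demo = gr.Interface(fn=classify, inputs="sketchpad", outputs="label")
demo.launch()
```

The label component renders the returned dictionary as the horizontal confidence bars mentioned in the conversation.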
16:07 This portion of Talk Python to Me is brought to you by JetBrains and PyCharm.
16:11 Are you a data scientist or a web developer looking to take your projects to the next level?
16:16 Well, I have the perfect tool for you, PyCharm.
16:18 PyCharm is a powerful integrated development environment that empowers developers and data
16:23 scientists like us to write clean and efficient code with ease.
16:28 Whether you're analyzing complex data sets or building dynamic web applications, PyCharm has
16:33 got you covered.
16:34 With its intuitive interface and robust features, you can boost your productivity and bring your
16:39 ideas to life faster than ever before.
16:41 For data scientists, PyCharm offers seamless integration with popular libraries like NumPy,
16:46 Pandas, and Matplotlib.
16:47 You can explore, visualize, and manipulate data effortlessly, unlocking valuable insights with
16:53 just a few lines of code.
16:55 And for us web developers, PyCharm provides a rich set of tools to streamline your workflow.
16:59 From intelligent code completion to advanced debugging capabilities, PyCharm helps you write
17:04 clean, scalable code that powers stunning web applications.
17:08 Plus, PyCharm's support for popular frameworks like Django, FastAPI, and React make it a breeze
17:14 to build and deploy your web projects.
17:16 It's time to say goodbye to tedious configuration and hello to rapid development.
17:21 But wait, there's more.
17:23 With PyCharm, you get even more advanced features like remote development, database integration,
17:28 and version control, ensuring your projects stay organized and secure.
17:32 So whether you're diving into data science or shaping the future of the web, PyCharm is
17:36 your go-to tool.
17:37 Join me and try PyCharm today.
17:39 Just visit talkpython.fm/done-with-pycharm, links in your show notes, and experience the
17:47 power of PyCharm firsthand for three months free.
17:50 PyCharm.
17:51 It's how I get work done.
17:53 One thing you mentioned earlier is that there's no explicit reactivity that you as a programmer
17:58 have to write.
17:58 That's definitely true in the gr.interface case.
18:01 gr.interface abstracts all that away from you.
18:04 But Gradio also offers...
18:06 So this is your simple case.
18:07 Like, I just want...
18:09 Just run this function with these inputs and outputs, and I want a real basic variant.
18:12 Okay?
18:12 Gradio also offers like a lower-level API where you can explicitly control the layout, right?
18:18 So right now, everything is like side-by-side.
18:19 You can put them horizontally across columns, rows.
18:23 You can add components, right?
18:24 And then you can also be more explicit in saying, okay, when this input changes, run this function,
18:30 and then that will populate this and stuff like that.
18:33 And you can change these things together.
18:34 So...
18:35 It makes sense.
18:35 Maybe it's expensive to...
18:37 It's expensive to generate some portion.
18:39 You want to cache it as much as possible.
18:41 Yeah.
18:41 You just want to have like more control over like exactly what happens when things change,
18:47 right?
18:47 You could...
18:48 Gradio gives you that control.
18:49 But for a lot of use cases, you can get...
18:52 You can get really far with gr.interface.
18:53 And then the other companion piece, which isn't on the landing page because we just released
18:58 it maybe two weeks ago, it's gr.chat interface, right?
19:01 So you could build like a...
19:02 Oh, interesting.
19:02 To an LLM or something.
19:04 Yeah, you could build a chat UI for...
19:06 Yeah, like an LLM just in one line of code.
19:09 And I think I can try to maybe find an example of that real quick.
19:13 Yeah.
19:13 While you're looking, do you offer any guidance or any opinionated stuff on which LLM to choose?
19:19 Or do you just say, it's just a chat interface and you write the code to make it happen?
19:23 You just write the code to...
19:25 Yeah, just given the message, what should the response be?
19:29 And then that's the interface.
19:30 Yeah.
19:31 And then...
19:31 There's some interesting options.
19:33 People might want to pick.
19:34 Obviously, you could pick OpenAI and use their API, but there's things like private GPT, which
19:40 allows you to ask questions about your documents, but 100% private, right?
19:44 You could just give it hundreds of docs and say, learn these and we're going to talk to you
19:49 about it.
19:49 Or something along those lines.
19:51 There's Langchain, right?
19:53 Yeah.
19:54 Which is a pretty interesting option for building these things.
19:56 Llama.
19:57 Like the new Llama 2.
19:58 Yeah.
19:58 The Llama 2.
19:59 Yeah.
20:00 So I think in the chat, I just posted like a Gradio Llama 2 UI that we can show.
20:04 It's on Hugging Face.
20:05 So I think we can talk about the hosting on Hugging Face as well.
20:08 Okay.
20:09 Yeah.
20:09 So this is the chat UI.
20:11 So if you were to scroll down a little bit...
20:13 UI says chat bot, you can type a message.
20:15 Yeah.
20:15 I'll ask it what the podcast says.
20:17 Hey, I'm here to help you.
20:18 Talk Python is a podcast and community dedicated to helping developers improve their skills,
20:22 interviews, and experts in the field.
20:24 That's you, Freddy.
20:25 Resources.
20:26 Yeah.
20:26 What do you want to know?
20:27 Under the hood, this is using Llama 2, 70 billion.
20:30 Nice.
20:31 I can ask you what the latest episode is.
20:33 So it gives me a sense how far back it goes.
20:35 That's about two years old.
20:36 So okay.
20:36 Interesting.
20:37 Very good.
20:37 Yeah.
20:37 Makes sense.
20:38 Oh, wait.
20:38 No, this is...
20:40 I'm not so sure that that's accurate.
20:41 Yeah.
20:42 I think there's a little bit of a mismatch, but this is really cool.
20:44 So you basically plug in whatever LLM you want to into this and they put the...
20:50 Or you put the Llama 2.
20:52 If you scroll up a little bit in the website, if you...
20:55 Like you see those like three bars?
20:57 Yeah.
20:57 The hamburger deal.
20:58 Oh, no.
20:59 Sorry.
20:59 That's not it.
21:00 Other hamburger.
21:00 Where is it?
21:01 There are three dots maybe?
21:02 Oh, I guess maybe because you don't have an account.
21:04 You can't see the file.
21:05 Oh, yeah.
21:05 If you go to files there, if you go to files, I'm going to app.py.
21:08 This is the source code of the...
21:10 Oh, interesting.
21:11 Okay.
21:11 If you scroll down, the helper text, and then this is the actual prediction function.
21:14 It's about 20 lines of code.
21:16 But if you scroll down, you see the chat interface code.
21:19 With GR blocks as demo?
21:21 Okay.
21:21 The tab, a batch.
21:22 Okay.
21:23 The important thing is that your chat interface is just like a one-line way to create a chatbot.
21:28 It works similarly to the interface case.
21:30 There's just a function.
21:31 In this case, it's an LLM, so it handles like the responding to each user message.
21:36 And then you just call that, and then you can call launch, and then you get a UI.
21:40 Nice.
21:41 You get like a chatbot-esque UI.
21:43 And in this case, it's 100 lines of code.
21:46 But, you know, yeah, pretty simple.
21:48 That is simple.
21:48 And one of the things that's pretty cool here is this section.
21:51 The type you set up is a streaming type versus a batch type.
21:58 And then the function you give it is a generator with the yield keyword.
22:01 So it just, as you go through it, it makes choices and sends them back.
22:05 Pretty advanced interaction for the UI to be running like that.
22:09 That's cool.
22:09 In order to get streaming, there's no special syntax.
22:12 You can just use the normal Python yield, and then Gradio knows how to feed that iteratively.
22:16 Feed that to the front end, and then you get like this responsive streaming UI.
22:20 That's a really good call out.
22:21 And Gradio tries to use the core Python syntax and the core Python data types as much as possible.
22:28 Just to make it easy for people to get up and running.
22:30 I just blew through this really quickly here.
22:32 But basically, from what I can tell, the amount of code here to actually implement this,
22:37 that is not just the details of giving it this text and making the LLM do the thing...
22:42 Five lines of code.
22:43 Yeah, definitely true.
22:44 That should make people pretty excited about, hey, I can write five lines of code,
22:48 especially with an example to work from.
22:49 Exactly, yeah.
22:50 So, yeah, definitely we need to get chat interface up in the landing page.
22:53 But, yeah, I think it's super easy to get complex demos running.
22:57 I think it's just a handful of lines of code.
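As a rough sketch of the gr.ChatInterface plus yield pattern being discussed, the following uses a trivial echo function standing in for a real LLM call; only the generator shape matters here:

```python
import time
import gradio as gr

def respond(message, history):
    reply = f"You said: {message}"
    partial = ""
    for ch in reply:
        partial += ch
        time.sleep(0.02)   # simulate token-by-token generation
        yield partial      # Gradio streams each partial reply to the UI

gr.ChatInterface(respond).launch()
```

Swapping the echo for a call to an actual model is the only change a real chatbot demo would need.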
22:59 We mentioned Shiny a little bit.
23:01 Boomyar asks, how does it compare to Shiny?
23:03 How familiar are you with Shiny?
23:05 I'm not super familiar.
23:06 Honestly, I'm not that familiar.
23:08 I'm familiar with Shiny, but not well enough to compare it directly to Gradio either.
23:11 I mean, they live in the same general world of trying to create a UI that you don't have to write web apps for.
23:18 But I don't think they're totally the same, but they're similar.
23:20 Okay.
23:21 So, we talked about setting up pip install.
23:24 That's easy.
23:24 You say you can choose from a variety of interface types.
23:27 These are the widgets that you're talking about, right?
23:29 You could, like, in terms of the inputs and the outputs, we call them components.
23:33 But, yeah, there's...
23:34 If you go to the docs page, we have about 30-something components.
23:37 Wow, okay.
23:38 Yeah, so code, buttons, data frames, plots, files, pretty much, like, you name it.
23:44 And we're adding components all the time.
23:47 And we're also...
23:48 One of the things that we're going to work on is letting the community create their own components.
23:51 So, if you have your own particular demo, your own particular web app, and you want this new component that we don't support, like, how do you...
23:57 We're working to make it easy for you to do that without having to merge something into Gradio upstream, right?
24:03 And then other people can play with it.
24:04 So, we're working on that as well.
24:05 But for the time being, it's, yeah, about these, like, 30-something components.
24:09 And then, yeah, you can mix and match them however you want.
24:12 You've got quite a bit of them.
24:13 Many of them, I suspect, people would imagine.
24:16 So, you've got button.
24:17 Yeah.
24:17 Let's see.
24:18 You've got data frame, which is pretty cool.
24:20 And then gallery image.
24:22 The plots.
24:23 The plots, like the line plot, scatter plot.
24:24 Those are all pretty cool.
24:25 But you've also got things like audio.
24:27 What's the story with audio?
24:28 This is how you...
24:29 Yeah, you can upload, like, an MP3 or a WAV file directly.
24:33 Okay.
24:34 Maybe for a transcript.
24:35 Or a sentiment analysis or something?
24:37 Exactly.
24:38 Like, whisper.
24:39 Essentially, audio to transcription.
24:41 Or also just synthetic audio, right?
24:44 So, there's, like, Bark.
24:45 And there's all these, like, machine learning demos.
24:47 They go text-to-speech, basically.
24:49 And they're really advanced.
24:51 So, if you wanted to display that, right?
24:52 Like, you ingest text, come out with audio.
24:55 Like, you could use, like, an audio output component.
24:58 And then you get, yeah, you get, like, a...
25:00 You can play the audio directly in the browser.
25:01 It's just like an audio tag in HTML.
25:03 Obviously, it does more from the UI, I'm sure.
25:05 But it's to not just...
25:07 I guess you would just do file if you really wanted to drop it in MP3.
25:10 But if you wanted to generate audio and let people see the results, then this audio thing
25:14 would be the way to go.
25:16 Yeah.
25:16 And then you could also...
25:17 The audio could also be, like, the input, right?
25:19 You could just drag it, drop it, you know.
25:20 If you click on that box, it'll let you upload an audio if you have it.
25:24 And then you could...
25:25 Interesting.
25:25 You could play it, right?
25:26 I'm sure I have some audio.
25:26 Yeah.
25:27 I know that I have some.
25:28 But let's see if I can find something to upload here.
25:31 Here, I'll upload a sponsor.
25:32 Here, this is super short.
25:34 So, I can upload.
25:35 Yeah, look at that.
25:36 And it just becomes a player.
25:37 Excellent.
25:37 You can play it and then you can also edit it as well.
25:40 So, you see that little pencil?
25:41 Uh-huh.
25:41 And you could, like, trim it.
25:43 You could do, like, trim it to make it shorter.
25:45 Okay.
25:45 Yeah.
25:46 Very nice.
25:47 Yeah.
25:47 Yeah.
25:47 Lots of cool components like that that we have.
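A hedged sketch of the audio-in, text-out case mentioned here; the transcribe function is a placeholder rather than a real speech model:

```python
import gradio as gr

def transcribe(audio_path):
    # A real demo would hand audio_path to something like Whisper here.
    return f"(pretend transcript of {audio_path})"

demo = gr.Interface(
    fn=transcribe,
    inputs=gr.Audio(type="filepath", label="Upload or record audio"),
    outputs=gr.Textbox(label="Transcript"),
)
demo.launch()
```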
25:50 Yeah.
25:50 All the standard form stuff, like sliders and drop-downs and, yeah.
25:53 Highlighted text.
25:54 And, yeah, so the standard, like, form stuff.
25:56 But then there's also complex...
25:58 Not complex, but more maybe domain-specific ML stuff.
26:01 So, highlighted text, for example.
26:03 Like, really big and, like, part of speech tagging, like NLP.
26:05 Right?
26:06 So, you can get, like, highlighted.
26:07 It's, like, depending on the tag that you apply to each word in the text, you get, like, different coloring and stuff like that.
26:14 Yeah.
26:14 So, that's for NLP. Then there's, like, Model3D.
26:16 So, there's a lot of ML demos that come out that you can generate Model 3D assets directly.
26:21 So, this lets you display them, as well.
26:24 So, yeah.
26:24 So, everywhere...
26:25 We have everything from the most kind of basic, general, like, web app stuff.
26:30 Domain-specific machine learning...
26:32 Yeah.
26:32 ...components as well.
26:33 Yeah.
26:33 Let's see what else jumps out here.
26:35 We have video as well, which is pretty cool.
26:38 JSON.
26:38 Yeah.
26:39 There's a lot of what you can type in.
26:40 JSON, I guess it probably validates it and auto-formats it, something, rather than just plain text.
26:45 Yeah.
26:46 When you return the JSON, it highlights it for you.
26:48 You can copy it directly as well, so...
26:50 Okay.
26:50 So, when we were talking about the Gradio.interface, it had, well, here's an input and here's an output.
26:57 Floral?
26:58 So, could you have, if I say, there's a sketch pad and a text box?
27:02 Yeah.
27:02 As the input, and then the outputs are, I don't know, three other things?
27:06 Yeah.
27:06 Yeah.
27:06 That's a really good observation.
27:07 Yeah.
27:08 So, you can have more than one input, more than one output, for sure, right?
27:12 So, if you go to the, I think, the time series forecasting demo, I think that one has two inputs, right?
27:16 That's also on the homepage.
27:17 Yeah.
27:17 Also on the homepage, right?
27:19 So, you can pick...
27:20 I see.
27:21 It's like a toy example, like forecasting.
27:22 If I installs, but you could pick...
27:24 Here has two inputs, right?
27:26 Like, the time horizon and the library itself.
27:29 Both of them are dropdowns, right?
27:31 And then when either updates, the plot updates, right?
27:34 So, there's also, like, how you can do plotting in Gradio.
27:36 And then, also, this is interesting.
27:38 This demo is built with the lower-level API.
27:41 You could build it with interface if you wanted to.
27:43 Just as an example, it's built with the lower-level API.
27:47 Yeah.
27:47 So, I guess there's probably a library and time span, two arguments to the function that
27:52 you write.
27:52 And then, just as you interact with these widgets, it just recalls it with whatever the values are.
27:56 Exactly.
27:57 And then, the function itself returns a plot.
27:59 So, in this case, it's a plotly plot, right?
28:01 So, by default, we ship with support for Matplotlib, Plotly, Bokeh, and Altair.
28:07 So, if I were to create, like, a Matplotlib object, do all the stuff to it that I would do in a notebook,
28:13 instead of calling show, I just return it from my function, and then it becomes part of the UI?
28:17 Yep.
28:17 Okay.
28:17 Yeah, that seems pretty straightforward.
28:18 Yeah.
28:19 This one happens to be done with Prophet, a time series library.
28:22 Yeah, you've got integration with a bunch of cool machine learning libraries here as well.
28:27 Yeah, so, the cool thing is, pretty much, if you can write a Python function for it, like, it'll work with Gradio.
28:31 It really doesn't need to be, like, us as the development team don't really need to build that many integrations
28:37 to get anything that you're working with to work with Gradio.
28:39 Pretty much, if you can call a regular Python function to do it, and we have supported output types and stuff like that,
28:45 you can display it with Gradio.
28:47 Nice.
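A small sketch of returning a plot object instead of calling show, as just described; the random-walk "forecast" is made up purely to have something to draw:

```python
import matplotlib.pyplot as plt
import numpy as np
import gradio as gr

def plot_forecast(horizon):
    steps = int(horizon)
    fig, ax = plt.subplots()
    ax.plot(np.arange(steps), np.cumsum(np.random.randn(steps)))
    ax.set_title(f"Toy forecast over {steps} steps")
    return fig  # return the figure; Gradio renders it in a Plot component

demo = gr.Interface(
    fn=plot_forecast,
    inputs=gr.Slider(10, 100, value=30, step=1, label="Horizon"),
    outputs=gr.Plot(),
)
demo.launch()
```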
28:47 Yeah.
28:47 So, yeah, there's a couple of demos, for example, like, connecting to, like, databases and stuff.
28:52 You can connect to S3 if you wanted to, right?
28:54 Like, you're not...
28:55 Oh, interesting.
28:55 I don't know exactly where they are now, but yeah.
28:57 It's got to be one.
28:58 Yeah.
28:59 This looks like it might be one, potentially.
29:00 Yeah.
29:01 I'll click, I'll find some S3 stuff.
29:03 Here's, there's got to be someone here somewhere.
29:04 Yeah, very cool.
29:05 So, one of the things people may be wondering, and the fact that I don't see a pricing up at the top,
29:11 it might be a big hint here.
29:12 What's the business model?
29:14 What's the story with this?
29:15 Is this just straight open source?
29:17 Is it open core?
29:18 What's the story around your project here?
29:20 Gradio is completely open source, and you can host it anywhere, so you're not tied into
29:24 any platform.
29:26 Gradio did get acquired by Hugging Face maybe, like, almost two years ago, right?
29:30 So, Gradio integrates really tightly with the Hugging Face ecosystem, but...
29:35 Okay, I see.
29:36 Those integrations are normally free, right?
29:39 So, for example, you could host Gradio demos on...
29:42 On Hugging Face spaces or something?
29:44 Yeah.
29:44 On Hugging Face spaces, right?
29:46 And then if your demo needs special, like, hardware or stuff like that, like, you could pay
29:50 Hugging Face to provision that for you.
29:52 Sure.
29:52 But you're not paying for the Gradio.
29:54 You could use whatever you want on Hugging Face spaces now, right?
29:57 It doesn't have the Gradio.
29:58 Yeah, yeah.
29:58 So it's freely available.
29:59 So it's completely open source with kind of a Gradio as a service via Hugging Face.
30:05 Hugging Face.
30:06 Hugging Face, yeah.
30:06 Yeah.
30:07 On their spaces.
30:08 Say that fast a bunch of times.
30:10 So, yeah, really cool.
30:11 One of the things people might not know if they haven't heard of Gradio before is you go to
30:15 your GitHub repo for it.
30:17 Almost 21,000 GitHub stars.
30:19 A serious bit of attention that it's gotten.
30:22 We've seen a lot of growth in the last, yeah, about year and a half, like, ever since the
30:26 Hugging Face acquisition.
30:27 That's really helped us put the library in front of a new audience.
30:31 Yeah.
30:31 The recent advances in ML, like, a lot of people want to build demos for ML models now, right?
30:36 So I think that's definitely helping Gradio as well.
30:38 Yeah.
30:38 Trying to give people a sense of scale.
30:43 This is, like, a third of FastAPI, a third of Flask.
30:43 It's, like, that's a lot of people using this.
30:45 So the reason I'm bringing that up is it's not some brand new thing that you came up with
30:49 that maybe people could try, but it's got a lot of users, right?
30:52 Month to month, we're seeing, like, hundreds of thousands of people building these Gradio
30:57 demos repeatedly.
30:58 So, yeah, definitely a lot of growth.
30:59 And, yeah, Gradio is about five years old now, so it's not brand new.
31:01 Awesome.
31:02 Congrats.
31:02 That's really cool.
31:03 That's nothing new.
31:04 Yeah, 3.9, almost 4 million monthly downloads.
31:07 That's a decent chunk.
31:08 This portion of Talk Python to Me is brought to you by Sentry.
31:12 You know that Sentry captures the errors that would otherwise go unnoticed.
31:16 Of course, they have incredible support for basically any Python framework.
31:21 They have direct integrations with Flask, Django, FastAPI, and even things like AWS Lambda and
31:28 Celery.
31:28 But did you know they also have native integrations with mobile app frameworks?
31:32 Whether you're building an Android or iOS app, or both, you can gain complete visibility
31:38 into your application's correctness, both on the mobile side and server side.
31:43 We just completely rewrote Talk Python's mobile apps for taking our courses, and we massively
31:49 benefited from having Sentry integration right from the start.
31:52 We used Flutter for our native mobile framework, and with Sentry, it was literally just two lines
31:58 of code to start capturing errors as soon as they happen.
32:01 Of course, we don't love errors, but we do love making our users happy.
32:05 Solving problems as soon as possible with Sentry on the mobile Flutter code and the Python server
32:11 side code together made understanding error reports a breeze.
32:15 So whether you're building Python server side apps, or mobile apps, or both, give Sentry a
32:21 try to get a complete view of your app's correctness.
32:25 Thank you to Sentry for sponsoring the show and helping us ship more reliable mobile apps
32:29 to all of you.
32:30 What do we think about, I don't want to do an imaging one.
32:34 You can do other demos you've got here, you've got time series forecasting, we talked about that
32:38 as the multiple inputs.
32:40 Yeah.
32:40 XGBoost with explainability.
32:43 Want to tell us about this a little bit?
32:44 Yeah.
32:45 This one also, I think, has like, this one has 12 inputs, right?
32:48 And the idea is it's one of these like kind of Kaggle-esque things where you like predict
32:52 income based on a slew of predictors, right?
32:55 And then the cool thing is that this isn't explicitly built into Gradio,
33:00 but you could hook into SHAP really easily, right?
33:03 So if you hit explain, it'll try to explain the prediction of the model and display it in a
33:08 plot for you.
33:09 Wow.
33:09 Okay.
33:09 Right.
33:10 So for those of you who don't know, SHAP is like this algorithm for explaining the predictions
33:14 of any machine learning model.
33:15 I see.
33:16 It's hooking into XGBoost, right?
33:18 But there isn't like an explicit, in this demo, there isn't like an explicit Gradio feature
33:21 that's being used.
33:22 It's just calling SHAP directly from this Python function and then displaying the results as
33:27 a plot.
33:27 The thing it does is it's got a bunch of different sliders and dropdowns.
33:30 It says given an age, your education level, years of school, whether you're married or not,
33:36 all those male, female, how many hours a week you work, and then it predicts, what is
33:42 this?
33:42 Yeah.
33:42 Predicts your yearly income.
33:44 And then the thing you're talking about is with that model, you can ask it, okay, well,
33:48 of all these different things we could put into it, what features, what aspects of that
33:52 are more important and what are less important, right?
33:54 Right.
33:55 Okay.
33:55 The use case is like, let's say like you are a data scientist that was charged with building
34:00 this kind of model.
34:01 The first question after someone seeing the prediction is someone might have is like, why?
34:05 Like, why is it predicting this?
34:06 Right.
34:07 And then you ideally want to be able to explain exactly what element of the predictors contributed
34:12 to the prediction the most.
34:13 And there's a lot of tools that you can use for that.
34:15 Right.
34:19 SHAP is, I think, the most well-known to my understanding.
34:19 And then, yeah.
34:20 And then you can just with Gradio really easily just call that algorithm and then just display
34:24 it in a plot.
34:25 Right.
34:25 And then in this example, like one of the inputs is like the capital gain.
34:29 So like how much you make on your investments.
34:30 Right.
34:31 So, and I think in this particular case, like the capital gain is like really big.
34:34 Right.
34:34 So obviously because the capital gain is so big in this particular case, we predict that
34:38 the income will be really big.
34:40 Right.
34:40 Because capital gain is pretty much synonymous with income really.
34:43 So yeah.
34:44 Yeah.
34:44 So that's what this is showing.
34:45 Yeah.
34:46 And I suspect this is important for a lot of reasons.
34:48 If you're building this for your company or for some kind of project, people want to know,
34:53 well, we have all these different inputs.
34:55 what ones actually matter to making a prediction.
34:57 Maybe only the top three are the ones that really matter.
35:00 And you can throw out things like marital status.
35:02 Like it actually doesn't make much of a difference.
35:04 Right.
35:04 Or if you're a policy person and you're this model actually matches real data, you could
35:09 say, well, we're trying to improve the policy for a certain group of people.
35:13 We could focus on any of these aspects, which one or two would make the biggest return for
35:18 our effort to make a change.
35:19 Right.
35:20 There's a lot of cool stuff that comes out of this, I think.
35:22 And then you as a developer, I think it's like the data scientist.
35:27 Right.
35:27 It's really easy to make this kind of thing.
35:29 Right.
35:29 This is like a GR.interface, I believe.
35:31 Right.
35:31 So this is just one line of code to build this.
35:36 Oh, actually, it's not gr.Interface.
35:40 It's the other API that we can talk about now, which is called Blocks.
35:40 Yeah.
35:40 Tell us about that.
35:41 It's it's cool.
35:42 Yeah.
35:42 The way that it works is that you declaratively define your UI.
35:46 Right.
35:46 So it's like this input is going to go in this column and say, well, this input is like
35:50 a dropdown, for example.
35:51 Right.
35:52 So in this example, there's lots of dropdown components, lots of sliders for the age and
35:56 stuff like that.
35:57 And then you define all these components and then you can define the reactivity separately.
36:02 So if you were to scroll down, there should be like a button dot click.
36:05 Right.
36:05 So whenever the predict button gets clicked.
36:07 Yeah.
36:08 So you call this function with these inputs and then return this one thing.
36:12 Yeah.
36:12 So that's like the model.
36:14 Right.
36:14 Like right now, it looks like a lot of code just because there's a lot of like inputs and
36:17 stuff.
36:18 But at the end of the day, it's like pretty simple.
36:19 You're just defining a UI and then you define like what happens when and then Gradio handles
36:24 the rest.
36:24 Yeah.
36:24 It's pretty straightforward.
36:25 So people listen.
36:26 Basically, the UI for the more advanced version is you use context managers, create width blocks.
36:32 So then you'd say, here's something that goes across and with another row, put some columns
36:37 in there with another row.
36:38 And then that's how you build it up.
36:40 So it's pretty straightforward.
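A minimal gr.Blocks sketch of the layout-plus-events idea described above; the predict function and the choice of two sliders are illustrative stand-ins for the income demo's dozen inputs:

```python
import gradio as gr

def predict(age, hours_per_week):
    # Placeholder for a real model call.
    return f"Prediction for age {age}, working {hours_per_week} hours/week"

with gr.Blocks() as demo:
    with gr.Row():
        with gr.Column():
            age = gr.Slider(18, 90, value=35, step=1, label="Age")
            hours = gr.Slider(1, 80, value=40, step=1, label="Hours per week")
            btn = gr.Button("Predict")
        with gr.Column():
            out = gr.Textbox(label="Prediction")
    # Reactivity is declared separately: when the button is clicked,
    # run predict with these inputs and put the result in `out`.
    btn.click(fn=predict, inputs=[age, hours], outputs=out)

demo.launch()
```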
36:41 What it reminds me of a little bit is it reminds me a little of Flutter.
36:44 Are you familiar with how Flutter looks?
36:47 No.
36:47 And the code it's, I don't know if I can find a quick example.
36:50 How about an example Flutter?
36:52 Come on.
36:52 It's really sort of hierarchical.
36:54 So that the thing that I think is interesting is the code hierarchy matches the sort of UI hierarchy,
37:01 right?
37:02 So it's a code driven UI where as it gets more indented, that talks about, okay, well,
37:06 that's a row or then you pop off and stuff.
37:09 And they've got, it's real similar in that sense that all right there in the code, there's
37:13 not a designer or a markup language or something like that.
37:16 But yeah, pretty cool.
37:17 Yeah.
37:17 So like the UI is all declarative, right?
37:20 So yeah, like you said, you just say this is this row.
37:22 And then, yeah, there's ways to control like the relative width of each of these columns,
37:26 for example.
37:27 So if you wanted that, you could.
37:29 And then.
37:29 Another thing I saw, I can't remember which demo it is, so I'm not going to pull it up,
37:32 but I saw that there's a way to pass like CSS and styling over as well.
37:37 Is that right?
37:38 That was maybe the very first thing.
37:39 There's a Python API for like defining the theme, right?
37:43 So like every UI element has certain CSS variables and you can control their value via like the
37:48 values of this Python class that you then pass to your Gradio instance.
37:51 But at the same time, there's like a top level CSS parameter that you could do whatever you
37:56 want in that case.
37:56 You don't have to use like the Python API.
37:58 If you don't want to, if there's something different that you want to change, you can change
38:02 the CSS variables.
38:04 You're saying I could do something in Python.
38:06 I could say, well, the style is button border width is three and the color of the border
38:12 is blue.
38:12 But if I just want to have arbitrary CSS, I can just go, here's your arbitrary CSS string.
38:17 Go with that.
38:17 You could pass it a file and then it'll, we'll read that file and use that CSS in the demo.
38:22 Yeah.
38:22 And then with that, you can also add IDs to each of the UI elements and then you could
38:27 write your CSS to target the IDs that you add, right?
38:30 So let's say you only wanted to modify like one button.
38:32 You could do it that way.
38:33 Right.
38:33 You just want to control one of the plots or something.
38:35 I guess if you're writing arbitrary code to return things like matplotlib plots, do things
38:40 like the XKCD matplotlib.
38:43 Oh yeah, for sure.
38:44 Right?
38:44 Like you could control.
38:45 Yeah.
38:45 Joking, but it's also awesome.
38:47 There's an XKCD Gradio theme, right?
38:50 So let me show you this.
38:52 There is?
38:52 Yes.
38:53 Okay.
38:53 That takes it to another level.
38:55 That's pretty excellent.
38:56 That's the cool thing about the theming is that it's shareable, right?
38:59 So someone built this XKCD theme.
39:02 Wow.
39:02 It's amazing.
39:03 Anyone can use this in their Gradio demo, right?
39:05 All you have to do is pass theme equals gstaff slash XKCD.
39:09 And then your demo will look like the XKCD theme.
39:12 It's so good.
39:13 I love it.
39:13 Oh my gosh.
39:14 This is.
39:15 Yeah.
39:15 Completely community driven.
39:18 Yeah.
39:18 Well done to whoever did this one.
39:19 That's really cool.
39:20 It goes beyond the plot, right?
39:21 You can for sure return a plot in the XKCD theme, but you could also have the whole demo in the
39:26 XKCD theme.
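Sketching the theming hooks mentioned here: the community theme name gstaff/xkcd is the one referenced above, while the CSS snippet and elem_id are invented to show targeting a single element:

```python
import gradio as gr

css = "#predict-btn { border-width: 3px; border-color: blue; }"

with gr.Blocks(theme="gstaff/xkcd", css=css) as demo:
    name = gr.Textbox(label="Name")
    btn = gr.Button("Predict", elem_id="predict-btn")  # targeted by the CSS above
    out = gr.Textbox(label="Output")
    btn.click(lambda n: f"Hello, {n}!", name, out)

demo.launch()
```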
39:27 I often pull this example up, this theme, this XKCD thing for Matplotlib because it's fun.
39:32 But also I think there's genuine value in putting together something that looks like this.
39:37 Because if you show this to decision makers, bosses, managers, types, and they see that they look
39:44 at something that looks like it's working and they're like, oh, well, we're done then.
39:47 No, we have two months more work.
39:49 We're not done.
39:50 But I click the button and it's giving me answers.
39:52 We're really not done.
39:53 But it's not scalable.
39:55 It's not this.
39:55 It's not that, right?
39:56 It's only an estimate.
39:57 Just an XKCD front end on it.
39:59 You're like, look, you see it's not done.
40:00 It's just, it's got squiggly lines.
40:02 It's hand-drawn.
40:03 It's clearly a prototype.
40:04 Like, oh yeah, okay.
40:05 But I could see where this is going.
40:06 I think actually, psychologically, it may have a big impact, even though it's silly.
40:11 Yeah, it's a super interesting point.
40:12 I never thought about it that way.
40:13 But yeah, I mean, I think it definitely gives it a little bit more sketch vibe.
40:16 Like this is like in the...
40:18 Yeah, like a wireframe vibe.
40:19 Yeah.
40:20 Yeah, wireframe, like straight from the workshop.
40:22 Exactly.
40:22 Yeah.
40:23 That's what I was thinking because I presented projects to various stakeholders when I used
40:27 to do that kind of stuff.
40:27 And they'd be like, oh, well, that looks like it's done.
40:30 No, we're going to need some time because it's really not done.
40:32 I know it looks good, but it's not.
40:34 Yeah, yeah, yeah.
40:35 You made it look too good, basically.
40:36 Yeah, exactly.
40:37 That was a serious mistake.
40:39 Okay.
40:40 So we got a little bit more time to talk about a couple of things.
40:43 I want to talk about how people actually share this.
40:45 Like we're still talking about a thing I pip install locally and it has a UI, but what do
40:49 I do?
40:50 I still don't want to set up a Linux machine and Nginx and domains and all that.
40:54 So what are the options?
40:55 But before we get to that, tell us a bit about the internals.
40:58 Like when you guys work on Gradio and I pip install it, like what's running?
41:02 What is this project?
41:03 The backend is a FastAPI server.
41:06 So what Gradio will do, it'll spin up a server for you.
41:09 And then that server will serve like the front end assets.
41:12 The front end is built in Svelte.
41:14 Interesting.
41:14 Okay.
41:15 Basically, whenever these reactivity events happen, what will happen is that
41:20 the front end will just call the backend API and then run your
41:24 function and then make sure that all the necessary processing that needs to happen to get your
41:28 data ready happens.
41:29 But at the end of the day, it's kind of a simple model in that sense, right?
41:33 Obviously there's some more complications with like the streaming, for example.
41:37 That's like a whole kind of different code path almost.
41:40 But at the end of the day, it's like a REST server that's talking with a JavaScript client.
41:46 So it's like the standard developer tools story for Python people: it's not all Python. Some of it
41:53 is Python, but you probably end up writing a lot of JavaScript or TypeScript to build this
41:57 tool for other people.
41:58 Right.
41:58 So they don't have to.
41:59 I'm not a huge Svelte expert.
42:01 Thankfully, some of the people I work with are really good at and really
42:06 knowledgeable about that stuff.
42:06 And yeah, like the front end code, I think it's, I think there's more Svelte code than
42:10 Python code.
42:11 Actually, I'm curious to put the.
42:12 What's the code breakdown?
42:13 Let's break it down.
42:14 Yeah.
42:15 65% Python, 16% Svelte, 13% TypeScript.
42:19 Well, so I think the reason might be that we have a lot of like demos and stuff.
42:23 Yeah.
42:23 I think there might be some stuff like that.
42:24 Yeah.
42:25 The demos are in there.
42:26 Yeah.
42:26 There's a lot of demos.
42:27 You know what feature GitHub needs?
42:29 As you navigate the source tree, right?
42:31 When I click on like client or demo or Gradio, it would be awesome if those stats would also
42:36 be repeated.
42:36 But just for that section of code, wouldn't that be great?
42:38 Like how much of the demos are Python?
42:40 I don't know.
42:41 Maybe I just want to know that, but that'd be cool.
42:42 Anyway.
42:43 Yeah.
42:43 So I suspect that is, there's probably a lot of code and you've got a lot of notebooks
42:47 and stuff in there too.
42:48 That probably puts a big change on it there.
42:50 A lot of the code is actually, yeah.
42:52 Yeah.
42:52 JavaScript Svelte.
42:53 Right.
42:54 You take one for the team.
42:55 So the rest of us don't have to write JavaScript.
42:57 Exactly.
42:58 Yeah.
42:59 Cool.
42:59 Interesting.
42:59 Very nice.
43:00 And it says it can be embedded in a notebook, which is interesting, or it can be presented
43:06 as a webpage.
43:07 Tell us about this part.
43:08 If you were to run this on any notebook, like Google Colab, for example, I think this might
43:13 be an example.
43:13 Right.
43:13 So the way that Gradio works, right?
43:16 Once you create your Gradio Interface or Blocks and call launch, that's how you start
43:21 up the server.
43:22 That's when you kick off the whole process of serving this.
43:24 That'll create the server locally.
43:25 Right.
43:26 So no data is like leaving your machine.
43:27 Right.
43:28 And then if you call launch in a Jupyter notebook, Colab, or SageMaker,
43:32 the UI will display right in the notebook.
43:34 Right.
43:34 Right.
43:35 And if you're running locally as well, you can go to the localhost URL and
43:40 reach the server that way.
43:42 And then the really cool thing is that, yeah, there you go.
43:44 There's a UI.
43:45 That's what we mean when we say it's embedded locally.
43:47 It feels good.
43:48 A little bit like the ipywidgets sort of thing.
43:51 It's similar to that, right?
43:53 Like it'll display right underneath the cell.
43:54 Right.
43:55 And then if you run the cell again, you'll get a new server, basically.
43:58 Right.
43:59 So you can, you can iteratively build these things.
44:01 Right.
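As a concrete sketch of what such a notebook cell might look like (the greet function here is just a placeholder):

```python
# Run this in a Jupyter/Colab/SageMaker cell: the UI renders inline right
# below the cell, served by a local server, so no data leaves your machine.
import gradio as gr

def greet(name: str) -> str:
    return f"Hello, {name}!"

demo = gr.Interface(fn=greet, inputs="text", outputs="text")
demo.launch()  # re-running the cell spins up a fresh server, so you can iterate
```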
44:01 Is it running FastAPI somewhere in the background when you do that?
44:03 Yeah.
44:04 Crazy.
44:04 Yeah.
44:05 It's pretty, pretty nuts.
44:06 Turtles all the way down.
44:07 Yeah.
44:07 And then that's what we mean when we say they can be embedded in a notebook.
44:10 And then you could also like host it anywhere.
44:12 Right.
44:12 So you could, if your machine is exposed to the internet, right?
44:16 You have a fixed IP address.
44:17 You could just give people that URL.
44:19 You could also share it another way.
44:21 Right.
44:21 So every Gradio interface has a launch method.
44:24 That's what kicks off the server.
44:25 And that takes a parameter called share.
44:27 Right.
44:27 So if you set share equals true, that'll create a temporary link that lasts 72 hours.
44:32 So you could share with someone.
44:33 Right.
44:33 So you could host it right on your laptop if you wanted to, as long as your
44:36 laptop...
44:37 As long as you leave it on.
44:38 Yeah.
44:38 Yeah.
44:38 As long as you leave it on, it's like not sleeping and stuff.
44:41 People can access it. If you go back to that Colab notebook, I think we might be
44:44 able to demo that.
44:45 Oh, interesting.
44:46 If I just go here and say share equals true and rerun it.
44:50 Equals true.
44:50 Let's see what we're getting.
44:51 So you see gradio.live.
44:53 Right.
44:53 So if you click that.
44:54 Totally works.
44:55 Yeah.
44:55 You can send that to whoever you want and they can just use this.
44:58 Right.
44:58 So that, yeah.
44:59 No install needed.
45:01 Right.
45:01 If you're sharing this with your collaborator, your PM, your manager, your friend or whatever,
45:06 you could just give them this link.
45:08 Right.
45:08 So you don't have to do anything.
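If you want to try that share flow yourself, it's one keyword argument on launch; a minimal sketch, with a placeholder echo function:

```python
import gradio as gr

def echo(text: str) -> str:
    return text

demo = gr.Interface(fn=echo, inputs="text", outputs="text")

# share=True tunnels your local server and prints a temporary *.gradio.live
# URL (it expires after 72 hours and only works while your machine is on
# and awake), so collaborators can use the demo with no install.
demo.launch(share=True)
```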
45:10 I guess it's probably worth emphasizing:
45:12 you should never try to host production stuff
45:15 over this, it sounds like, because it's only for a limited time and it's just
45:20 a GUID-looking URL.
45:20 But so often you'll be in meetings over zoom or something else.
45:25 And they'll be like, Hey, what have you done?
45:26 Can you show me?
45:27 And then you're like, all right, well, let me do screen sharing.
45:28 Oh, I don't have, I'm not a host.
45:30 Can you make me a host now?
45:31 Can I, you're sharing, can I share?
45:33 And then finally you get it up and it's blocky and they're like, Oh, zoom in.
45:36 It's too small.
45:37 I'm on my phone.
45:38 This way you just take that and you give it to them in the meeting.
45:40 Right.
45:41 And they, they have a full fidelity thing they can play with, which is awesome.
45:44 They have the demo itself that's running on your machine.
45:46 Right.
45:46 So they don't have to.
45:47 Yeah.
45:48 Yeah.
45:48 Yeah.
45:48 Like no, you don't have to install anything.
45:50 Right.
45:50 Just point your browser at this URL.
45:53 And yeah, it'll work for that quick demo.
45:54 Yeah, exactly.
45:55 Use case as well.
45:56 Yeah.
45:56 Definitely don't use it for production.
45:58 Yeah.
45:58 If you wanted to use it in production, I think the easiest, the absolute easiest way is to use
46:03 Hugging Face Spaces.
46:04 So if you go to Hugging Face Spaces, it's basically drag and drop, right?
46:07 All you have to do is drag your Gradio script into their UI and that'll
46:12 upload it.
46:13 Gradio will already be installed, and then the server will start.
46:17 And then you have, you have your permanent hosting.
46:19 And it also has a Git interface, right?
46:21 So if your demo has several files in a directory,
46:24 some assets, some images that you want to upload as well,
46:27 you could just git push to your Hugging Face Space and do it
46:32 that way as well.
46:33 Okay.
46:33 So you add it as an origin or something and then just push to it.
46:36 Yeah.
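As a rough sketch of that workflow: a minimal Gradio Space is just a repo whose app.py looks something like this. The username and space name below are placeholders, and the git commands in the comments are the usual clone/commit/push.

```python
# app.py -- the default entry point a Gradio Space runs.
import gradio as gr

def greet(name: str) -> str:
    return f"Hello, {name}!"

demo = gr.Interface(fn=greet, inputs="text", outputs="text")
demo.launch()

# Typical git flow (run in a terminal; "your-user/talk-python-demo" is a placeholder):
#   git clone https://huggingface.co/spaces/your-user/talk-python-demo
#   cp app.py talk-python-demo/ && cd talk-python-demo
#   git add app.py && git commit -m "Add demo" && git push
```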
46:36 I could try to show it.
46:37 I don't know if you can have a, can I share my screen?
46:40 I wonder.
46:40 Yeah, sure.
46:41 Click at the bottom and share.
46:42 It's easier if you share an app.
46:44 Yeah.
46:45 If I go to my hugging face account and go here and then new space, and then this is
46:50 talk Python demo, then MIT.
46:54 Oh yeah.
46:55 We can do whatever we want with this.
46:57 Right.
46:57 So you could, right.
46:57 You could host Streamlit, Gradio, Docker.
47:00 Yeah.
47:01 Anything you want.
47:02 Right.
47:02 So for free, very generous free tier, you have two CPUs.
47:06 That is a generous free tier.
47:07 Two CPU, 16 gigs.
47:09 Yeah, that is good.
47:09 The only caveat is that this will go to sleep after 72 hours if no one uses it.
47:14 Right.
47:14 So, but you could also upgrade.
47:16 You have a machine learning model, you need a GPU, you pay for the GPU per hour.
47:19 And then yeah, you can set public or private and then you just create space.
47:22 And then yeah, this is how the Git interface works.
47:25 Right.
47:26 So you could just git clone this and then add your code and then just git push.
47:31 Or you could just copy this.
47:32 Yeah, copy the code and just paste it into a file.
47:35 Yeah.
47:35 Add file, create new file.
47:36 It does feel very Git-like, right?
47:38 It even has a similar look and feel to when you go to GitHub and say add new file.
47:42 Yeah.
47:43 True.
47:43 A perfect commit message.
47:45 No, no comment.
47:46 Just blank.
47:47 I love it.
47:48 Yeah.
47:49 And then.
47:49 So over here, they run in Docker containers or Kubernetes or something like that.
47:53 In a Docker container.
47:54 So what this is doing is it has a pre-configured Gradio Docker image,
47:59 right?
47:59 That comes with...
48:01 There we go, it already built.
48:01 It comes with Gradio and a bunch of standard data science libraries,
48:06 and then it adds your code to the container
48:08 and starts the container.
48:11 Right.
48:11 But you could also just use your own Docker file if you wanted to.
48:14 Okay.
48:15 You could host whatever you want.
48:16 Right.
48:16 So here.
48:17 Yeah.
48:17 You've got to put your name in and press enter.
48:18 Wow.
48:19 Look at that.
48:19 Hello, Michael.
48:20 Yeah.
48:20 In the time that we've been talking about this, you've created a space and created a
48:24 new UI and hosted it.
48:25 That's pretty ridiculous.
48:27 Yeah.
48:27 And then you could just share this.
48:29 Share the URL or whatever.
48:30 Yeah.
48:31 Yeah.
48:31 You can just share the URL there.
48:31 And then they have a machine learning app that
48:36 they can share with anyone, and it'll stay up.
48:38 It's just Git, right?
48:39 So if you have Git locally, if you know how to use Git, you can very seamlessly push
48:44 to the Hugging Face platform.
48:45 There's no special magic.
48:46 What if I'm not a hugger?
48:47 What if I, for some reason, don't want to use Hugging Face? Can I host this behind Nginx
48:52 or somewhere, if I like infrastructure, or I'd like to do my infrastructure as a service?
48:57 Just take that app.py file and run
48:58 python app.py from your cloud.
49:01 And then just make sure the port is accessible from the internet.
49:05 And then you just give that to everyone.
49:06 Or front that with Nginx and put some...
49:08 Yeah, exactly.
49:09 Let's Encrypt on it, and then just point it over to that URL and let it go.
49:12 It'd be pretty straightforward.
49:13 You could host it wherever you want.
49:14 It's all open source tech under the hood, right?
49:16 It's just built on FastAPI and some Python libraries, right?
49:19 There's no, there's no lock in anywhere, right?
49:21 Yeah, cool.
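For that self-hosting route, a rough sketch of the launch call you'd put at the bottom of app.py: server_name and server_port are standard launch parameters, and the Nginx/Let's Encrypt piece is just the usual reverse proxy you'd put in front of it.

```python
import gradio as gr

def echo(text: str) -> str:
    return text

demo = gr.Interface(fn=echo, inputs="text", outputs="text")

# Bind to all interfaces so the box's public IP, or an Nginx reverse proxy
# with Let's Encrypt in front of it, can reach the app on port 7860.
demo.launch(server_name="0.0.0.0", server_port=7860)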
49:22 So it's probably running Uvicorn, I would guess, as the server, which is production-ish.
49:28 I guess if you did really large scale, you might want to do Gunicorn with the Uvicorn
49:33 workers rather than just Uvicorn itself, you know, for the failover and whatnot.
49:37 But that sounds like, if these words sound familiar to you, it should sound really familiar.
49:41 If they don't, then don't worry about it.
49:43 Exactly.
49:43 That's the standard Python web infrastructure stack type of stuff.
49:47 Right.
49:47 And in that model, it's completely free, right?
49:49 It's open source.
49:50 I can do, I can just run it there, right?
49:51 Yeah.
49:51 Just run it wherever you want.
49:53 Very nice.
49:53 Well, Freddie, let's wrap up our conversation, get a little short on time with just where
49:58 things are going.
49:58 We've talked about where we are and where it came from; where are we going?
50:01 Thanks for that.
50:01 So where we are: I think we're trying to get Gradio onto as many platforms as
50:08 possible, right?
50:09 And into as many kinds of deployment modes as possible.
50:12 So one of the cool projects that we're working on is Gradio Wasm, right?
50:16 So like running Gradio entirely in the browser.
50:18 Wow.
50:19 Okay.
50:19 So yeah, so that's, it's not ready, it's not released yet, but it's something that we're
50:24 actively working on, right?
50:25 So you can, yeah, so if you want to just build your machine learning demo, running everything
50:30 directly in the browser, right?
50:31 There's like the ML for the web space is growing a lot.
50:34 It's advancing really quickly.
50:35 Like we're getting ready.
50:36 We're getting ready for that.
50:38 So what's that look like in terms of foundations?
50:40 Is that Iodide?
50:42 Is that PyScript?
50:43 Is that something else?
50:44 It's using Pyodide right now.
50:46 Okay.
50:46 Yeah.
50:47 So yeah, that's how.
50:49 Yeah.
50:49 That's a pretty good choice, because one of the selling points of Pyodide is not just that
50:53 it has Python in the browser, but that it has a bunch of the machine
50:57 learning libraries either available or compiled over to WebAssembly.
51:02 And so you can actually do machine learning stuff, not just like, hi, my name is plus name.
51:07 You know what I mean?
51:07 That's one of the cool projects we're working on this year.
51:09 The other cool project that we're working on is, yeah, like the custom components, right?
51:13 So let's say that you wanted to build your own custom machine learning demo, your own custom
51:17 web app, right?
51:18 But you need a component that we don't have.
51:20 We're giving you the API to build that component locally and just hook it into the app without
51:24 having to merge anything into Gradio upstream.
51:27 We're working on that actively.
51:28 So that'll be really exciting.
51:30 And then, yeah, really excited just because it'll enable like a lot of people like in the
51:35 creative community to collaborate with each other and build like really impressive stuff.
51:38 Yeah.
51:39 Kind of like on their own, right?
51:40 They don't need the core development team necessarily.
51:43 Sure.
51:44 Like pytest plugins rather than trying to change pytest.
51:46 So that, yeah, really excited about that.
51:48 And then the other cool thing, one thing that we didn't talk about that
51:53 I would want to talk about if we have time, is that all these demos that we've built
51:57 are also available via API, right?
52:00 So if you click on any of these demos, like if you click on that first one, if you scroll
52:03 to the bottom and you see it says use via API, right?
52:06 So this gives you like a little code snippet as to how you can call this demo from Python
52:11 or JavaScript.
52:11 Okay.
52:12 What does that mean, right?
52:13 That means that basically all these ML apps that are available on Hugging Face or just anywhere
52:16 on the internet, like they now become like building blocks that you can use in your
52:20 own workflow, right?
52:21 So, and actually this demo itself, I'm familiar with it.
52:25 It's actually really cool because it's calling two other Gradio demos via API, right?
52:29 So this is an example of someone building their Gradio app by calling other Gradio apps.
52:33 Wow.
52:33 Okay.
52:34 We're creating this ecosystem where...
52:36 It's like Gradio microservices.
52:38 Exactly.
52:38 Right.
52:38 So it's like all these Gradio apps are building blocks that you can then connect together via
52:43 API.
52:43 And that's really cool, right?
52:45 Because like it basically means that machine learning is available.
52:48 Like you don't need to use a GUI to get state-of-the-art machine learning, right?
52:53 You could like use an API and that means that you could put these models like pretty much
52:57 anywhere, right?
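A rough sketch of what calling one of these demos from Python looks like, assuming the gradio_client package; the space name and api_name below are placeholders that depend on the specific demo you're calling.

```python
# pip install gradio_client
from gradio_client import Client

# Point the client at any running Gradio app: a Hugging Face Space
# ("user/space-name" is a placeholder) or a plain URL / share link.
client = Client("user/space-name")

# Call the app's exposed endpoint like a function; the arguments and
# api_name depend on the demo (see its "Use via API" page).
result = client.predict("Hello from Python", api_name="/predict")
print(result)
```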
52:57 So one of the cool things that we launched about a week and a half or two weeks ago
53:01 is that you can deploy a Gradio chatbot to Discord with just one line of code,
53:06 right?
53:06 So let's say you have a Gradio app that talks with OpenAI, like GPT-3, or Llama,
53:13 or any of these open source LLMs. If you can build a Gradio app for it, you can
53:18 seamlessly hook it into your Discord server, right?
53:20 So that's all built via this like API functionality, right?
53:24 So this is something I'm personally super excited about.
53:27 We want to push this further, because Gradio historically
53:32 has been built for the UI, but it can also be used to get these machine learning models into
53:36 more places.
53:37 One of the things that I'm really excited about in the coming years is making this a little
53:40 bit more visible.
53:42 Yeah.
53:42 You could integrate some really cool LLMs and other types of chat into your Discord.
53:48 I imagine you could probably do it with a Slack as well.
53:50 And if somebody in your company asks, how do I do whatever you could think of, it could go, Hey,
53:55 I'm the private GPT.
53:58 I've already ingested all of our docs.
53:59 You want me to take a shot at answering that?
54:01 Like, sure.
54:02 Why not?
54:02 That's one of the pet projects I want to do is just do that for the Gradio Discord,
54:06 right?
54:07 So there's a Gradio Discord where you can chat with the Gradio community, and
54:10 people ask questions in it, but it'd be really cool if we had a Gradio chatbot that
54:14 knew a lot about Gradio that you could just ask.
54:16 Exactly.
54:17 Chatbot slash Gradio chat.
54:19 Like, how do I do whatever; how do I display a plot?
54:22 Right.
54:23 And then it'll tell you: just return a matplotlib figure from your function.
54:26 People could think, well, why don't I just use ChatGPT or something?
54:28 But these are the things you teach it like deeply about, you give it all the docs and you
54:32 say, study this.
54:34 And then I want to ask you about it.
54:35 Right.
54:36 And a lot of times the docs and other things go beyond the token limit that the standard
54:40 models can take.
54:41 Like I've tried to get ChatGPT to tell me about transcripts on Talk Python and it can't even
54:46 ingest one transcript before it runs out of space.
54:49 Like I can't quite load all that.
54:51 Well, that's, I wanted to ask you about all of them.
54:52 You can't even do one.
54:53 So this is not working for me.
54:55 Yeah.
54:55 So you could fine tune like an open source LLM and then host it wherever you want.
54:58 Right.
54:58 So.
54:58 Yes, exactly.
54:59 More control.
55:00 Yeah.
55:00 That's cool.
55:00 So you could teach all about Gradio.
55:02 Real quick question.
55:03 Mr.
55:03 Magnetic in the audience asks, what about a hugging face desktop app instead of the browser app?
55:07 Yeah.
55:08 So that's something there's an open issue for.
55:10 It's something that we've been kicking around as well.
55:13 It's just like, how do we get like a Gradio desktop app as well?
55:16 So yes, stay tuned.
55:18 I think let me try to find that issue and then comment on the YouTube.
55:21 But yeah, I would love your thoughts on that.
55:22 If anyone has thoughts on that.
55:24 But yeah, we, it's something we're thinking about.
55:26 It's not, I don't think it'll happen maybe in the next like month or two, but maybe before
55:29 the end of the year or next year, it could happen.
55:31 Excellent.
55:31 All right.
55:32 Well, I think that pretty well covers it.
55:35 It's a super exciting project.
55:36 So good luck with it.
55:37 I mean, already you've had a lot of luck with it.
55:39 So you don't need my wishes, but further good luck on that.
55:42 And yeah, before we get out of here, let me ask you a final question here.
55:46 I always like to ask the guests about some cool PyPI project they've run across.
55:50 Something that's been really awesome,
55:51 maybe not super popular, but has made a difference, or you thought, wow,
55:54 how did I not know about this?
55:56 Any come to mind for you?
55:57 Python project?
55:57 Yeah.
55:58 Something I can pip install like FastAPI, but not FastAPI.
56:01 Cause everyone knows that.
56:02 I think when I was just starting out, I was a really big noob and I always
56:07 ran into environment issues.
56:09 And then a friend of mine showed me pipdeptree.
56:12 It shows you exactly why things get installed.
56:15 And that's a good one.
56:16 Yeah.
56:16 I think it's really, I think it's really magical, honestly.
56:18 Yeah.
56:19 I think it's really helpful just to figure things out, especially when someone files
56:23 an issue and we don't know what's wrong with their environment.
56:26 Sometimes I'll wonder, where did this thing even come from?
56:28 And then just use pipdeptree.
56:30 I think that's, it's really cool.
56:31 And it's like really simple.
56:32 But yeah, I think it definitely has saved me a couple hours of time.
56:35 So.
56:35 It's cool.
56:36 I've used it for my own stuff.
56:37 I hadn't thought about using it for tech support, but yeah, of course, because then
56:41 people run into problems because their environments are screwed up and they say they have a thing
56:44 or they don't, or they say they have a version of a thing, but they don't.
56:47 And with this, you can just say, run this one command, and it'll give you a really
56:50 cool view: they have all these things installed, and this is installed because it's required by that.
56:54 Yeah, exactly.
56:55 It's really nice.
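If you want to try it, a quick sketch; this shells out from Python just to keep it in one snippet, though normally you'd run pipdeptree straight from a terminal.

```python
# pip install pipdeptree
import subprocess

# Print the dependency tree for one package (gradio here, as an example) to
# see which packages it pulled in and why each one is installed.
subprocess.run(["pipdeptree", "--packages", "gradio"], check=True)
```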
56:56 Yeah.
56:56 Cool.
56:57 Excellent recommendation.
56:58 All right.
56:58 Final call to action.
56:59 People want to get started with Gradio.
57:01 What do you tell them?
57:02 pip install gradio, and then go to gradio.app and see the demos there.
57:07 On our website, there's a link to our Discord server.
57:09 So yeah, join the discord and say hi.
57:12 And then yeah, there's lots of people there.
57:13 We're willing to help.
57:14 And then yeah, never hesitate to file an issue.
57:17 What's really cool is seeing the demos that people build.
57:20 People build stuff that frankly pushes the limits of what I thought people could build
57:24 with Gradio.
57:25 And it's really cool seeing that.
57:26 Yeah, that's awesome.
57:27 Don't hesitate to build really cool stuff with Gradio, and
57:31 we're really good about amplifying that.
57:32 So if you have something really cool, just like tag the Gradio Twitter account or reach
57:37 out to us on discord or something.
57:38 We'll amplify it for you.
57:39 Cool.
57:39 Well, excellent project.
57:41 And thank you for being on the show.
57:43 Thank you for having me, Michael.
57:44 I had a lot of fun.
57:44 Yeah, same.
57:46 This has been another episode of Talk Python to Me.
57:49 Thank you to our sponsors.
57:51 Be sure to check out what they're offering.
57:52 It really helps support the show.
57:54 The folks over at JetBrains encourage you to get work done with PyCharm.
57:59 PyCharm Professional understands complex projects across multiple languages and technologies.
58:05 So you can stay productive while you're writing Python code and other code like HTML or SQL.
58:10 Download your free trial at talkpython.fm/done with PyCharm.
58:15 Take some stress out of your life.
58:18 Get notified immediately about errors and performance issues in your web or mobile applications with
58:23 Sentry.
58:23 Just visit talkpython.fm/sentry and get started for free.
58:29 And be sure to use the promo code talkpython, all one word.
58:32 Want to level up your Python?
58:34 We have one of the largest catalogs of Python video courses over at Talk Python.
58:38 Our content ranges from true beginners to deeply advanced topics like memory and async.
58:43 And best of all, there's not a subscription in sight.
58:46 Check it out for yourself at training.talkpython.fm.
58:49 Be sure to subscribe to the show, open your favorite podcast app, and search for Python.
58:53 We should be right at the top.
58:55 You can also find the iTunes feed at /itunes, the Google Play feed at /play,
59:00 and the direct RSS feed at /rss on talkpython.fm.
59:04 We're live streaming most of our recordings these days.
59:07 If you want to be part of the show and have your comments featured on the air,
59:11 be sure to subscribe to our YouTube channel at talkpython.fm/youtube.
59:15 This is your host, Michael Kennedy.
59:17 Thanks so much for listening.
59:18 I really appreciate it.
59:19 Now get out there and write some Python code.