
#414: A Stroll Down Startup Lane Transcript

Recorded on Saturday, Apr 22, 2023.

00:00 At PyCon 2023, there was a section of the Expo floor dedicated to new Python-based companies called Startup Row. I wanted to bring their stories and the experience of talking with these new startups to you. So in this episode, we talk with the founders of these companies for about five to 10 minutes each. This is Talk Python to Me, episode 414 recorded on location at PyCon in Salt Lake City on April 22nd, 2023.

00:27 [Music]

00:40 Welcome to Talk Python to Me, a weekly podcast on Python. This is your host, Michael Kennedy.

00:45 Follow me on Mastodon, where I'm @mkennedy, and follow the podcast using @talkpython, both on. Be careful with impersonating accounts on other instances; there are many. Keep up with the show and listen to over seven years of past episodes at

01:01 We've started streaming most of our episodes live on YouTube. Subscribe to our YouTube channel over at to get notified about upcoming shows and be part of that episode.

01:12 This episode is brought to you by Sentry and us over at Talk Python Training. Please check out what we're both offering during our segments. It really helps support the show. We kick off the interviews with Devin Petersohn from Ponder. Ponder is taking Modin, a distributed compute library for Python, and pushing data science compute directly into the database. Welcome to Talk Python here on Startup Row. Thank you. Thank you. Yeah, it's fantastic to have you here. You know, we met yesterday here at PyCon US, and you were telling me about your project, Ponder, and how it's built upon Modin, the open source project. And as I looked around, I'm like, everyone here has a story.

01:53 And I just thought it'd be so great to have you on the show along with all the others and just kind of tell your story.

01:58 You know, how did you, how did you get here to start up Row at PyCon?

02:01 - Yeah, it's interesting.

02:02 So Modin started as my PhD project and I was doing my PhD at Berkeley and I started in the genomics world, trying to build large-scale data science tools for, you know, the people who actually do the science.

02:16 I'm not a biologist myself.

02:17 I don't know the first thing about biology, honestly.

02:20 - But you got some good programming skills and they can always use that applied to their data, right?

02:23 - Right, right.

02:24 The problem was we were building tools in Spark and it was really hard for these Spark-like APIs to translate natively to the way that they were reasoning about data.

02:31 And like, they're using Python.

02:33 And so, there's a very kind of natural way that scientists think about interacting with data that's not Spark, right?

02:43 It's not as intuitive in Spark, even PySpark, right?

02:43 - So a lot of Python people avoid databases as much as they can, at least SQL and directly talking to them like that.

02:49 - Yeah, totally, because often the way, when you're exploring data, you have a mental model of how you want to interact with the data.

02:57 And that is not SQL often.

02:59 It's just the way that it is.

03:02 So yeah, I had a moment there where a data scientist was like, "I don't want your tool.

03:08 "Can you just make my tool run faster?" And so I was like, "Ah, yes, wait a second.

03:13 "This is actually a real project." And so I started looking into pandas and looking into the world of databases, and in the academic space, nobody had really dug that deep into pandas, because in the academic sense, everybody was like, okay, pandas is just a bad database.

03:31 That's what database people thought at the time.

03:34 So we did a bunch of work, and it kind of turned out that's not the case; they're totally different things.

03:39 And so from there we built Modin and now with Ponder, we're kind of extending that to basically bridge these two worlds where you can use Python, but we're generating SQL on the back end and able to run pandas directly in your database or your data warehouse.

03:54 - Yeah, fantastic.

03:55 So when I first heard about what you're doing at Ponder, I immediately thought of Dask.

04:01 And Dask is another popular open source startup success story, with Matthew Rocklin and Coiled and stuff.

04:09 And I mean, I think they may have outgrown Startup Row, but you know, good for them.

04:13 - Yeah, totally.

04:14 - My first thought was, okay, well, how is this different than Dask?

04:17 But the big difference is Dask is grid computing and yours runs in the database.

04:22 - Yeah, for Ponder definitely.

04:24 Open source Modin also integrates with Dask clusters as well.

04:27 So Dask has Dask Data Frame and that runs on Dask clusters.

04:31 We can also run Modin open source on Dask clusters.

04:34 It's very important to us that whatever infrastructure that you have, you can run pandas on top of that.
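The drop-in idea here is that Modin keeps the pandas API and just changes where the work runs. A minimal sketch of that swap (shown in a comment, since the code below runs the same either way; the sample data is illustrative):

```python
# The canonical Modin usage is to swap one import and change nothing else:
#   import modin.pandas as pd   # distributes work over a Ray or Dask cluster
# This sketch uses plain pandas, since the API is identical by design.
import pandas as pd

df = pd.DataFrame({"orders": [3, 1, 4, 1, 5]})
total = int(df["orders"].sum())  # the same call under Modin or pandas
```

Everything after the import line is unchanged pandas code, which is the whole point.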

04:39 So Ponder is the next level of that where if your data is in the database, it doesn't leave, right?

04:44 We can just execute it directly there.

04:46 And all of your assumptions from Python and Pandas hold true in the database, even though the database actually doesn't like the assumptions that you might have in Pandas, right?

04:55 Yeah.

04:55 We emulate those behaviors.

04:56 And we've done a lot of work to actually make that feel very native.

04:59 So that is a key difference with Ponder and Dask, though, is that your data never leaves the database.

05:05 So you don't have to have a separate Dask cluster to kind of pull the data into and execute on it there.

05:10 You can just run things natively in the database with the data warehouse.

05:13 If you have a large database, you probably already have a powerful database server; why transfer all the data off of that, load it into something else, analyze it, and throw it away, right?

05:22 Just like make it run there.

05:24 Exactly.

05:25 Exactly.

05:26 Yeah.

05:27 So maybe a quick elevator pitch type of thing might be like, you all take pandas and turn it into SQL statements that run on the database, but people get to program in pandas.

05:35 Yes.

05:36 Exactly.

05:37 That's exactly it.

05:38 Yes.

05:39 Native in pandas, like describe, for example: df.describe, super, super common.

05:45 - It seems easy, like it just gives me some summary stats.

05:48 - Yes, exactly.

05:49 That's 300 lines of SQL.

05:51 - No.

05:52 (laughing)

05:53 - Like you wouldn't believe it looking at it though, because it seems so simple, and it is a simple output, right?

05:59 I wanna get some summary statistics for my data.

06:01 But SQL is so declarative, and the language itself doesn't lend itself well to this type of iterative, interactive kind of workflow.

06:09 - Right, and notebooks, remember, are step by step; they have a history, sort of a memory, whereas in SQL, every statement is standalone.

06:17 - Exactly, so all or nothing, basically.

06:19 And you have to do the whole thing up front.

06:21 And that's the thing people love about Pandas, is that you can incrementally build these things up.

06:26 So we're giving that interface to SQL, basically.
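To make the df.describe example concrete: one pandas call produces a whole table of summary statistics, each of which would be its own aggregate expression (or several) in hand-written SQL. A quick sketch with plain pandas and illustrative data:

```python
import pandas as pd

df = pd.DataFrame({"price": [10.0, 12.0, 11.0, 13.0],
                   "qty": [1, 2, 2, 3]})

# One line in pandas...
stats = df.describe()  # count, mean, std, min, 25%/50%/75%, max per column

# ...versus hand-writing each statistic as SQL aggregates, roughly:
#   SELECT COUNT(price), AVG(price), STDDEV(price), MIN(price), ... FROM t
# The percentile rows are where the generated SQL really balloons.
```

That gap between one friendly call and pages of declarative SQL is the translation work being described here.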

06:29 - Awesome, all right, well, let's wrap this up with a bit of talk about how you got to Startup Row. How'd you start this company? Where are you?

06:36 So many people are excited to take their open source and instead of making it their side job or something they do part-time at their company, make it their full-time energy.

06:45 And you're there. How'd you do it?

06:47 Yeah, so the way that we started was we talked to a lot of companies where they basically asked us, "Can you make this work on top of our infrastructure?" In the open source, we only supported Ray and Dask.

07:00 And we saw a motion there to have kind of an open core model.

07:03 So we follow the open core model, where the more enterprise-y features, like security features and being able to push into data warehouses, are paid.

07:10 An individual consultant may not have a data warehouse.

07:15 They probably don't, but enterprises do.

07:17 And these are the types of features that enterprises really care about.

07:19 So this open core model, I think, lent itself really well to our business, particularly because enterprises will pay for these features.

07:28 And then we went out and we raised a seed round and saw the opportunity to come here and be on PyCon Startup Row. Unfortunately, it's a competitive process, really it is.

07:40 We feel very fortunate to be among the few that are chosen here.

07:45 But yeah, that's kind of our journey; it basically started with talking.

07:49 So for folks out there who are interested in this, talk to people who are using this, people who are interested in the problem that you're solving, and figure out where the gaps are, and kind of ask questions.

07:59 Don't be afraid to ask, would you pay for this?

08:01 Or how much would you pay for this?

08:02 Those questions, they're uncomfortable to ask.

08:05 Especially for the developer who's not used to presenting salesy, marketing-type things. You always see salespeople as kind of, yuck. I get it.

08:13 It's a necessary evil.

08:14 Totally.

08:14 It totally is.

08:15 Yeah.

08:16 So, but you have to ask, because how do you know if you can kind of take that next step, unless you ask, Hey, would you pay $50 a month for this?

08:23 Would you pay $10 a month for this?

08:25 Right.

08:25 You can't know unless you really go out there and ask.

08:28 So that's what I would encourage folks to do if they're interested in this: you know, find those gaps and really ask those questions, even the ones that are kind of hard.

08:35 But yeah.

08:36 Awesome. Well, congratulations.

08:38 Thanks for taking the time to talk to us.

08:39 Thank you. Thank you.

08:40 Yeah, you bet. Bye.

08:41 Next up is Generally Intelligent and Josh Albrecht.

08:43 Generally Intelligent is an independent research company developing AI agents with general intelligence that can be safely deployed in the real world.

08:51 Josh, welcome to Talk Python to me.

08:53 Hey, thanks.

08:54 Hey, it's great to have you here.

08:55 Tell people quickly who you are.

08:56 Yeah, so I'm Josh.

08:58 Josh Albrecht, I'm the CTO of Generally Intelligent.

09:00 We're an AI research company based in San Francisco.

09:03 - Awesome.

09:04 I love the humbleness.

09:06 Generally, generally intelligent, right?

09:08 You're not a super genius, but no, it's a clever name.

09:11 I like it.

09:12 - Thank you.

09:12 - Yeah, yeah.

09:13 And what's the problem you're solving here?

09:15 - Yeah, we, kind of as it says on the tin, we're working on artificial general intelligence.

09:20 We don't usually like to use that term 'cause it can mean lots of different things to lots of different people, but in general, what we're working on is making more capable, safer, more robust AI systems.

09:29 And in particular, we're focused on agents, so systems that can act on their own.

09:33 And right now, mostly what we're focused on is agents that can work in your browser, on your desktop, in your code editor, those kind of virtual environments and digital environments.

09:42 - How much of this are you envisioning running locally versus running on a big cluster in the cloud?

09:48 - Yeah, I think it'd be nice someday in the future to have things run totally locally, but right now, a lot of these technologies do require a large cluster of GPUs, which are very expensive. And most people don't even have a GPU, or have a bunch of GPUs at home, so it's kind of hard to actually get it running locally.

10:03 Hopefully, someday in the future, we'll be able to do that, but for now, you'll probably need internet access to use a lot of these things.

10:08 - Right, okay, so you're envisioning a bunch of these agents that have access to an API that can quickly respond over there.

10:16 - Yeah.

10:17 - Okay, so give us some ideas.

10:19 - Yeah, so what this looks like concretely, you can imagine like a coding agent.

10:24 So one thing you can do with GitHub Copilot right now is you can write a function declaration in a doc string and have it generate the function.

10:30 But you can imagine, for a coding agent, you can not only generate the function, but also generate some tests, run those tests, see errors in those tests, try and fix the errors, kind of do that whole lifecycle to ideally give you an output that's actually a lot better.

10:42 And then also, if you're thinking about this as an agent, maybe it's more of a back and forth.

10:46 It's not just an autocomplete in your editor, but it can come back to you and say, I'm sort of uncertain about this part here.

10:51 What did you mean?

10:52 Or, hmm, I wrote these tests, but I'm not sure if it's quite what you wanted.

10:55 or maybe it's kind of running in the background and flagging different things that it sees in your code base.

10:59 Maybe you made some change and it can detect that your doc string is out of date and flag that for you.

11:03 So thinking about it more as an actual pair programmer.
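The generate-run-fix cycle Josh describes can be sketched as a simple loop. In this toy sketch, `propose_fix` is hypothetical, a hard-coded stand-in for the model call a real agent would make:

```python
# Toy sketch of an agent's generate-test-fix loop. `propose_fix` is a
# stub; a real agent would prompt an LLM with the failing test's error.

def run_tests(env):
    """Return None on success, or the failure message."""
    try:
        assert env["add"](2, 3) == 5, "add(2, 3) should be 5"
        return None
    except Exception as exc:
        return str(exc)

def propose_fix(error):
    return "def add(a, b):\n    return a + b\n"  # stubbed 'model output'

candidate = "def add(a, b):\n    return a - b\n"  # buggy first draft
error = None
for _ in range(3):
    env = {}
    exec(candidate, env)       # load the candidate implementation
    error = run_tests(env)
    if error is None:
        break                  # tests pass; done
    candidate = propose_fix(error)  # feed the failure back, try again
```

The structure, not the stub, is the point: each iteration closes the loop between generated code and observed test results.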

11:06 - Okay, and is it primarily focused on--

11:08 - For example.

11:09 - Yeah, are you thinking to focus mostly on programming, or is it more broad, like, "I'm looking for a great deal on this classic car. Go scour the internet and negotiate it for me." - Yeah, so the company is Generally Intelligent, so we certainly do want to be able to address all these different use cases over time.

11:25 I think for us right now, one of the domains that we are interested in is code, especially because it's so objective.

11:31 You can know if it's right or wrong.

11:32 You have tests, that sort of stuff.

11:33 So it's a nice playground for ourselves.

11:35 And it's something that we can build for ourselves to iterate on internally.

11:38 But we're not exactly sure what the final product will be.

11:41 We're also training our own large language models.

11:43 We might productize some stuff around those.

11:44 So there's lots of possibilities.

11:46 We're not wedded to anything yet.

11:47 Thankfully, we have the luxury to take a little bit of time to figure that out as a research company.

11:51 - Yeah, that's excellent.

11:52 - What about science?

11:54 - Yeah, science is definitely a thing that we're interested in.

11:56 It's pretty hard, and, you know, do we necessarily want these things running around making things in test tubes or whatever? I think that's probably a little bit harder than coding, and coding's already pretty hard, so I think we'll get there.

12:06 That's some of the stuff that we, like, personally on the team are really excited about, to see, you know, how can we use these to uncover new cures for diseases or whatever.

12:14 I'm really excited for that kind of stuff a little further in the future.

12:16 - Yeah, that'd be amazing.

12:16 I was just talking to someone on the expo floor here about protein folding.

12:21 - Yeah.

12:21 - Right, that kind of stuff.

12:22 It's kind of been elusive for people.

12:24 We more or less have just tried to brute force it.

12:26 - Yeah. - Right?

12:27 With the folding at home thing.

12:29 Let's just run every computer and just try every possibility.

12:31 But there's a lot of possibilities. - Yeah, yeah, exactly.

12:33 - Alright, so where's Python fitting here?

12:35 What are some of the tools that you're using?

12:37 - Yeah, so Python is-- we love Python.

12:39 We basically write everything in Python, or Bash, but, you know, mostly Python.

12:43 Or Python generates a little bit of Bash, you know, but it's mostly Python, so yeah.

12:47 We use a lot of PyTorch for our models.

12:49 And then other than that, you know, let's see, libraries to use.

12:52 I mean, we use tons of Python libraries, like NumPy and Scikit and, you know, attrs.

12:56 And just, there's so many, like, wonderful, you know, things that people have built that we just, yeah, that are just so nice to work with.

13:02 So we love that Python, you can kind of take it, open it up, look at all the source, and like, really understand everything in that full stack.

13:08 For us doing research, that's really valuable to be able to know everything that's going on.

13:11 - Yeah, you have these Lego block types of things.

13:14 Like, what if we arranged it like this?

13:16 You don't have to write the whole machine learning bit.

13:18 You can click a few pieces together and off it goes.

13:21 - Yeah, yeah, we build on top of Mosaic, for example, or other open source libraries that people put together for training stuff and kind of adapt it for yourself.

13:28 It's so nice that you can just pull things in and so easily change everything.

13:31 - Yeah, awesome.

13:32 I must have somehow blinked along the way, and these large language models just seem to have come out of nowhere. All of a sudden, you know, AI was one of these things that kind of worked, kind of recommended stuff, and now all of a sudden it's mind-bogglingly good.

13:45 - Yeah.

13:46 - Do things like TensorFlow and stuff work with these large language models?

13:49 Or do you need other libraries?

13:50 - Yeah, so TensorFlow and PyTorch are probably the two main machine learning libraries that people do deep learning systems on top of.

13:58 Pretty sure that GPT-3 and GPT-4 were probably trained on top of PyTorch.

14:02 I think a lot of the stuff at Google, like PaLM and Bard and those types of things, are trained on TensorFlow, but at the end of the day, they're actually very similar, and they're sort of converging to similar ideas as well, so it's interesting to see them evolve.

14:14 - Yeah, fantastic.

14:16 All right, last question, close out our conversation here, is we're sitting here on Startup Row.

14:20 - Well, just outside of Startup Row, I suppose.

14:23 But it's, you know, there's a bunch of people out here who are working on open source projects who would like to make it, somehow find a way to make it their passion, their job.

14:32 Spend more time on it, maybe make it a company.

14:34 How'd you get here?

14:35 Tell people your journey.

14:36 - Yeah, so we got here in a little bit of a different route.

14:40 So a lot of us were working at a previous company called Sorceress that did more of an applied machine learning thing, where we were taking machine learning and applying it to the job of recruiting: trying to figure out, can we find good people online who might be a good fit for a particular position, reach out to them, and get them interested in the job, that sort of stuff.

15:02 We went through YC with this in 2017 and we raised our Series A.

15:05 Eventually, it was growing.

15:07 We had a few million in revenue and customers and everything.

15:10 In 2019, we were looking and it felt like there's so much really interesting stuff happening in self-supervised learning and in deep learning and in machine learning.

15:17 It feels like recruiting is very important, but is this going to be the most important thing in the world?

15:22 Is this going to really be the thing that changes the world?

15:22 Or will there be something a little bit larger in this more general purpose AI?

15:25 And the more we thought about it, the more we felt like, the AI stuff is probably going to have a huge impact.

15:28 We should really be working on that.

15:31 We kind of wound down the previous company.

15:32 A bunch of us moved over and started up Generally Intelligent.

15:35 And then we've been working on stuff ever since then.

15:37 - Fantastic.

15:39 Well, I know you've got some really cool stuff where the agents can sort of look at the code they're writing, think about it, evolve, and it looks like a really interesting take.

15:43 So congratulations, and I'll put a link to all your work in the show notes so people can check it out.

15:49 - Yeah, sounds good.

15:50 - Yeah.

15:51 - Thank you very much.

15:52 - Yeah, thanks for being here.

15:53 - It was great to chat.

15:53 - Yeah, you bet.

15:54 This portion of Talk Python to Me is brought to you by Sentry.

15:59 Is your Python application fast or does it sometimes suffer from slowdowns and unexpected latency?

16:06 Does this usually only happen in production?

16:08 It's really tough to track down the problems at that point, isn't it?

16:12 If you've looked at APM, application performance monitoring products before, they may have felt out of place for software teams.

16:18 Many of them are more focused on legacy problems made for ops and infrastructure teams to keep their infrastructure and services up and running.

16:26 Sentry has just launched their new APM service.

16:30 And Sentry's approach to application monitoring is focused on being actionable, affordable, and actually built for developers.

16:38 Whether it's a slow running query or latent payment endpoint that's at risk of timing out and causing sales to tank, Sentry removes the complexity and does the analysis for you, surfacing the most critical performance issues so you can address them immediately.

16:51 Most legacy APM tools focus on an ingest everything approach, resulting in high storage costs, noisy environments, and an enormous amount of telemetry data most developers will never need to analyze.

17:04 Sentry has taken a different approach, building the most affordable APM solution in the market.

17:09 They remove the noise and extract the maximum value out of your performance data while passing the savings directly onto you, especially for Talk Python listeners who use the code Talk Python.

17:20 So get started at and be sure to use their code, Talk Python, all lowercase, so you let them know that you heard about them from us.

17:30 My thanks to Sentry for keeping this podcast going strong.

17:34 [AUDIO OUT]

17:36 Now we talk with Mo Sarat from Wherobots.

17:39 They're building the database platform for geospatial analytics and AI.

17:43 Hey, Mo.

17:44 Welcome to Talk Python.

17:45 Thank you so much.

17:46 Yeah, it's good to have you here.

17:47 Let's start off with a quick introduction.

17:49 Who are you?

17:49 Absolutely.

17:50 So my name is Mo, and I'm the co-founder and CEO of a company called Wherobots.

17:55 Wherobots' grand vision is to enable every organization to drive value from data via space and time.

18:00 Awesome.

18:01 I love it.

18:02 I love it.

18:02 So yeah, thanks for being here on the show.

18:04 Let's dive into Wherobots. What is the problem you're solving?

18:08 What are you guys building?

18:09 Think about, again, every single data record that is collected on a daily basis.

18:14 Even we're here right now, we're talking on this podcast at this specific location at this specific time.

18:20 So if you think about the space and time aspect, it's actually a very important aspect of every single piece of data that is being collected.

18:26 Right.

18:26 If we're here next week, who knows why we're here?

18:28 We could be here for a different reason.

18:29 That might mean something different, right?

18:30 Absolutely.

18:31 Yeah.

18:31 So that's exactly-- so that space and time lens that you can apply to your data can actually also tell you a better story about your data.

18:38 You can drive more value, more insights from your data if you apply that space and time lens.

18:43 And this is basically what we are.

18:45 Not necessarily-- this is exactly what we focus on in our company.

18:49 But more specifically, I mean, we are trying to build a database infrastructure to enable people to use that space and time lens to drive value from their data.

18:59 OK, fantastic.

19:00 Now, when you talk about space and time and data, are we talking records in a time series database?

19:06 Are we talking regular database or NoSQL?

19:08 Or could it even be things like the log file from Nginx about the visitors to my website?

19:14 What's the scope?

19:15 The scope is actually very wide.

19:17 So think about any data: it could be structured, semi-structured, or unstructured data that you have.

19:22 And as long as it has a geospatial aspect to it. A geospatial aspect here means the record or the document was, let's say, created in a specific location, or represents an event that happened in a certain location at a certain time, or represents, again, an object or an asset that you monitor at different locations at different times.

19:44 Whatever it is, it can be stored in any of these kind of formats.

19:48 As long as it has this kind of geospatial aspect to it, you can definitely apply that kind of geospatial or space-time lens to it.

19:55 - Right, okay, so what are some of the questions you might answer with--

19:58 - Questions, it varies.

19:59 I mean, it depends on the type of the data, depends on the use case.

20:03 You have a horizontal technology that enables so many industry verticals, but I'll give a couple of examples.

20:08 - Yeah, yeah, make it concrete for us.

20:10 - Absolutely, think about a logistics company or a delivery company, like the most well-known delivery company, Amazon, right?

20:18 I mean, you go to the app, you purchase an item or a product and then the whole journey of that product from the supplier to the warehouse, to the driver, Amazon driver, all the way that makes it to your door.

20:31 There is a whole kind of, everything has a geospatial location to it, attached to it.

20:36 The package is moving around, you're located somewhere, their house is a certain location.

20:40 Handling the logistics behind all of that, understanding how things are, you're monitoring all these assets in space and time.

20:48 As it reaches the door, this whole journey, there's a lot of kind of data processing, data analytics happening that you have to do through, again, the geospatial kind of aspect, the geospatial contextual aspect of things.

21:01 So this is one example.

21:02 Another example could be if you're like an insurance company and you're insuring homes, for example, and you want to understand what are the nearby kind of climate conditions, natural disaster conditions compared to your home.

21:15 This also-- the home has a location.

21:17 These kind of natural disaster, weather changes at different locations all the time.

21:22 that will impact how you take decisions about insuring these homes.

21:26 - Do I buy it?

21:27 Do insurers want to insure it?

21:28 Or do I have to pay for that?

21:29 - Exactly.

21:30 So that's another example, again, where the space and time lens, or the geospatial aspect, impacts your decision; it's an important decision that you take here.

21:39 So that's another example.

21:40 So these are just a couple of use cases, but there are tons of other use cases and use cases that may not exist even yet.

21:47 So there's a lot of movement now into climate tech and AgTech, and what we're trying to do at Wherobots is build the database infrastructure that enables the next generation of climate tech and agriculture technology.

22:01 - So they can ask the questions that they might have, but you already have the machinery to answer them.

22:06 - We have machinery to answer them, and they build their own secret sauce on top of our infrastructure, yeah.

22:11 - Kind of a framework platform?

22:13 - Absolutely, yeah.

22:14 - Got it. - Yeah.

22:15 - So Python, where's Python fit in this story?

22:17 - That's a great question.

22:18 So geospatial data or the geospatial aspect of data has existed for so long.

22:24 As you said, we live in the space-time continuum.

22:26 Everything has a space-time aspect, geospatial aspect.

22:29 And that's why developers already have APIs to interact with geospatial data.

22:34 And these APIs, the language varies.

22:36 So there are some people that use SQL to interact with the data, process the data in either SQL databases or any other kind of SQL processing engine.

22:45 But a lot of the geospatial developers or people developing with geospatial data, they use Python.

22:52 There are so many libraries that use Python; an example of these is a library called GeoPandas.

22:57 It's a fantastic library.

22:59 It's an extension to pandas to kind of wrangle and crunch geospatial data.

23:03 - Ask questions about what things are contained in here, what things are outside of here, how far away is it?

23:08 - Absolutely, so this is what Geopandas does.

23:10 The only problem is that GeoPandas is a library that has great functionality, but again, it's not enterprise-ready for the most part.

23:17 It doesn't scale, all that kind of stuff.

23:19 So what we do at Wherobots is that we provide SQL API to the user to run spatial queries on the data, but we also provide a spatial Python API.

23:29 Like if you're using GeoPandas, you can use the same API, do the heavy-lifting, enterprise-scale processing of the data using our platform, and then use the major GeoPandas functionality you're familiar with to, again, do the geospatial processing with it.

23:46 So this is how it fits within Python.
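The containment and distance questions mentioned earlier are the bread and butter of this kind of library; the distance half can even be sketched in plain Python with the haversine formula (the coordinates below are illustrative):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# e.g. roughly Salt Lake City to San Francisco
dist = haversine_km(40.76, -111.89, 37.77, -122.42)
```

GeoPandas wraps this kind of operation (plus projections, spatial joins, and containment tests) behind the pandas API; the pitch here is doing the same at warehouse scale.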

23:48 And actually, looking at our-- we have an open source software called Apache Sedona.

23:53 It's an Apache project, under the Apache license.

23:57 And it has all these APIs, SQL and Python.

24:00 And Python is the most popular.

24:02 So it's been-- the Python package alone on PyPI is being downloaded a million times over on a monthly basis as we're speaking today.

24:11 So definitely Python fits very well within our--

24:15 - Yeah, that's awesome.

24:16 - Absolutely, yeah.

24:17 - So it sounds like your business, Wherobots, is a little bit following the open core model, you say?

24:23 - Yes.

24:24 - Let's round out our conversation here with talking about the business itself.

24:27 How'd you get to startup row?

24:29 - We follow the open core model.

24:30 You're totally right about that.

24:31 So we have our open source software, Apache Sedona.

24:33 It's available for free open source, very permissive license, the Apache license 2.0.

24:38 And it's open source.

24:39 It's also used in operational production in so many use cases.

24:42 There are so many contributors outside.

24:43 I'm the original creator of it, as well as my partner, Jia.

24:46 We're both the original creators, but it's grown beyond us now.

24:49 So there are like dozens, like 100 contributors now, something like this.

24:53 And we use Sedona as an open core, but we build a whole platform around it.

24:58 So if we want to think about like what we do compared to the other data platforms in the market, there are generic data platforms like Snowflake, Databricks.

25:06 There are more specific, specialized data platforms like MongoDB for NoSQL, there's Neo4j for Graph.

25:13 Wherobots is like the data platform for geospatial.

25:16 So this is basically it, and we use Apache Sedona as an open core to enable us to do all of this, yeah.

25:22 - Fantastic, all right, well, congratulations on being here.

25:25 - Yeah.

25:26 - I wish you success with the whole project and thanks for coming on the show.

25:29 - Thank you so much, I appreciate it.

25:30 Looking forward to it.

25:31 - Yeah, you bet.

25:32 - Thank you so much.

25:33 - Yep, bye.

25:33 - Time to talk to Neptyne, who have created Python-programmable spreadsheets, superpowered with Python and AI.

25:40 I gotta tell you, this product looks super awesome.

25:43 It looks so much better than things like Google Sheets or Excel, and I can't wait to get a chance to play with it.

25:49 Hey, guys.

25:49 - Hello.

25:50 - Welcome to Talk Python.

25:51 - Yeah.

25:52 - It's great to have you here.

25:53 First, introduce yourselves.

25:55 - Thanks for having us.

25:56 I'm Douwe.

25:57 I've been doing Python professionally for, I don't know, 20 years or so.

26:01 - I'm Jack.

26:02 I'm Douwe's co-founder.

26:04 Been doing Python a little less than that, but I met Douwe about five years ago, and we founded Neptyne about a year ago.

26:10 - Yeah, so let's dive into Neptyne.

26:14 What's the product, what's the problem you're solving?

26:16 - Yeah, the proposition that we have is pretty straightforward.

26:20 We build a spreadsheet on top of a Jupyter notebook engine, which basically gives you all the data science superpowers that the notebook gives you in a familiar spreadsheet environment, which means that you can share your work as a Python programmer, much easier with people that are not familiar with notebooks because they have the universal data canvas of a spreadsheet.

26:43 >> How interesting, because one of the big challenges data scientists often have is they work in Jupyter, and then some executive wants to share it in a presentation, or they want to continue working on it, but they're not developers.

26:55 So what do you do?

26:56 You write an Excel file, and you hand that off, and then you re-import it somewhere, maybe?

27:01 I don't know.

27:02 >> Yeah, yeah, the typical flow is very much like: you write out a CSV, you email that to the person who's going to put it into Excel, that person creates a graph in Excel, screenshots that graph, and sends it to the person who puts it in the presentation, and then the CEO can do something with it.

27:19 - It goes either in PowerPoint or it goes in Word.

27:22 Yeah, one of those two, right?

27:23 Probably the picture.

27:24 But that's a bunch of steps that are disassociated from data.

27:28 So that's one problem, right?

27:29 That's the one problem.

27:30 But since no one really sees your product in action the way we're talking here.

27:35 Maybe just a bit of an explanation.

27:36 Like it looks very much like Google Sheets or one of the online spreadsheet tools.

27:44 It doesn't look like something embedded into notebooks, right?

27:48 - Yeah, that's right.

27:49 It is a spreadsheet first and foremost.

27:51 It looks a lot like Google Sheets, but you can run Python in it.

27:55 - Yes.

27:56 - You can run Python both directly in the spreadsheet cells.

27:58 You can also define other functionality in Python and then run that with your spreadsheet.

28:02 I mean, to me, that's where the magic is, right?

28:05 Like Excel or sheets, the spreadsheets more broadly are super useful.

28:10 But it's always like, "How do I do an if statement in this dreaded thing again?

28:14 And how do I do a max with a condition?" You know, just all the programming aspects of going beyond raw data are like, "Oh boy, this is..." And you just showed me an example where you just write a range of a thing and boom, it just writes that out.

28:27 Or you write a Python ternary expression and it just runs.

28:32 - Right, yeah, but also common spreadsheet tasks are hard, like data cleaning, right?

28:38 You get some data from somewhere, and it's not quite right, and most of the time, people end up doing this by hand.

28:44 And that's fine the first time you do it, the second time and the third time, it gets very annoying.

28:49 While if you just write a little bit of Python, you can clean data like that, and then the next time you have the data, you just rerun the script, and it's clean again.
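To make that concrete, here is a sketch of the kind of reusable data-cleaning helper Douwe is describing. The function, column names, and cleaning rules are invented for illustration; this is not Neptyne's actual API, just the "write it once, rerun it on new data" idea in plain Python.

```python
# Hypothetical data-cleaning step: define it once, rerun whenever new data arrives.
# Rules here (strip whitespace, normalize casing, parse numbers, drop empty rows)
# are invented for illustration.

def clean_rows(rows):
    """Normalize a list of dicts: trim names, fix casing, parse amounts."""
    cleaned = []
    for row in rows:
        name = row.get("name", "").strip().title()
        try:
            # Handle numbers formatted with thousands separators, e.g. "1,200.50"
            amount = float(str(row.get("amount", "0")).replace(",", ""))
        except ValueError:
            amount = 0.0  # unparseable values become zero
        if name:  # drop rows with no name at all
            cleaned.append({"name": name, "amount": amount})
    return cleaned

raw = [
    {"name": "  alice ", "amount": "1,200.50"},
    {"name": "BOB", "amount": "oops"},
    {"name": "", "amount": "10"},
]
print(clean_rows(raw))
# [{'name': 'Alice', 'amount': 1200.5}, {'name': 'Bob', 'amount': 0.0}]
```

The next time the same messy export shows up, you rerun the function instead of fixing cells by hand.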

28:58 So that's a very powerful way of doing this thing, and we have a full Python environment.

29:03 It's not just a lightweight thing that, you know, runs in the browser.

29:06 You can do pip install anything you want.

29:08 So you can connect to any API out there, use any data, export any data.

29:13 It's a complete environment.

29:14 - Yeah, how interesting.

29:16 There's a little window where you can write straight Python, you know, def some function that does arbitrary Python, and then you can invoke it like a function in the spreadsheet, right?

29:25 - Exactly, exactly.

29:27 - And you can talk to things on the internet?

29:29 For example, I could do web scraping there?

29:30 Or call an API, like a currency API?

29:33 - Yeah, exactly.

29:35 - Okay.

29:36 - That's, yeah, any REST call you want to make, you just import requests and go for it.

29:41 - Wow, so where's it run?

29:42 Is this PyScript, Pyodide, is this scoped, is this Docker on a server?

29:47 - It's all running in a Docker container, server side.

29:51 That's how it works.

29:51 And that's kind of, we do that for maximum flexibility, maximum capability.

29:55 So it means that anything you can install, anything you can run on a Jupyter notebook running on Linux, you can run in Neptyne.

30:02 - I see, so we get full Python 3.11 or 3.10 or whatever it is.

30:05 - Yep, yep, and we ship with a bunch of useful packages pre-installed, but if you want to install something else, you just open up our dependency management window, install anything else you want to use.

30:17 It's all very manageable, very configurable.

30:19 - Well, it looks super good to me.

30:21 What's the user model?

30:23 Do I go and create an account on your site and it's kind of like Google Docs, or what's the story?

30:27 - Yep, exactly.

30:28 You can try it out.

30:29 You can go to in the upper right.

30:31 Just click log in.

30:32 You can create an account.

30:33 It's totally free to use, the free tier.

30:36 Yeah, give it a shot.

30:38 - Awesome. - Yep.

30:38 - All right, final question.

30:40 How'd you guys get here to Startup Row?

30:42 Everyone wants to build something amazing with open source, but how did you turn that into a business and something you can put your full time into?

30:49 - I mean, I guess we're kind of lucky in that when we started, I pitched it to a bunch of people that due to no fault of their own got into some money.

31:03 And they were willing to back us.

31:05 And then later we joined YC for the winter batch.

31:09 And in that process, we got a little bit of publicity and were picked up for Startup Row.

31:16 - Just to add to that too, based on our experience in Y Combinator, there are lots of open source tools out there that are able to get started on some commercial path just based on the community that they're building, based on the users.

31:28 - Right, right.

31:30 - It's a very good path.

31:31 - I feel like this whole open core business model has really taken off in the last couple years where it used to be a PayPal donate button and now it's a legitimate offering that businesses will buy and it's good.

31:43 I think it's very positive.

31:44 So I'm really impressed with what you guys built.

31:46 I think it's awesome.

31:47 I think people really like it.

31:49 Yeah, so good luck.

31:50 Thanks for being here.

31:51 - Thank you so much.

31:52 - Now up is Nixtla.

31:53 We have Federico Garza and Cristian Challu here to tell us about their time series startup, ready to make predictions based on an open source time series ecosystem.

32:02 - Hey there. - Hello.

32:03 - Welcome to Talk Python.

32:04 - Hello, hello.

32:05 - Hello, let's start with introductions.

32:07 Who are y'all?

32:08 - So I am Cristian Challu, I'm a co-founder of Nixtla.

32:11 - Yep.

32:12 - Hello, I'm Fede, I'm CTO and co-founder of Nixtla.

32:15 - Nice to meet you both.

32:16 Welcome, welcome to the show.

32:18 Really great to have you here at PyCon.

32:20 And yeah, let's start with the problem y'all are trying to solve.

32:24 OK, yeah, so at Nixtla, what we do is time series forecasting.

32:28 So as you know, time series forecasting is a very relevant task that a lot of companies and practitioners need to solve.

32:36 Essentially, predicting future values of something, right?

32:40 It could be demand of a product or the weather.

32:42 So there are many use cases for forecasting.

32:45 It's a very common problem in industry.

32:47 And essentially, we want to provide tools to developers, engineers, researchers to be able to do this more efficiently and with good practices.

32:56 And yeah, that's mostly it.

32:58 Right, OK.

32:58 So is this like a Python API?

33:01 Is this a database?

33:03 What is the actual--

33:04 That's what it looks like.

33:06 Yeah, the product, I guess.

33:07 The product.

33:07 So we have an ecosystem of Python libraries.

33:11 And we have different libraries for different use cases.

33:14 For example, we have the stats forecast library, which specializes in statistical econometric models.

33:22 And also, we have more complex models and libraries for deep learning and machine learning applications.

33:30 Yeah.

33:31 Nice.

33:32 And have you trained some of these models yourself on certain data, things like that?

33:36 Or where do you get the models from?

33:38 The idea behind the libraries is that you can use whatever your data is.

33:44 The only restriction is that it must be time series data, but you can use whatever data you have.

33:50 Yeah.

33:50 OK.

33:51 Fantastic.

33:52 And where's its data?

33:54 Python's at the heart of so much data processing these days.

33:57 And I guess, give a shout out to all the different Python packages that are already out there, maybe.

34:02 You want to just give a rundown on those and what they're for, and then talk about them?

34:06 Yeah.

34:07 So we have like six packages right now.

34:10 They're all libraries on GitHub that you can pip install or install with Conda.

34:14 And essentially, they focus on different ways of approaching forecasting.

34:18 And they're essentially libraries built on Python.

34:21 Some of them are built on Numba; other methods are in plain Python.

34:25 Oh, you guys are using Numba?

34:26 Oh, OK.

34:27 And it makes a huge difference?

34:28 Yeah, it makes a difference.

34:29 All right.

34:30 Tell people really, really quickly, what is Numba?

34:32 So Numba is this library which allows you to just-in-time compile your code.

34:39 So it's a lot faster than using just plain Python.

34:43 And how easy is it to use?

34:45 It's really easy.

34:45 OK.

34:46 In fact, we wanted to make our library more efficient and faster, and we did it in like two weeks just using Numba.

34:58 So it was really easy to use.

35:00 Yeah, awesome.

35:01 Awesome.

35:02 And some other packages use PyTorch.

35:05 So like our deep learning methods, neural forecasting approaches are built on PyTorch, or PyTorch Lightning.

35:12 Yeah, fantastic.

35:13 So would you say that your business model is something of an open core model where it's kind of built on top of these libraries and--

35:20 - Absolutely, yeah.

35:21 Yeah, so for now we have been focusing on building these libraries, the community.

35:25 We have a very active community on Slack and people that use us and contribute to our code.

35:30 And we are building services on top of these libraries, like enterprise solutions or hosting computation or even simplifying the usage further.

35:40 So for example, APIs where you can just simply pass your data.

35:43 I want to know what is gonna happen next on this data.

35:46 - Do you pass it some historical data and ask it to make predictions?

35:49 - Make predictions and then we produce the predictions.
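To make the "send history, get predictions back" idea concrete, here is a toy seasonal-naive forecaster in plain Python. This is purely illustrative; it is a classic baseline method, not one of Nixtla's actual models, which are far more sophisticated.

```python
# Toy baseline forecaster (illustrative only): predict each future step
# as the value observed one season earlier. Real forecasting libraries
# like Nixtla's use far richer statistical and deep learning models.

def seasonal_naive_forecast(history, season_length, horizon):
    """Forecast `horizon` steps by repeating the last observed season."""
    forecast = []
    for h in range(horizon):
        # Index into the last full season of the history
        forecast.append(history[-season_length + (h % season_length)])
    return forecast

daily_sales = [10, 12, 15, 11, 13, 20, 25,   # week 1
               11, 13, 14, 12, 14, 21, 26]   # week 2
print(seasonal_naive_forecast(daily_sales, season_length=7, horizon=7))
# [11, 13, 14, 12, 14, 21, 26]
```

You pass in historical values and get future predictions back, which is exactly the API shape being described.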

35:51 - Okay.

35:52 - Yeah, this is one of the types.

35:54 So we are working on these different applications and services.

35:57 - Awesome, it sounds really cool.

35:59 - Thanks.

36:03 - So final question, how'd you make your way over here to Startup Row at PyCon?

36:03 Like how'd you start your company and how'd you get here?

36:06 - Yeah, it has been a long journey.

36:09 I mean, we have been working on these libraries and services for a year.

36:16 And right now, we are focusing on building the startup.

36:20 We want to be able to do this full time for a long time and really build something that can help people.

36:27 Yeah, are you looking to offer an API, like an OpenAI sort of model, or running people's code as a service?

36:34 Or where are you thinking you're going?

36:36 Yeah, yeah, that's definitely one of the options.

36:39 But yeah, we are finishing our funding rounds.

36:42 And once we finish that--

36:44 - Funding helps a lot on software development, right?

36:46 - Funding helps a lot on development.

36:48 And yeah, so we're exploring different avenues.

36:50 And there's very exciting things to come.

36:53 - All right, well, we all wish you the best of luck on your project.

36:56 And thanks for taking the time to talk to us.

36:58 - No, thank you for inviting me.

36:59 - Yeah, you bet.

37:00 - Thanks. - Bye.

37:01 - We'll speak with Piero Molino from Predibase.

37:03 They empower you to rapidly build, iterate, and deploy ML models with their declarative machine learning platform.

37:09 - Piero, welcome to Talk Python to Me.

37:14 - Thank you very much for having me.

37:16 - Yeah, it's fantastic to have you here.

37:17 Quick introduction for everyone.

37:19 - Sure, so I'm Piero and I'm the CEO of Predibase.

37:20 I can tell you about Predibase in a second.

37:23 I'm also the author of Ludwig, which is an open source Python package for training machine learning models.

37:29 - Awesome, well, great to meet you.

37:32 Tell us about your company.

37:34 - Yeah, so Predibase tries to solve the problem of inefficiency in the development process of machine learning projects.

37:43 Usually they take anywhere from six months to a year or even more, depending on the organization and its degree of expertise in developing machine learning projects.

37:53 And so using our platform, companies can go from months to days of development, and that makes them substantially faster.

38:02 Each machine learning project becomes cheaper and organizations and teams can do many more machine learning projects.

38:06 - Yeah, I mean training is where the time and the money is spent.

38:10 Yeah, at least the computation, I mean paying developers is expensive.

38:12 - Right, right, right.

38:13 - But in terms of, people say machine learning or AI, it takes all this energy, and it does take energy to answer questions, but it really takes energy to train the models, right?

38:22 - Yeah, yeah, definitely.

38:24 Training the models is a huge part.

38:25 Managing the data and putting it in a shape and form that is useful for training the models is also another big piece of the reason why these teams take so long to develop models.

38:37 And also, usually there's several people involved in the process.

38:43 There are different stakeholders.

38:44 Some of them are more machine learning oriented, some of them are more engineers, and some of them may be analysts or product developers that need to use the models downstream.

38:53 And so the handoff of the artifacts and of the whole process between these different people is also a source of a lot of friction.

39:06 And with the platform that we are building, we are trying also to reduce the friction as much as possible.

39:11 - Yeah, sounds great.

39:12 Is it about managing that workflow or is it about things like transfer learning and other more theoretical ideas?

39:19 Like where exactly are you doing this?

39:22 - Yeah, so to give you a little bit more of a picture, I would say where we are starting from is Ludwig, which is the open source project.

39:31 And what Ludwig allows people to do is define machine learning models and pipelines in terms of a configuration file.

39:37 So you don't need to write low-level PyTorch or TensorFlow code.

39:42 You can just write a configuration that maps with the schema of your data.

39:46 And that's literally all you need to get started.

39:49 So it makes it substantially easier and faster to get started training models.

39:53 Then if you are more experienced, you can go down and change more than 700 parameters that are there and change all the details of training, of the models themselves, the pre-processing, so you have full flexibility and control.

40:06 And you can also go all the way down to the Python code, add your own classes, register them with a decorator, and they become available in the configuration.
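To give a flavor of what "models as configuration" means, here is a sketch of a Ludwig-style config expressed as a Python dict. The `input_features`/`output_features` shape follows Ludwig's documented schema, but treat the specific fields and the commented API call as illustrative rather than a verified working example.

```python
# Illustrative Ludwig-style configuration: you declare WHAT the model should
# learn (features in, predictions out), not HOW. Field names follow Ludwig's
# input_features / output_features schema; details are a sketch, not a
# verified config.
config = {
    "input_features": [
        {"name": "review_text", "type": "text"},
        {"name": "stars", "type": "number"},
    ],
    "output_features": [
        {"name": "sentiment", "type": "category"},
    ],
    # Hundreds of optional knobs (trainer, preprocessing, ...) exist,
    # but a minimal config like this is enough to get started.
    "trainer": {"epochs": 5},
}

# With Ludwig installed, this would be handed to the API, roughly:
#   from ludwig.api import LudwigModel
#   model = LudwigModel(config)
#   model.train(dataset="reviews.csv")
print(sorted(config.keys()))
```

This is the Terraform-for-ML idea: the config is the artifact, and the framework turns it into a trained pipeline.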

40:15 This is what we have in the open source.

40:14 And what we're building on top of it is, again, you can think about it this way: for people who may be familiar with Terraform, what Terraform does for infrastructure, defining your infrastructure through a configuration file, Ludwig does for machine learning.

40:32 That's a good analogy.

40:34 And so, Predibase, what does it do?

40:35 It uses this basic concept of models as configuration and builds on top of it all the infrastructure that big enterprise organizations need to use it in the cloud.

40:44 So we can deploy on cloud environments.

40:48 We abstract away the infrastructure aspect of it.

40:51 So you can run the training of your models and inference on either one small CPU machine or a thousand large GPU machines, and you don't need to think about it, basically.

41:00 - Oh, cool.

41:01 So I just say train it, and if you happen to have GPUs available, you might use them?

41:05 - Right, absolutely, yeah.

41:06 - Okay, excellent.

41:07 So where does Predibase fit into this?

41:11 Like where's the business side of this product?

41:14 - Right, right.

41:15 I would say Predibase makes it easy for teams, really, to develop machine learning products, right?

41:21 Whereas in Ludwig, you can define your own configurations, but it's, you know, a single-user experience, if you want, right?

41:27 Predibase becomes like a multi-user experience, where again, you deploy on the cloud, and you can connect with data sources.

41:33 In Ludwig, you provide a CSV file or a Pandas DataFrame. With Predibase, you can connect to Snowflake, to Databricks, to MySQL databases, to S3 buckets, and do all of those things.

41:48 And also there's a notion of model repositories, because when you start to train a model, the first one is never the last one that you train.

41:55 And so, in an analogy to Git, really, in Git you have commits and you have teams doing different commits and collaborating together.

42:02 In our platform you have multiple models that are configurations, multiple people training new different models, spawning from the previous ones, so there's a lineage of models that can be compared among each other.

42:10 And then the very last piece is that we make it easy to deploy these models with one click of a button.

42:14 So you go from the data to the deployed model very, very quickly.

42:18 - Fantastic, it sounds great.

42:19 So final question, a lot of people out there working in open source, they'd love to be here on Startup Row talking about their startup based on their project.

42:28 It sounds like what you built is based on the open core model, which seems to be really, really successful these days.

42:36 Tell us a bit about how you got here.

42:39 Right, so basically I think it started from the open source, really.

42:43 I started developing Ludwig when I was working at Uber.

42:45 And initially the project was a way for me to be more efficient and work on the next machine learning project without reinventing the wheel every single time.

42:57 And I built that because I'm lazy; when I do one thing more than twice, I automate it for myself, really.

43:04 - Productive laziness or something like this.

43:06 - And so then other people in the company started using it, and that convinced me that making it open source, also because it was built on top of other open source projects, would have been a great way to both have people contribute to it and improve it, and also give back to the community, because again, I was using myself a lot of open source projects to build it.

43:24 And then from there, I made it so that we donated the project to Linux Foundation.

43:31 So now it's backed by the Linux Foundation and also the governance is open as opposed to what it was before when I was at Uber.

43:36 And from there, actually, I met a bunch of people, some of my co-founders at the company, thanks to the project.

43:44 And we decided that, so for instance, one of them is Professor Chris Ré from Stanford.

43:49 He was developing a similar system that was closed internally at Apple.

43:52 And so we said, "This thing worked at Uber, worked at Apple, works in the open source.

43:53 Let's make a company out of this," right?

43:54 - Fantastic, yeah.

43:55 Solving some problems for these big teams, right?

43:58 Excellent, well, best of luck on your company.

44:01 - Thank you very much, man.

44:02 - Yeah, thanks for being here.

44:03 - Yeah, absolutely, a pleasure.

44:04 - Yeah, bye. - Thank you so much.

44:05 - We'll finish up our stroll down startup lane by talking with the folks at Pynecone.

44:08 We have Nikhil Rao to talk about the pure-Python full-stack web app platform that they've built.

44:14 Nikhil, welcome to Talk Python.

44:16 - Yeah, great to be here.

44:17 Thanks for having me.

44:18 - It's great to have you here.

44:19 I'm loving going through all the different projects on Startup Row and talking about them and shedding a little light on them.

44:25 So happy to have you here.

44:26 Yeah, yeah, give a quick introduction on yourself.

44:29 - Yeah, so I'm Nikhil, I'm the CEO and co-founder of Pynecone, and we're building a way to make web apps in pure Python.

44:35 So we have an open source framework and anyone can install this and basically start creating their apps front end and back end using Python.

44:42 Our company went through the recent Y Combinator batch that just ended, the winter '23 batch.

44:47 And recently we raised our seed round, and we're starting to hire out and grow the project and company from here.

44:52 - Okay, well, awesome, congratulations.

44:54 That sounds really cool.

44:55 Give us an idea of, I guess, why do you build this, right?

44:59 We've got Flask, we've got Django.

45:01 - Yeah.

45:02 - Heck, we even have Ruby if you really want it.

45:03 - Yeah.

45:04 - There's a lot. So previous to this, like you mentioned, there are frameworks like Flask and Django, and whenever a Python developer wanted to make a web app, they used something like this, but you always had to pair it with another front-end library.

45:15 So you can't just make your front-end using Python.

45:17 You still have to end up using JavaScript, HTML, React, stuff like that for your front-end.

45:21 And so a lot of people, if you're coming from a Python background, it's a lot of work to get started with these.

45:26 It's a different language, different tool set.

45:27 So we really wanted something where Python developers can just use these tools they already know and be able to make these web apps without having to go learn something completely different.

45:35 So as opposed to these tools like Flask and Django, we're very focused on unifying the front end and back end into one framework.

45:41 So you don't need a separate front end and back end.

45:43 And that allows us to--

45:44 the user can just focus on the logic of their app and not these technical details on the networking and all this other stuff.

45:49 - Yeah, it sounds interesting.

45:51 I mean, I know many Python people who don't want to do JavaScript.

45:55 They don't want to do multiple languages.

45:57 - Exactly.

45:58 - But, you know, it's traditionally, at least in the web framework world, you're speaking many, many languages.

46:03 You're speaking HTML, CSS, JavaScript is a big one.

46:08 And honestly, I think that there was a period where people were super invested in JavaScript and thought that was kind of the right way or the necessary way.

46:17 And that would take away a lot of, what's important about the web framework, right?

46:21 Like, well, it doesn't matter if it's Flask or Django.

46:24 We're just going to return JSON anyway, because it's all Angular, so who cares, right?

46:28 But I don't think that's where people really--

46:30 many people, at least the people choosing Python, want to be.

46:33 And so how is your stuff different?

46:35 So I think exactly what you said before this.

46:37 To make a serious web app, you always have to go to JavaScript.

46:40 And what we're really trying to do is make everything in Python, including your front end.

46:44 And so basically, we're trying to integrate the two together.

46:47 So basically, you don't have to go learn these technical details you didn't want before.

46:52 We realized for all the logic of your app, you're using Python anyway.

46:56 Like, Python's used in so many industries, databases, ML, AI, infrastructure.

47:00 And when these people want to make a front end, it is possible to build these JavaScript front ends, but it's a lot of overhead.

47:07 And before our framework, there are different low-code tools to make front ends without JavaScript, but they all kind of have a limit, and they all have a graduation risk, is what we found.

47:16 So you can start making your UI-- - Yeah, so like, can you make any website with them?

47:19 - Right, like Streamlit and Anvil are both notable ones that kind of come to mind.

47:24 But neither of 'em, I like them both a lot, but neither of 'em are necessarily like, I'm just gonna build a general purpose web app.

47:31 They're focused in their particular area.

47:33 - Yes, exactly.

47:34 So I've used tools like Streamlit, Gradio in the past, and a lot of that was inspiration for Pynecone.

47:39 It's really great 'cause it's super easy to get started with, you don't have to go learn these things, but they all have this kind of ceiling you hit.

47:44 So they're mostly good for data science apps, dashboard apps, but as you try to expand your app into a full stack web app and start adding new features, a lot of times you find these frameworks don't really scale with your ideas, and your two options are: either you constrain your idea to what these vendors offer you, or you use that for prototyping, and when you're making a customer-facing production app, you scrap it and go to the JavaScript React world.

48:07 So what we're really trying to do is make something like Anvil or Streamlit, easy to get started with for Python developers, but as you want to expand to these complex cases, you should be able to stay in our framework, and we should be able to handle that also.

48:18 - Interesting.

48:19 So how does the front end interactivity work if it's Python?

48:21 - Yeah, and this is also where I think we're a bit different.

48:23 We're trying to really leverage a lot of the web dev ecosystem and not recreate everything from scratch.

48:28 So for the front end, we leverage React and Next.js.

48:30 So our front end compiles down to a Next.js app.

48:33 And from this--

48:33 - Oh, you're transpiling the Python?

48:35 - We transpile the Python to Next.js.

48:37 And this gives you a lot of great features.

48:38 You get single page app features from Next.js, a lot of these performance features.

48:42 And that means from our perspective, we don't have to recreate all this stuff.

48:45 And also, we don't have to create components one by one.

48:48 We just leverage React.

48:49 And what we do in Pynecone for the front end is we just wrap React components and make them accessible.

48:54 So even if we don't offer something, and other low-code tools, sometimes if they don't offer a component you need, you may be kind of constrained in what you can build.

49:02 We easily have a way for anyone to wrap their own third-party React libraries.

49:06 So we're really trying to make the existing stuff out there accessible rather than recreating it.

49:10 - Yeah, so you can sort of extend it with React if you get boxed in, that's your escape hatch?

49:14 - Exactly.

49:14 - Okay.

49:15 - So that's kind of how our front end works, and for the back end, we use FastAPI to handle all the states.

49:20 So the user state is all on the back end, on the server, and this is what allows us to pretty much keep everything in Python.

49:25 So none of the logic is transpiled to JavaScript, only the React, and all the logic stays in Python.

49:30 So you can use any of your existing Python libraries, any existing tools.

49:33 You don't have to wait for us to kind of make these integrations.

49:36 So it's kind of leveraging React, but also leveraging Python, and kind of bringing them together.
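As a toy illustration of the "Python in, React out" idea Nikhil describes, here is a tiny sketch. This is not Pynecone's actual compiler; it only shows the concept of rendering a Python component tree down to JSX-like markup, while the event handlers it references would stay as Python on the server.

```python
# Toy sketch of compiling a Python component tree to JSX-like markup.
# NOT Pynecone's real compiler: just the concept. The "onClick" handler
# is a reference to server-side Python state, as described in the interview.

def el(tag, *children, **props):
    """Build a component node from a tag, child nodes/strings, and props."""
    return {"tag": tag, "props": props, "children": list(children)}

def render(node):
    """Recursively render a component tree to an HTML/JSX-like string."""
    if isinstance(node, str):
        return node
    props = "".join(f' {k}="{v}"' for k, v in node["props"].items())
    inner = "".join(render(c) for c in node["children"])
    return f'<{node["tag"]}{props}>{inner}</{node["tag"]}>'

page = el("div",
          el("h1", "Counter"),
          el("button", "+1", onClick="state.increment"))
print(render(page))
# <div><h1>Counter</h1><button onClick="state.increment">+1</button></div>
```

In the real framework the output is a full Next.js app and the state lives behind FastAPI, but the division of labor is the same: markup is generated, logic stays in Python.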

49:40 - What's the deployment look like?

49:42 - So we're working on an easy deployment, so you can just type pc deploy, we'll set up all your infrastructure, and you'll get a URL back with your app live.

49:49 But also, we're an open source framework, so it's also very easy to self-host and self-deploy.

49:53 And so what we're really trying to do is make it accessible and easy, but never kind of lock you into our framework.

49:58 I see.

49:59 So I could put like Nginx in front of it or something.

50:01 Exactly.

50:01 So right now, we're still working on our hosting deployment.

50:04 So everyone right now who's deployed is hosting on AWS, DigitalOcean, or a tool like this, with Nginx.

50:08 And so it integrates just like you would deploy a Flask or React app.

50:11 Got it.

50:11 But we're really trying to make an optimized service around this later.

50:15 - Yeah, sure, it makes sense.

50:16 All right, sounds like a great product.

50:18 - Thanks.

50:19 - Final question here, how'd you get here?

50:21 How'd you start the company?

50:22 How'd you land on Startup Row?

50:24 I mean, you talked about Y Combinator a little.

50:26 - Yeah, so I talked a little bit.

50:27 We did the Y Combinator batch, and really the idea is not only having an open source framework, but having a business model around it and being able to create these features around it.

50:36 So we're really focused on having an open source framework, similar to how Vercel has Next.js and their hosted version, and we're trying to bring that to the Python community.

50:45 So Python is like one of the fastest growing languages, obviously, like that's why PyCon is so big.

50:50 And for the web dev part, there's not really a good ecosystem for that.

50:53 So when people want to share their ideas, we're really trying to become that de facto way for Python developers to create their apps and share.

50:59 And so, yeah, basically working on our hosting service, growing out our team now, and trying to build up all this ecosystem around it so people can easily get their ideas out to the world.

51:08 - Awesome, well, congratulations and thanks for being here.

51:11 This has been another episode of Talk Python to Me.

51:14 Thank you to our sponsors.

51:16 Be sure to check out what they're offering.

51:17 It really helps support the show.

51:19 Take some stress out of your life.

51:21 Get notified immediately about errors and performance issues in your web or mobile applications with Sentry.

51:27 Just visit and get started for free.

51:32 And be sure to use the promo code, talkpython, all one word.

51:36 Want to level up your Python?

51:38 We have one of the largest catalogs of Python video courses over at Talk Python.

51:42 Our content ranges from true beginners to deeply advanced topics like memory and async.

51:47 And best of all, there's not a subscription in sight.

51:49 Check it out for yourself at

51:52 Be sure to subscribe to the show, open your favorite podcast app, and search for Python.

51:57 We should be right at the top.

51:58 You can also find the iTunes feed at /iTunes, the Google Play feed at /play, and the Direct RSS feed at /rss on

52:08 We're live streaming most of our recordings these days.

52:11 If you want to be part of the show and have your comments featured on the air, be sure to subscribe to our YouTube channel at

52:19 This is your host, Michael Kennedy.

52:21 Thanks so much for listening.

52:22 I really appreciate it.

52:23 Now get out there and write some Python code.

52:25 (upbeat music)

52:28 [Music]

52:43 (upbeat music)
