00:00 Michael Kennedy: We all know that Python is a major player in applications of machine learning and AI. This often involves grabbing Keras or TensorFlow and applying it to a problem. But what about AI research? When you're actually trying to create something that has yet to be created, how do researchers use Python here? Today you'll meet Alex Lavin, a Python developer and a research scientist at Vicarious, where they are trying to develop artificial general intelligence for robots. This is Talk Python To Me, episode 124, recorded May 31st, 2017. Welcome to Talk Python To Me, a weekly podcast on Python, the language, the libraries, the ecosystem, and the personalities. This is your host, Michael Kennedy. Follow me on Twitter where I'm @mkennedy. Keep up with the show and listen to past episodes at talkpython.fm, and follow the show on Twitter via @talkpython. This episode is brought to you by Linode and Talk Python Training. Be sure to check out the offers in both of their segments. It really helps support the show. Alex, welcome to Talk Python.
01:24 Alex Lavin: Hi, thanks for having me.
01:25 Michael Kennedy: Yeah. I'm pretty excited to talk about AI, researching AI, and all the stuff that you guys are up to, a subject we haven't really touched on that much on the show. And it's interesting both at a low, technical level, but also almost at a philosophical level. So I'm looking forward to talking with you about it. But before we get into the details, let's start with your story. How did you get into programming and Python?
01:48 Alex Lavin: Well, my background is a bit nontraditional for what I do. I actually studied Mechanical and Aerospace Engineering at school, and I was determined to make a career as a spacecraft engineer, and even the mech-e assignments where we used MATLAB, I hated those. But that eventually led me to Carnegie Mellon to study space robotics, and the atmosphere there is incredible. It's hard not to get caught up in software and AI. And that was really my first exposure to AI.
02:21 Michael Kennedy: That's cool. When I think of universities that are cool and amazing at programming and robotics and things like that, Carnegie Mellon is right there. Maybe Stanford, MIT, Carnegie Mellon. Those are the top three, at least in my mind.
02:37 Alex Lavin: Yeah, absolutely, and I was fortunate enough to work with this incredible roboticist Red Whittaker, who was one of the, I guess, founders of the DARPA Grand Challenge for autonomous vehicles.
02:49 Michael Kennedy: I was going to ask you about the DARPA Grand Challenge. Tell people about what that is. That is such an amazing thing.
02:54 Alex Lavin: That started back in 2003 or 2004. And it was a challenge to create an autonomous vehicle that drives across the desert down in Mexico. And the first year was, if you ask most people, a complete failure, but it really kicked off all this research. And then the next year, it was won by, I believe, Sebastian Thrun and his team from Stanford. And now it's almost a piece of cake for these autonomous vehicles to complete the course.
03:23 Michael Kennedy: Didn't the Stanford team eventually more or less move over to Google for their Google car stuff? Do I have that about right?
03:30 Alex Lavin: Yeah, I believe a lot of them went over there. And Sebastian Thrun, I think he is very involved in the Udacity effort to have a self-driving car cohort.
03:40 Michael Kennedy: Okay, yeah, yeah, that's really cool. So when you talk about the car driving through the desert, it's not like it just drove a little ways. There's basically a dirt path through the open plains and up into the mountains, for like a hundred miles, right?
03:58 Alex Lavin: Oh yeah, you can check out some of the videos from these races. And these are very intense off-road vehicles. It's through the desert in Mexico. I think they're on the Baja Strip. And I want to say it's like hundreds of miles.
04:13 Michael Kennedy: Yeah, yeah, that's really cool. So, yeah, there's a great NOVA episode on this called The Great Robot Race. I'll be sure to include it.
04:21 Alex Lavin: Ah cool.
04:22 Michael Kennedy: Yeah, yeah, it's really, really neat. And it's super inspiring. The documentary covers not the first year but the second year that they ran it, so it's not the one where they all failed. And they had different machines, some people had robotic motorcycles, some people had huge trucks. It seems like the SUVs were the real winners though.
04:43 Alex Lavin: And it's also worth checking out the DARPA Challenge for autonomous humanoid robots. There are some pretty funny fail videos out there on YouTube.
04:54 Michael Kennedy: Okay, so I totally derailed your story, but I definitely wanted to talk about that, because that was the world you were in at Carnegie Mellon, right? And this requires programming, right?
05:05 Alex Lavin: Yeah, yeah, so I was studying Mechanical Engineering and doing things like CAD and FEA and all that. And my last semester there I ended up taking one software, well, it was a software/AI course, and it was all in MATLAB of course. And that really just got me hooked. I loved it. And I figured after graduation I would figure out a way to pivot my career, and I wanted to do software and AI, AI research specifically. So Python seemed to be a natural fit. I started teaching myself Python and, at the same time, dove into any AI textbook and paper I could get my hands on. And then as soon as I could, I started implementing some of these algorithms and models in Python. Some of the motivation for that came from things like Kaggle competitions and such. Eventually I felt comfortable enough as a programmer and AI researcher to seek a position with a few different companies, and that led me to Numenta, where I worked for a few years.
06:16 Michael Kennedy: Cool, what kind of stuff did you do there?
06:18 Alex Lavin: So Numenta is interesting. They run the full gamut from theoretical neuroscience all the way to machine learning models implemented in production code. So it was an incredible learning environment, both from a software engineering perspective and for learning about machine learning algorithms. But I've since moved over to Vicarious, and I've been here for about eight months now.
06:44 Michael Kennedy: Okay, nice, so what kind of work do you do now?
06:47 Alex Lavin: At Vicarious, we're working to build general artificial intelligence, so that's AGI. Any problem a human can do, we want our AI to do. In the past we've pointed to Rosie the Robot from The Jetsons. But I can say that we're not building exactly Rosie per se.
07:08 Michael Kennedy: More like R2-D2.
07:10 Alex Lavin: Oh sure.
07:12 Michael Kennedy: Okay, that sounds really interesting. When you talk about AI, there's usually some kind of split between specialized AI and general AI, right? A specialized AI might be Siri, or self-driving cars, or the thing that solves Go. It's incredibly good at driving the car, but there's no way it could tell you what the weather is, or anything else unrelated to what it has been taught, right?
07:41 Alex Lavin: Exactly.
07:41 Michael Kennedy: Then you've got the more general one that you're working on.
07:44 Alex Lavin: Yeah, yeah, there's been a lot of press lately about deep learning models and frameworks. And they've been incredibly successful and useful for some of these narrow AI problems. A lot of times in research and development of these models and algorithms, we'll use video game environments, specifically the Atari environments, which are kind of good test beds for the approaches. You take, for example, the Breakout or Space Invaders environment, and deep learning can blow away any human player. It's solved. But you add some minor tweaks to the game, like change the pixel values ever so slightly or just move some blocks around, things that are imperceptible to a human or that a human adapts to easily, and the same deep learning model fails. So that kind of exemplifies the narrowness of those models.
08:45 Michael Kennedy: Yeah, that's pretty interesting. So one of the challenges with neural networks and deep learning, is that basically like taking multiple neural networks and sort of treating them as a group of things that then answer questions or is there more to it than that?
09:05 Alex Lavin: In a way, let me dial back real quick.
09:08 Michael Kennedy: Yeah okay.
09:09 Alex Lavin: Artificial neural networks are a model that dates back to...
09:13 Michael Kennedy: Yeah, at least the 90s for sure.
09:15 Alex Lavin: Yeah and even earlier than that. I feel like I'm offending some people out there by saying...
09:20 Michael Kennedy: I did that in Smalltalk in 1974, what are you talking about?
09:23 Alex Lavin: I know, exactly. But nowadays, in the past five years or so, we've seen that we can stack layers and layers and layers of these neural networks and make them very deep, that's where the name deep comes from, because we have the computation power to do so. These deep neural networks are essentially function approximators, and you can feed them a ton of data. It could be videos from your self-driving car, it could be text in English and French if you want to do machine translation. And these deep function approximators are able to learn from all that data how to take your input as, for example, English, and give you the output in French.
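To make the function-approximator idea concrete, here's a minimal sketch, not tied to any particular framework, of a small deep network as nothing more than a parameterized function built from stacked layers (the sizes and weights here are arbitrary illustrations):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Toy network: 2 inputs -> two hidden layers -> 1 output.
sizes = [2, 16, 16, 1]
rng = np.random.default_rng(0)

# Randomly initialized weights and biases; training would tune these.
params = [(rng.normal(0.0, 0.5, (n_in, n_out)), np.zeros(n_out))
          for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def forward(x, params):
    """Each layer is an affine map plus a nonlinearity, stacked deep."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:  # no activation on the output layer
            x = relu(x)
    return x

# The network is just y = f(x; params); deep learning fits f to data
# (English-to-French pairs, camera frames, etc.) by adjusting params.
print(forward(np.array([[0.5, -1.0]]), params))
```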
10:04 Michael Kennedy: Right, yeah, that's really amazing. I feel like AI, machine learning, was one of those things like nuclear fusion. It was always 30 years out, no matter when you asked somebody. In the 90s it was like, oh yeah, we're going to have AI, and here we're working on some Lisp, but it doesn't really work. And then 10 years later we're still doing the same basic things. It's not really making any progress. But I feel like, kind of like you said, in the last five years, especially the last three, four years, it's just, wait a minute, this is actually a thing now. Really, today there are reasonable uses of AI in tons of ways. I mean, we have Star Trek coming to life with Microsoft Skype and PowerPoint, right, where it, in real time, translates to other languages as you speak? And just things like that, I mean, amazing.
10:56 Alex Lavin: Yeah, there are some really powerful uses already deployed in applications, but it's kind of funny, the field of research for AI, and specifically for general AI, is almost always like the mountain-climbing dilemma, where you think, oh, the peak is just over the next pass, and then you get over the next pass, and it's like, oh, it's the next pass, and then the next pass and the next pass.
11:19 Michael Kennedy: Yeah, remember that mountain you told me about? Now it's on the other side of this one, right?
11:23 Alex Lavin: Yeah, exactly. So, we do have some pretty powerful AI nowadays but we keep pushing the limits and pushing limits and defining this path to AGI as we go.
11:33 Michael Kennedy: Yeah, yeah. So one sequence, maybe, is what you talked about with the neural networks. You have the deep learning neural networks that are really good at stuff, but you make minor changes and they're kind of entirely lost, right? So on one hand we've built these things that you can train and they do a thing, but maybe they can't adjust so much. Is one of the new areas how you evolve a deep learning model, or something like this?
12:02 Alex Lavin: Yeah, a big area of deep learning research is the ability to generalize across tasks. So you would be able to take your example, AlphaGo, the deep learning model that has been defeating the Go champions across the world, and try to generalize that to maybe a slightly different game board, a larger game board for example, or even generalize it to a different game entirely, like chess or something. But here at Vicarious we're taking a bit of a different approach. We aren't using deep learning as much as other AI research shops around. We specifically are taking a more structured, model-based approach using probabilistic graphical models, and we're finding that our models and algorithms are much more attuned to things like generalization and unsupervised learning, which are open problems in the deep learning community.
13:01 Michael Kennedy: Yeah, yeah, that sounds so interesting to work on. So maybe tell us, most of us listening know what it's like to write software, create some utility, write a web app, or whatever. Working on AI research projects, how much is that the same or different from writing a web app?
13:20 Alex Lavin: Well, the workflow is pretty similar to a lot of software engineering companies out there. At a high level it's really just agile practices. We iterate and move very quickly, but the difference is we do it through experiments, experimenting on our models and algorithms, not really through product releases.
13:42 Michael Kennedy: Right, you don't just plan it out and say these are the features we want and so we're going to build them, because you don't necessarily know what the answer is, right?
13:48 Alex Lavin: Yeah, exactly. So we don't have a kanban board of tasks, picking up everything that you discussed at scrum or something. It's a lot looser than that, and researchers here, a lot like developers in agile workflows, have a lot of freedom. And we need that to come up with short-term ideas and long-term ideas, flesh them out, and experiment on them. So when we're building our code bases and our experiments, well, I like to think of developing and software engineering code in three classes: there's research, prototype, and product code. And we mostly dabble in the research and prototype code, where research code is quick and dirty, you're moving fast through experiments, hacky code is okay, test coverage is usually discouraged because it slows you down. And then prototype is a step up, where we want well-designed, well-architected data flow and interfaces, we want unit and integration test coverage, style standards, and documentation. But still, the mentality is that this is code that will someday be scrapped. So you're not married to it. It doesn't have to be beautiful, with 99% test coverage.
15:09 Michael Kennedy: Yeah, that makes a lot of sense. It seems like you just want to play with the code in the beginning. And then once you have an idea, now this looks like it's going to work, let's go try to really build this out and see what this does.
15:20 Alex Lavin: Well, almost. The workflow is a little bit different because it really starts with theoretical and mathematical foundations. So when we get some sort of research idea, whether it's defined as just a problem that we want to try to solve or some new constraints on a different experiment, we first go through, okay, what are the mathematical foundations of this? We derive things like message passing algorithms over factor graphs, and then we take that math and abstract it into software. That's how we build our experiments. Then we run simple little toy problems on those experiments to really inspect the models and algorithms, and then we get into things like building well-designed software and everything.
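For a flavor of what "message passing over factor graphs" means in code, here is a minimal sum-product sketch on a toy two-variable graph; this illustrates the general technique, not Vicarious's actual models:

```python
import numpy as np

# Toy factor graph: two binary variables A and B, unary factors (priors)
# on each, and one pairwise factor phi(A, B) connecting them.
prior_a = np.array([0.7, 0.3])
prior_b = np.array([0.5, 0.5])
phi_ab = np.array([[0.9, 0.1],
                   [0.2, 0.8]])  # phi_ab[a, b]

# Sum-product message from A through the pairwise factor to B:
#   m(b) = sum_a prior_a(a) * phi_ab(a, b)
msg_a_to_b = prior_a @ phi_ab

# Belief (marginal) at B is the product of incoming messages, normalized.
belief_b = prior_b * msg_a_to_b
belief_b /= belief_b.sum()
print(belief_b)
```

On tree-structured graphs this procedure computes exact marginals; on loopy graphs the same message updates are iterated as an approximation.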
16:08 Michael Kennedy: Sure and do you guys use Python for a lot of this?
16:11 Alex Lavin: Yeah, absolutely, Python is the main language that we use here. And it's really becoming ubiquitous in machine learning and AI. It's elegant, with a really simple syntax, so we can iterate quickly. And it's relatively easy for people to learn. A lot of AI researchers come straight out of academia after toiling with MATLAB for their Ph.D. for five years, so they need to learn how to actually program, and Python is great for that. On top of that, Python is really powerful. I love the phrase batteries included. There's so much that comes with the standard library, and then tons of PyPI packages that you can throw on top of that. Python sometimes gets some flack for being a little slow, but it's really easy to control a C++ back end and integrate with some C++ code.
17:08 Michael Kennedy: Right, so maybe you prototype it out and this part is like not quite fast enough. So you can say well let's write this in Cython or write C++ and then bring it into our Python app.
17:18 Alex Lavin: Yeah, yeah, absolutely. Of course we profile and optimize algorithms first. So if an algorithm is something like O(n²) because we have nested for loops, but it can be refactored into something sublinear, we do that. And then we do more profiling and see, okay, this is really a bottleneck, let's port it to Cython.
17:37 Michael Kennedy: Yeah, sure. So that's a really good point, that even though maybe you could make something run ten times faster in Cython, if you have an algorithm with super high complexity, it doesn't really matter, right, as you take on more and more data. It's really the algorithm that's the problem. A lot of people have pointed out that working in Python makes it easier to conceptualize and evolve the algorithm, and then, if you have to, optimize it further, right?
18:06 Alex Lavin: Oh, yeah, yeah. It's pretty straightforward to have an idea and then say, okay, I want to implement this in some nested loops, but, God, this is awful and ugly, and I'm just going to come back and refactor this. And then you come back and fix it, and it's orders of magnitude faster.
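As a hypothetical example of the kind of complexity refactor Alex describes, before reaching for Cython at all:

```python
# O(n^2): for every element of a, `x in b` scans the whole list b.
def common_slow(a, b):
    return [x for x in a if x in b]

# Roughly O(n): build a set once; set membership is O(1) on average.
def common_fast(a, b):
    b_set = set(b)
    return [x for x in a if x in b_set]
```

Only after the algorithmic fix, and another round of profiling, would the remaining hot spot be worth porting to Cython or C++.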
18:24 Michael Kennedy: Nice. So maybe tell us a little bit about some of the tools that you use. Python is pretty obvious, but things like TensorFlow, Keras, NumPy, SciPy, those types of things.
18:36 Alex Lavin: Yeah, yeah, we use TensorFlow for some of our projects. It's really becoming very popular in the AI and specifically the deep learning community, where distributed computing over tensor graphs is really valuable.
18:50 Michael Kennedy: Yeah, maybe for people who are not familiar with TensorFlow, just give them a quick elevator pitch. What is it?
18:54 Alex Lavin: Oh, sure. So TensorFlow is offered in more than Python at this point, but the main API for it is Python. And it abstracts away a lot of the C and C++ computations for parallelizing tensors, which are basically just symbols that are distributed over these computation graphs. And they offer some really cool tools for visualizing these computation graphs as you're doing things like training your model, distributing it across clusters, or running inference on those models too.
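As a concrete taste, here's a minimal sketch using TensorFlow's 1.x graph-style API, which was current at the time of this episode; building the graph only declares symbolic tensors, and a session actually executes it:

```python
import tensorflow as tf

# Declare the computation graph: nothing runs yet.
x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
W = tf.Variable(tf.random_normal([3, 1]), name="W")
y = tf.matmul(x, W, name="y")

# A session executes the graph, and TensorFlow can place pieces of it
# on different devices or machines.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))
```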
19:30 Michael Kennedy: Nice, okay, yeah, perfect, and what are the other ones that you use?
19:33 Alex Lavin: Well, just like any software company, we try to leverage external tools. We're not trying to reinvent the wheel here. So a lot of our vision utility functions, for example, are handled with toolkits like OpenCV, which is C++ with Python bindings, so a lot of that is pretty fast. And SciPy, like you mentioned, for operating on NumPy arrays, things like Gaussian filters and convolutions.
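For instance, a couple of the SciPy utilities Alex mentions, applied to a stand-in image (the array here is random data, purely for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import convolve2d

image = np.random.rand(64, 64)  # stand-in for a grayscale frame

# Gaussian smoothing of a NumPy array.
smoothed = gaussian_filter(image, sigma=2.0)

# A 3x3 Laplacian kernel convolved over the image to highlight edges.
kernel = np.array([[0,  1, 0],
                   [1, -4, 1],
                   [0,  1, 0]])
edges = convolve2d(smoothed, kernel, mode="same")
```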
19:59 Michael Kennedy: It sounds like there's a lot of math involved and some pretty interesting programming. Do you have cognitive science people doing almost brain-modeling work? What other disciplines are working together to solve the problem there?
20:14 Alex Lavin: Yeah, so there's definitely a lot of math involved. And actually, when people come to me for advice and say, "Hey, I want to get into AI research," one piece of advice I give them is you need to have your first language be math, and then whatever programming language comes after.
20:30 Michael Kennedy: Sure, give us a sense of how much math. If I was, say, super good at differential calculus but that's all I knew, is that enough? Do I need to know statistics and probability? Do I need to know linear algebra, graph theory? When you say math, is it a Ph.D. or is it a minor?
20:48 Alex Lavin: Well, we do have a lot of Ph.D.s in things that are just obscenely complicated and impressive, ranging from computational neuroscience to theoretical math. But I would say graph theory, like you mentioned, specifically modeling and running computations over probabilistic graphical models, is very important. And the foundations of that are in probability and linear algebra.
21:17 Michael Kennedy: Okay so not super out of touch math but you definitely need to be able to speak those things.
21:23 Alex Lavin: Oh yeah, yeah.
21:24 Michael Kennedy: Like a pretty good, not first language actually, but second language.
21:30 Alex Lavin: Sure.
21:31 Michael Kennedy: Okay, sure. This portion of Talk Python To Me is brought to you by Linode. Are you looking for bulletproof hosting that is fast, simple, and incredibly affordable? Look past that bookstore and check out Linode at talkpython.fm/linode, that's LINODE. Plans start at just $5 a month for a dedicated server with a gig of RAM. They have 10 datacenters across the globe, so no matter where you are, there's a datacenter near you. Whether you want to run your Python web app, host a private Git server, or even a file server, you'll get native SSDs on all the machines, a 40 gigabit network, 24/7 friendly support even on holidays, and a seven-day money-back guarantee. Want a dedicated server for free for the next four months? Use the coupon code Python17 at talkpython.fm/linode. It seems like a really fun thing to do, right? Like every day you come in and people have new ideas and you try them, and you're kind of exploring new space, right?
22:29 Alex Lavin: Yeah, absolutely, there's not really a defined path to AGI, so we try to build it from first principles: what is an artificial agent in a world? It's sensory perception, it's goal-directed behavior, it's building a model from sensorimotor interactions. And from that we try to derive the fundamental properties and constraints of intelligence. And that leads to our research ideas and eventually experiments.
23:02 Michael Kennedy: Yeah, so, you talk about OpenCV, computer vision, and sensory stuff, how much of what you're working on do you hope exists in the real world? And how much are you thinking if we could make a digital thing that could just live, say, on devices, on the internet and only interacts digitally, right, not actually see the world but interact with people, say, with voice or text or things like that?
23:28 Alex Lavin: Interesting. So, at Vicarious, we're specifically building AI towards robotics. Any robot, any task, we build a brain for it. That's our initiative here. So a lot of it is in the physical world, and me specifically, I'm focused on our main vision team.
23:48 Michael Kennedy: Nice, and how much of this do you see as a consumer sort of thing versus, say, factories? Are you focused on doing this for people or for companies? I guess that's the question.
24:00 Alex Lavin: Yeah, so there's not so much I can really share about the specific robots and applications that we're building.
24:06 Michael Kennedy: Don't share your secrets, I don't want those.
24:10 Alex Lavin: But the idea is we want to build General AI. So all of those things.
24:15 Michael Kennedy: Yeah, that sounds really cool. What are some of the other companies doing this type of work? I know maybe what companies are using it, but AI research is a pretty small group of people, I would expect.
24:32 Alex Lavin: You're right, there's a lot of companies out there using AI, practicing AI, but for doing the nitty-gritty fundamental research, the shops are namely OpenAI; DeepMind, which is out in the UK and was acquired by Google some years back; and us, Vicarious.
24:54 Michael Kennedy: Okay, that's a pretty small group. There's also Google Brain. Do you know what the relationship between Google Brain and DeepMind is? They're not even on the same continent, right?
25:04 Alex Lavin: No, and I've heard some conflicting information from people on the Brain team and people on the DeepMind team. But as far as I can tell, DeepMind is kind of a separate entity. They don't interact too much on projects. DeepMind is much more building towards general AI, and Google Brain does AI research, but a lot of it is directed more towards the applications for Google.
25:30 Michael Kennedy: Sure, makes sense. Do you consider companies like Uber or Alphabet's Waymo, the self-driving car type things? Would you put them into this bunch? Or do you think of them more as using AI?
25:47 Alex Lavin: I would say using AI. So my understanding of a lot of these companies, specifically the self-driving car companies, is they're looking at what sort of deep learning models they can use for their vision problems, their lidar sensor problems, and their motion planning problems. And they'll take existing models that are out there from toolkits such as TensorFlow or PyTorch or even Keras, and then they'll tweak those for their specific problems. So that's more hyperparameter-tuning type work, not so much the real fundamental research.
26:26 Michael Kennedy: Right, okay, and a lot of these guys are open sourcing their tools like TensorFlow and what-not. And I think there was even like a pact about this, like Google and Microsoft, some other people, didn't they team up on sharing their research and their work? It's been like a year since I've heard about this. So my details are flaky.
26:48 Alex Lavin: Sure, well, there's been some collaboration over sharing these research environments. So, like I mentioned, video games are often a test bed that we use a lot. Microsoft has developed Minecraft to be a test bed for AI development, and OpenAI has released their Gym for different AI agents, and then also Universe, which is a big collection of all of these different games, and I believe there was some collaboration or teamwork between the two of them.
27:21 Michael Kennedy: Okay, cool, yeah, Universe, I heard about that a while ago. That sounds pretty wild. And as an AI research company, do you guys find stuff like that useful? Like, hey, I could take this AI and drop it into Counter-Strike, or I could drop it into Pitfall, and all of these different places where it can interact? Is this helpful?
27:42 Alex Lavin: Yeah, yeah, definitely helpful. We haven't explored Universe so much here at Vicarious, but we also build some internal environments that are very similar. Mainly we use internal tools because then we can define any API that might give us different information than those APIs expose, but it's definitely helpful to have a lot of test beds and use cases out there.
28:08 Michael Kennedy: Yeah, it seems like if you were, say, a grad student writing a master's thesis, being able to just plug into different video games is a super big boost. But if you're a full-on, many-person AI research company, you need something where you can tweak the environments exactly, like, I want to understand how the robot interacts with the squishiness of an object, right? Which you probably can't get in an Unreal game or something, right?
28:35 Alex Lavin: Yeah, exactly, exactly, and a lot of this open sourcing is specifically targeting those grad students in universities. I see open sourcing these toolkits, for the same reason all of these companies are publishing a lot of their papers, as a recruiting tool. Because a lot of AI researchers are very academic at heart, we want to share our information and help each other. So the more that these big companies are sharing and being open about what they're building, the more they can attract top talent.
29:11 Michael Kennedy: Yeah, so companies like OpenAI and Google might have a better chance of getting somebody who's mostly interested in the research side of things than, say, Apple?
29:19 Alex Lavin: Yeah, exactly, and that's been a problem for Apple as of late.
29:23 Michael Kennedy: Yeah, I guess there's a couple of questions around universities that you kind of touched on. One is, do you know if there are machine learning or AI bachelor's degrees these days? Or is there just computer science, and you can focus on that?
29:37 Alex Lavin: I don't know if you can major exactly in AI or machine learning, but there are definitely a lot of undergraduate courses that you can take. And from what I've heard, these courses are becoming ridiculously popular. I try to mentor a couple of students from Stanford and CMU, and I always tell them, "Oh, there's a probabilistic graphical models course, you have to take that."
30:06 Michael Kennedy: Yeah, that's cool. It seems like there'd be a lot more money in working at Google on TensorFlow, or with you guys, or whatever, rather than, say, being a professor at a university. Is there a drain? As professors sort of get their research going, do they get sucked out of academia?
30:26 Alex Lavin: Yes, it's a typical example.
30:29 Michael Kennedy: I know that's a problem for data science. I just don't know about machine learning. It seems like there'd be a parallel, right?
30:34 Alex Lavin: There absolutely is. One main example of this is Uber going into Carnegie Mellon and grabbing practically all of their computer vision researchers a few years back. But I think you largely see more of a collaboration of sorts. So sometimes top professors from universities will have appointments with companies like OpenAI or Google. They'll still work with their universities and in their research labs on some projects, but at the same time work with the industry partner in a way. And that also works as an incredible recruiting tool for these companies.
31:16 Michael Kennedy: Yeah, and it also seems like it's probably the best of both worlds. You still get to be a professor, you still get to do your research and teach people, but you also get some real-world experiments and experience. You're not just in your lab poking around, right? You get to see what people are really trying to do.
31:33 Alex Lavin: And there's a big difference between trying to fill out grants and get money and resources through academia, versus being with a private company where, if you need infinite AWS clusters, you can get them.
31:54 Michael Kennedy: Yeah, like, I need a hundred GPUs in a cluster. Can we do that? Yes sure, push this button.
32:00 Alex Lavin: Yeah, yeah, it is kind of funny. Someone did a blog post on the huge spike in AWS GPU demand the week leading up to the paper deadline for NIPS, which was like a few weeks ago. It's impressive.
32:17 Michael Kennedy: Yeah, we've got to finish our research, you got to get this going.
32:20 Alex Lavin: Yeah, yeah.
32:21 Michael Kennedy: Yeah, so you guys probably do a lot of cluster computations, stuff like that? GPUs and other specialized chips or anything?
32:30 Alex Lavin: For some of the models that can take advantage of the parallel compute on GPUs, like deep learning models, yeah, absolutely. But a lot of our larger models still run on AWS, just not necessarily on GPUs.
32:44 Michael Kennedy: Just straight EC2 clusters, something like that?
32:46 Alex Lavin: Yeah, yeah.
32:47 Michael Kennedy: Yeah, alright, very cool. So, we talked about the math requirements, linear algebra, calculus, probability, things like that. But if people, they want to try this out, they want to get started, maybe they know Python but nothing else, how would they get started?
33:02 Alex Lavin: I would recommend getting inspired by a model or problem, maybe a NIPS paper or something from ICML or even a blog post that shows some pretty cool results on natural language processing and dive in that way.
33:18 Michael Kennedy: Yeah, do you have some examples of what a problem might look like? Will they have those at Kaggle?
33:23 Alex Lavin: Yeah, absolutely, that's exactly how I got inspired. Go onto Kaggle and there's a whole slate of machine learning problems that you can dive into, and oftentimes those communities will discuss their approaches and share some of their methods. So that is really good, almost a code-first way to dive in. But like I was saying earlier, to build up the math chops to be able to do AI research, I recommend trying to reproduce paper results. So you pick out a research paper and you go through it, you try to understand the models and the algorithms, the assumptions, the experiments, and then you try to implement that from scratch. It's not easy. You're going to fail a lot, but it's the best way to learn. And fortunately, if you're doing this in Python, a lot of the Python communities are very helpful.
34:18 Michael Kennedy: Yeah, so what are some of the tools, maybe the ones we talked about, TensorFlow, Keras, things like that?
34:23 Alex Lavin: TensorFlow and Keras, they have some great pre-packaged models that you can take out of the box, even some that point specifically to the papers that they're from, so you can get up and running pretty quickly.
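For example, something along these lines pulls down a pre-trained VGG16, the model from Simonyan and Zisserman's paper, and runs it on an image (the filename here is hypothetical):

```python
import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from keras.preprocessing import image

model = VGG16(weights="imagenet")  # downloads pre-trained weights on first use

img = image.load_img("cat.jpg", target_size=(224, 224))  # your own image file
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
print(decode_predictions(model.predict(x), top=3))
```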
34:35 Michael Kennedy: Yeah, maybe a really simple way to get started might be to take somebody who's written a paper or done some project, but it's only in MATLAB, and say, let me try to reproduce this in Python, even starting from their code, right?
34:47 Alex Lavin: Yeah, yeah, that's a good idea.
34:48 Michael Kennedy: You can kind of take it and say, let me not actually try to solve the problem, but just take it technically through the steps of creating it in Python and see how that goes. Then you've got that experience, and then you can go and try to actually solve it independently.
35:02 Alex Lavin: Yeah, ultimately it comes down to trying to build a mental model of how the math is connected to the software implementation. And if building the code and then trying to figure out how the algorithms and equations are defined there works best for you, go for it. If it's more of a bottom-up approach, do that.
35:23 Michael Kennedy: Yeah, yeah, cool. So you talked about your dev workflow being sort of loose, where you start from, let's just play around and not worry too much about patterns and factoring your code and doing testing and stuff. What are some of the pain points that you might end up with from that workflow?
35:42 Alex Lavin: Because we move fast and it's a lot of research and prototype code, we can build up some technical debt. And this largely comes because the researchers here want to build the next thing, not put in another few days to refactor their experiment into beautiful abstractions with 99% test coverage.
36:03 Michael Kennedy: And they're researchers, right? They're more mathematicians and scientists, not pro developers who really care about this design pattern or that, right?
36:11 Alex Lavin: We have a mix, fortunately. So there are some who are very academic researchers and do some kind of ugly things in the code, and fortunately we have our code review process, which can help teach them and be a tool for building better software engineers. But then we also have a lot of people here who are very strong engineers and have built up their software chops at companies like Google and Facebook.
36:42 Michael Kennedy: Yeah, of course, that's pretty serious development right there. Yeah, so do you use any particular tools for refactoring code or for profiling or things like that?
36:56 Alex Lavin: Yeah, so profiling code is really useful around here because, I mean, the biggest mistake a programmer can make is trying to optimize code without actually knowing where the pain points are.
37:08 Michael Kennedy: Yeah, I find that even harder in this type of computational stuff where it's sometimes not intuitive.
37:14 Alex Lavin: Yeah, yeah, and Python comes pre-packaged with some useful profilers. I actually prefer to use the kernprof line profiler, because some of the Python built-in profilers will really just show you, this function is your bottleneck. Having it line by line, I've found to be much more helpful.
37:34 Michael Kennedy: Yeah sure, so here's a hundred line function, it's slow, great, that helps me a lot.
37:40 Alex Lavin: Yeah, yeah, exactly.
37:41 Michael Kennedy: And I guess also, if you had your code factored into little methods and things like that, breaking across even packages, maybe it's easier to say, well, I clearly see the function that is slow. But if you're just playing around and experimenting, maybe you don't have it so fine-grained, so it's even worse than normal, maybe.
38:03 Alex Lavin: Well, we try to keep our code well-designed at both a conceptual level and a practical level, where conceptual means functions and objects are abstracted to follow the math, and practical is more about running end-to-end experiments efficiently. Conceptual is important because we want well-designed code so we can extend it naturally with future research, and that might be where things like computational bottlenecks are really obvious. But when you try to run these experiments end to end, that's when you see, oh, well, we need to go back and profile and optimize.
38:40 Michael Kennedy: Yeah, yeah, that makes sense. Yeah, so, that kind of touches on this tension between research and software priorities. How do you keep those in balance? What are the trade-offs there?
38:52 Alex Lavin: Well, it's interesting, because the research rewards, experiment results and maybe sharing some cool figures and visualizations with the team, are more near-term, while the software rewards, the benefits of a clean, well-documented code base, are farther off. So it makes it difficult to prioritize good software, like refactoring into cleaner abstractions. I've found it very helpful to communicate, over-communicate actually, the value of high-quality software: a maintainable code base makes onboarding new engineers much smoother, and they can get up and running and contributing much faster. And then I also try to reward examples of good code. Recently there was a good use of dunder slots for fast attribute look-up in a PR I was reviewing the other day.
39:51 Michael Kennedy: Yeah, so maybe tell people about dunder slots. If used in the right place, it can dramatically change memory and performance. It's interesting, and we haven't talked about it much on the show. So, what's dunder slots?
40:04 Alex Lavin: Oh, sure. So my understanding of it is, basically, when you have a class in Python and you have attributes for that class, it automatically will create this dunder dict object. And if you don't need that representation for every attribute of your class, it can be a lot of overhead to do look-ups in that dunder dict. So sometimes you can use dunder slots, which avoids the creation of dunder dict, and you define your attributes there.
40:37 Michael Kennedy: That sounds right. So you can put in a class, just say dunder slots equals, and you give it a list of basically the field names, the variable names. And then those are the only variables it can have. In normal Python classes you can dynamically add another field, and that uses the dunder dict to manage it, but every instance of the class has a separate dictionary with separate copies of the keys and things like that. And so slots basically means there's none of that creation, allocation, assignment, all that kind of stuff. Yeah, so it's really amazing. Alright, so somebody did this and it was like, oh wow, look at this thing.
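A minimal sketch of the difference:

```python
import sys

class PointDict:
    def __init__(self, x, y):
        self.x, self.y = x, y      # attributes live in a per-instance __dict__

class PointSlots:
    __slots__ = ("x", "y")         # fixed layout, no per-instance __dict__
    def __init__(self, x, y):
        self.x, self.y = x, y

p = PointSlots(1.0, 2.0)
# p.z = 3.0  # AttributeError: slotted classes reject dynamic attributes

# The per-instance dict is the overhead __slots__ avoids.
print(sys.getsizeof(PointDict(1.0, 2.0).__dict__))
```

With millions of small objects, skipping that dictionary adds up to real memory and attribute-lookup savings.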
41:17 Alex Lavin: Yeah, so it's just like a quick shout-out on our Slack channel, just saying hey this is awesome.
41:24 Michael Kennedy: That's really cool. This portion of Talk Python To Me is brought to you by us. As many of you know, I have a growing set of courses to help you go from Python beginner to Python expert, and there are many more courses in the works. So please consider Talk Python Training for you and your team's training needs. If you're just getting started, I've built a course to teach you Python the way professional developers learn, by building applications. Check out my Python Jumpstart by Building 10 Apps at talkpython.fm/course. Are you looking to start adding services to your app? Try my brand new Consuming HTTP Services in Python. You'll learn to work with RESTful HTTP services as well as SOAP, JSON, and XML data formats. Do you want to launch an online business? Well, Matt Makai and I built an entrepreneur's playbook with Python for Entrepreneurs. This 16-hour course will teach you everything you need to launch your web-based business with Python. And finally, there's a couple of new course announcements coming really soon, so if you don't already have an account, be sure to create one at training.talkpython.fm to get notified. And for all of you who have bought my courses, thank you so much. It really, really helps support the show. So I looked at the Vicarious website to sort of check out what you guys are up to, and I looked at the team page, and there are a lot of people who work there. Way more than I expected, actually. How many people do you work with?
42:43 Alex Lavin: So we're at about 50 now, and when I started eight months ago I was number 30 or so. We're growing a lot, and we're looking to keep adding more engineers and researchers. We're also expanding to a big new office building and everything out here in San Francisco. So there's a lot of us here. And the really interesting thing is that we come from very diverse technical fields, so we have some people who are theoretical neuroscientists, some people who specialize in robotic grasping and manipulation, and everything in between.
43:26 Michael Kennedy: Sounds like a pretty cool space to be in.
43:27 Alex Lavin: Yeah, yeah, it's fun.
43:29 Michael Kennedy: So you guys are working on this general artificial intelligence, which is maybe the most controversial kind but also has the greatest possibility for helping society out, right? It's one thing to make a car that can drive itself; it's another to make a robot that can do all these different jobs, right, and learn and so on. So, Elon Musk came out with his statement that we're living in a simulation, and he's scared of AI, and things like that. Do you guys talk about this kind of stuff? Do you put any credence in it? I'm not so sure about the simulation, I'm pretty skeptical of that. But the thing becoming too smart, is this something that you guys actually are seriously concerned about?
44:15 Alex Lavin: It's something that we seriously consider. I wouldn't necessarily say concerned. And it's funny, whenever I meet random people and say what I do, I end up having this conversation a lot.
44:30 Michael Kennedy: Here we go again.
44:30 Alex Lavin: Yeah.
44:31 Michael Kennedy: No, we're not living in a simulation.
44:36 Alex Lavin: Well, AI will do amazing things for society like transforming healthcare from reactive medicine to something that's predictive. And essentially eliminating motor vehicle accidents. I dream of the day I can ride my bike around San Francisco and not worry about someone slamming into me.
44:56 Michael Kennedy: Yeah, yeah, absolutely.
44:58 Alex Lavin: But on the spectrum of fear and optimism, where on the fear end are guys like Nick Bostrom, who wrote that Superintelligence book, and Elon Musk, and on the optimism end are those like the futurist Ray Kurzweil, I would say most AI researchers, myself included, fall somewhere in the middle, leaning towards optimism. So I would say, as with previous technological revolutions, there's definitely concern over the big societal changes, and rightfully so. Jobs will be lost to automation and industries transformed. But it's much easier to look at the existing jobs as examples, painting the picture of, oh wow, AI is going to replace all these, we're going to lose all these jobs, than it is to imagine a future where there are new industries yet to be created and all the jobs they'll define.
45:53 Michael Kennedy: Every change in human history, it's not like it wiped people out and they just went and sat in the corner, right? Like the industrial revolution, the move away from farming, the move to knowledge workers, all of these things have happened.
46:07 Alex Lavin: Exactly, exactly, we will go on. But I would say the real concern is the speed with which this transformation will take place. It's significantly faster than the agricultural and industrial revolutions. Fortunately, companies are taking this into real consideration. Elon Musk's OpenAI is one; here at Vicarious, we have a group dedicated to AI ethics.
46:31 Michael Kennedy: Oh wow, okay, that's pretty interesting. Yeah, I definitely think the change will go quicker. The changes from the industrial revolution, things like that, spanned people's working careers. It's not like next year the thing that you did was completely gone when it was totally fine the year before. So I think you're right, there's probably going to be some kind of step where there's a series of shake-ups. One example that comes to mind is in the US, the number one job title, job role, for men is driving of some sort, like trucks and things. Self-driving cars, self-driving trucks and semis, and deliveries could put a serious dent in those guys' jobs. And if that happened too quickly, I could see some serious social unrest, because if you have a large, young group of men who are not employed and have no positive outlook, that could really turn bad, right?
47:35 Alex Lavin: Yeah, it could. And you see some earlier examples of this with companies like Uber and Lyft who aren't necessarily replacing drivers but they're transforming an industry in a way, where taxi drivers are the source of this unrest because now they've opened up the workforce to this kind of car-sharing or ride-sharing.
47:59 Michael Kennedy: Yeah, I would say it's similar to what the internet did for knowledge work. It used to be you would go to your office and you would compete with the people in your city for those jobs, and then all of a sudden the internet came along, and open-sourcing and globalization and everything, and you're suddenly competing with everybody, not just the people in your town. And it's a little like that. How many people are going to actually go and become taxi drivers versus, hey, I have a car, I have a smartphone, I've got two hours and need some money, just go do it, right? It really opened up the space of potential people in the marketplace. I agree that we'll probably get through it pretty smoothly, and there's very likely going to be something amazing on the other side. Maybe one more question on this theme for you, I know you get these questions a lot. What do you think is the biggest change that AI is going to bring that people will obviously notice, not something super subtle that makes the change and they don't necessarily see? What are people going to look at and go, wow, here's what AI has brought us, in 10 years or 20 years?
49:04 Alex Lavin: Well, it's kind of like the mountain-climbing dilemma that we were talking about earlier, where new AI technologies roll out but it's very incremental in a way, and you don't necessarily notice that Facebook's algorithm for identifying whose faces are whose has improved a lot but it did and that was some really impressive AI research. But something that I think, I hope people will look at and just be completely floored by is how AI is going to transform healthcare and medicine.
49:40 Michael Kennedy: Yeah, that's one of the things I definitely think as well, diagnosis basically.
49:44 Alex Lavin: Yeah, yeah, like I had mentioned earlier, healthcare has always been very reactive. Someone is sick and has some sort of symptoms, and we guess at what is going on. And that is all subject to the quality of healthcare you can see. Fortunately, we live in a country where we have a lot of good doctors, but in third-world countries that's absolutely not the case. AI, I see as less diagnostic, more preventative in a way, so you know that you're getting sick months before you actually show any symptoms.
50:20 Michael Kennedy: And the treatment is minor, instead of catastrophic.
50:22 Alex Lavin: Yeah, yeah.
50:23 Michael Kennedy: Interesting. So one of the rumors, there's not an iPhone 8 yet, but one of the rumors around the new iPhone coming out is that Apple's working on a dedicated AI chip that goes in the iPhone. What do you think of these types of developments?
50:37 Alex Lavin: This is really interesting, because we've been hearing for years about this neuromorphic chip development, which was basically companies like Intel and ARM and whoever doing research towards, okay, what is the next platform for computing? And recently we've seen or heard about Apple's new chip, and Google also has their TPU, the Tensor Processing Unit, and these are chips dedicated to running their specific algorithms on devices, I guess on iPhone and iPad. I don't know too much about the hardware of these devices, but they would be optimized for computing inference on deep learning models and such.
51:23 Michael Kennedy: Yeah, I think it really opens up the possibility for something interesting but I have no idea what it's going to be. But we'll find out in a few years I guess.
51:31 Alex Lavin: Yeah, the exciting thing is that with a lot of the work in AI, no one knows what it's going to be.
51:37 Michael Kennedy: Yeah, it keeps it fun. Alright, maybe we should leave it there for the AI topic. I really appreciate you sharing what you guys are up to, and your thoughts on AI research.
51:46 Alex Lavin: Yeah, I'm glad you guys are interested.
51:48 Michael Kennedy: Yeah, it's cool. Alright, so now, it's time for the final two questions. So when you write some of your deep learning Pythonic code, what editor do you open up?
51:58 Alex Lavin: Oh, I definitely go for Sublime. There are some add-ons I like a lot, like GitGutter and the flake8 linter, and then for C++ I'll use CLion.
52:10 Michael Kennedy: Yeah, okay, cool, yeah, definitely, the plug-ins for Sublime are cool. I played a little bit with CLion, but I just don't have enough C++ code to write. It looks neat, but I haven't done anything with it. And a notable PyPI package?
52:22 Alex Lavin: Oh, there's so many. The one I mentioned earlier, the kernprof line profiler, has saved my neck many times. And then, I like working from the command line a lot. A lot of people in research will use Jupyter notebooks for the interactive visualizations, but working from the command line, I like ptpython.
52:44 Michael Kennedy: Oh yeah, I love ptpython, that's great. I use that as well.
52:47 Alex Lavin: The tab complete is just so clutch. But then also argparse I love a lot, because I can just throw an experiment in a script and then have some parameters that I define at the command line.
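The pattern Alex describes is roughly this, with made-up parameter names:

```python
import argparse

# Expose experiment parameters as command-line flags so each run is
# reproducible straight from the shell.
parser = argparse.ArgumentParser(description="Toy experiment runner")
parser.add_argument("--learning-rate", type=float, default=0.01)
parser.add_argument("--iterations", type=int, default=1000)
parser.add_argument("--seed", type=int, default=42)
args = parser.parse_args()

print(f"Running with lr={args.learning_rate}, "
      f"iters={args.iterations}, seed={args.seed}")
```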
52:58 Michael Kennedy: Yeah, alright, awesome, those are three good recommendations. So, final call to action?
53:02 Alex Lavin: Well, if you're trying to get into AI research, like we were discussing earlier, you can definitely check out some of these toolkits that have pre-packaged models, like Keras and TensorFlow and PyTorch. But you should try to implement, for example, a convolutional neural net in NumPy code. That's how you really learn what's going on.
53:20 Michael Kennedy: Alright, so start from the ground up, figure it out, and then use the tools.
53:24 Alex Lavin: Yeah, yeah.
53:25 Michael Kennedy: Alright, excellent, so, thanks so much for being in the show, and sharing your story with everyone, appreciate it.
53:30 Alex Lavin: Yeah, thank you for having me.
53:31 Michael Kennedy: You bet, bye. This has been another episode of Talk Python To Me. Today's guest has been Alex Lavin, and this episode has been brought to you by Linode and Talk Python Training. Linode is bulletproof hosting for whatever you're building with Python. Get your four months free at talkpython.fm/linode. Just use the code Python17. Are you or a colleague trying to learn Python? Have you tried books and videos that just left you bored by covering topics point by point? Well, check out my online course Python Jumpstart by Building 10 Apps at talkpython.fm/course to experience a more engaging way to learn Python. And if you're looking for something a little more advanced, try my Write Pythonic Code course at talkpython.fm/pythonic. Be sure to subscribe to the show. Open your favorite podcatcher and search for Python. We should be right at the top. You can also find the iTunes feed at /itunes, the Google Play feed at /play, and the direct RSS feed at /rss on talkpython.fm. Our theme music is Developers, Developers, Developers by Cory Smith, who goes by Smixx. Cory just recently started selling his tracks on iTunes, so I recommend you check it out at talkpython.fm/music. You can browse his tracks for sale on iTunes and listen to the full-length version of the theme song. This is your host, Michael Kennedy. Thanks so much for listening. I really appreciate it. Smixx, let's get out of here.