#91: Top 10 Data Science Stories of 2016 Transcript
00:00 It's been an amazing year for Python and data science, and it's time to look back at the major headlines and take stock of what we've done as a community. I've teamed up with the Partially Derivative podcast, and we're running down the top 10 data science stories of 2016 in this joint episode. This is Talk Python To Me, episode 91, recorded November 18, 2016.
00:27 I'm a developer in many senses of the word, because I make these applications, but I also use these words to make this music. I constructed it just like when I'm coding another software design. In both cases, it's about design patterns. Anyone can get the job done; it's the execution that matters. I have many interests.
00:45 Welcome to Talk Python To Me, a weekly podcast on Python: the language, the libraries, the ecosystem, and the personalities. This is your host, Michael Kennedy. Follow me on Twitter, where I'm @mkennedy; keep up with the show and listen to past episodes at talkpython.fm, and follow the show on Twitter via @talkpython. This episode has been sponsored by Rollbar and Continuum Analytics. I want to say a special thank you to the folks at Continuum Analytics, you know, the Anaconda distribution people, for joining Talk Python as a sponsor. Please thank them both for supporting the show by checking out what they have to offer during their segments. Jonathan, welcome back to Talk Python. Hey,
01:23 thanks so much for having me. I'm really excited to be here.
01:25 I'm so excited to have you back, because every time we do a show together, I have a great time, and people seem to love it. And I know, with what we have on deck today, people are gonna love it. The show is gonna be really fun. Yeah, I
01:36 think so too. I'm so glad we did this last year, it was fun, and I'm glad we're doing it again. It's just kind of cool to have an opportunity to look back over the past 12 months. There's just so much news and so much data stuff that comes out over 12 months; it's an industry that's moving so fast. Just to have a chance to reflect a little bit is kind of cool. I remembered some things from the past year that I'd forgotten about.
01:56 Yeah, me too. Like, for example, Tay. I only remembered Tay because we brought it up, so we'll talk about Tay the bot later. But yeah, what have you been up to the last year? You're still doing Partially Derivative? You've got your data science business going?
02:13 Yeah, yeah, absolutely. So Partially Derivative, our podcast about data science, at least ostensibly about data science, kind of about drinking, largely about screwing around, is still there. So everybody who's interested in a little data nerdiness can go check that out. And then the business has been going well. We're doing more projects, which is cool, kind of moving slowly away from the startup mentality, which I think has been healthier for everybody, and we've been doing some really cool research projects, especially some natural language stuff. So yeah, it's been a really good year. How about you, man? How's your year in review?
02:45 My year in review is amazing. You know, I went independent in February. I've been running this podcast and my courses as my primary business, and it has just been like a dream come true. It's amazing. So it's been a fabulous year.
03:00 Yeah, I'm not surprised. Your Python coursework is really awesome. I don't know how much you plug your own stuff on your show, so I'll just do it for you, for all your listeners: if you're interested in learning Python and you haven't taken the courses yet, you should go do it right now. It's the best way to learn.
03:14 Thank you so much. Yeah, it's really fun to do them. Alright. So I think, in our tradition, because it's happened one time, what we do at the end of the year is look through all of the interesting news stories that have to do with big data, with data science, with machine learning, with Python, and do our take on them. So let's start with something that's been in the news a lot this year: the White House.
03:43 Yeah, so very interestingly, you know, we're maybe 18 months, a couple years into the tenure of the US's first chief data scientist. So DJ Patil came into the White House; we've had a CTO, a chief technology officer, in the White House for a little bit longer. And that group of folks has been really out in front on the way that we think about data and society, which has been a fascinating conversation. There have been little trickles of information about what it means to do machine learning in an ethical way, how we avoid algorithmic bias in our machine learning models. They did this report about how, as a society, we should think about the impacts of artificial intelligence, of machine learning, and of big data, to make sure that we're not taking some of the bias that's inherent in our society, and therefore inherent in the data and inherent in our models, and just perpetuating it over time through technology. It was really a cool position for an administration to take. You know, sometimes government is a little bit behind the technology industry, but in this case, I felt like they were really out in front, kind of driving the conversation. So it was a cool story.
04:51 Yeah, it is a very cool story. Of course, all the stories will be linked in the show notes; you can probably just flip over in your podcast player and click them. But what I find really interesting about this is that we, as technologists, see technological progression almost always as rainbows and unicorns, right? Every new device that comes along that connects us or enables something is uniformly good, right? But that's not always the case, and actually, we'll get into some interesting twists later in some specific areas where this can go wrong. But basically, they said: look, there are so many opportunities for data science and technology to do good for people. But at the same time, these data science algorithms coming out of machine learning could have a bias, and not necessarily even a conscious bias, right?
05:44 Yeah, that's actually one of the most interesting things about it, because I think, still, statistically speaking, and maybe even trending in this direction, the technology community and the data science community is still largely male and largely white. And so the interesting takeaway, I think, from a lot of these discussions about the way that bias is kind of infecting our technology, or why it may not necessarily be this steady march of progress the way that we view it, is that people often don't see, or have a difficult time understanding, the perspectives of people who aren't like them, which is kind of an obvious statement. But when we're encoding our worldview, effectively, into the technologies that we're developing, then we may not see the consequences of that technology. We're not intentionally encouraging racism or intentionally encoding that institutional bias, but it's inevitable that that's going to be a byproduct of a community that's still relatively homogenous. And so I think it's just good that it's something that we're discussing. I think the only way to get past that is to have more awareness of it, and then ideally more diversity in the technology industry, but that's sort of a separate and longer conversation. So again, it's cool that such a high profile group of people who are leaders in the technology community took it upon themselves to initiate this conversation.
07:03 Yeah, absolutely. So there's a report they released about this, and it's not all negative; it seems pretty balanced. Like, look, there are all these great things that we're going to be able to do, but there are also these safety checks we need to make sure are in the system. And they also put in a little note encouraging you to follow along over the spring and summer, when they hosted a series of public workshops on artificial intelligence and machine learning. Like, when did you think the White House would host workshops on artificial intelligence and machine learning?
07:31 Yeah, it really is a new world. It's pretty exciting. And I agree, that's good to point out. I feel like I've been framing this as if it were finger-wagging or an admonishment, and it's really not. There's so much potential for these amazing technologies; let's just make sure we're using them in a way that includes the entire society, and not just a single viewpoint. Yeah, absolutely.
07:53 Absolutely. All right. The next one up is this research paper, well, it's hard to call it paper, digital paper, this research article, called "Social bots distort the 2016 US Presidential election online discussion." And that's a tough title to say because this is an academic paper. It's by Alessandro Bessi and Emilio Ferrara, two researchers at the University of Southern California. And there's this place called firstmonday.org, which is a peer reviewed journal on the internet, and there's kind of a double meaning there: it's a peer reviewed journal that you can get for free on the internet, but it's also a peer reviewed journal about research on the internet. So it's pretty cool. They've got a bunch of stuff about how Reddit behaves and other sorts of things that we would probably care about, purely academic research. And it's super interesting what they found. So these guys used this thing called BotOrNot, which is a machine learning framework. More or less, they set up a bunch of hashtags and a few keyword searches, and they said, we're going to monitor the Twitter firehose for these things, the real time data flow coming out of Twitter for those particular terms, which already is actually a challenge; they talk about the technology for consuming that much data, which is pretty interesting. It's written in Python, and you can actually get BotOrNot on GitHub, and they say it has an accuracy of determining whether a social account is a bot or a human at 95% or better. Wow, that's pretty solid, right?
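To make that concrete, here is a minimal sketch, not the authors' actual code, of what a feature-based bot classifier in the spirit of BotOrNot looks like in Python with scikit-learn. The feature names and values below are illustrative assumptions; the real system derives over 1,000 features per account and is trained on a large labeled corpus.

```python
# Hedged sketch of a BotOrNot-style classifier. The five features here
# (tweets/day, follower-to-friend ratio, mean seconds between tweets,
# fraction of retweets, account age in days) are hypothetical stand-ins
# for the 1,000+ features the real system computes per account.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.array([
    [450.0, 0.02,   120.0, 0.95,   30.0],   # bot-like behavioral signature
    [  8.0, 1.10, 10800.0, 0.20, 2400.0],   # human-like behavioral signature
    # ... in practice, thousands of labeled accounts go here
])
y = np.array([1, 0])  # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Classify a new account from its computed features:
print(clf.predict([[300.0, 0.05, 300.0, 0.90, 45.0]]))  # -> [1], bot-like
```

The reported 95%+ accuracy would come from cross-validating a model like this on a large labeled dataset, not from a toy example of this size.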
09:37 That's kind of amazing. Yeah, that's a difficult distinction to make a lot of the time. That's cool. They said they take in over 1,000 pieces of data, dimensions I guess, to consider, right? It was interesting to see the dimensions, the features, that they built in order to make that model predictive: the behavioral things, the signatures that distinguish between a real-life person who just tweets a lot and a bot. It's interesting because
10:03 there's a lot of things that they were able to sort of distill down. It's very interesting, and we don't want to go too much into the details, but they really write it up; it's like a 30 page paper. So that's cool. And kind of like we were just discussing, they have a similar take on social media as we were talking about with big data. They say, you know, social media has been extensively praised for increasing democratic discussion, right? You think of the Arab Spring, for example, and things like that. But they say that you can also take social media and use it for your own purposes, good or evil. You can exploit these social networks to change the public discussion, to change the perception of political entities, or even to try to affect the outcome of political elections.
10:48 Yeah, and this is something, not to get us too off topic, but I have a little bit of a research background in understanding how highly motivated, hyperactive users of a social media platform can basically form a group that's just large enough that it's hard to recognize it as a coordinated group. If they act together, let's say you have 1,000 people, real people or bots, that are just really hyperactive and tweeting the same thing, then those tweets, the hashtags they promote, the content they circulate, seem as if they're gaining really widespread, organic traction. And so you can effectively force a narrative onto social media and hijack the mechanics of a social network using some of these techniques. And we're seeing it increasingly from groups that have some kind of ideological agenda, everything from terrorist groups to political organizations, all the way to maybe foreign states that are trying to influence the US elections. From an academic or intellectual perspective, it's kind of fascinating, but at the same time, also a little ominous.
11:54 It definitely is. I really love social media, and I think it is a positive thing, generally. But there are definitely examples, and this is one; I'm going to give you some stats here in a second. Another really clear example, which is not something trying to influence an election, but, you know, speaking of algorithms and unintended consequences: Facebook, and people living in bubbles, and how they perceived the news this year. All those sorts of things are very interesting to study.
12:19 Yeah. And in fact, at the time of this recording it's not yet released, but probably by the time this airs we will have published it: we've actually done some research that we think shows that when people in a particular community on Facebook share URLs from these fake news domains or hyper-partisan domains more often, it actually has a direct impact on the amount of bias that we see in their language. So that loop, where the community gets more biased, the news sites get more hyper-partisan or more extreme, and then the community gets more biased again, that kind of feedback loop seems to be a real thing. And it raises the question: how do you pull people back in an environment where they're literally not living in the same reality that you are? Which is kind of strange, especially when it comes to things that are outside their personal experience. Even though we all have the same kinds of jobs, we all love our families and our kids, our day to day lives are mostly the same, we can still get kind of whipped up into a frenzy about these things that are at arm's length from us. That got talked about a lot around the political campaigns. It'll be interesting to see how these networks start to combat it now that they're aware of it. Yeah, looking forward to that.
13:32 I totally am. I'm looking forward to your research as well. But that's cool. I feel like if The Twilight Zone were still a thing, the show from the '50s and '60s, there are a couple of episodes they could make from the news here. So are you ready for the conclusion? Did these bots have an effect? After all this research and a bunch of analysis that they laid out, they said the bots are pervasively present and active in the online political discussion of the 2016 election. They estimate 400,000 bots engaged in the discussion, responsible for 3.8 million tweets, or one fifth of the entire presidential conversation.
14:10 Wow,
14:10 that's a huge percentage of the tweets. It's weird. We think about it like this great forum for public discourse, but it's actually bots talking to each other, arguing amongst themselves.
14:20 Yeah, it's probably true. I bet they did fight with each other.
14:25 Actually, I'm disappointed with myself, because I can't remember the name. But if people listening to this Google something like "social activist bot fighting with trolls," there were a couple of really interesting stories about people, kind of more activist-artist types, who wrote Twitter bots in a way that would start kind of banal online fights with people. So they'd find far-right trolls, and the bots would say things like, "I think your opinion is wrong," "your argument isn't even valid," just content-free arguments, but people would engage with them for hours. They would just fight with this bot for hours at a time. Yeah,
15:05 it's terrible, but I love it.
15:09 Yeah, it is pretty fascinating, I think. And something, I mean, you kind of mentioned it, but I feel like it's worth reminding all of the listeners, because many of your listeners will be developers: the code for this is released on GitHub, and there's an API that you can ping. So if you're doing any kind of research, or if you're building an application that engages with Twitter, you can pretty easily check to see whether or not an account is a bot. If that's something that is useful to you, I really credit the researchers for making this available to the general public. That's a really cool service.
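For listeners who want to try that, here is a hedged sketch of pinging such a bot-scoring API from Python. The endpoint URL, parameters, and response shape below are placeholders, not the project's real interface; check the GitHub repo (the service has since been renamed Botometer) for the actual API details and authentication requirements.

```python
# Hypothetical example of querying a bot-scoring API; the URL and the
# "score" response field are illustrative assumptions, not the real API.
import requests

def bot_score(screen_name: str) -> float:
    """Return an assumed 0..1 bot-likelihood score for a Twitter account."""
    resp = requests.get(
        "https://example-botornot-service.org/check",  # placeholder URL
        params={"screen_name": screen_name},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["score"]  # assumed response field

if __name__ == "__main__":
    if bot_score("some_account") > 0.5:
        print("Probably a bot")
```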
15:39 Yeah, that's for sure. It's cool that it's on GitHub, it's in Python, and it's easily accessible to everyone listening here. So, we have one more thing about the election, and then I promise we'll move on, right?
15:50 Absolutely. I mean, you know, elections are a big deal in the years that they happen, so, you know, fair. Well,
15:57 I think this one is especially interesting, because it broke a lot of norms, and prediction just failed across the board in so many ways: in the media, with the pollsters, and so on. So I think it's worth looking back to see what we need to fix, what we need to change. So what's the next news item about?
16:13 Yeah, well, it's about just that. So, as anybody who was a poll watcher during the election cycle knows, which I think is most of the population these days, because of the bots, you couldn't escape it. Exactly. Every time the polls changed, the bots wanted to talk about it. The polls were rigged; no, the polls show my candidate in the lead; horse race, horse race, horse race. And what's interesting is that the vast majority of those polls and predictions were wrong, like dead wrong. And so there's been this kind of huge reckoning in the data science community, and really for anybody who does this predictive forecasting, on a number of things. There are the technical things that went wrong: what's wrong with our models? What's wrong with our data? What's wrong with the polling process? And on and on, kind of into the weeds about what, technically, we failed to get right in this process. And then there are these larger questions about why we even do this. Who actually benefits from checking FiveThirtyEight every day and seeing, oh, Clinton's up 78% this week, and oh, Trump's up 1% this week, and just watching that change over the course of an election? And meanwhile, we don't hear very much about the policies of the individual politicians. As voters, are we actually informed more? So anyways, we actually did a whole episode about it. We did two, as a matter of fact. We did one episode with a bunch of data scientists and people who are kind of on the inside of political campaigns and understand a lot about how campaigns poll people and how they make their predictions. We had people like Natalie Jackson, who was responsible for the Huffington Post model, which was wrong, but so was everybody else's. And then we did a second episode with Mona Chalabi, who's now the director of data journalism for the US Guardian. She used to work at FiveThirtyEight, and has some really smart things to say about whether or not these polls help our public discourse, whether or not data journalism should actually be involved in this kind of horse race prediction, and how much human bias in interpreting the output of these models really impacted how they were published and how they were communicated to the public. Because there's this weird thing happening based on the outcome of this election, where data scientists are saying, hey, we made this huge mistake, and people are starting to fall back on their previous positions that I think felt comfortable for everybody, which was: well, I don't know how that thing works, I don't really trust the data, I'm going to go with how I feel. Here's an example where all of you guys thought you were right, and you were clearly wrong, which reinforces my initial position that data is fallible, therefore I'm not going to trust it. And I think that it might do real harm, not just to data journalism, but potentially to society in general. That lack of faith in data is, I think, misplaced. There should really be a lack of faith in the way that humans interpret the output of their models; ultimately, there was a lot of human bias injected into that process, and that was pretty clearly important. Yeah, absolutely. Anyway, obviously I have a lot to say about this; I could ramble on about it. It was a huge story.
19:18 Well, that's awesome. I'm really looking forward to checking out those episodes, and of course, I'll link them in the show notes. There are a couple of things that come to mind when I hear what you're saying. One is, it feels a little bit like we've gotten ourselves into this tight, reinforcing loop that's super hard to get out of, and that it's a little bit like quantum mechanics, where if you observe a thing, you change it, you know what I mean? You try to measure the polls, but you tell people the polls so often, and then the various news groups and whatnot are reinforcing their angle, but also the polls and opinion, and it's just: is that the opinion, or is that people's reaction to people's perceived opinion? I mean, where do you untangle these things?
19:57 Yeah, that's a really good point. Because in the media, we end up with these kind of meta-narratives about the election, like Trump's a ridiculous candidate, and Hillary Clinton is the inevitable president, and we're just kind of waiting this thing out. I mean, that's an oversimplification, but in a lot of the media, that's what it was. And so I think it's interesting to see even people who are supposed to be objective, and journalists are always supposed to be objective, but data journalists especially, believe in the numbers, that should be the mantra. And yet, I think, every time their data or their analysis or their techniques showed that actually Trump might have a pretty decent chance of winning, if you account for some of the inconsistency in the polling, they all went: oh, that actually indicates a problem with my model, I should tweak it to make sure that it gives results that are more correct. And whoops, maybe that wasn't the right conclusion to draw when the models didn't perform as we were expecting. So it's been really fascinating, and it'll be interesting to see whether or not we continue to engage in this kind of entertainment, because I guess that's basically what it is, like watching the score change from quarter to quarter. Exactly. Yeah, I
21:07 think, you know, on one hand, you could make statements about humans and whether or not they'll just start to adjust. But there's such a commercial interest in the news, I'm thinking of cable news especially, to just continually cover that. So I'm not encouraged that it will stop. That's true. Given that, maybe we should just get better at
21:27 it. Yeah, for the next election.
21:29 Exactly.
21:44 This portion of Talk Python To Me has been brought to you by Rollbar. One of the frustrating things about being a developer is dealing with errors: relying on users to report errors, digging through log files trying to debug issues, or a million alerts just flooding your inbox and ruining your day. With Rollbar's full stack error monitoring, you'll get the context, insights, and control that you need to find and fix bugs faster. It's easy to install; you can start tracking production errors and deployments in eight minutes, or even less. Rollbar works with all the major languages and frameworks, including the Python ones such as Django, Flask, and Pyramid, as well as Ruby, JavaScript, Node, iOS, and Android. You can integrate Rollbar into your existing workflow: send error alerts to Slack or HipChat, or even automatically create issues in JIRA, Pivotal Tracker, and a whole bunch more. Rollbar has put together a special offer for Talk Python To Me listeners: visit rollbar.com/talkpythontome, sign up, and get the bootstrap plan free for 90 days. That's 300,000 errors tracked, all for free. But hey, just between you and me, I really hope you don't encounter that many errors. Rollbar is loved by developers at awesome companies like Heroku, Twilio, Kayak, Instacart, Zendesk, Twitch, and more. Give Rollbar a try today: go to rollbar.com/talkpythontome. Another big theme this year, and last year, but especially this year, has been encryption, right?
23:11 Yeah, absolutely. Maybe especially based on the election outcome. Yeah, actually, people
23:15 are going to places like ProtonMail. ProtonMail is awesome; it's like a super encrypted, PGP-type thing from the guys at CERN. It's kind of like Gmail, but with PGP, and in Switzerland. Things like that have been going up, or like Signal,
23:29 right, the encrypted messaging app. Yeah, for sure.
23:31 And as well as the whole Apple iPhone thing at the beginning of the year, I think it was 2016, right? With, should they unlock it? Should they be made to unlock it? And so on. Oh, yeah, after the San Bernardino attack, right. Yeah. So Google decided to take this idea of encryption and use it for a really interesting AI experiment. This is from the Google Brain team, and their slogan is "make machines intelligent, improve people's lives." So what they did is they started with three basic, vanilla neural networks, okay, and they named one Alice, one Bob, and one Eve. And you know where this is going, right? And all they did was this: the outcome they wanted was that Alice has to send a secure message to Bob, Bob has to decrypt it, and Eve has to try to break the encryption. The only advantage Alice and Bob have is a shared key that they can use for encrypting. They didn't even tell the networks that they needed to encrypt stuff or mention any sort of algorithm. All they did was give them a loss function that said: if Eve decrypts this, you lose; if Eve does not decrypt this, you win. And they just let it run.
24:45 Yeah, it's kind of cool. Here we go.
24:46 Okay. Yeah. So what they did is they ran 25,000 trials of it, where they go through a bunch of iterations each time to train it up and teach it, and let it try to basically invent encryption. Okay, so they said, more or less, that they created this generative adversarial network, and then they just let it go. Here's the data the networks were given: Alice was given the key and the plaintext as the input to the first layer of her neural network, Bob was given the key and the ciphertext as his input, and Eve only got the ciphertext. So what do you think happened?
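For the curious, here is a minimal sketch of the loss structure behind that setup, in the spirit of the Google Brain paper (Abadi and Andersen, 2016) but not their actual code. The bit encoding and the exact penalty terms are simplified assumptions; alice, bob, and eve would be neural networks, omitted here.

```python
# Simplified sketch of the adversarial objectives. Messages are vectors
# of +/-1 bits; "guesses" are the decryption attempts produced by the
# Bob and Eve networks (the networks themselves are omitted).
import numpy as np

def reconstruction_error(msg, guess):
    """Fraction of bits wrong: 0.0 = perfect decryption, 1.0 = all wrong."""
    return np.mean(np.abs(msg - guess)) / 2.0

def bob_loss(plaintext, bob_guess):
    # Bob just wants to recover Alice's plaintext from ciphertext + key.
    return reconstruction_error(plaintext, bob_guess)

def alice_bob_loss(plaintext, bob_guess, eve_guess):
    # Jointly, Alice and Bob want Bob correct AND Eve no better (or worse)
    # than chance: a random guesser gets half the bits right, and deviating
    # from chance in either direction leaks information.
    eve_term = (reconstruction_error(plaintext, eve_guess) - 0.5) ** 2
    return bob_loss(plaintext, bob_guess) + eve_term

# Toy check: Bob perfect, Eve completely wrong (which still leaks info).
msg = np.random.choice([-1.0, 1.0], size=16)
print(alice_bob_loss(msg, bob_guess=msg, eve_guess=-msg))  # -> 0.25
```

Training alternates between optimizing Eve against her objective and optimizing Alice and Bob against theirs, which is what makes the setup adversarial.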
25:21 Well, I know what happened, so I won't ruin the punch line now. But it is a really fascinating result, watching artificial intelligences battle each other.
25:29 So let me ask you a different question then, because, yeah, you do know:
25:33 would you have expected this outcome, for a problem that implies a kind of creativity? I wouldn't have, actually. I'm not sure that I would have expected, well, maybe some version of this, but the way in which the AI that ultimately succeeded in encrypting its messaging solved that problem, I think, was what was actually startling. Yeah,
25:56 it's very startling, because they were not told about encryption. They were just told Eve was not supposed to know this. And I mean, it's not really smart, right? It's an artificial intelligence; it hardly knows anything. And so, in the results, most of the time Alice and Bob did manage to evolve a system that they could communicate with, with few errors. And only a relatively small percent of the time, it looks like 3 or 4%, I don't know if you can factor that out, area under the curve and whatnot, but a few percent of the time, Eve showed an improvement over random guessing, actually figured it out. But as soon as Eve did, Alice and Bob just ramped it up and crushed her; she couldn't keep up anymore. So it's funny: it goes for like 6,000 iterations where Eve is basically losing and Alice and Bob are winning, but then it switches, and Eve kind of figures it out for a minute, and then it's like, nope, we're gonna change this, and then you're done.
26:55 And what's interesting is that when Alice and Bob were ultimately successful, it's not as if they chose from a buffet of cryptography techniques and then stumbled upon the one that was the most secure. They invented a new way to go about encrypting their messages; they invented a new kind of encryption in order to accomplish this goal. That's, to me, the part that is startling: they actually created something new. And of course, the jokes on the internet abounded, like,
27:30 basically, two AIs figured out how to talk to each other in a way that nobody else can understand. There's no problem. Yeah, there's no,
27:37 there's no problem here.
27:39 This will be fine.
27:40 We'll be fine with that thing being in the sky. Yeah, so this is how the Terminator starts. Now, I don't actually think that, but I do think this is super interesting, and I really do think the creativity aspect is what's so amazing. And I wonder, I mean, we were talking about other encryption techniques, you know, PGP, ProtonMail, Signal, and so on. What if you really wanted to communicate secretly? You just get some super trained-up AIs, one for you and one for whoever you're trying to communicate with, and you just use whatever that thing does, right? It's unknown. You don't even know what it does.
28:13 Yeah, well, to be totally frank with the audience, I think when it comes to these types of deep learning techniques, nobody knows what they do anyway. I mean, we know what they do mechanically, but nobody's proven why they're able to be as effective as they are. So we're kind of already in that territory, where we're inventing things that are more complex than our brains can model or understand. And when you have things like that that can generate themselves, I don't know, it's kind of interesting to imagine this future world where we don't actually rely on an encryption technique that we understand. We just have some AIs that we think are smarter than everybody else's, and we let them encrypt it however they see fit, pass the message, and then ultimately any adversaries will be developing intelligences to try and break our encryption. And they'll just be fighting it out in a world that we don't really understand, and hopefully our messages are, you know, secure.
29:05 Did you just read that from the back of, like, a William Gibson novel, or no?
29:08 Right, right. It does sound like we're at least in somewhat of that far future that those kind of seminal '80s and '90s sci-fi authors predicted. At least certain aspects of it are starting to become a reality, the smarter the algorithms get and the more they can teach themselves. Yeah,
29:26 it's super cool. I think it's an uncertain future, but it's very interesting. So the next item is actually about deep learning as well, right? Yeah. Yeah. I
29:35 think, just to continue on the conversation about deep learning, this was really the year that it came into its own. Let me give a quick overview for people who aren't familiar with either machine learning in general or this particular technique. Basically, it's a neural network, and in a neural network there are neurons, and you kind of find a path through the neurons that allows your model to make a decision, kind of in the same way that your brain works: you light up a sequence of neurons in a very complicated pattern, and that sequence ultimately represents some kind of unique outcome. In this case, it might be, tell me whether the person in the photograph is wearing a red t-shirt or a blue t-shirt, or tell me whether it's a man or a woman. And learning the subtle patterns in the image that allow you to make that determination is the lighting up of some sequence of neurons in a neural network. And deep learning is basically when you have many, many, many layers in your neural network, so much so that it's kind of difficult to understand what's happening in the middle. There's an input layer, where we kind of know what goes into the neural network; there's an output layer, where it tells us what happened; and then whatever happens in the middle, we kind of speculate about and make charts and kind of infer.
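Here is a toy illustration of that layered idea: a tiny feed-forward pass in plain numpy. Real deep learning models have the same shape, just with many more layers and with weights that are learned during training rather than random, as they are in this sketch.

```python
# Minimal sketch of "layers of neurons": each layer multiplies its input
# by a weight matrix and applies a nonlinearity (neurons "lighting up").
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One fully connected layer; weights are random here, learned in practice."""
    w = rng.normal(size=(x.shape[-1], n_out))
    return np.tanh(x @ w)

x = rng.normal(size=(1, 784))   # input layer: e.g. the pixels of an image
h = layer(x, 128)               # hidden layer: the "middle" we speculate about
h = layer(h, 128)               # deep = many of these stacked up
out = layer(h, 2)               # output layer: e.g. red shirt vs. blue shirt
print(out)
```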
30:54 It feels a lot like the MRI sort of analysis: well, creativity happens in this part of the brain, and when you're thinking about math, this part of the brain lights up, but that's the extent of our understanding of a lot of it, right? And this sounds a little like that.
31:08 Yeah, yeah, it's exactly like that. But the gains from adopting these techniques have been really, really exciting, and I think over the next five years we'll start to see how these technologies impact the products that we use. For the most part, the gains have been largely academic; there haven't been a lot of consumer applications. But the kinds of things that deep learning has been tried on: like, a guy had a neural network consume all of the text from the first seven Harry Potter novels, and then it tried to write new ones. They were not good. They were quite bad, actually. But they were kind of hysterical, and plausible: the language the model used to generate these new novels was structurally correct, even if it didn't make any sense if you know anything about the books. Yeah,
31:52 that's really interesting. You know, I would love to see a slight variation on that, if you could abstract away a little bit more and not go straight down to the text, but to the plot building blocks: there's Harry and there's Hermione, and Harry has this feeling, he did these actions. And then just go, okay, reorder that, and have a writer put actual meaningful words to that outcome. That would be cool.
32:15 That would be super cool. Yeah. Because I think a lot of what these networks are still missing is this idea of context. Like, Google did a similar thing where they fed a neural network a bunch of romance novels, although to its credit, it produced some poetry. And the poetry read like a real poem, like the kind of thing that the romantically inclined among us might have written in high school: kind of sappy, a little saccharine, sometimes unnecessarily dark. But yeah, it's super interesting. And that does seem like the next evolution of it: we're kind of understanding language at a really fundamental level, but how you build on that, how you use the building blocks inside language to form larger concepts and ideas that map over the course of hundreds of pages, because they're that complex, that fortunately still seems to have escaped deep learning models. But when they figure that out, just imagine. We talked about all this election stuff. Could you imagine a neural network crafting the story of an election and then deploying thousands of bots communicating with each other in an encryption that we can't understand? Like, that's when it happens, man.
33:25 Yeah, it's all coming together. I hope it's a benevolent AI. Okay.
33:31 Yeah. But it's not all potential malevolence and doom, right? There are actually some really exciting applications of data science, for example. Yeah.
33:39 So, for example, the next thing I want to talk about is actually data scientists, mathematicians, and programmers doing good for the world. One of the big challenges for humans still remains cancer, right, and one of the more common types is breast cancer. So there's this group that put on something called a DREAM Challenge, the Digital Mammography DREAM Challenge. The idea is, the current state of the world is that out of every thousand women screened, only five will actually have breast cancer, but 100 will be called back for further testing. And it's not just another doctor visit: you're told, hey, we found something in your scan, you need to come back. So there's all the concern and worry; you probably come back a week later; there's maybe a biopsy; you wait for the result. It's really disruptive and expensive. So a bunch of different groups came together, and they're putting out a million dollar prize for anybody who can build a model that improves upon this and does better than the other people trying to do the same. What I think is really interesting is the data and how you get access to the data. Fundamentally, you'll submit some sort of artificial intelligence, machine learning type thing to process this data: you take these scans, you look at the pictures, and you have to say, no, this actually is not cancer; yes, this is cancer. And they have the actual outcomes, verified by biopsies, given as the labels. But here's the deal. Normally, the problem with doing medical research is you've got to anonymize the data, you've got to get permission to share the data, and so on. So they don't share the data with you. So the question is, how do you actually process it? How do you teach the machine anything, right? Well, what they do is they give you like 500 pictures or something like that, so you can test, and they give you the outcomes, this one was cancer, this one wasn't cancer, so you can kind of get it sort of working. And then they set up this mechanism in the cloud on AWS using Docker. What you do is you build your untrained model into a Docker image, using TensorFlow and a bunch of different capabilities that are available to you. You submit the Docker image to their cloud computing system running on AWS, and they train it on actual data: they teach it, yes, this was cancer, no, that wasn't cancer, here's your prediction, right, wrong, and so on. But you have no internet access; you can get the logs, but you can't actually ever see the data. And then your model, trained on real data that you never get to see, is run against a huge amount of data, which they can use because nobody ever actually has access to it. There are about 20 terabytes of data, some 640,000 images, that you're going to run your model against to predict cancer, and then you'll be judged on your work against that.
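To make the submission mechanics concrete, here is a hedged sketch of what a training entrypoint inside such a Docker image might look like. The mount points, file names, and label format below are illustrative assumptions, not the challenge's actual specification; the key constraint is that everything is driven from locally mounted files, because the container runs with no internet access.

```python
# Hypothetical training entrypoint for a challenge-style Docker submission.
# /trainingData and /modelState are assumed mount points provided by the
# organizers' sandbox; the labels.csv format is also an assumption.
import csv
import pathlib

DATA_DIR = pathlib.Path("/trainingData")            # mammography images (assumed)
LABELS = DATA_DIR / "labels.csv"                    # image -> biopsy outcome (assumed)
MODEL_OUT = pathlib.Path("/modelState/model.bin")   # weights persisted for scoring (assumed)

def load_labels():
    """Map image file name to 1 (cancer, biopsy-verified) or 0 (not cancer)."""
    with LABELS.open() as f:
        return {row["image"]: int(row["cancer"]) for row in csv.DictReader(f)}

def train():
    labels = load_labels()
    for image_path in sorted(DATA_DIR.glob("*.dcm")):
        outcome = labels.get(image_path.name)
        # ... feed the image and its verified label into your model here,
        # e.g. a TensorFlow network; omitted in this sketch.
    MODEL_OUT.write_bytes(b"...")  # serialize the trained weights

if __name__ == "__main__":
    train()
```

You only ever see your own logs from runs like this; the organizers run the container, so the data itself never leaves their environment.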
36:53 I find that really fascinating. This idea that you basically build a model on your own, kind of speculate on what will or won't work, then hand it over to be trained and tested on data that you never see, and then you just kind of know whether or not it worked, and I guess tweak accordingly. It's a really awkward process, but at the same time, it's also a really novel solution. I think anybody who's ever worked with, or been close to working with, medical data knows there's a huge need for this kind of work, but most of the people who do machine learning research don't have access to the data, because they're not employed by the medical institution that has ownership of it and has been given permission to use it and access it as they see fit. And so you almost always run into a wall right around that point in the conversation, where it's like, okay, cool, just give us as much data as you have, we'll go play around, we'll make a model, we'll tell you how it goes, and then we'll come together, and so on. That's kind of a normal data science model-building process: you say, give me whatever data you can, and then we'll use that to figure it out. So to come up with this technique, this kind of double blind or triple blind, this kind of blind trust, I guess, for training and then using a model, is kind of a novel solution, I think. Even if it's awkward, it's a good first step to just get this kind of thing on the road, right? Because, especially for image processing, you're starting to hear more about it with other diseases. There's been some really interesting research that looks for a kind of cancer, actually a kind of brain cancer, that you can detect based on tiny spots in an eye that are difficult for parents and doctors to recognize unless you already know you're looking for them. But machine learning can see it, because it can tell the difference between a normal eye and an eye that has these small visual indicators that there might be a problem. And that's just one of n problems. There are so many interesting applications, given that deep learning can now detect these really subtle patterns, these really subtle distinctions from one image to the next, much better than a human being could. Yeah, so it just has a ton of potential. So I'm glad that, even if it's a little bit awkward, they're just pushing this forward. Let's just make it happen however we can, however we can do it legally, right? It absolutely
39:10 is working within the bounds of, you know, the privacy guidelines and so on. But it's really interesting, and this is a framework I believe this group is building out for future DREAM Challenges, not just this one. This is like the first of many of these types of things. Let me take just a moment and tell you about a new sponsor of the show. This portion of Talk Python is brought to you by AnacondaCON. AnacondaCON 2017 is the inaugural conference for Anaconda users as well as foundational contributors and thought leaders in the open data science movement. AnacondaCON brings together innovators in the enterprise open source community for educational, informative, and thought provoking sessions, to ensure attendees walk away with the knowledge and connections they need to move their open data science initiatives forward. AnacondaCON will take place February 7th to 9th, 2017, in Austin, Texas. Attendees can expect to hear how customers and peers are using the Anaconda platform to supercharge the business impact of their data science work. In addition, attendees will have the opportunity to network with their peers in the open data science movement. To learn more, register for the event, or even make sponsorship inquiries, please visit talkpython.fm/acon. That's talkpython.fm/acon. So, the other interesting thing about this is the hardware that you get to use, because if you're going to process 20 terabytes of images and then apply machine learning to each one, that's going to be non-trivial, right? And so they give you some hardware to work on. In fact, your Docker image gets to run on servers powered by Nvidia Tesla K80 GPUs, and I think GPUs in machine learning are really interesting already. But just to give you some stats here: your machine gets to run on a server with 24 cores, one of these GPUs, and 200 gigs of RAM. And the GPUs are insane: they have almost 5,000 CUDA cores, 24 gigabytes of memory with a 480 gigabytes per second transfer rate, and 8.7 teraflops of single precision computation power.
41:14 Yeah, the stats on that are just mind blowing, because I think that's something that sometimes gets lost in the discussion about deep learning: the number of calculations that take place in a deep neural network is truly mind boggling. Training your typical machine learning model might take somewhere between minutes and hours if it's complex or being trained on a lot of data; deep learning models take days or weeks to train, or months if you're doing it at Google scale. I mean, the computations just take forever. And the reduction in computation time running on the GPU is phenomenal, like many orders of magnitude faster. So the increasingly powerful hardware is really, I think, the untold story of how much it's accelerating the capacity of this type of machine learning. Yeah, absolutely.
42:03 I think these types of things, where there's a million dollar prize and the hardware to actually take a shot at it, are quite interesting.
42:11 Yeah. And it's expensive. At the moment, for this high end hardware that we're talking about, it costs you $1,000 a day to run a single instance on AWS. But that's only going to come down. And just like we saw before with the revolution of service oriented architectures, or microservices, where the idea was, screw it, spin up a new instance, right, and we lived in a world where we would spin up and kill instances all the time and never think about it, for much more sophisticated and scalable and complex applications that live on the web, it's only a matter of time before we have the same kind of mentality with these highly performant instances that are backed by GPUs. And I think we're only just at the very beginning of that story. Yeah, I
42:57 totally agree. I think it's amazing. Like, in 10 years, we'll be doing this on our watches. But speaking of things you don't want on your watch: Microsoft made a bot, and I don't want it anywhere near my watch.
43:07 Yeah, I'm not sure I want Microsoft's bot anywhere near my watch, or my child.
43:14 You're saying the bot's a bad influence? I think
43:16 it was a bad influence on all of us, on humanity, perhaps, actually. But the funny thing is that it was more like humanity was a bad influence on the bot. Yes. So we're talking about Tay, of course, Microsoft's Tay. For those who missed this story, it was kind of a brief moment, unless you're a kind of hyperactive media consumer like I am. So, Microsoft developed a chatbot and released it on Twitter. And the way that it worked is that the chatbot, Tay, would learn how to communicate based on how people communicated with it. So you could talk to Tay on Twitter, and then Tay would kind of learn how it should respond given the context of what you asked. And it learned how to construct language in a way that was consistent with the norms of this particular channel, which was Twitter. And it did a remarkably good job at that: when it responded to people, it largely responded in a way that made sense given what they asked, and it largely responded in a way that felt like a tweet. You know, it started using weird abbreviations; it would use the letter C and the letter U to mean "see you later," things like that. So in a lot of ways, it was a remarkable accomplishment. And I should point out that when Microsoft tested the same thing with a Japanese audience, the bot learned to be a sort of genial participant in normal conversation. But when they released the bot in English, to a largely American audience, it learned very quickly to be a horrible racist. It was funny at the time, a little bit less funny now that we know more about, like, the alt-right, but at the time, basically the kind of Reddit, 4chan crowd thought it would be funny, as a prank, to teach Tay that the way human beings communicate with each other was to talk about terrible things. Whenever Tay was asked what it thought about Mexicans, it would respond and say we're going to build the wall, and Mexico is going to pay for it. Or people would ask what its thoughts were about Jewish people, and it would, like, apologize for the Holocaust. Truly, truly offensive, just breathtakingly offensive. And is it even funny? I mean, there's an aspect of the scale of the prank that's kind of funny, or making a big corporation look stupid; I can see how it's funny in a juvenile way. Anyway, it was just really interesting commentary on the sophistication of these technologies. Anybody who's done any kind of natural language stuff has experienced, I think, how challenging it is to work with the language that people publish on Twitter, because it's not really normal language. There's a Twitter-speak that's unique to this weird little niche corner of the internet; I guess it's kind of a big corner of the internet. But, you know, people speak differently on Twitter than they do anywhere else, and so for a machine to learn that is really cool. At the same time, it does speak a little bit to internet culture that, unlike the Japanese audience, who treated it kind of like a pet, like a fun friend, the first thing that people here decided to do was exploit it to be a horrible racist misogynist, you know, like a GamerGater, pretty much. Yeah,
46:17 basically. Yeah. I think it's cool how well it did, but I think it's unfortunate that it was turned to evil. Oh, well.
46:28 Yeah, so that was Tay. Go check it out. If you're a machine learning researcher or a computational linguist, it's a fascinating case study. And also, if you're interested in internet culture and have a strong stomach, it's good for that too.
46:41 Just remember, it's a bot. It was made evil by the people; it wasn't designed that way.
46:46 Okay, that's a good point. And it's not like it learned to be evil on its own. People, of course, made jokes like, you release an AI on the internet, and of course, within like four hours it's a Nazi, and you're like, this does not bode well for the future of artificial intelligence. But that's not really what's happening. It's not like bots want to kill all human beings. The AIs are not coming for us. Not yet.
47:08 Not yet. But when they do, maybe we can turn them to our will. Okay, so the next one has nothing to do with bots. In fact, this is an academics-intersect-open-source-intersect-business story. So, William Stein is the guy that created this thing called SageMath. Do you know SageMath?
47:27 I don't, actually.
47:28 I was kind of surprised when I saw this, so I'm interested to hear more about it. SageMath is a really interesting project. It's a direct competitor to MATLAB, Mathematica, Magma, some of these large commercial computational science platforms that are not open, right? If you want to do machine learning on MATLAB, you've probably got to buy some packages, like $2,000 for every person who uses them, and so on. So it's really hard to share your work, because everyone has to have that extension pack. So this guy, he came out of Harvard, a PhD there I believe, and was at UCSD, where he decided: everything I do in my computing life is open source, except for the one thing that I care most about, where I do my math research, which is closed source. So that's it, I'm going to make a competitor. Fast forward 10 years or something like that, and we have SageMath, which is a really serious competitor to things like MATLAB and Mathematica. Some interesting stuff came out of it, like Cython, the compiled, high performance version of Python, which came out of that project in some interesting ways. So this year, he announced that he's decided that running a successful open source project of that scale in an academic setting doesn't make sense if he really wants it to succeed. He built up a great bunch of people at the University of Washington, where he is these days; he would train these people to become great programmers and work on this project, and then they would be hired immediately by Google or some other place: oh, you know data science, you know this computational stuff, we've got a spot for you, and they would be off. So he decided to leave academia, leave a tenure-track job, and start a company called SageMathCloud, which is like a cloud-hosted version. You can do all sorts of data science-y stuff there: run IPython notebooks and the whole Python scientific stack, and sort of share this across your classes. And I just think it's interesting to see this high profile professor leaving academics to venture out into the world and start a business based on open source.
49:37 Yeah, I think that's actually an interesting trend across the machine learning community. Prior to this AI spring, or whatever we're calling it, where pretty much everybody wants some kind of machine learning and the need for machine learning expertise is really high, this kind of work did come out of academia, and the research labs associated with computer science departments at universities were where we expected a lot of this to come from. But now most of the large institutions, Microsoft Research, Google Research, IBM, most of the really huge technology companies, are effectively doing pure research, but pure research not at academic salaries. So, you know, you've earned your PhD, maybe done a couple years of teaching machine learning at a high profile university; it's kind of tough to turn down a couple hundred thousand dollars, $250,000 a year, to go work with huge resources at your disposal and some of the smartest people in the world. And universities are aware of this. I think a lot of universities are really trying to rethink their relationship with their professors for just this reason, because they don't want to lose them completely to the private sector, but at the same time, they recognize that they'll never have the resources of the private sector. So you're seeing more people start to take a year off, kind of ping-ponging back and forth between some of these research institutions and a university. It's kind of a new world, and I'm not sure that there is anything wrong with it. As somebody who benefits a lot from this research, I think the way that the private sector is furthering this industry is really exciting, actually. Yeah, there are a lot of great things coming. I see this as a positive news item.
51:17 I'm super excited for William; I hope he succeeds in doing this, because I think it's really great. I'd love to see open source projects become sustainable businesses, or have sustainable businesses making the core project better. If you want to learn about SageMath, on episode 59 I actually interviewed William. Also, on episode 81 of Talk Python, I interviewed Jake VanderPlas, and he's at the University of Washington as well, I believe. There's no relationship between these two stories, other than there's this thing there called the eScience Institute, which seems to be a good balance, maybe a modernization: people doing industry-style stuff, but also academic computational stuff. I think if this story, the story of people leaving academics to go do private stuff, were told in the '90s, it might be a big negative, right? This guy went and started this private company, where his smarts are bundled up in this commercial thing and hidden under IP. But there's so much open source that's coming out of this, even in the private space; although there's some kind of commercial component to it, a lot of the stuff, like SageMath, for example, is open source. So it's not like it's being lost to the world because it's going behind some corporate wall.
52:29 Yeah, I think that's a really good point, and I think it's true: this is mostly an open source story. Anaconda, which is now huge in the Python community, is built by a company called Continuum Analytics here in Austin. TensorFlow, which has now become sort of the de facto platform for building neural networks and deep learning models, came out of Google, and on and on. SageMath is another great example, and it's cool to see one focused on an area of research that is not necessarily computer-sciencey; actually focusing on the pure math aspects is a really valuable contribution. So I agree, I think it's a cool trajectory, and I hope the technology industry continues its commitment to open source because, not to sound hokey, it really does benefit the world in a serious way.
53:19 The world's definitely a better place for it; I totally agree. So I'll give them a quick plug: if you're a teacher or a professor out there, check out cloud.sagemath.com; there's a lot of cool stuff you can do for your classes and so on. All right, so AIs are smart, they can do lots of things, but there are just some games they're never going to be able to solve, like Go, right? Well, one would think.
53:39 One would think, but actually, it's cool. We've talked a little bit about AIs being creative, about deep learning models actually coming up with innovative approaches to solving a problem, and I think that's been a big story this year. So we're kind of comfortable with the idea that machines beat us at games like chess, which as human beings we think are remarkably complex; there's so much strategy, there are so many potential moves. And that's true, but a human being can only hold so much: grandmasters at chess can hold maybe eight permutations of the board in their head at any given time. They can see kind of eight moves ahead, what they'll do, what their opponent will do, what they'll do, and keep that changing picture of the board in their head. Of course, the computer has no such limitation. Especially with the computational power we have now, it can play out almost endless strategies and endless permutations and find the one that gives it the most likely chance of winning. We saw Watson basically kick everybody's butt at Jeopardy: consume all the trivia knowledge of the universe, understand language, and figure out how to beat us at that game.
54:40 I think that's super interesting because of the natural language component. It's not like there are clear rules: this piece is on this square, it can move to those three squares.
54:48 Yeah, absolutely. And being able to connect what was being asked in the question to the kind of deep graph of knowledge that Watson has at its disposal, understanding the relationships between different contexts and so on, very cool. The game that people thought would probably stay inaccessible to machines for a long time is called Go. For those who aren't familiar with it, and I'm not a Go player myself, so I might be getting this wrong, it's basically like chess times 100: really, really complicated chess. There are so many different ways the board can change, so many different strategies; the rules of the game are more complex, and the possible outcomes are more complex. And because it's so complex, it relies a bit more on, sure, some knowledge, but also strategy and intuition, because it's difficult to understand the consequences of your move, like, ten moves down the line. You've kind of got to feel your way through it based on your expertise, more than you can with a game like chess that you can pretty much keep all in your head at one time. Because of that, people thought, well, that's kind of a tough row to hoe for an artificial intelligence. But not so, apparently, because in March of this year, the world Go champion, Lee Sedol, basically had his butt handed to him by AlphaGo, a deep learning system developed by Google's DeepMind. It understood how to play the game of Go and, in a five-game match, cleaned the floor with him, winning four games to one. And not only that, it used some really unorthodox techniques; it was basically an exceptionally creative and exceptionally intuitive player of the game of Go. So it was kind of the last stand for human beings in terms of beating computers at games, and it wasn't really much of a contest. Yeah.
56:38 So we don't want to go up against AIs anymore. I think it's really interesting, and again, the creative aspect is what's cool, right? The intuition. Those are the things we thought computers couldn't do. Sure, they can map the entire problem space, and if they're fast enough, they can actually map out the potential ways in which you could not win if they follow this series of 100 steps or whatever. But if that's not what's happening, then this gets even more interesting.
57:06 Yeah, absolutely. And just to understand the difference in techniques: a popular example for anybody taking their first steps in machine learning is to write a program that can successfully win at tic-tac-toe. The strategy is basically to learn every possible outcome of the game and then, at any given moment, pick the path, of all possible paths, that gives you the best chance of victory. That's fairly straightforward, and on a tic-tac-toe board the number of possible permutations of the game is really, really small. Nevertheless, if you haven't done it before, I mean, when I first did it, I found it very challenging. It's a challenging problem to go and solve. Extrapolating that to Go, I think, really demonstrates the huge leaps we've made in this field over the past decade or so. It's exciting: when released on the right problem, what else can these models potentially figure out? What solutions can they see that are just unavailable to us because we don't have the computational capacity in our brains? It's kind of exciting.
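To make that concrete, here's a minimal sketch of that "learn every possible outcome" strategy for tic-tac-toe, i.e. plain minimax. The board encoding and function names are our own illustration, not any particular library's API.

# Exhaustive game-tree search (minimax) for tic-tac-toe.
# The board is a list of 9 cells: 'X', 'O', or None, indexed 0..8.

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score a position by playing out every possible continuation.

    Returns (score, move) from X's point of view:
    +1 means X wins, -1 means O wins, 0 means a draw.
    """
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # board full: draw
    results = []
    for m in moves:
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = None  # undo the move
        results.append((score, m))
    # X maximizes the score, O minimizes it; ties break on move index.
    return max(results) if player == 'X' else min(results)

# Usage: search the full tree from an empty board (a few hundred
# thousand positions, fine for a demo).
board = [None] * 9
score, move = minimax(board, 'X')
print(score, move)  # score 0: perfect play from both sides is a draw

The whole point of the AlphaGo story is that this brute-force approach is hopeless for Go, whose game tree is astronomically larger, which is why it needed learned intuition instead of exhaustive search.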
58:12 So what if we had way more self-driving cars, and the game was to minimize traffic jams? That'd be lovely, right? It would be.
58:20 And what if, think about this, my friend, what if the game was to simulate human existence? What about that?
58:27 That's totally science fiction, like red pill, blue pill. That's not a real thing, right?
58:34 Well, one would think. But given the advances over the past 10 years in video games and artificial intelligence and virtual reality, one presumes, or at least Elon Musk presumes, and this is our last story, that if you extrapolate that into the not-too-distant future, surely we should be able to simulate the entirety of human existence and play it out. And if we could do that, what's to say that we aren't the simulation of some future civilization? Given how many simulations they'd probably run, like any sufficiently advanced society might run billions of simulations, what are the odds that we're the base reality and not just one of those simulations playing out? Pretty small, my friend, pretty small.
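For the curious, a back-of-the-envelope version of that odds claim (our framing of it, not Musk's exact words): if a base civilization eventually runs $N$ ancestor simulations indistinguishable from its own history, only one of the $N + 1$ candidate realities is base, so a randomly placed observer should figure

$$P(\text{base reality}) = \frac{1}{N + 1},$$

which heads toward zero as $N$ climbs into the billions.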
59:17 Yeah, and that is insane. Therefore, we're...
59:19 ...in the Matrix.
59:20 We are in the Matrix, yeah. On one hand, it seems like a really interesting thought experiment. It reminds me of when I took philosophy in college, and my professor told me about Zeno's paradox, where, in order to walk out of the classroom, you have to walk halfway to the door, then halfway of what's left, and half of that, and that's actually an infinite series of steps. So how do you ever walk out of the door? I remember my mind being a little bit blown. Like, how are we going to get out of here? I understand that I can walk out, but logically, how are you going to cross an infinite number of halves? That's crazy. But then, of course, I took calculus and realized that at a constant speed, each remaining half takes proportionally less time, so it's a limit that approaches, well, one, no big deal. And when I hear this simulation argument, on one hand, it feels like Zeno's paradox: you can set it up so you trick yourself into going, oh my gosh, you're right, it's impossible, this is crazy. And then there's a moment of clarity that unlocks it: yeah, this is actually ridiculous. On the other hand, I have huge respect for Elon Musk, more than for almost anyone; he's like Edison times 10 or something, he's amazing. And I just heard that Google is now releasing structural, Street View type maps for places like Las Vegas, where you can literally walk through towns in VR. So project that out 50 years or 1,000 years, and then what happens, right?
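For the record, the calculus that dissolves the classroom version is just a convergent geometric series; the halves sum to a finite total, and at constant speed the time for each half shrinks in the same proportion:

$$\sum_{n=1}^{\infty} \left(\frac{1}{2}\right)^{n} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1$$

So the "infinite number of steps" takes a perfectly finite amount of time.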
01:00:55 Yeah, absolutely. The argument that resonates with me most about this is: well, if it's true, who cares? It would be one thing if it were actually The Matrix and we were all living in a manufactured reality to our detriment. But if the idea is that we're a simulation that's running itself, how is that really different from an actual reality? Why is that any different from a quote-unquote base reality? That's kind of a biased definition of what reality is in the first place, and on and on. And then, okay, fine, it's like the first question in the only philosophy class I ever took, when the teacher walked in, in very dramatic fashion, we all sat down, and he just said: prove to me that I'm real.
01:01:38 Yeah, absolutely.
01:01:40 And you're like, all right, okay, cool, Dead Poets Society. But, you know, it's an interesting conversation. It seems more likely that these advances will lead us to a point where we might start to not care so much whether our surroundings are quote-unquote artificial, manufactured by machine, versus biological in the way we're accustomed to today. And that's an interesting conversation to have, maybe a more useful one than whether our universe is one of many in a multiverse manufactured by a computer program hundreds or thousands of years in the future. But it is cool; it's a fun thought experiment. I applaud Elon Musk for basically posing a sci-fi philosophy question to the world, knowing he basically had the world as an audience. And so for the next month afterwards, nerds like us were like...
01:02:33 well,
01:02:34 let's debate both sides.
01:02:35 Yeah. Do you think he just woke up one day and said, you know what, I'm going to go to this place where I'm giving a speech, and I'm just going to deadpan this sucker? I'm going to put it out there, play it completely straight, and let's just see what happens.
01:02:48 Yeah, at some point you have to wonder. I get that way periodically, you know, when things are kind of cruising along: thing A I'm doing is going pretty well, thing B has no real fires to put out, and intellectually I'm a little bit bored this week. You know what I mean? Of course that goes away, because we're all busy and we get consumed in our problems. But when Elon Musk has that kind of boredom, maybe this is what happens.
01:03:09 It could be what happened. It was interesting, definitely. For my part, I believe we're not actually living in some kind of simulation, but I do think it's fun to think about. All right, Jonathan, agreed. I think we should leave everybody with this philosophical thought for the rest of the holiday, till they come back to work and focus on actual things.
01:03:29 Yeah. In the meantime, ponder your existence when you're thinking about your New Year's resolutions.
01:03:33 Yeah, you got to come back to work next year, or do you?
01:03:38 What is work? What is meaning, exactly?
01:03:42 All right, man. Well, those were 10 really interesting stories, and I think it's been a great year for data science and AI and things like that.
01:03:49 Yeah, me too. It's been a fascinating year, and I look forward to 2017 being just as interesting and exciting. Thanks so much for having me on, and for doing this. I think this has been a really fun episode.
01:03:58 It's been great fun; you're welcome. So, looking forward to 2017: everybody should be going to PyCon, right?
01:04:04 Oh, absolutely. Absolutely. Because I heard a rumor that there may be some very exciting Python-focused podcasts all hanging out, waiting to talk to you.
01:04:14 That rumor is absolutely right. So Partially Derivative, Talk Python, Python Bytes, and Podcast.__init__, we're all getting together and doing a big group booth. You can come talk to all of us, meet all of us; we may even do some live recordings. We don't quite know what that looks like yet, but we're definitely putting together a group booth somewhere in the expo hall. By the time this airs, the early bird discounts may be over, but don't wait till the end to buy your ticket; buy it right away, because tickets sold out last year, and there were sad people who reached out to me wanting to come, and I couldn't help them.
01:04:47 Yeah, and if the trends are any indication, Python is only going to get more popular and more widely adopted, and PyCon will only get bigger and more fully attended. So I agree: get your tickets now and come hang out with your favorite podcasters.
01:05:01 Yes, it'll be great. It's going to be great fun, and I'm looking forward to seeing you there. Yeah, me too. All right, catch you later. All right, thanks. Bye.

This has been another episode of Talk Python To Me. Today's guest has been Jonathan Morgan, and this episode has been sponsored by Rollbar and Continuum Analytics. Thank you both for supporting the show! Rollbar takes the pain out of errors. They give you the context and insight you need to quickly locate and fix errors that might have gone unnoticed, until your users complain, of course. Talk Python To Me listeners can track a ridiculous number of errors for free at rollbar.com/talkpythontome. Whether you want to hear the keynote from Forrester Research, meet the folks behind Anaconda, or just mingle with high-end data scientists, you need to find your way to Austin, Texas for AnacondaCON this February. Start at talkpython.fm/acon. Are you or a colleague trying to learn Python? Have you tried books and videos that just left you bored by covering topics point by point? Well, check out my online course, Python Jumpstart by Building 10 Apps, at talkpython.fm/course, to experience a more engaging way to learn Python. And if you're looking for something a little more advanced, try my Write Pythonic Code course at talkpython.fm/pythonic. You can find the links from this episode at talkpython.fm/91. That's right: any time you want to find a show page and the show notes, it's just talkpython.fm followed by the episode number. Be sure to subscribe to the show: open your favorite podcatcher and search for Python; we should be right at the top. You can also find the iTunes feed at /itunes, the Google Play feed at /play, and the direct RSS feed at /rss on talkpython.fm. Our theme music is Developers Developers Developers by Cory Smith, who goes by Smixx. Cory just recently started selling his tracks on iTunes, so I recommend you check it out at talkpython.fm/music. You can browse his tracks for sale and listen to the full-length version of the theme song. This is your host, Michael Kennedy. Thanks so much for listening. I really appreciate it. Smixx, let's get out of here.