#113: Dedicated AI chips and running old Python faster at Intel Transcript
00:00 Where do you run your Python code? No, no, no, not Python 3, Python 2, or PyPy, or any other
00:04 implementation. I'm thinking way lower than that. This week, we're talking about the actual chips
00:10 that execute our code. We catch up with David Stewart and meet Suresh Srinivas and Sergey
00:16 Maidanov from Intel. We talk about how they're working at the silicon level to make even Python
00:21 2 run faster and touch on dedicated AI chips that go beyond what's possible with GPU computation.
00:27 This is episode 113 of Talk Python to Me, recorded live at PyCon 2017 in Portland, Oregon on
00:36 May 19th, 2017.
00:55 Welcome to Talk Python to Me, a weekly podcast on Python, the language, the libraries, the
01:07 ecosystem, and the personalities. This is your host, Michael Kennedy. Follow me on Twitter,
01:12 where I'm @mkennedy. Keep up with the show and listen to past episodes at talkpython.fm,
01:17 and follow the show on Twitter via @talkpython.
01:20 This episode is brought to you by Talk Python Training and Hired. Be sure to check out what
01:25 we both have to offer during our segments. It helps support the show.
01:28 David, Suresh, Sergey, welcome to Talk Python.
01:31 Thank you very much. Great to be here. Thank you very much.
01:35 David, you were here. We talked in the early days of the Intel Python distribution,
01:39 and you guys have a lot of new things to discuss that you've done in the whole Python space. In addition,
01:46 you know, with the Intel Python distribution, but also with other things, right?
01:49 Yes, that's right. Yeah. We have a lot of, I mean, yeah, about a year ago when we talked the first time,
01:55 we had really put the plans in place for not only our upstream contributions to Python.
02:01 We were also doing a lot of work with PyPy, the JIT interpreter for Python, as well as the
02:07 Intel Python distribution. Since then, we've released the Intel Python distribution, and we've had some
02:13 very significant, you know, upstream contributions and some proof points with some customers that have
02:18 been showing some very positive things. So yeah, you know, not to repeat maybe last time, but just to
02:25 say, generally speaking, if you're doing scientific computing, the Intel Python distribution,
02:30 particularly things like using pandas or scikit-learn or numpy, scipy, these are all things
02:37 that really work together very well with this Intel Python distribution. And it's a sort of a one
02:41 distribution, just download and install the whole thing as one, right? So not a lot of messing around
02:46 with it. We also have significant proof points with PyPy. In particular, we were showing off a doubling
02:53 of throughput of like OpenStack Swift with PyPy contributions we had made. And that's, you know,
02:58 so that means faster throughput and more users being able to be supported, and
03:04 things like that. So that was a year ago.
03:05 Yeah, that was a year ago. And when you say you're doubling the speed with PyPy, does that mean the
03:10 contributions you've made back to PyPy now result in going faster? Or is that somehow running the Intel
03:16 Python distribution on PyPy?
03:17 It's actually, so the Intel Python distribution is separate from PyPy, right? We have two major efforts from a year ago: we're doing upstream open source contributions, and the Python distribution, which is a packaged product, right? Right, right, right. So the doubling we had was actually just
03:26 initially out of the box, let's see what PyPy gives us. And we were stunned that we got, you know, twice the throughput and like 80% better response time on Swift, just out of the box. And that said, oh, let's start doing some work, you know, to actually optimize this thing and make it better and try more sort of proof points with other real customers, right? And since then, as well, the Python distribution
03:45 has had a lot of proof points with customers. I think you've had financial organizations, right? Financial, oil and gas, government organizations, a lot of different usages.
04:07 Yeah, it's great. So okay, that's really cool that you guys are working on that. I know we're not just talking about the Intel Python distribution, but let's dig into that just for a minute. Like, is that basically CPython forked with some changes in the middle? Or is it a from-scratch implementation? How does this relate back to CPython? Well, the beauty of Python, right, is that it's a language specification and sort of a standard implementation, CPython, right? But it's very open
04:37 to any number of other interpreters or implementations of the language, PyPy being one of them. And it's basically a JIT, a just-in-time compiler, which means that instead of just interpreting the bytecode from Python, it actually generates native code any opportunity it can for the hot parts of the program. And that's incredibly helpful strategically, because then we can make use of a lot more processor instructions, make use of more processor parallelism.
05:07 Yeah, okay, that sounds, sounds great. Suresh, were you involved in this Intel Python distribution work?
05:37 So we can deliver some workloads, and then start optimizing CPython itself. So we have things like profile-guided optimizations and link-time optimizations that have now become the default in CPython.
05:49 Tell us a little bit about the profile-guided optimization. So what you do is you run it under a profiler, and then you somehow feed that back to the compiler?
05:58 So profile-guided optimization is very critical, since a lot of these runtime languages have both a large code footprint and a lot of branch mispredictions, right? Which essentially stall the front end of the CPU. And by profiling the code, you're able to re-lay out the code better, so that it's friendly to the CPU and it's also more efficient. And that's a great idea.
06:28 So PGO is now default with Python.
06:30 With CPython. And pyperformance, the Python performance benchmark project, that is how they're measuring it now.
06:38 Wow, that's really cool. And Sergey, how about your involvement?
06:40 They have solved a really critical problem in making an interpreter, or JITting, really fast on Intel architecture. The Intel Distribution for Python also solves the problem of making numerical and machine learning code run faster. And Python is known and loved for really nice numerical packages, NumPy, SciPy, scikit-learn.
07:06 All the stuff that we saw in the keynote today.
07:08 Yeah.
07:08 It's just like, here's why people that do numerical analysis love and use Python. And for those people listening who didn't get a chance to watch the keynote, you should go on YouTube and watch it, right?
07:18 So yeah, absolutely. Those groups of people, the scientists, data scientists, it's great, right?
07:24 That's why we focus on this area and we optimize these numerical packages, not the interpreter itself, but rather the packages. And for that, we rely on high performance native libraries that Intel has developed for decades: Intel Math Kernel Library, Intel MPI, Intel Data Analytics Acceleration Library. All these good high performance libraries are used underneath to accelerate NumPy, SciPy, scikit-learn.
07:52 I see. So you take, let's say, NumPy. You take NumPy and you recompile it against these high performance Intel libraries. And that, because the foundation is faster, basically makes NumPy itself faster.
08:04 It makes it almost as fast as native code.
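To make that concrete, here is a minimal sketch, assuming any NumPy install (from the Intel distribution or elsewhere), of checking which BLAS/LAPACK library NumPy was built against and timing an operation that gets dispatched to that native library rather than run in the interpreter:

```python
import time
import numpy as np

# Shows which BLAS/LAPACK implementation this NumPy build links against;
# an MKL-backed build (like the Intel distribution's) reports MKL here.
np.show_config()

# A large matrix multiply is exactly the kind of call that runs in the
# underlying native library rather than in interpreted Python.
a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)

start = time.perf_counter()
c = a @ b
print(f"2000x2000 matmul took {time.perf_counter() - start:.3f} seconds")
```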
08:08 How much does it matter that you guys control the hardware and build these libraries, so you can make them compile to exactly what they need to be? Or could anybody do this? Is it a big advantage that you guys control the chips and understand the chips?
08:20 Absolutely. I can give you an example. I was with the Intel Math Kernel Library team for 15 years. And we would start optimizing MKL for a new processor three to five years in advance of its launch. That's a really huge benefit. So by day one of a processor launch, we had MKL optimized for that processor. Same with the Intel Python distribution.
08:42 We had the Knights Landing Xeon Phi processor launch last summer. And by that time, the Intel Distribution for Python was already optimized for KNL.
08:52 I see. Because you guys have a long lead time. Yeah. I think the other side of this is not just being able to, you know, have these libraries if you're using them for scientific computing,
09:02 but there's a ton of usage of Python in the data center that is not scientific computing. You know, a great example is the site Instagram, and any number of other sites out there are using Python. OpenStack itself is implemented in Python.
09:16 So one of the things, in terms of working with the chip architects, is being able to actually help them design the chip so it runs all Python better. Not just this highly tuned library code, but all Python as well.
09:28 Right. And you said there are some really interesting performance improvements that you got for running old Python. We'll dig into that in a little bit, because as fun as it is to look at the data science stuff and the machine learning performance and all that,
09:40 most people are running old Python, maybe Python 2 stuff that they don't even want to touch, much less optimize. Right. So if somehow you guys can just magically make it run faster, that would be good for us, wouldn't it? I mean, it would make sense. Yeah, it would. It would.
09:56 So, I mean, we're talking about performance in terms of speed, but when you're optimizing like the data center, one of the major measures of efficiency in the data center is how much do I have to pay to run this in electricity and cool it?
10:08 So just like pure efficiency in terms of energy, right? Like how much of a difference have you guys seen in that?
10:13 That's really huge because part of the challenge in the data center is all the cooling costs and all the space costs and things like that. So Intel and Facebook work together to create a new server architecture, right?
10:26 That many of the Python programs kind of run in the data center on that architecture and that runs at 65 watts compared to…
10:36 Compared to… Yeah, give me an example. Like what is that relative to?
10:38 Compared to a server that runs at 150 watts. And so it's really efficient and then it has a lot of technologies that we are adding to the silicon itself to make it perform well at the same time.
10:52 Because people want both the power efficiency and the speed.
10:56 Obviously you want the speed, but you can get double the density in a data center. So if you're AWS or Azure or Google or Facebook, you can have twice as much computing power with the same amount of energy in and cooling out.
11:08 That's a real win. And not only that, we've observed, you know, an extra processor generation of performance improvement with some of this optimized software.
11:18 So that's something that's an advantage going that route. Yeah.
11:21 Yeah. And so what's really cool, I think, is some of this work that you guys are doing is being pushed upstream to CPython, is being pushed upstream to PyPy.
11:29 It's one thing to say, well, we have our own distribution and that one's really fast. So please use ours instead. But you guys are also giving a lot back by making Python itself faster for everybody or more efficient in energy terms or whatever.
11:41 It's really sort of not a one-size-fits-all philosophy. If you're doing data science, you're using these libraries Sergey was mentioning, and the Intel Python distribution is a great one-stop shop for all of that stuff.
11:54 If you're not necessarily using the libraries, then, you know, we're working in the upstream areas to make sure that any Python you would download will run faster.
12:04 Yeah. Are there any notable specifics about what you've contributed to CPython or PyPy that you can think of off the top of your head?
12:11 Yeah. One of the things that's been interesting for us is making sure we have really customer sort of relevant, you know, workloads.
12:18 When we talk about workload, what this means is, you know, you have software just sort of sitting there, you install Python and well, that's not particularly interesting, right?
12:25 What's more interesting is if you can run some code that represents what everyone else is doing, right? And hopefully not just a simple sort of, you know, microbenchmark, right?
12:36 It's something that's actually sort of realistic. And so one of the things we're really excited about is we just open sourced, with Instagram, a new workload that represents not only what Instagram is doing with Django, but also a lot of other Django usage out there.
12:51 And so with that one, by open sourcing it and by both companies contributing to it, I think it's going to help everybody sort of drive performance better, right? We also do a lot of monitoring of the sources.
13:04 So for Python 2, Python 3 and PyPy, we actually do a nightly download of the sources, run a bunch of benchmarks and then report the results out to the community.
13:15 So anybody can go to languagesperformance.intel.com and see a complete readout of a bunch of different workloads with the different versions of Python and PyPy.
13:25 And so you can see exactly on a day to day basis how the performance changes. Now, the reason why this is important is someone can do a pull request that slows things down by 10 or 20%.
13:36 We've seen this in some cases where a single pull request, you know, will really slow things down, right?
13:41 And so we're not only monitoring this thing, we have engineers that are jumping on it and being able to see, hey, if we have a real regression in performance, we want to jump on it very quickly and get it back.
13:51 So this is one of the earliest things that we did in these languages to try to help with this.
13:55 That's a big deal because it's very hard to be intuitively accurate about performance, isn't it?
14:01 Or it could be your intuition might say one thing, but it might be absolutely wrong.
14:05 You go, well, this should run faster.
14:07 And it's like, wow, it only, you know, improved like half a percent or maybe it degraded, you know, 5% because a lot of the things that might have gotten pulled in or just assumptions that were missing.
14:17 Right, right.
14:18 The code looks tighter, but it actually does something different with memory.
14:21 For example, I think if you look at, say, a list comprehension versus a for loop that appends to a list, the list comprehension is faster, even though they're effectively doing the same type of thing.
14:30 Right.
14:31 These types of things are pretty interesting.
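As a small illustration of measuring rather than guessing, here is a hedged timeit comparison of the two patterns just mentioned; the exact numbers will vary by machine and Python version:

```python
import timeit

setup = "data = range(100000)"

loop_version = """
result = []
for x in data:
    result.append(x * 2)
"""

comprehension_version = "result = [x * 2 for x in data]"

# Take the best of several runs to reduce noise from the rest of the system.
print("for loop:          ", min(timeit.repeat(loop_version, setup, number=100)))
print("list comprehension:", min(timeit.repeat(comprehension_version, setup, number=100)))
```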
14:33 And by the way, if you're a programmer, I think I made the comment last year on the podcast, the best runtime in the world, the best libraries in the world and poor Python code, right, will still run poorly.
14:44 Right.
14:45 And so one of the things I think I'm really also very excited about is that we have a great profiler called VTune.
14:51 It's from Intel.
14:52 The group Sergey is from.
14:54 And there you're actually able to see where the hotspots are in your Python code.
14:58 And I think this is really powerful because, you know, I think both the runtime and the user code are really important to optimize or else you may not get nearly what you think you're going to get in terms of performance.
15:09 Right.
15:10 Even if you adopt the fast libraries.
15:11 Exactly.
15:12 If you have some sort of exponential-order algorithm, you're still in trouble, right?
15:18 Or an order n log n or something like that, right?
15:20 Yeah.
15:21 Or order n squared or something like that.
15:22 Then you want to make sure you actually can identify some of those things and correct them.
15:26 Yeah.
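For a concrete sense of the kind of algorithmic fix being described, here is an illustrative sketch comparing a quadratic duplicate check with a linear, set-based one on the same data:

```python
import random
import timeit

values = list(range(2000))
random.shuffle(values)  # unique values, so the quadratic version does all the work

def has_duplicates_quadratic(items):
    # O(n^2): compares every pair of elements.
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            if a == b:
                return True
    return False

def has_duplicates_linear(items):
    # O(n) on average: a set collapses duplicates in one pass.
    return len(set(items)) != len(items)

print("quadratic:", timeit.timeit(lambda: has_duplicates_quadratic(values), number=5))
print("linear:   ", timeit.timeit(lambda: has_duplicates_linear(values), number=5))
```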
15:27 So Sergey, tell us a little bit. Let's say we're talking about Django.
15:31 Could I take a Django app and apply this VTune profiler to it and get some answers?
15:35 Absolutely.
15:36 This is what we suggest essentially as a first step.
15:39 You run your application on your architecture.
15:42 You want to understand what affects the performance and how I can improve this performance.
15:48 The first step is to run it with a profiler like VTune.
15:51 And VTune has existed for many years.
15:54 It's a product known for profiling native code.
15:57 Yeah, I remember VTune from my C++ days.
16:00 Yeah.
16:01 The only challenge was that when you ran VTune in the old days with Python code, it didn't show
16:07 your Python-specific code.
16:09 You saw these weird symbols.
16:11 You're like, this ceval.c is really slow.
16:14 It seems to be doing a lot of stuff in here.
16:16 It tells nothing.
16:17 So what we added to VTune is that it now understands Python code, and it can show you exactly the Python
16:24 function or line of Python code, the loop, which consumes the most cycles.
16:30 So you can really focus on optimizing this piece of the code using a variety of technologies,
16:35 either libraries or PyPy or other technologies.
16:39 Or maybe just changing your code.
16:40 Oh, yeah.
16:41 As you were saying, Michael, a for loop versus a comprehension, right?
16:43 Yeah, exactly.
16:44 Exactly.
16:45 Yeah, that's pretty interesting.
16:46 Does it require some kind of GUI thing?
16:49 Can I make this like a CLI, part of my automated build?
16:52 You can run it from the command line.
16:54 If you like a nice GUI, you can use the GUI, yeah.
16:56 So either a CLI or a GUI.
16:58 Yeah, yeah.
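VTune is its own Intel product with its own collectors and UI, so as a rough, tool-agnostic stand-in for the same find-the-hotspot-first workflow, here is how a first hotspot list might look with Python's built-in cProfile; the handle_request function is just a placeholder for whatever code you suspect is slow:

```python
# From a shell you could also run:  python -m cProfile -s cumtime myapp.py
import cProfile
import pstats

def handle_request():
    # Placeholder for the view or request handler you actually want to profile.
    total = 0
    for i in range(200000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

# Print the ten functions that accumulated the most time.
stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(10)
```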
16:59 So any of you guys can take this one.
17:01 Suppose I'm sitting, you know, I got my MacBook Pro here and I've written my code and it runs
17:05 a certain way.
17:06 And then I want to like push it out to some hosted place, DigitalOcean, Azure, AWS, whatever.
17:13 How much would I expect the performance to vary on say like one of those VMs versus say on like my native machine?
17:20 And could I use something like VTune to like test it there?
17:23 So I test it in its home environment.
17:25 I think it's a great question.
17:26 You know, so much code is being run in the public cloud now.
17:29 My recommendation on that and, you know, performance, here's the other thing.
17:33 There's nothing against any of the public cloud providers.
17:35 But one of the things is, if you're sharing compute resources, you're not necessarily getting the purest performance.
17:41 There's some sort of performance trade-off compared to running on a dedicated machine.
17:46 And it varies, right?
17:47 You don't know what you have.
17:48 You have an SLA.
17:49 Is your neighbor doing machine learning, or do they just have an unpopular website and have to pay for a VM?
17:52 Or even, you know, in some instances, we have a noisy neighbor.
17:56 You know, maybe you'll have some VM that's destroying the cache, right?
18:00 By the way, we have a feature that we've added to our processor to detect noisy neighbors and manage them, which is a separate thing we're doing for cloud service providers.
18:07 But anyway, for Python.
18:08 So, yeah, I would recommend running it native and doing most of your tuning there.
18:13 By the way, I've noticed that not all cloud service providers would let you run VTune.
18:17 Oh, really?
18:18 Yeah.
18:19 Well, it's not that you can't run VTune.
18:20 It's just that they sort of sometimes mask some of the, you know, registers that let you detect, you know, the performance.
18:26 And so for some of those things, I think you've got to have either a private cloud setup or, you know, something on-prem.
18:32 It's much easier to really tune the performance and figure out what's going on.
18:35 Maybe if you're doing open stack, you control the thing a little better.
18:38 Exactly right.
18:39 And, you know, hey, give people the ability to actually monitor the performance of what they're doing and figure out how to make it better.
18:45 Right?
18:46 Okay.
18:46 And also, like our silicon has these advanced features called the performance monitoring unit.
18:50 Okay.
18:51 Which, like when you're profiling on your MacBook Pro, VTune can really take advantage of that.
18:56 And it can tell you where your cache misses are coming from, where your problems are coming from.
19:01 Whereas, sometimes, if you try to do it on a public cloud, it becomes harder for you to figure out.
19:06 Right.
19:07 So, we would definitely recommend like what Dave is saying to be able to profile and get your code optimized and then deploy.
19:14 Yeah.
19:15 I see.
19:16 Yeah.
19:17 So, maybe test both, right?
19:18 Yeah.
19:19 Because on one hand, you get the best, most accurate answers on the real hardware.
19:21 But it actually has to live over there.
19:23 So, you want to know also what it does.
19:24 Yeah.
19:25 Certainly see what the experience is.
19:26 Particularly if you're expecting some throughput measurement, you know, set things up.
19:30 By the way, for performance work, we sort of recommend that people have something that they can run their code against that's repeatable.
19:36 You get repeatable results and then just change one thing at a time to kind of see what the change is.
19:41 Use a very scientific approach, right?
19:43 As opposed to changing a bunch of things and, gee, a lot changed, but I don't know what it was that had the effect.
19:48 Right.
19:48 Make a hypothesis.
19:49 Make some measurements.
19:50 Exactly right.
19:51 It's the scientific method, right?
19:52 It is.
19:53 That we were taught in school.
19:54 Yeah.
19:55 I think Aristotle and those guys were on the subject.
19:57 They were on the subject.
19:58 That's right.
19:59 My previous manager used to say, "Measure twice, cut once." Yeah.
20:02 Yes.
20:03 Exactly.
20:04 Exactly.
20:05 Yeah.
20:06 Very much.
20:07 Perfect.
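Here is a minimal sketch of that repeatable, change-one-thing-at-a-time style of measurement using only the standard library; the workload function is a stand-in for whatever code you are actually tuning:

```python
import statistics
import timeit

def workload():
    # Stand-in for the code path you are tuning; swap in your real work here.
    return sum(i * i for i in range(50000))

def measure(label, func, repeat=5, number=20):
    # Run the same workload several times so run-to-run noise is visible,
    # then report the median rather than a single, possibly unlucky, run.
    times = timeit.repeat(func, repeat=repeat, number=number)
    print(f"{label}: median {statistics.median(times):.4f}s over {repeat} runs")

measure("baseline", workload)
# ...change exactly one thing (code, interpreter, library, hardware)...
measure("after the change", workload)
```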
20:08 So, another area that you guys are working in, and it seems like just in the last year or so this has become real, is AI and machine learning.
20:13 I remember thinking for like 10 years, like, yes, AI, machine learning, this type of stuff, especially AI, was like one of those always 30 years in the future sort of technologies.
20:22 People are working on it, but it doesn't ever seem to do a thing.
20:24 Flying car and jetpack.
20:25 Yes, exactly.
20:26 Like, as soon as I have my, you know, teleporter, I'll be able to do machine learning and stuff.
20:31 But over the last, I'd say two years, it has become super real, right?
20:36 We have self-driving cars.
20:37 We have all sorts of interesting things going on.
20:41 Lots of application of AI just in recommendation engines, facial recognition, all these sort of things that are just practical, everyday things.
20:49 Yeah.
20:50 It's going to have some interesting societal effects.
20:52 Absolutely.
20:53 I think in some very powerful ways.
20:54 Oh, social effects.
20:55 Yep.
20:59 We as a world need to think about what that means for us.
20:59 I totally agree.
21:00 I mean, I'm thinking of like breast cancer analysis.
21:03 We used to think radiology was like a super high-end job that like you're safe if you are a doctor.
21:09 And now it's like, well, or you feed it to this machine and it's actually a little more accurate.
21:13 You could talk about other social impacts like are you going to use past performance to indicate which is the best candidate to hire?
21:19 Well, if you did that, you might eliminate a lot of people of color or women because they haven't been as much in the workforce, right?
21:25 Right.
21:26 So you've got to be very careful at some of the social impact of these things.
21:29 However, I will say this.
21:30 One of the things is, you know, there are a lot of systems on the internet that Intel's provided the chips for.
21:36 And there's a ton of data that's out there.
21:38 And so one of the things we did that's very interesting from a Python standpoint, since a lot of companies have this data accessible through Hadoop and Spark, is that we recently, just in March, open sourced what we call BigDL.
21:53 Okay.
21:54 BigDL.
21:55 It's sort of a big deal.
21:56 Good.
21:58 Thank you.
21:59 I got to laugh.
21:59 Anyway, so BigDL has a Python interface.
22:01 So what it does is deep learning.
22:03 So when you're doing a training of a deep learning algorithm and then inference analysis, right?
22:09 What a lot of times that data that you're using to do the training on is accessible out of Hadoop and Spark.
22:14 So a lot of people have said to us, hey, we would like to be able to do deep learning on our Spark data lakes or, you know, Hadoop, right?
22:21 Big data.
22:24 It's like, yeah, so that's what BigDL does.
22:24 But it's like a lot of people said, we don't want to have to use Java to go into that stuff.
22:28 We'd like to be able to use Python.
22:30 So one of the things that got released in March was our first Python interface to BigDL.
22:34 So this is one of the ways where a lot of organizations, they already have a big data lake already that they can access through Hadoop and Spark.
22:41 They can use Python and the BigDL project to do their deep learning experiments and then products.
22:47 Yeah, that sounds really, really cool.
22:49 And it sounds like you guys are doing a lot of almost reorganization of Intel around this AI research and work.
22:57 That's a very good observation.
22:59 In fact, we started up a new product group, the AI platform group.
23:04 Product group?
23:05 Platform group.
23:06 Yeah, AI product group.
23:07 Right.
23:08 We're reporting directly to the CEO.
23:19 So these are chips that they're making, that Nervana is making, that actually do this deep learning training and inference much, much faster, an order of magnitude better than anything else that's out there.
23:29 Wow. Okay. So I know about deep learning on CPUs and training and machine learning. That's pretty good.
23:37 You move it to a GPU and it gets kind of crazy. These chips, these are not just GPUs. These are something different?
23:43 Correct. Yeah. They're specifically designed for the problem set that deep learning presents to the CPU.
23:48 So it's not like, yeah, I mean, our main Xeon processors actually do deep learning pretty well compared to the GPUs that are out there.
23:58 But this is something that can actually, like, turbocharge it and really take it to the next level: a chip that's specifically designed for that, not for that plus graphics or that plus something else.
24:07 Sure. Because traditionally graphics cards just coincidentally are good at machine learning.
24:11 Well, with a ton of effort, I remember, you know, the first time looking at, well, how do you get a GPU to actually do general purpose computing?
24:18 Let's see, if you do a matrix operation, right, it's a texture. And so let's see, we'll get a couple of textures as matrices.
24:24 We'll feed them into the GPU and then you can do texture, you know, lighting transform on the textures.
24:29 And it's like, well, that happens to be a matrix operation. Read out the resulting matrix.
24:32 And it's like, from a programming standpoint, you know, that's why you need a lot of libraries and things to help you through that process.
24:38 Can I express this general programming problem as a series of matrix multiplications?
24:42 Exactly.
24:43 That are essentially just texture, OpenGL texture processing and things like that.
24:47 So this is one of the things I think is very exciting about moving this into the mainstream in terms of either, you know, at the x86 Xeon processors.
24:54 And then as we bring Nervana's chips, you know, we bring them into the Xeons.
24:58 We have, you know, actually FPGAs as well.
25:01 You know, these are special purpose.
25:03 You know, you can program to do a bunch of accelerations and they have multiple acceleration units built in.
25:08 And so we can actually accelerate a lot of things along with the CPU.
25:12 So there are a ton of options that we're bringing to the table that will really accelerate a lot of specific workloads.
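To connect the earlier point about expressing a computation as matrix math back to Python, here is a tiny NumPy sketch of a dense neural-network layer written as a single matrix multiply plus an elementwise nonlinearity, which is the shape of work that GPUs and dedicated deep learning chips are built around; the sizes are arbitrary:

```python
import numpy as np

# A batch of 64 input vectors, each with 128 features.
inputs = np.random.rand(64, 128)

# One dense (fully connected) layer mapping 128 inputs to 32 outputs.
weights = np.random.rand(128, 32)
bias = np.random.rand(32)

# The whole forward pass for the batch is one matrix multiply plus a bias,
# followed by an elementwise ReLU.
activations = np.maximum(inputs @ weights + bias, 0.0)

print(activations.shape)  # (64, 32)
```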
25:17 Yeah, that sounds really interesting.
25:19 I want to dig into that some more.
25:21 This portion of Talk Python is brought to you by us.
25:25 As many of you know, I have a growing set of courses to help you go from Python beginner to novice to Python expert.
25:30 And there are many more courses in the works.
25:32 So please consider Talk Python training for you and your team's training needs.
25:36 If you're just getting started, I've built a course to teach you Python the way professional developers learn by building applications.
25:43 Check out my Python jumpstart by building 10 apps at talkpython.fm/course.
25:48 Are you looking to start adding services to your app?
25:51 Try my brand new consuming HTTP services in Python.
25:54 You'll learn to work with RESTful HTTP services as well as SOAP, JSON and XML data formats.
25:59 Do you want to launch an online business?
26:01 Well, Matt Makai and I built an entrepreneur's playbook with Python for Entrepreneurs.
26:06 This 16 hour course will teach you everything you need to launch your web based business with Python.
26:11 And finally, there's a couple of new course announcements coming really soon.
26:14 So if you don't already have an account, be sure to create one at training.talkpython.fm to get notified.
26:20 And for all of you who have bought my courses, thank you so much.
26:24 And I think it really, really helps support the show.
26:27 Just on the general machine learning stuff, Suresh, you were working in the data center and optimizing that space, right?
26:32 Over the next five years, how do you see machine learning contributing to that?
26:36 Like, can you take a trained up machine learning system and say, "Here's my data center. Here's what we're doing. Can you make it better?"
26:44 And just ask it these questions. Like, is that something that could happen?
26:47 No, that's definitely happening because it's all about like, what are the inputs that you can take in?
26:52 And the more inputs you can take and learn some specific things, then you're able to start optimizing the system.
26:59 So we'll start seeing this kind of technology becoming more prevalent in a lot of things that we do.
27:06 It's very exciting time to be in this field.
27:10 It's every day I wake up going, "It's even more amazing than yesterday!"
27:13 So, same question to you, Sergey.
27:16 The big deal in this new area is cross-team productivity.
27:21 You cannot solve the modern complex problems without involving domain specialists, programmers, data scientists.
27:30 This is all new collaborative environments. So productivity is the key.
27:34 This is what we are trying to offer through Intel distribution for Python.
27:38 We provide out-of-the-box performance and productivity to our customers.
27:43 So they can focus on solving their domain problem in deep learning, in machine learning in general.
27:49 And then, with the Intel Distribution for Python, to scale this to real problems in the data center.
27:54 How about parallel, distributed, grid computing type stuff?
27:59 What do you see out there and what do you see working for that in the Python space?
28:05 Yeah, I mean, I think one of the things that is, like I said, we have an array of things, so to speak, that you can bring to bear on different problems.
28:12 One of the ones that Sergey mentioned is something we call Xeon Phi, P-H-I, Xeon Phi.
28:17 And it actually, as opposed to maybe 18 cores on a chip, it might have up to 80, 90 cores per chip, right?
28:25 So think about that.
28:26 I mean, think about these all x86 compatible CPUs, all available to do a variety of things in parallel.
28:33 So that's an interesting model to think about.
28:36 It's like, if you have parallelism, you can express it a number of different ways.
28:40 You can express it in terms of the vector.
28:43 We have vector processing within the CPUs.
28:45 We have this parallel processing.
28:46 And I think Python has, certainly, some of the things that Sergey was mentioning in terms of these libraries that can make use of the vector operations within the CPU and really turn up the performance, right?
28:58 So, traditionally, Python has sometimes had a few challenges relative to, you know, parallel programming.
29:03 And so, one of the things that's really cool about thinking about one of these libraries like MKL that Sergey mentioned is it can automatically take advantage of the parallelism that's available, right?
29:12 And so, you know, if you have one of these, by the way, the Xeon Phi, if you go to the top 500 supercomputers, there's a significant number that you can look at and it says, oh, it uses the Xeon Phi as part of that, right?
29:23 So, the top supercomputers in the world are using this chip to basically achieve incredible results.
29:29 It just keeps going.
29:30 It's really, really amazing all the stuff that people are doing there.
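Alongside the vectorized library calls, here is a minimal standard-library sketch of fanning independent, CPU-bound work out across cores from Python with multiprocessing; the simulate function is just a placeholder for real work:

```python
import os
from multiprocessing import Pool

def simulate(seed):
    # Placeholder for one CPU-bound chunk of work (a simulation, a file, a batch, ...).
    total = 0
    for i in range(1000000):
        total += (i * seed) % 7
    return total

if __name__ == "__main__":
    seeds = list(range(32))
    # One worker process per core; each process runs independently of the GIL.
    with Pool(processes=os.cpu_count()) as pool:
        results = pool.map(simulate, seeds)
    print(len(results), "results computed")
```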
29:33 So, back to the AI chip.
29:36 It sounds to me like what you're telling me is you have this custom chip, which makes a lot of sense because, like, GPUs, they were meant to do video processing.
29:45 If you could make a special purpose chip for that, you're in a good place.
29:49 What about other things?
29:51 Do you guys have other specialized chips coming in addition to AI?
29:54 Is this a trend, right?
29:56 Yeah.
29:57 Going to have more specialized chips.
29:58 A couple of things I would talk about there.
30:00 One of them is a new memory technology you may have heard of; it's actually incredibly revolutionary.
30:05 I say that, you know, as an Intel guy, but I've got to tell you, it's just mind-blowing. It's memory that sits... well, think about DRAM, you know, your regular memory in your Mac or whatever,
30:14 Versus flash memory, right?
30:16 The flash memory is you can get a lot of it.
30:19 It's lower cost, but it's slow.
30:21 Main memory, the DRAM is like super fast, but it's expensive, right?
30:25 And volatile.
30:26 And volatile.
30:27 What if you could have memory that was non-volatile, if you want it to be, and sit in between flash memory and DRAM, right?
30:35 Okay.
30:36 And so we've come up with this, we call it 3D XPoint.
30:39 It's a memory technology that's coming out in SSDs now.
30:42 And think about it from a Python standpoint, being able to make use of memory that's, it's actually chips in the DIMMs in the computer itself.
30:50 So when you power on the computer, it actually has, you know, this persistent memory already available without going to the SSDs, right?
30:56 So it's instantaneously available.
30:58 I see.
30:59 The choice previously has been better with SSDs.
31:01 I remember when it was not.
31:03 So the choice is, we've got this regular DRAM, and then we've got swap.
31:08 And that's like a hundred times worse, or something, to go to swap.
31:11 And if it's a slow spinning laptop, cheap disk, maybe it's way worse than that still, right?
31:15 But think about a data center where you have maybe a few terabytes of DRAM in a system, and then multiple terabytes of this.
31:22 It just sits right there as more, you know, memory DIMMs in the computer, right?
31:27 This is amazing, right?
31:28 And not only is it super fast in terms of latency, access latency, but it also can be used persistent.
31:35 So these are things which are, from a Python standpoint, we'll actually be able to make some of this stuff available to Python programmers when these products start rolling out.
31:44 So this is a very interesting future.
31:46 The other thing from a future chip standpoint that I think is very interesting is that we're partnering up with the chip designers.
31:53 You were talking about Intel controlling the chips, right?
31:55 One of the things we're able to do is, folks like Suresh, Sergey, are able to partner up with the chip designers and say,
32:00 "Let's take a look at how Python runs on the chips." Okay?
32:04 So you're running this stuff and you go, "Oh, hmm, looks like from the size of the code footprint, actually we're spending a lot of time just twiddling our thumbs in the processor,
32:13 because it's waiting for instructions to get fetched." Is that because it's too big to fit in the smallest cache?
32:20 Correct.
32:21 And this is true of a lot of interpreted languages.
32:23 If you look at PHP, Node.js, et cetera, they all have these massive code footprints.
32:27 We've analyzed the internal pipelines within the CPU, and we see this idling effect, right?
32:33 And now with the next generation of chips that are coming along, they've actually taken a look at this and actually we're amazed at how much they've been able to improve on this instruction level parallelism.
32:43 So in fact, even with a single instruction stream without parallel instruction streams, they're actually able to run old Python code faster.
32:51 So if you think about it, if you've got a data center, I've got a bunch of Python running there, one of the best things you can do.
32:57 Now, you know, we as software guys would say, "Oh, we want you to use all of this good software goodness."
33:05 Why are you running this old version of Python?
33:06 Right, right, right.
33:07 Use the new upstream version or use the Python distribution, et cetera.
33:11 But the good news is, as an IT decision maker, you can now think about, well, upgrading to the latest Intel CPU actually runs Python faster.
33:20 And it's more than just, is it a different clock speed?
33:23 It's not the frequency that matters.
33:25 It's not even really the number of CPUs.
33:27 The CPU itself actually at the same frequency can actually process Python much, much faster because it's making use of more of the CPU.
33:34 Does that make sense?
33:35 Yeah, yeah, that makes a lot of sense.
33:36 Well, and you know, you make your comment about as a programmer, it's great to use all the new stuff.
33:41 I personally as a programmer would like to work on new code that is adding new value and not go, "You know that crummy thing that's been there for 10 years?
33:49 We need to rewrite that so we can save on computers." Like that is not where I want to spend my time.
33:54 Like you guys don't, right?
33:55 Right. Oh, yeah.
33:56 Yeah.
33:57 So if you can just make it run faster without me touching it, then I can go write stuff that I want to write, like that new REST framework.
34:05 By the way, I would say one of the things that's cool about either PyPy or the Intel Python distribution or the other upstream work that we're doing is those typically don't require code changes either.
34:15 So that's the other thing, you know; that's sort of the goal.
34:18 We sort of feel like Python's a powerful enough language and an attractive enough way for programmers to work, productive way for programmers to work.
34:26 Why should they be hobbled by performance, right?
34:29 Why not provide something that will immediately give a boost?
34:32 Now, we'd sort of like to think you ought to get a new processor too.
34:36 I think that's a good idea.
34:38 I think all of us would appreciate that.
34:39 Yeah.
34:40 I think good.
34:41 But then, you know, some of these other things, our goal really is to make it so you, by taking a few actions, you don't have to change the code.
34:47 Now, there are some new things, by the way, if you want to get into your code and play with some new features, right?
34:53 That's where we've got some of these things like accelerators or BigDL, which will let you use Python to do more deep learning sorts of things, or maybe accessing this 3D XPoint memory.
35:03 So there's a lot of stuff that's going to be very powerful to bring this stuff to bear.
35:07 If you want to change the code.
35:08 And if you don't, you know, we have these other things to help you out with.
35:11 Sure.
35:12 You know, if it's your core product, right?
35:13 If you're Instagram and these are your APIs or whatever, you probably want to spend some time to make those faster.
35:18 Yeah, absolutely.
35:19 Right.
35:20 Things like that.
35:21 Interesting.
35:22 So what about Cython?
35:24 Have you guys thought about how Cython works on the chips?
35:27 And for those people listening, maybe they don't know.
35:30 Cython is like Python language with a few little tweaks that compiles basically down to C or the way C compiles, right?
35:37 In fact, the Intel Python distribution includes both Cython and Numba, which are a couple of these, you know, ways of moving to native code, basically, right?
35:47 And there are trade-offs; as engineers know, there are trade-offs for everything.
35:51 The nice thing about that is you can get optimized either Cython or Numba, you know, as part of that package, right?
35:57 Some people will go, well, I don't want to have to give up on the quick turnaround of being able to change code and have it interpreted, right?
36:03 So that's where some of those trade-offs go, right?
36:05 Python 2, Python 3, CPython, PyPy would tend to say, hey, you can still have the same development methodology.
36:11 Yeah.
36:12 Whereas with Numba or Cython, it's more...
36:13 There's a build step, which is weird to all of us, right?
36:15 Yeah.
36:16 It's all about choice.
36:17 If we don't have Cython or don't have Numba, what choice do we have?
36:20 Going to native language or staying with Python?
36:23 So we're just providing choices.
36:25 People can make trade-offs to get what they need.
36:28 That's a great point.
36:29 If you choose any of these things, we want to make sure Intel is the best option to use for it.
36:32 Yeah, that's cool.
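Since Numba came up alongside Cython, here is a minimal sketch of what that trade-off can look like in practice, assuming the numba package is installed: the same loop runs either interpreted or JIT-compiled to native code, with only a decorator changed.

```python
import numpy as np
from numba import njit

def sum_of_squares_python(values):
    total = 0.0
    for v in values:
        total += v * v
    return total

# The decorated version is compiled to native code the first time it is called;
# the source stays ordinary Python.
@njit
def sum_of_squares_numba(values):
    total = 0.0
    for v in values:
        total += v * v
    return total

data = np.random.rand(1000000)
print(sum_of_squares_python(data))
print(sum_of_squares_numba(data))
```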
36:33 So let me ask you this, okay, about like maybe a workflow.
36:36 So I write my code all in pure Python.
36:39 Maybe run on CPython, right?
36:40 See how it works.
36:41 Maybe it's not quite as fast as I want.
36:43 Or maybe you just want to optimize it because it's better to have it faster.
36:47 Like you can scale it, put it more, you know, more density or whatever.
36:50 Then I run VTune against it, figure out where it's actually slow.
36:54 That might be like 5% of my code or less, right, in a large application.
36:58 Like it's actually these three parts that kind of kill it.
37:01 Like if I look at my website right now, which is pure CPython talking to MongoDB,
37:06 the slowest part of the site is the deserialization of the traffic back from the database into Python objects.
37:14 Like that's literally 50% of workload on my website.
37:17 And so I'm not going to change that because that's not my library.
37:20 That's like a different ODM.
37:21 But if I did control that, like would it make sense to go write that and say Cython,
37:26 that little 5% and then somehow bring that in?
37:29 What do you think?
37:30 Optimizing the last 5%? Even if you make it zero, even zero...
37:35 Yeah, the 5% that's where almost all the work is being spent.
37:37 The 5% of my code base where I'm spending 80% of my time or 50% of my time.
37:41 Yeah, totally it makes sense.
37:43 Totally it makes sense.
37:44 Okay.
37:45 You really focus on: how do I optimize the biggest hotspot with minimum code changes?
37:51 Right.
37:52 5% is a nice, nice hotspot.
37:53 Right, right.
37:54 If I rewrote 5% of my code in Cython, but that's where it was mostly slow,
37:57 you could probably get a big bang for the buck, right?
38:00 Right.
38:01 It's like I was one day just lunchtime, I got this call on my cell phone.
38:05 It happens to be this Intel executive that I kind of know, an acquaintance, right?
38:09 And she said, "Oh, my daughter is working on this project in school with Python.
38:12 It's running really slow." This is hilarious.
38:14 How did you know that I was, you know, I heard you had something to do with Python performance.
38:18 And so, can you do anything?
38:20 I've got an in at Intel.
38:21 I'm going to figure out why my code is slow.
38:22 That's it.
38:23 And, oh yeah.
38:24 Well, trust me.
38:25 You know, I've learned many things sitting down with people at lunches, like people who
38:28 created all manner of things in our world is like, "Oh, that's why that works that way."
38:32 Okay, interesting.
38:33 Anyway, so I said, "Well, have her try PyPy." As an example, it's a very easy step to try and, you know, see if it speeds things up,
38:41 right?
38:42 And so, I didn't hear back from her, so I suspect that probably either worked for her or she got
38:45 frustrated, who knows.
38:46 But I've talked to plenty of architects, CPU architects, and
38:51 there are people who have this massive lake of instruction traces.
38:54 So we're actually able to take millions of instructions and record them and figure out
38:58 what's going on.
38:59 That's how we analyze future chips and analyze performance on them and running these existing
39:03 instruction traces.
39:04 And so, they will maybe have billions of instructions floating around in Python scripts
39:10 that will actually go figure out what's going on and categorize them and help, you know,
39:13 develop what's going on.
39:14 But if that stuff runs really slow, and it was actually one of those architects that mentioned
39:18 PyPy to me the first time, and he was like, "I think he's actually here.
39:21 He retired.
39:22 Lucky dog." And so, you know, I got to find him and thank him again for, you know, having helped us,
39:26 you know, get more insight into this stuff.
39:27 Yeah.
39:28 Yeah, that's really cool.
39:29 So, coming back around to your AI focus, do you guys see AI?
39:33 How can you design chips in the future?
39:34 That's a very interesting question.
39:36 I'm sure a lot of engineers that I've worked with might be considered artificial intelligence.
39:41 No, I'm sorry.
39:42 I am an engineer, so what can I complain about?
39:46 I think there's already a lot of machine learning being employed in the design of the chips.
39:50 We have a building.
39:51 There's a particular building.
39:52 I can't tell you where it is.
39:53 Is that in Portland?
39:54 It's an undisclosed location.
39:55 Okay.
39:56 I will say, there is a building that's stuffed full of CPUs, and it's got the most amazing structure.
40:01 It was built as a really interesting structure.
40:04 But that thing is running, essentially using machine learning to analyze simulations of chips continuously, 24/7, 365.
40:12 Wow.
40:13 So, that place, it's really kind of fun to kind of think about all of that's going on, and I've actually taken a tour.
40:18 It's super cool.
40:19 It's super cool.
40:20 This portion of Talk Python to Me is brought to you by Hired.
40:23 Hired is the platform for top Python developer jobs.
40:26 Create your profile and instantly get access to thousands of companies who will compete to work with you.
40:31 Take it from one of Hired's users who recently got a job and said, "I had my first offer within four days, and I ended up getting eight offers in total.
40:39 I've worked with recruiters in the past, but they were pretty hit and miss.
40:42 I tried LinkedIn, but I found Hired to be the best.
40:45 I really liked knowing the salary up front, and privacy was also a huge seller for me."
40:49 Well, that sounds pretty awesome, doesn't it?
40:51 But wait until you hear about the signing bonus.
40:53 Everyone who accepts the job from Hired gets a $300 signing bonus.
40:57 And, as Talk Python listeners, it gets even sweeter.
41:00 Use the link talkpython.fm/hired, and Hired will double the signing bonus to $600.
41:06 Opportunity is knocking.
41:07 Visit talkpython.fm/hired and answer the door.
41:12 "You know, we have been using machine learning essentially to design CPUs and validate them.
41:18 A lot of what we're doing, by the way, is not waiting for the silicon to be baked before we figure out whether it works or not.
41:23 We actually have a lot of simulation that we're doing.
41:26 We actually have, and you can actually buy it,
41:28 something called Simics, with which we're able to produce simulations of all of the things that are going on in the chips, right?
41:34 And so we're actually able to run a ton of workloads and programs through this thing before the chip ever appears, right?
41:42 And so we're able to run essentially, whether it's Python, Java, you know, any number of things through these simulators.
41:48 So that by the time that the silicon comes out of the fab, it actually already runs all of this stuff.
41:52 So there's a lot of stuff that we're doing to, you know, accelerate the design of the chips.
41:56 Yeah.
41:57 I think 10 years from now, we can't even predict the majority of the stuff that will be happening, right?
42:02 Well, think about what happened 10 years ago.
42:03 It wasn't, you know, I mean, you know, Facebook or any of these other things in the internet.
42:07 Google, all these things are around, but it's like the concept of how they've affected our lives now.
42:12 Yeah, it was just the dawn of internet as a usable thing for everyone.
42:17 Yep.
42:17 Right.
42:18 And it's been fun to be a part of, you know, Intel to have really helped fuel this thing.
42:23 And now I think from our standpoint, one of the things that's very exciting is to, you know, say,
42:27 "Hey, how can we project the future better?" Because you talk about how to figure out how things run better in the future.
42:33 One of the things we're doing is a tremendous amount of work in the whole area of benchmarking and performance, right?
42:38 If you think about it, we talked about, you know, various things like this, this Instagram, you know, Django benchmark that we're working.
42:45 There are other various, you know, codes that we're working on for the Python distribution.
42:48 But one of the things that we're doing is kind of really looking at the whole area of AI as an area.
42:55 And it's like, how do you benchmark that?
42:57 Or think about big data.
42:58 Think about if you maybe have, you're standing up Cassandra and Kafka and Node.js and all of these things in a system.
43:06 How do I figure out what the performance is today?
43:08 And then how do I project forward performance on some of these things, right?
43:11 And so there's a whole area.
43:13 I'm incredibly excited about this is that you're going to start seeing more and more of this from us.
43:17 I think I'm working on a lot of it myself.
43:19 Of seeing us really take a much stronger position out there to try and help contribute some of this stuff to the industry.
43:26 And so you can take your, you know, Instagram Python Django benchmark, for example, and evaluate how this is going to work against, you know, this CPU versus that CPU, or this vendor system versus that one, this public cloud versus that public cloud.
43:39 These are all things that I think are incredibly powerful to think about.
43:42 Well, the control now is with you as a user to figure out what kind of choices do I make?
43:46 So we're doing a lot in that sort of space because we sort of believe that in the data center, you know, performance is king, right?
43:52 It's like, people have come to expect from us, with every CPU generation, a good, whatever it is, 30 to 40% boost at the, you know, right?
44:00 Same price point.
44:01 So performance is king as far as we're concerned in the data center.
44:04 And we're doing a ton of stuff to try and drive the future and use this whole area of benchmarking and workload.
44:10 So we would love, by the way, from the community standpoint, if they have representative sort of workloads that they'd like to work with us on, we would love to get involved with that because that's something we're incredibly excited about.
44:20 Yeah.
44:21 I think there's having realistic workloads makes a super big difference.
44:24 Take your MongoDB, your website, right?
44:27 Yeah.
44:28 The data marshalling issue that you're having.
44:30 We'd love to be able to have that as kind of a standard piece of what we're looking at to make sure either the CPU runs it really fast, we can go in with the library providers and make sure that stuff gets accelerated, right?
44:40 So those are the kinds of things we absolutely want to stand up.
44:43 And we think there's a dearth of these actually representative benchmarks that will help people visualize what's going into the data center today, because it's not just like your old database.
44:53 You know, your big SQL databases running relational transaction processing, all this stuff exists.
45:00 But there's a ton of new stuff in the data center today.
45:03 And we sort of believe that Intel will be contributing strongly to this area.
45:07 So, I feel like broadly across the industry, there's been a mind-blowing opening up toward open source from all sorts of companies that you just wouldn't expect.
45:20 Right.
45:20 Right.
45:21 I mean, the stuff that Microsoft is doing, or Facebook with some of the open HHVM work and the open data center project.
45:28 Yeah.
45:29 Yeah.
45:29 The data center stuff. So do you see Intel contributing more to these open source projects in order to make your story back in the data center better?
45:38 Absolutely.
45:38 I mean, Intel has been, for the past few years, the top one or two contributors to each Linux kernel release.
45:45 So if you go back in time, who are the top contributors to the kernel?
45:48 Intel has been like number one or number two for years now.
45:51 Okay.
45:52 For each kernel release.
45:53 So that in and of itself represents a very strong commitment to open source, at least at the core.
45:57 Right.
45:58 So all of the work that we're doing is on open source code, right?
46:01 So whether it's Python, whether it's open source databases, you know, this is a very strong commitment to open source.
46:07 Absolutely.
46:08 That's awesome.
46:09 All right.
46:10 So we're kind of getting near the end of the show.
46:11 I have two questions.
46:12 And I'm going to mix it up a little bit.
46:13 Uh-oh.
46:14 Because normally I have the same two questions.
46:15 I kind of biffed the last time in your standard question.
46:17 So, uh.
46:18 So the two questions are, Sergey, I'll start with you: if you're going to write some Python code, what editor do you open up?
46:23 What do you usually write your code, your Python code in?
46:25 I usually don't write Python code.
46:27 I am.
46:29 You're analyzing how it runs.
46:30 I'm an outlook guy.
46:31 Okay.
46:32 Gotcha.
46:33 Speaking for myself, I typically use Spyder.
46:35 Spyder.
46:36 Okay.
46:36 Yeah, sure.
46:37 Spyder's good.
46:38 The continuum guys.
46:39 I don't know if they're here.
46:40 Sure they are.
46:41 I haven't been able to do the rounds yet, but that's a cool thing that comes with the Anaconda.
46:44 David?
46:45 Suresh.
46:46 I recently took a class at Hack University.
46:48 It's a local organization.
46:49 Mm-hmm.
46:50 I've been loving Jupyter.
46:51 Oh, yeah.
46:52 I've been working with Jupyter.
47:03 I've been working with Jupyter.
47:04 Jupyter is amazing.
47:05 Yeah, yeah.
47:05 David?
47:06 My fingers are programmed with VI.
47:07 I'm sorry.
47:08 I'm an old guy.
47:09 My fingers are programmed with VI.
47:10 It's the only way muscle memory works with me.
47:11 So, yeah.
47:12 There you go.
47:13 Awesome.
47:14 Then I guess I'll ask you the standard questions while I have one more.
47:16 Suresh, there's a ton of packages on PyPI, over 100,000 now, which is partly why Python
47:22 is such an amazing community.
47:24 Like, all these different packages you can just install and use.
47:26 Think of a notable one that maybe people don't know about that you've come across.
47:29 I should have prepared you guys for this question.
47:31 Yeah, yeah.
47:32 You did a good question.
47:33 I did a bad job, but it's good for you to be surprised at that.
47:35 Yeah.
47:36 I think some of these lightweight web development ones, like Flask.
47:40 Yeah, Flask is amazing.
47:41 Django is really popular, but people are using Flask for some lighter-weight things.
47:47 Yep.
47:48 A lot of APIs built with Flask.
47:50 We also have the Django REST framework guys here.
47:53 So, yeah.
47:54 For sure.
47:55 How about you, Dave?
47:56 I'm going to suggest people check out, I don't know if it's in PyPI or not, but
47:59 BigDL.
48:00 Yeah.
48:01 It's a great thing to check out.
48:02 BigDL?
48:03 Okay.
48:04 It's a big deal.
48:05 It's awesome.
48:06 All right.
48:07 So here, I want to throw one more in as a mix.
48:08 Since you guys have a special vantage point towards the future, predict something interesting
48:14 in the next, that will come out in five years that we would be maybe surprised by.
48:18 Like, just in computing in general.
48:20 Sresh, go left or right.
48:21 Suresh, go left or right.
48:22 I think, I think AI is going to be like really, really pervasive.
48:25 Yeah.
48:26 Much more from your glasses to the clothes you wear to all kinds of things, the car you drive.
48:33 Yeah.
48:34 I can definitely see on automobile AI processing for sure.
48:38 Yeah.
48:39 This edge processing stuff.
48:40 Yeah.
48:41 David?
48:41 I'd like to see a more organic approach to computing.
48:44 You know, our artifacts are, you know, slick and carbonized or aluminized or what have
48:51 you.
48:52 I would actually like to see computers made out of natural wood cases with maybe some mother
48:57 of pearl or, you know, something that would just actually be more human.
49:00 I mean, almost a steampunk kind of approach or a more organic approach.
49:05 I'd love to actually see it become a more organic part of our lives as opposed to dehumanizing.
49:09 Sure.
49:10 Well, as it goes into this IoT of everything, and we have these little chips that run Python,
49:17 MicroPython and other things, it's much more likely that we'll have little computing things
49:21 that are more adept rather than beige boxes or aluminum boxes.
49:25 Sergei?
49:26 I think whatever direction the industry goes, Intel will stay relevant
49:32 and be at the core of this transformation.
49:34 Yeah.
49:35 That's my prediction.
49:36 Yeah.
49:37 You guys will be there.
49:38 So here at PyCon in Portland, Oregon, you guys have a big presence here.
49:42 Just one quick fact that I think people might like to hear is how many Intel employees do
49:46 you guys have in this general area?
49:48 The exact number as of whenever your audience listens to this may be different, but it is
49:53 true that as you know, Intel is the biggest chip maker in the world.
49:56 Oregon is actually our largest site.
49:57 So we have sites really all over the world, but Oregon, from that sort
50:01 of standpoint, is where we're growing not only the new fab processes, the absolutely microscopic things
50:07 that are going into the design of the manufacturing, making millions and millions of things that are a few nanometers big.
50:14 You know, it's amazing.
50:15 We also have kind of the center of a lot of our software work going on here, as well as the circuit design itself is going on here.
50:20 So, nothing against the other, you know, parts of the world where Intel does business, but we have a lot here in Oregon.
50:27 Yeah. It's like over 10,000, right?
50:29 I can't actually give a number.
50:31 I would probably be, I would probably be shot if I did.
50:34 So I don't know. No, no, no, no one would shoot me, but I couldn't tell you.
50:37 So I guess the point is it's really surprising what a presence you guys have here, right? This is definitely in Hillsboro, Oregon, to the west of the West Hills from Portland.
50:45 You guys drive traffic jams, I'm sure, with your workforce. We try and stay outside of the traffic jams if we can.
50:51 So yeah.
50:52 All right. Well, thank you so much for meeting up with me and sharing what you guys are up to with everyone on the podcast.
50:58 Thank you, Michael. It's been great. You have a great listenership.
51:00 I know of people who've come up to me, amazingly, saying, oh, you were on, you know, Michael's show.
51:05 I was like, wow. Here's a shout out to all the great Python programmers out there.
51:09 Really appreciate everything you're doing with Python.
51:11 David, Suresh, Sergey. Thank you guys. It's a pleasure as always.
51:15 Thank you for all your work that you're doing.
51:17 Yeah. Thank you. Bye.
51:19 This has been another episode of talk Python to me.
51:23 This week's guests have been David Stewart, Suresh Srinivas, and Sergey Maidanov.
51:29 This episode has been brought to you by Talk Python Training and Hired.
51:34 Hired wants to help you find your next big thing.
51:37 Visit talkpython.fm/hired to get five or more offers with salary and equity presented right up front and a special listener signing bonus of $600.
51:46 Are you or your colleagues trying to learn Python? Well, be sure to visit training.talkpython.fm.
51:52 We now have year long course bundles and a couple of new classes released just this week.
51:58 Have a look around. I'm sure you'll find a class you'll enjoy.
52:00 Be sure to subscribe to the show.
52:02 Open your favorite podcatcher and search for Python.
52:04 We should be right at the top.
52:06 You can also find the iTunes feed at /itunes, Google Play feed at /play and direct RSS feed at /rss on talkpython.fm.
52:16 Our theme music is developers, developers, developers by Corey Smith, who goes by Smix.
52:20 Corey just recently started selling his tracks on iTunes.
52:24 You can check it out at talkpython.fm/music.
52:26 You can browse the tracks he has for sale on iTunes and listen to the full-length version of the theme song.
52:32 This is your host, Michael Kennedy.
52:34 Thanks so much for listening.
52:36 I really appreciate it.
52:37 Smix, let's get out of here.
52:39 Smix, let's get out of here.
52:40 I'm dating with my voice.
52:41 There's no norm that I can feel within.
52:43 Haven't been sleeping.
52:44 I've been using lots of rest.
52:46 I'll pass the mic back to who rocked his best.
52:49 I'll pass the mic back to you.
53:00 Oh, no.