#318: Measuring your ML impact with CodeCarbon (Transcript)
00:00 Machine Learning has made huge advancements in the past couple of years. We now have ML models helping doctors catch disease early. Google is using machine learning to suggest routes in their Maps app that will lessen the amount of gasoline used in a trip, and many more examples. But there's also a heavy cost to training these machine learning models. In this episode, you'll meet Victor Schmidt, Jonathan Wilson, and Boris Feld. They work on the CodeCarbon project together. This project offers a Python package and dashboarding tool that will help you understand and minimize your ML models' environmental impact. This is Talk Python to Me, Episode 318, recorded May 19, 2021.
00:51 Welcome to Talk Python to Me, a weekly podcast on Python, the language, the libraries, the ecosystem, and the personalities. This is your host, Michael Kennedy. Follow me on Twitter, where I'm @mkennedy, and keep up with the show and listen to past episodes at "talkpython.fm", and follow the show on Twitter via @talkpython. This episode is brought to you by Square and us over at Talk Python Training. Please check out what we're offering during our segments. It really helps support the show. When you need to learn something new, whether it's foundational Python, advanced topics like async, or web apps and web APIs, be sure to check out our over 200 hours of courses at Talk Python. And if your company is considering how they'll get up to speed on Python, please recommend they give our content a look. Thanks. Boris, Victor, Jonathan, welcome to Talk Python to Me. Thanks for having us. Glad to be here. It's great to have you all here. You're doing really important work, and I'm super excited to talk to you about it. So we're going to talk about machine learning, how much carbon is being used for training machine learning models, and things like that, and the cool tools you built over at "codecarbon.io", and how that collaboration got going. But before we get into all those sides of the story, let's just start with yours. If you want to go first, how'd you get into Python?
02:06 Sure. Yeah. Thanks for having us, Michael. My name is Jonathan Wilson. I'm an Associate Professor of environmental studies at Haverford College. I'm actually an environmental scientist, so I was brought in to kind of consult from the environmental side of this project. But I have a secret history as a computer science undergraduate, back in the dark ages, learning to code on, you know, C, C++, and Java. And yeah, so I was brought in to kind of provide that environmental perspective on the project. And, you know, having a little bit of a coding background, despite how rusty it is, has been pretty helpful in thinking some of these connections through between computational issues and environmental issues.
02:40 Yeah, I can imagine. Did you find Python to be pretty welcoming? Given like a C background and stuff?
02:46 Oh, gosh, yeah. I mean, you know, I learned back in the bad old days, if not bad, then old-fashioned, say, with Scheme, things like that, you know, a little bit more challenging to write.
02:54 Yeah, that's one of the first languages I had to learn for a few CS classes I took. It was like, we're gonna start with Scheme. And I was like, anything but this, give me something mainstream, please. Yeah, sometimes I feel like, you
03:03 know, the sort of older person telling the kids these days about how we had to walk uphill both ways in the snow to learn programming. And it's much, much easier now. And it's one of the things I like about Python: it's really accessible to people from different fields. You get people from across the natural sciences, but even people who are in the digital humanities are using it, you know, for language processing and things like that. It's super flexible, which is really neat.
03:27 Yeah, it's really impressive what people in those different fields are doing, how they can bring that in. Boris, how about yourself? How'd you get into Python?
03:33 I actually discovered Python doing my master's degree. We had a math teacher who introduced us to Python, because he used Python for his own thesis. And I had one assignment to do, which was implementing an encryption algorithm, and I didn't want to do it because math was not my forte. So instead, I did some encryption inside images in Python, and I fell in love with Python. Fantastic.
03:58 Victor? I discovered Python taking a data science class. When was that? 2014, I think it was. I really entered Python through the data science perspective. And then I took a course in general web development, and we used Django. Hurrah, nice. Yeah, so it's been really the main language I've been using. I was taught CS with Java and so on, so I was never a computer science fan. I liked math, and I found Python to be really flexible in that regard. Like, you can do math very easily without getting lost in translation.
04:36 Yeah, one of the things that just came out is one of the new Texas Instruments calculators, the TI-84. You can now program it with Python. So that's kind of interesting, that one of the old, old calculators that everyone's probably used going through math and whatnot is now sort of, you know, more in the modern space.
04:56 I think my first program was on my TI, because I was bored in my course and I couldn't follow. So instead I wrote some programs.
05:04 Yeah, that's not a terrible way to spend your time. Alright, so let's go ahead and get to the main subject here. Let's start by just setting the stage. There's an interesting article here that came out. This is not even that new; it's from 2019. And it's in the MIT Technology Review. So, you know, that gives it probably a little more weight than just some random blog says this. And it's got a big picture of a data center. And the title of the article, by Karen Hao, is: Training a single AI model can emit as much carbon as five cars in their lifetimes. So that sounds pretty horrible. But we also know that machine learning has a lot of value to society, a lot of important things that it can do. So here's where we are. And this seems like a good place to start the conversation for what you all are doing.
05:55 I have mixed feelings about this article. I think one of the great things it did is raise a lot of attention and awareness about this topic. But I think a lot of approximations were made. And the goal here is not to criticize others, but rather to say that, in the meantime, things have evolved, and our understanding has become a little more precise. Maybe because people are building tools that measure it
06:19 Or estimate it. So there's that.
06:21 Definitely. And we hope that helps. But one of the things you need to put in perspective is that the kind of model they're looking at is not necessarily your everyday model someone can just train on their local computer. Even in academia, it's hard to get your hands on such large clusters and the number of GPUs used and so on. So I just want to put that in perspective: even if those numbers were accurate, and they are not, but they're in the ballpark, it's not like every data center you will see, and every AI researcher you'll meet, is going to have something at that level of complexity that they train every day. Right.
07:02 Right. So I have over here a sim racing setup for some sim racing I do, and it has a GeForce 2070 in it. It would have to run a very long time to emit this much carbon, right? Like, you've got to have the necessary money and compute resources to even
07:20 get that. Right, no, I don't remember exactly the setup they were looking at in this paper. But typically, the modern language models that you hear about, like, oh, OpenAI has a new NLP model like GPT-3, and it's that number of billions of parameters, they train those things on hundreds and hundreds of machines for a very long time. This is not something you can do easily. It costs millions of dollars in investment upfront, and then just using those things is super expensive. So while I think we should be careful, it's not like the whole field is like that.
07:56 Yeah, that's a very good point. I recall there was some cancer research that needed to answer some big problem, and there was an article where they spun up something like 6,000 virtual machines across AWS clusters for an hour and had it go crazy to answer, you know, some protein folding question or something like that. That would use a lot of energy, but it's extremely, extremely rare as well. On the other hand, you know, if you create that model and it solved cancer, well, people drive cars all the time for less valuable reasons than curing cancer. Yeah, I
08:27 think just to build on what Victor said, you know, I think there's something really valuable about this article coming out. For a long time there's been attention paid to the sort of environmental toll of supply chains in computing. You know, people have talked a lot about where minerals come from, really,
08:42 and stuff like that, right.
08:44 Yeah. And one of the things that was really interesting about this article was, even with the approximations, it got people thinking about the question that kind of animates our collaboration, which is: when you're doing any kind of energy-intensive computational work, you might want to think about where your electrons come from, you know, what's actually powering the hardware that you're using to do this. And I think this article did a really nice job of focusing attention on the fact that there are some really energy-intensive projects that, particularly if they're located in particular locations, can have a really large environmental cost that isn't really transparent to the user, or the person training. Yeah,
09:20 yeah. Yeah. Well, I don't want to go down this road just yet. I want to keep talking at the high level a little bit. But you know, the people who did this very expensive model, if they just said, I'm going to pick the closest AWS data center to me, rather than, let me find a better one. By just flipping a switch they could say, no, maybe the one up in Springville, Oregon by the dam would be better than the one by the coal plant, for example, right? Like, that's something they could easily do, and it maybe doesn't change anything for them, right?
09:51 It's not always possible, because when you have, for example, health data and so on, there are legislations. But definitely, if you can, and there's more than just the money at stake here. And it's probably going to be a marginal change, because the prices tend to be, not equal, but kind of uniform still. I think it's another decision item.
10:12 Yeah. Boris, you got any thoughts on this article? Before we move on?
10:15 Yeah, yeah, I think it came at a good time and raised a salient issue. And if people want to improve it, as one of my managers told me, if you want to improve something, you must measure it first. So we are there to give a start of an answer and give people actions to improve the carbon emissions of training models, hopefully not to train fewer models, but to train better models.
10:43 Yeah, I think, for example, this is part of what the recent Google paper, led by David Patterson, on training neural networks and their environmental impact, is about. It's quite a dense paper, with a lot of metrics and so on. But one of the things they say is, if you don't want people caricaturing your numbers and putting approximations out there, well, you better publish those numbers yourself, right? And, yeah, any tool to do that helps. If you're Google, or if you're close to the infrastructure that you use, it's easier; it's even better if you have access to the plugs. But that's not the case for everyone, right?
11:21 Right. So you're saying if you could put something to actually measure the electricity going through the wire, instead of some approximation, you're in a better place to know the answer? Definitely.
11:32 That's where, and we might be a little ahead of your schedule, but we might go there now, which is where CodeCarbon comes into play, right? This is why we wanted to create this tool. This is a user-facing product, right? And I think it's very important to highlight that it is not intended to be the solution for a data center. This is not something that we think should be deployed by a cloud provider. Or if you own your infrastructure and you want to have centralized numbers, there are alternatives out there. Yeah, things like Scaphandre, I don't know how to say that, it's a French word. Anyway, it's out there on GitHub, you can find it. But the goal here is, as a user, what can you do? If you don't have those numbers, do you do nothing? Or do you try to at least have the start of an estimation, and maybe start the conversation with your organization or your provider? Yeah, fantastic. And
12:27 you guys are putting some really concrete things out there for Python developers. Two quick, high-level comments. One, Corey Adkins from the livestream says, would it be the same or worse for quantum computers?
12:39 Okay, I'm going out of my depth here. So the best answer I can give is: I don't know. And then to go beyond that, my understanding of quantum computers is that they do very different things, and you can't just compare the computations made on classical computers with the things quantum computers are intended for. I think, intrinsically, because of the state of the technology, it is extremely energy intensive, just because you usually have to pull things down to a few millikelvins or something like that. So that may be transitory, I'm not sure. I don't
13:14 know about that either. I was thinking the same thing. You know, just yesterday, Google had Google I/O, and they talked about building clusters of qubits, sort of supercomputer-type things. And apparently, they've got to cool it down so much that it's some of the coldest places in the universe inside those. So on one hand, if quantum computers can do the math super quick, it doesn't take a lot of time to run them to get the answer. But on the other, if you've got to keep them that cold, that can't be free.
13:42 But it's a very particular kind of math, right? And not all problems are translatable from the classical formulation to a quantum-compatible formulation. I think there are problems that we can solve easily on our classical computers that would be very hard, if not theoretically impossible, to run on quantum computers. It's a different tool, and it's not intended for the same problems. So I think it's hard to compare.
14:08 Yeah, you still have some parts of the model training that won't run on quantum computers because that doesn't make sense, like preprocessing data, getting data from your different data sources, mapping them to a common format, exporting your model, or creating Docker images, servers. There will still be parts of the model training process that won't run on those quantum computers. Yeah.
14:34 This portion of Talk Python to Me is brought to you by Square. Payment acceptance can be one of the most painful parts of building a web app for a business. When implementing checkout, you want it to be simple to build, secure, and sleek to use. Square's new Web Payments SDK raises the bar in the payment acceptance developer experience and provides a best-in-class interface for merchants and buyers. With it, you can build a customized, branded payment experience and never miss a sale. Deliver a highly responsive payments flow across web and mobile that integrates with credit cards and debit cards, digital wallets like Apple Pay and Google Pay, ACH bank payments, and even gift cards. For more complex transactions, follow-up actions by the customer can include completing a payment authentication step, filling in a credit line application form, or doing background risk checks on the buyer's device. And developers don't even need to know if the payment method requires validation. Square hides the complexity from the seller and guides the buyer through the necessary steps. Getting started with the new Web Payments SDK is easy. Simply include the Web Payments SDK JavaScript, place an element on the page where you want the payment form to appear, and then attach hooks for your custom behavior. Learn more about integrating with Square's Web Payments SDK at "talkpython.fm/square", or just click the link in your podcast player show notes. That's "talkpython.fm/square". Another thing I think is worth pointing out is that it's the training of the models that is expensive. But to use them to get an answer is pretty quick, right? That's pretty low cost.
16:09 It depends on what you're using it for, right? If you have a user-facing model that's going to serve thousands of requests per second, then deploying it for a year might be more energy intensive than training it for three days. We all know the machine learning lifecycle is not just, you train one model and you succeed, right? You usually have a lot of iterations building the models, looking for hyperparameters, and so on. But even if that takes six months, if your model stays online for months or years, serving thousands of people, inference might even be, not worse, but more energy intensive. In that case, I
16:49 guess it depends how many times you run it. Another thought: there are a lot of places creating models, like you talked about with GPT-3 and whatnot, that are training the models and then letting people use them. Do you see that as a thing that might be useful and helpful, having these pre-created, pre-trained models? Like, I know Microsoft has a bunch of pre-trained models with their cognitive services, and Apple has their ML stuff baked into their devices, that you don't have to train, you can just use. Are the problems being solved and the data being understood usually too general, or is that something we can make use of?
17:22 I think pre-trained models have the advantage that, as you are training a model once, the emissions cost of the model during training is amortized over each usage. So the more users you have, the lower each one's share of the training emissions. But usually you still have to tune the model a bit, so you still have to train it, and you're using energy even for prediction. So
17:51 yes or no. I'm also going to use the transition to throw the ball to Jonathan for something I think we shouldn't forget when we talk about these gains in efficiency, which is the Jevons paradox: the fact that if you create something that is cheaper to use, and more people use it, then the overall impact isn't lower. And I think this is something we tend to forget when we talk about massive improvements, or not even massive ones. This is something that is, I think, hard to grasp and anticipate when you think about technological advances under the constraint of climate change. But this rebound effect is something we should plan for, and not just think, well, we have cheaper models, so more people can use them. It's not that obvious that it's an overall gain in terms of energy. Then you can talk about all sorts of societal consequences and the advances in cancer research. It's really hard to have a definite answer.
18:46 Yeah, just to build on what Victor said, it really is difficult. I mean, this is a classic environmental conundrum, right? The classic example of the Jevons paradox is that adding more roads leads to more traffic, because more people believe that there's more space for them to drive. And we've seen this over and over again, in all sorts of different contexts: when you build these tools, more people will use them, and that can end up costing more than not building them in the first place. So I think this is something to really be aware of as we're democratizing these kinds of tools. There are some real strengths to having these tools easily accessible and usable, but one has to worry about the potential costs of having all these tools being employed, and in particular, being employed in all sorts of different energy grids around the world. Not all grids are connected up to solar panels; many are connected to coal-fired power plants, and that can outweigh the benefit. Yeah, it can help, but it's not.
19:44 It's not today, is it?
19:45 Yeah, not yet. Not yet. One would hope, but maybe soon. One last thing
19:49 about pre-trained models is that usually they are trained for more diverse usage. So I would think that they tend to be larger than models trained by experts, especially for a single use case, a single company with a single type of data. So I would say they're likely bigger, so they use more energy to train and to use. But how much more, I couldn't say. Yeah,
20:11 well, I think with the paradox you all are speaking about, one of the ways we could see that is just that the ability to use machine learning to solve problems is so much easier now that what used to be a simple if-else that runs in a microsecond is now a much more complicated part of your program. And so, yeah, there's got to be a raising of the cost there. Now, before we make it all sound like machine learning is bad for the environment, there are 100% good things too. Like I said, Google I/O was yesterday, and they were talking about having the navigation take into account things like topography, speed, and whatnot to actually try to minimize gas consumption with the directions they give you, right? And if they can do that with a little bit of computer code to save a ton of CO2 out of cars, that's a really big win for them. Well,
21:05 definitely. And I think the reason why we started, on our side, with the online ML CO2 emissions calculator, and Jonathan and colleagues at Haverford with energy usage tracking, and then we came together for CodeCarbon, is not to say that machine learning is bad. As with most technologies, it's technology, and it depends on how you use it. But where we are going as societies under the constraint of climate change, we can't leave any field out of questioning itself about how it uses its resources. So yeah, it's something you can't leave out of the picture, which doesn't mean that you can't use it. It's that you have to think about it. And we can't have a single rule for everyone. You have to take that into account, and you can very well make the decision that it is worth it. In many cases, it will be; sometimes maybe not. But be conscious of it. Yeah, for sure. Alright, so
21:58 I think that brings us to your project, CodeCarbon. You've mentioned it a couple of times. So looking in from the outside, it seems to me like the primary thing you guys have done is you've built a Python package that lets you answer these questions and track these things, right, and then a dashboard and data that will help you improve it. Is that a good elevator pitch? Very good. Fantastic. All right. So tell us about CodeCarbon. Do you want to work for us? Yeah, sure. Not busy enough yet? We don't have any money, though. Oh, yeah. But I think it's a great cause. So tell everyone about it. Thank
22:33 you for the opportunity. I think one of the reasons we came together was, we all know that in the machine learning lifecycle, a lot of the computations you just forget about, because there are so many experiments that you run. Like, say you have a project, and you're going to work on it for 3, 6, 12 months. How many experiments are you going to run? How many hyperparameter searches are you going to run? It's a very important problem. I think this is also something that is central to Comet ML, which is the company where Boris works; they manage experiments, and I use that in my daily work. And I thought, well, we need something similar to track the carbon emissions. It can't just be about metrics, it can't just be about the images you generate because you're training a GAN, for instance. So how do we go about this? Well, Python was, I think, the go-to language for AI research and development, although in very optimized settings you might want to go away from it. But we thought, we need to do something that is going to be plug and play. So it has to be Python, it has to run in the background. And it has to be something that is versatile, in that it is not only about getting yet another metric, but also about understanding what it means. It's about education, education for yourself, but also maybe for other members of your organization. Say you work in a company, and you're thinking, well, I have hundreds of data scientists, this is not marginal, I want to have an estimation. And if estimations are not good enough for you, well, contact your provider, and maybe you can have a watt meter plugged in somewhere where it matters. That's beyond my expertise, but that's basically the idea.
24:19 Yeah, well, I think plugging in a watt meter somewhere, that used to be a thing that you could do, but now it's not. Amazon or Azure or Linode or whoever, they're not going to let you go plug it into their data center. And if you did, there's probably a bunch of other things happening there, right? Direct access to the compute resources is just hard to come by. Yeah.
24:40 This is a very big constraint for us. I expect we'll get into a little more detail about that. But this is why you need to understand CodeCarbon as a tool to estimate things. We make approximations and we use heuristics, and basically, if a consultant is having you pay for carbon offsets based on those kinds of numbers, you shouldn't pay, because that's not the point. Yes,
25:01 it's really about giving you the information. One of the things I like about what you're doing is you can recommend other areas, like we talked about, like you could switch to this data center, and then it would have this impact.
25:11 Right, I think it's part of the educational mission. We all know, or I wish we all knew, or we want everyone to know, I don't know how to put that, that climate change is a very serious threat. And being conscious about your energy usage, and your consumption of resources in general, is one thing, and it's very important. But beyond that consciousness, people are often left with a feeling of guilt, and there have to be actionable items. Changing your region is probably the easiest thing you can do, especially in the age of the cloud, and at a time when moving your data across continents is about ticking a few checkboxes on a web interface. Right. Yeah,
25:55 I think just to pick up a little bit on what Victor said here, the educational part is a very important part of the CodeCarbon project, because, as we've been involved in this, we know that answering this question, what's the CO2 footprint of my computational work, is actually a very, very difficult question to answer. And it's opaque for a variety of reasons. It's opaque because the way that the energy industry deals with CO2 emissions is pretty opaque, unless you know the language of how they express this. And it's also difficult to understand, when you are able to make the calculation, well, what does that mean? You know, what's one gram of CO2 emitted really, relative to, say, everyday activities? So one of the things that we've tried to do as part of this dashboard is simplify those two steps for people, because we've been approached by people via email, via Slack, and we know we're not the only people concerned about this. And so this is just a way to help make these approximations both visible, but also kind of comprehensible, and put them in the context of human activities.
26:57 Right, there are a lot of layers. And you know, the companies that run the clouds, they are trying to be more responsible with their energy, but a lot of times you don't know. This data center, us-east-1 in AWS, how much energy from different sources is it actually using? How much have they actually, you know, built their own solar and wind? We don't know, right? But you get a better sense using your tool, better data than the random person or just kind of estimates. Well, there
27:28 are some numbers, so it must be fine. Another important thing, I think, and Jonathan is much more of an expert in this than I am, is that not emitting is very different from offsetting, in whatever way, your emissions. Our atmosphere, our climate, has inertia, and the expected compensation in 5, 10, 20 years of your current emissions, those are two very different things, right? And it's much easier to put carbon in the atmosphere than to take it away from it. So I think, just because you read Google and others, Microsoft, are carbon neutral, which comes from compensations of many forms, doesn't mean no carbon was emitted. Right?
28:13 Yeah. Just to build on Victor's point again, there's decades of research in, what would you call it, environmental psychology, on whether explaining to people the consequences of inaction, or the diffuse environmental costs of a particular action, causes long-term behavior change. And I think one of the things that's been really exciting about seeing the machine learning and AI community grapple with this question in a very public way is that we've started to see articles, pressure being put on organizations: why don't we have more green energy infrastructure undergirding our work? And so the speed at which this has become a public conversation is really heartening to somebody who's been working on the environment for quite a bit of time.
29:00 Yeah, I would say it does seem to be getting a lot of attention, which is good. It's a big problem, but attention instead of just head-in-the-sand is a really big deal. Like, we've been driving cars for a long time. We've been flying planes for a long time. And there's a lot of, like, raised trucks with super big wheels with dual smoke stack pipes on them, right? I can't speak for everyone that gets a truck like that. But I feel a lot of times when we have these conversations, people are just like, well, this is so horrible, and so vague, that I'm just going to live my life and enjoy it, because I seem to not be able to do anything anyway. So I might as well have fun, instead of not having fun while things are going wrong, right? Like, that's kind of the psychology, right? And so, I don't know, how do you all deal with that? It's also
29:40 something that, when it's not before your eyes, is much more difficult to understand and to respond to. And I think this is part of what we're seeing today with decades of activism: hard facts are not enough to convince humans. And to some extent, it's also a good thing; it has, I think, value in our reasoning. But it also has this downside that, just because you say a number to someone, if they don't understand it, if they don't see it for themselves in their everyday life, it's going to be very hard to grasp. So until you use something like CodeCarbon... Like, I code every day, I train models every day, and I train a model for five days on a GPU in Quebec. That's my daily life, basically, and I like it. Anyway, that's not the point. I mean, once you have that, and you're like, oh, this is what I do, those numbers start to make sense.
30:43 Yeah, it's an issue of scale. The numbers for your day-to-day don't seem like that much, thousands of grams, or kilograms. But once you sum all the emissions, for all the models trained, for all the machine learning teams, all the machine learning data scientists, for a company, for a year, or even for academia, that starts to get sizable. And you might want to take a serious look at it. Yeah,
31:10 I want to dive into the code and talk about this. But I guess, maybe speak really quickly to: I work at a company, we make shoes. They want to use ML to figure out how to get, you know, better behaviors out of the materials for track runners or whatever. So I work at that company. How do I get that company to say, yes, we should measure our data science work, and we should offset it? There are a lot of layers between the people who care about shoes and sales, and people who care about machine learning carbon offsets. My personal
31:45 understanding of this situation is that empowering individuals with tools and numbers to convince organizations is part of our mission. If the person in charge, whatever their role in the organization, thinks that in order to have an estimation of their carbon footprint, they have to find a consulting firm and pay people for five weeks, if they think that's the process, they're going to be reluctant, and I can understand why. But if you have a plug-and-play tool, where even you as the developer or analyst get the idea that it doesn't cost you much to try it, the way we want to build this thing, it's just one import.
32:30 Yeah, so let's talk about the code. And I think maybe an approach that you could have there is, we'll run something like this on all of the training that we do, and then we're going to report up: our division of this company generates this much carbon. So if you care about carbon, you need to take that into account.
32:47 And that's a good starting point, I think. As we can see today, those conversations are hard and long, and it's not easy to understand all that matters. And you may need that consulting firm in the end to help you understand what's at stake in your whole value chain. But we have to start somewhere, right? And if you're an individual and you want to change your organization, well, I think if you want to have an impact, those kinds of tools should be easy to start with. And then, as we've said, if it's not enough and not precise enough, there are other steps you can take. You need to get the conversations going. Yeah.
33:23 And to start that, you've got to start measuring. So in order to do that, let's talk about the code. It's literally four lines of code, that's all you've got to do: you "pip install codecarbon", and then from codecarbon import EmissionsTracker, create one, tracker.start, do your training, tracker.stop. And that's it, right?
33:39 Right. And with the decorator solution, it's two lines of code.
33:42 With the decorator, you can just put a decorator on a function, and then basically any training that happens during that will be measured and then saved to a CSV file, right?
33:54 That's correct. And if you think a context manager should be implemented, well, you're welcome to create a PR. It's going to be super easy; everything is already there.
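For readers following along, here's roughly what that looks like, a minimal sketch assuming the package's current API names (EmissionsTracker and the track_emissions decorator); the project name and training function are just illustrative:

```python
# A minimal sketch of the usage described above, assuming the current
# CodeCarbon API (EmissionsTracker / track_emissions).
from codecarbon import EmissionsTracker, track_emissions

def train_model():
    pass  # placeholder for your actual training loop

# Explicit start/stop around the training code.
tracker = EmissionsTracker()
tracker.start()
train_model()
emissions_kg = tracker.stop()  # also appends a row to emissions.csv
print(f"Estimated emissions: {emissions_kg} kg CO2eq")

# Or the decorator form: everything inside the function is tracked.
@track_emissions(project_name="example-project")  # hypothetical name
def train_model_tracked():
    train_model()
```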
34:03 Yeah, exactly. I can already see it in my mind: with EmissionsTracker() as tracker. So it creates one of these CSV files, and then what? So there are a bunch of things
34:13 that happen. The two big steps are: one, you look for the hardware that CodeCarbon understands, and you measure the energy consumed. So you have that, you measure the energy. And then the next step is, well, how much carbon did this energy emit? And so you need to map the energy to your location,
34:41 right, and you do that by just, like, a get-location-from-IP-address type of thing, or?
34:45 Exactly, something like that. You can either do that or provide the country ISO code. For a couple of countries, Canada and the US, we have regions below the national level. Another thing that you can actually do, it's not going to help you with the location, but it is going to help you with the carbon impact, is we can bring in CO2 Signal, which is an API that has been developed by the Electricity Map initiative, group, organization, company, whatever their status. And that's going to give you an exact estimation at that moment in time, depending on what data they have for those computations. Otherwise, we need your country code, and we're going to map that to historical data.
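As a sketch of providing the location yourself, assuming the current OfflineEmissionsTracker API and ISO 3166 country codes (the CO2 Signal token parameter is shown as commonly documented, but treat the exact names as something to verify):

```python
# Sketch: tracking with an explicit location instead of an IP lookup,
# assuming the OfflineEmissionsTracker API; "CAN" is the ISO 3166
# three-letter code for Canada.
from codecarbon import OfflineEmissionsTracker

tracker = OfflineEmissionsTracker(country_iso_code="CAN")
tracker.start()
# ... your training code ...
tracker.stop()

# With a CO2 Signal API token, the online tracker can instead query
# real-time grid carbon intensity rather than historical averages:
# tracker = EmissionsTracker(co2_signal_api_token="...")  # token elided
```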
35:25 This CO2 Signal, this is new to me. What is this?
35:27 I think you'd better look at the Electricity Map directly. Exactly, yeah. So they have products, they have predictions of carbon emissions, and so on. But basically, it's an initiative; the organization is called 'Tomorrow'. The goal, at least with the Electricity Map, is to gather data about carbon intensity and the energy mix of countries through the forest of APIs and standards that countries and companies have for that kind of thing. So you can see that level of detail in Canada; not everywhere in the US. Not all regions are going to work, and in Italy, some regions might not even provide that kind of data in the open.
36:12 Yeah, that's too bad. But it's really different depending on where you are, even within the US, right? Like, the Pacific Northwest, I think, has very high levels of hydro; the Southeast, a lot of coal still. So it's not just what country; it's even a little more granular than that, at least for
36:32 large places. And also, as you can see, I think this is a very interesting map, because you can see that energy grids are very different from the boundaries that you know, like nations, and whatever is after that, counties, cities. You can see, for example, the one that spans something like Iowa. And I was thinking, oh yeah, like ERCOT, this thing that covers Texas or something.
36:57 Yeah, that doesn't look like any state I learned in school.
37:00 No, but it's probably a unified grid for some reason, because providers got together under some constraint.
37:06 Yeah, exactly. So when I'm looking at this tool here,
37:09 I run that, and then I get this map, and it can give you recommendations on regions, right, where you might go? So for example, us-central-1 here, this is something that we might want to change. The UI of this thing might not be obvious, and I just want to clarify, because even I sometimes forget, like, where was this thing run? What you actually ran is on the left, and we show you how it could have been different. Interesting. Yeah, so if I pick, say, us-west-1 for AWS versus eu-west-3, you can see the relative carbon production, how bad that was, or how good it was. And these all come from those reports generated out of that CSV file. That's great. I just want to be clear about how the data was gathered; that's a very important topic. We still need to update the data for GCP, Google Cloud Platform, because they recently released those numbers. But for most of those locations, we had to make an assumption. The assumption was that the data center was plugged into the local grid. So if a data center is in Boston, we assumed the data center uses the same energy as Boston's grid, which might not be the case, right? Many providers now have their own solar panels and whatnot. So that might not be the case. But unless they release those numbers, and there's a link I will share with you, unless we have those numbers publicized by the providers, there's only so much we can do. So
38:43 Yeah. Well, here's a call to action for those who haven't released it.
38:46 Get on it, right. I think it was part of Jonathan's message earlier: there are so many layers, and so many of them are opaque. That's part of what I think our responsibility as users is. I don't like to put too much weight on individuals' shoulders, and I think structural changes have much, much more potential, but it's still interconnected. And if you can do something about it, well, you should.
39:11 Yeah. Let's talk a little bit about running it. So when I go over here, and I say start and then stop, how do you know how much energy I've used? I know once it leaves the computer, there are a lot of assumptions and various things, like we talked about, but how do you estimate how much energy that code has taken?
39:32 That's a very good question. Victor, can I answer this one? Oh, yes, sure. Okay. When you are running a machine learning program, you're mostly using a GPU, and you're mostly using an NVIDIA GPU. And thankfully, NVIDIA has a nice SDK to get, at a given time, the current estimated, plus or minus 5%, energy usage of the GPU. So we get that.
39:58 Oh, really? Okay. So it's not like you're saying, oh, it's a 3070 Super, and the CPU and GPU must be pinned, so let's just assume this much time times this kind of computer. It gives you a more narrow, exact measurement. Yeah.
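For the curious, that query is exposed through NVIDIA's NVML bindings; here's a rough sketch of the kind of call involved, assuming the pynvml package and a machine with at least one NVIDIA GPU:

```python
# Sketch: querying instantaneous GPU power draw via NVML (pip install pynvml).
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000  # milliwatts -> watts
print(f"GPU 0 is currently drawing about {power_w:.1f} W")

# Sampling this at regular intervals and integrating over time gives
# an energy estimate: energy_Wh ~= sum(power_i * dt_i) / 3600.
pynvml.nvmlShutdown()
```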
40:13 Okay, fantastic. We still get the energy consumption from all GPUs, so if you're training multiple models, we might get a higher or lower energy estimation, I'm not sure there. So that's for GPUs. For CPUs, we are supporting Intel, and we have several ways of doing that. We get a measurement at the beginning of the training and at the end, and we get the total energy usage between the two, so we can take the difference. We can also regularly sample the energy. Right now, I think we are working to add memory usage, because even if GPU and CPU are the top resources that you use, for multiple gigabytes of memory it tends to be not negligible. And then you'd probably get disk
41:01 as well, yeah, everything takes energy. The goal is to focus on what takes most of the energy and to capture that consumption. So we get all of that during training, frequently, or at the beginning and the end. In addition, we get your location, and we detect whether you're running in a data center or not. So in case we don't have access to anything, say you're running on an AMD GPU, on an AMD CPU, on Windows, we can still give you an estimation based on the duration and your estimated location. Or if you're running inside a data center in a specific location, you can also get a more precise estimation. And we are measuring watts, like energy usage, and then we can use our data to estimate, again, it's an estimation of an estimation, the CO2 emitted for that use.
41:59 I don't know how deep you want to go into how it works, but I do want to point out that it gives us a little look inside. Yeah, given that it's one of the most difficult areas, I also want to use your platform to call for help, which is: the low-level inner workings of CPUs are actually hard to understand, at least for me. I have a math background and I'm a researcher, right? So it's an area where we
42:25 need help, not necessarily from a hardware specialist, but someone used to working close to the hardware. Yeah,
42:29 exactly. So for example, the way we read the energy consumption of Intel CPUs. The GPUs, as we said, NVIDIA GPUs have this driver that we can ping anytime, and the pynvml package is very useful, because we can just query this and not care about how it's done, trust NVIDIA, and use that number. But for the CPUs, it's much more complicated. The reason is that what happens under the hood is, modern Intel CPUs, under the right settings, actually write their energy usage, in microjoules, to text files. It's the RAPL interface, and they write to text files the number of microjoules they have consumed since, I don't know when, since they were turned on, or the first of January 1970, or whatever other random date. What matters is that we look at the difference. But those numbers are written per socket by the CPUs. Okay, so let me give you an example. In the academic setting where I work, we have shared clusters. I can request part of a node, and I'm going to request one GPU and 20 CPUs to do my computations. But what I saw, looking at the RAPL files, is that there are two sockets of 40 CPUs, like, we have 80 CPUs per node, and two sockets of 40. So there's no way to read from
44:00 the granularity is 40.
44:01 Yeah. The CPUs that are allocated to me might change over time, maybe not, it depends on the resource manager you use. And those CPUs will be split across those two sockets. So we have that level of problem too, right? At a high level, if you're working in a dedicated environment and it's only your program, then RAPL is perfect, and we couldn't have hoped for something better. But it does not allow us to go to the core, let alone the process, granularity of power consumption. So I just want to put a big warning here. It's one of the things that we need to look into, and it's very hard, and maybe what I'm going to say doesn't even make sense, but the only solution we have left is some kind of heuristic to map CPU utilization to energy consumption, basically because otherwise you're never going to be able to attribute your processes' and subprocesses' CPU usage to watts, because of this RAPL setup that is written per socket. And I've talked a little to people who understand this way better than I do, and they thought this endeavor was risky, and they were pessimistic. But what else do we have to work with? Yeah. And so I think the next thing is, we're going to need to get our hands on hardware, have it run, and see how bad it is. And it's going to be one setup, with one motherboard, one compilation of the math libraries I'm going to use to benchmark, and whatnot. There's only so much we can do if the hardware providers don't tell us more.
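To make the RAPL mechanism Victor describes concrete, here's a rough sketch of reading those counters on Linux; the powercap path shown is the usual location, but it varies by machine, may require elevated permissions, and a real tool must also handle the counter wrapping around:

```python
# Sketch: reading a cumulative Intel RAPL energy counter on Linux.
# The file reports microjoules consumed by CPU package (socket) 0.
import time

RAPL_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj() -> int:
    with open(RAPL_FILE) as f:
        return int(f.read())

before = read_energy_uj()
time.sleep(1.0)
after = read_energy_uj()

# Note: the counter is per socket, not per process, which is exactly
# the attribution problem described above; it also wraps periodically.
print(f"Socket 0 drew roughly {(after - before) / 1e6:.2f} J in 1 s")
```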
45:48 It would be nice to see the operating systems, and then the hardware providers as well, allow you to access that information, right? Like, how much power am I currently consuming with just this process? And you don't want to profile it, because if you profile it, you'll be, like, 50% of the problem, or slow it down, and people won't want to touch it.
46:09 I've talked to the people behind the PowerAPI and similar initiatives, and even they, from what I remember, explained that even if you had total control over the hardware and the software, both of those things are going to be so dependent on the way you compile the libraries you use, and a number of other factors, that even the very definition of the thing we're looking for is not obvious. Whether it's Linux versus macOS versus Windows, it's got to make a difference. It all matters. And the reason why I think it's still worth looking for an approximation through CPU utilization, even if it's a bad proxy, is that they're all bad proxies. Like, it doesn't matter if you're precise to the millijoule on your CPU if your uncertainty around carbon emissions is this huge, right?
47:04 Otherwise, you end up with something like, 'as much carbon as five cars,' and, well, we can get it down to 2.1 or 2.2 cars. Come on, let's go with that.
47:14 No, it's really a very complex endeavor.
47:16 Yeah. Yeah, absolutely. So it sounds like it runs on multiple platforms? We're trying to, at least. Oh, yeah. So Windows, Linux, macOS. I'm sitting here recording on my Mac Mini, an Apple Silicon one. Can I run it here?
47:32 Yeah, you would need to install the Intel Power Gadget, restart your computer, allow specific security permissions. On Apple Silicon, I'm guessing you would be back to the simple heuristic based on duration. We realized, actually, the Intel Power Gadget also tracks some AMD CPUs. Like, we had a user say, you don't seem to support AMD, and then they installed the Intel Power Gadget anyway. I don't know why, but they did, and then it worked. So I'm not sure how this thing works.
47:58 So Intel, AMD, but maybe not Apple Silicon? I don't think so. Okay, well, probably most people won't be doing training on that.
48:06 The Apple Silicon platform has dedicated cores for machine learning, no?
48:10 It does have, I think, 16 ML cores. Yeah. And they're coming out with the Mac Pro, which is supposed to have many, many cores. So maybe that'll be where people do more of
48:21 it. So if Apple is hearing you... Yeah, if Apple is hearing us, send us a Mac Mini and we'll work on the tracking.
48:28 Mac Minis for all three of you. We have a whole team, come on. Let's max it out.
48:33 You've got my Twitter handle. Send me a message.
48:35 Fantastic. All right. Let's see, before we move on, a quick question from Brian in the livestream: other than moving to different data centers, what are some of the highest-impact changes people can make, different training methods and so on? By the way, that also leads exactly into where I was going, thank you for that, a very timely question. Like, patterns and things you can do. Let's talk about that.
48:54 One of the things that we wrote in the paper about the carbon emissions of machine learning on the website is, well, there are hyperparameter searches. One of the worst things you can do, both in terms of pure ML performance and carbon emissions, is grid search. So maybe just don't do that. If you're lazy, just do a random hyperparameter search, or, if you do have a good metric, use Bayesian optimizers and so on to look for those hyperparameters. Another thing that is not mentioned in that paper, but I think is still very important, and that cycles back to one of the first questions about inference versus training, is that there are many methods out there, pruning, distillation, quantization, a whole zoo of tools and techniques and algorithms to optimize your model. And if you're happy with your current model, chances are there are many techniques out there that can reduce its size and computational complexity by multiple factors. So if you're going to put a product out there with hundreds of thousands, millions of inferences, maybe just think about that. I expect people who deploy such tools to think about that. If you're deploying a tool for millions, I mean, it's in your interest to think about it, because it's also going to be cheaper.
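As a concrete illustration of the grid-search versus random-search advice, here's a sketch using scikit-learn, where RandomizedSearchCV caps the number of training runs at a fixed budget instead of exhaustively fitting every combination (the dataset and parameter grid are just illustrative):

```python
# Sketch: random search with a fixed budget instead of an exhaustive grid.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_digits(return_X_y=True)

search = RandomizedSearchCV(
    RandomForestClassifier(),
    param_distributions={
        "n_estimators": [50, 100, 200],
        "max_depth": [4, 8, 16, None],
    },
    n_iter=6,  # 6 sampled candidates vs. all 12 grid combinations
    cv=3,
)
search.fit(X, y)
print(search.best_params_)  # half the training runs of a full grid search
```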
50:16 They probably think more about it in terms of just time, yeah, time to train, time to get an answer. But that also lines up exactly with energy consumed. So, you know, CO2 reduction comes along for the ride. Yeah,
50:30 it's often the case that if you invest in ecological solutions, they are going to end up being economical. Jonathan, this is more your area; maybe that's another ball for you.
50:42 I couldn't underscore that more. I think something that has come out of the results we've seen is that there's not a strictly linear tradeoff between energy usage and accuracy. There's often a shoulder there, and finding that shoulder, using CodeCarbon to figure out, you know, if I throw this extra fraction of a kilogram of CO2 at this problem, am I actually going to get better accuracy than if I had stopped beforehand? Using the tool to figure out where that is, I think, is very helpful. And so just being aware of the impact of it, and trying to maximize for accuracy, and not just energy usage.
51:17 Right. Yeah, one of the things you called out is that more energy, which means more emissions, is not necessarily more accuracy.
51:23 Yeah, on a more practical note, for example, when you're doing a hyperparameter search, which is basically trying combinations of a number of variables to find the best combination, to get the best results, a more precise model, or whatever metric you're optimizing for, most of the machine learning libraries have an option to do early stopping. Instead of training a new model for four days for each and every combination, you train ten of them for one day, you see how they evolve, and then you take only the two best of them and try again. You can reduce your training time and emissions by a large percentage. Also, practically, you can move code that doesn't need a GPU, like preprocessing or storage to disk, to run somewhere else, so you're not sitting on a GPU server while not using the GPU. Keep the GPU busy with actual training on the same GPU, or change your model to be more efficient to
52:34 train in less time. One of the things that we've also advocated for, and it can sound a little naive, but as Jonathan said earlier, the field has been moving fast on this, is to publish and be transparent about those things. And I think if the community shows interest, and shows that it is one of the broader-impact features they look for when they think about the systems they create and deploy, it's also something that can spread. In my case, that's a very specific research niche; I'm thinking about the research world here, because that's my environment, but I think it's also the case in the industry.
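To make the early-stopping idea Boris describes concrete, here's a sketch with Keras, where a callback halts training once the validation loss stops improving, rather than burning energy on epochs that no longer help (the toy model and random data are just for illustration):

```python
# Sketch: early stopping so training halts when it stops paying off.
import numpy as np
import tensorflow as tf

x = np.random.rand(1000, 20).astype("float32")   # toy data
y = np.random.randint(0, 10, size=1000)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",          # watch the held-out loss
    patience=3,                  # allow 3 stagnant epochs, then stop
    restore_best_weights=True,
)
# Up to 100 epochs, but training ends as soon as val_loss plateaus.
model.fit(x, y, validation_split=0.2, epochs=100, callbacks=[early_stop])
```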
53:14 Another thing that you all talked about is, if you're computing locally, so maybe at your university, or in-house, which these days is probably where you are, the local energy infrastructure matters, right?
53:26 It does. For example, Quebec has an average of something like 20 grams of CO2 per kilowatt-hour, which is probably 40 times lower than some other regions. Although you can't really check; Quebec doesn't share that data openly, it's a shame. But you can see others. If you just compare the results in Europe, for instance, and you look at France, which is,
53:52 France, yeah, it has a mostly nuclear electricity grid, right? So if you compare France to Germany, it's going to be very different. France is like 95%
54:00 low carbon. That's well done, France, good job for us.
54:05 I think if you click on Germany, what you'll see is you might have a time series somewhere for the last 24 hours, at least
54:12 They have this nice breakdown over here, and you can move it. Yeah, there's your time series, right.
54:17 So you can even see that during the day it's not the same. And just like your electricity provider will charge you differently for different times of usage, like high-demand or lower-demand times of the day, carbon emissions are also going to have that kind of variation that you could account for.
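The regional difference is easy to put in numbers. Here's a worked example for a hypothetical training run, using the roughly 20 g CO2/kWh figure mentioned for Quebec and an illustrative value for a more fossil-heavy grid (neither number is official data):

```python
# Worked example: same job, same energy, very different emissions.
energy_kwh = 100  # hypothetical training run's electricity use

intensity_g_per_kwh = {
    "Quebec (mostly hydro)": 20,   # figure cited in the conversation
    "fossil-heavy grid": 800,      # illustrative, roughly 40x higher
}

for grid, intensity in intensity_g_per_kwh.items():
    kg = energy_kwh * intensity / 1000
    print(f"{grid}: {kg:.1f} kg CO2eq")
# -> 2 kg on the hydro grid vs. 80 kg on the fossil-heavy one.
```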
54:34 Yeah, one thing I wanted to give a quick shout-out to. I don't know about other locations, but here in Portland, one of the options we have is to make a slightly different energy choice. If we pay $6 more per month, or $11 as a small business, it will basically be wind and solar. And if your local grid offers something where you literally pay $6 and it can dramatically change things, like, do the world a favor, opt in.
55:00 We have the equivalent in France. Also, in the United States, there's a patchwork of different state laws that mandate that these options are made available to people. So yeah, definitely take advantage of it.
55:11 Yeah. Yeah. I mean, it literally is a checkbox: do you want to have this, yes or no, and a small fee. And honestly, probably what's happening when you check that box is some of that energy would have just gone to the general grid, and now it's promised to you. But as soon as enough people check that box to go beyond the capacity, then that's going to be an economic driver to make more of it happen, right? So hopefully we can get there. Although I suspect data centers are where the majority of the computation happens.
55:38 I mean, I'm not backing this with any data here, it's just my personal perception, but I feel like it's too little. This is too cheap; how come it's so cheap, right? So many things in our daily lives should actually be more expensive if we knew how much energy and resources they cost the environment. So it feels like a no-brainer when it's so easy and so cheap in this case. But how many other areas of our daily life and consumption have those biases? Like, would you pay
56:11 three times as much to fly, right? Would you pay $3,000 to go from France to Portland rather than $1,000, or whatever? That's a harder thing than checking a $6 box, and probably a harder problem to solve. But luckily, we're talking about computers and ML, not air transportation, so we don't have to solve it here; we'll do that next time. Speaking of solving things, what's next? What's on the horizon for you all?
56:33 So I'm a PhD student at Mila, Quebec's AI Institute, so I feel like I'm going to stay there for at least two or three more years until I finish my PhD, and then we'll see. Yeah,
56:44 And the other two, where are you going with this project?
56:47 Yeah, I think that we've got some things on the horizon. One is that the other part under the hood that's kind of complicated is deriving the energy mix and getting the CO2 intensity of the energy grid from that mix. So figuring out, okay, if you have X percent natural gas, X percent coal, and X percent oil, how does that translate into CO2 emissions? That's actually an extremely complicated question to answer, because we have different chemical compositions of coal around the world, for example. Coal that comes out of Kentucky has a different CO2 impact per joule of combustion than coal that comes out of Wyoming. So we've got all these different layers to figure out, if
57:26 you've got, like, the oil sands of Canada versus Saudi Arabia, or whatever. Exactly,
57:31 yeah, and all of these chemical differences matter, and they reflect different efficiencies. That's not even getting into the differences in hardware in different power plants. So what we want to do is dive in a bit deeper, tease out some of these regional differences in carbon intensity, and plug them into the data set here, so that we can refine our estimates as much as possible.
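As a rough illustration of the weighted-average idea behind grid intensity (the emission factors and mix below are made-up placeholders, not CodeCarbon's real data; real factors vary with exactly the fuel chemistry and plant hardware described above):

```python
# Illustrative only: a grid's CO2 intensity as a weighted average of
# per-source emission factors. Both dictionaries are rough placeholders.
emission_factors_g_per_kwh = {  # grams CO2eq per kWh generated
    "coal": 1000.0,
    "natural_gas": 450.0,
    "oil": 750.0,
    "nuclear": 12.0,
    "wind": 11.0,
}

energy_mix = {  # fraction of generation by source, sums to 1.0
    "coal": 0.30,
    "natural_gas": 0.40,
    "oil": 0.05,
    "nuclear": 0.15,
    "wind": 0.10,
}

grid_intensity = sum(
    energy_mix[source] * emission_factors_g_per_kwh[source]
    for source in energy_mix
)
print(f"Grid intensity: {grid_intensity:.0f} g CO2eq/kWh")  # about 520 here
```

The refinement work described above amounts to replacing those single per-fuel numbers with region-specific ones.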
57:55 And a shout out to the cloud providers: provide more data. Yeah, and to the CPU providers: provide more hooks, things like that.
58:05 Our hope for the project is that in a few years we don't need this project anymore, because right now we are doing an estimation of an estimation of an estimation, and there are people in the industry, the cloud providers and hardware vendors, who are better suited to give more precise data. But until then, we hope the project can help companies be aware of their emissions and take action, and we'll keep improving the product to be more precise and give estimation ranges for everything we are measuring.
58:33 I feel like there are 5 to 10 companies in the world that control all of the information you'd need. We've got Intel, AMD, and Apple for the chips; AMD and Nvidia for the graphics cards; and Azure, AWS, and GCP for the clouds. If they all provided more information, then this would be not so much an estimate and more of a measurement. A couple of comments from the live stream: Corey Adkins says, thank you all, I've recommended this package to my ML team, which is awesome. And Ryan Clark says the efficiency varies widely between countries and whatnot, but you sort of addressed that already with your comment about working to understand the different sources and how, even though they both look like coal, for example, they're actually not the same.
59:20 Yeah, it's a really great question. And it really is something where we've relied on data from the US, because we have the highest resolution there of the CO2 impact per unit of energy consumed, and we have the most transparency about the numbers. It's not just a number at the end of a report or in a footnote; we can actually trace it and do some due diligence on it. So we've used those numbers. But if anybody listening to this has a connection with any of those companies, the hardware companies, or knows how to get more energy data, we are always looking for collaborators and contributors, so please reach out to us.
59:56 Fantastic. All right, I know we're pretty much at the end of our time together, so let me just ask you one really quick question. Say I have a thing I want to model and train, so I fire up a Docker image, maybe a set of them on Kubernetes, kick them off, and let them do their thing. Then I come back next week with another idea and train up some more things, and maybe my colleague is doing the same. This is going to generate a bunch of emissions .CSV files. How do I correlate them? How do I put these all together so that, as a team, I can see: this month, here's where we are. Is that something that happens?
01:00:27 I'm really glad you asked this question, because this is something we're working on. Currently, I think you can just sum up those CSV files. I think that's right, yeah.
01:00:38 It's up to you to keep track of them and say, okay, here's when we were sending this run.
01:00:42 CSV files have some downsides; they're less structured than, say, a JSON file would be, and so on. But at least they're very easy: you can just concatenate them, right?
01:00:54 So I think that part is totally fine, actually. It's more that these are going to be transient files in lots of places. How do I put them in exactly one place so I can see it as a whole?
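A minimal sketch of that gather-and-concatenate step, assuming CodeCarbon's default emissions.csv file name and its emissions column; the experiments/ directory layout is hypothetical:

```python
# Gather every emissions.csv a team has produced and stack them into
# one table for a monthly roll-up. Uses pandas, nothing codecarbon-specific.
from pathlib import Path
import pandas as pd

paths = list(Path("experiments").rglob("emissions.csv"))  # hypothetical layout
frames = [pd.read_csv(p).assign(source_file=str(p)) for p in paths]
combined = pd.concat(frames, ignore_index=True)

# Total estimated emissions across all runs, in kg CO2eq.
print(combined["emissions"].sum())
```

The harder part, as noted, is collecting those transient files from every machine in the first place, which is what the hosted API discussed next is meant to solve.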
01:01:02 Yeah, what we've been working on lately with a team of volunteers in France, with the Data For Good FR initiative, is to create and deploy an API and a database. We want to create this online storage of the time series, not just the final sum, and have a hierarchy of ownership from the organization down to the single run, through teams and projects. That requires a lot of work: it requires deployment, it requires funds and sponsors to host that thing, it requires a lot of engineering. And I'm glad you asked that question, because I also wanted to say a word about open source and who's doing this. It's all volunteers; no one is paid for this. Companies like Comet or the Boston Consulting Group, who have been partners for more than a year, do dedicate some software engineering time, but we need more collaborations. If you think this tool is great and you want to use it, we would really appreciate it if some of you had time to help, because it's a small team of volunteers. It's the same for most open source projects out there: they need collaborators and contributors. And most of it is pure Python, so chances are you're going to be able to help. Or if you don't know Python, there are data collection issues, which are just about writing to a CSV file; you've just got to find the time to go fetch those numbers. There are also data visualization issues to help improve the dashboard, and so on. It's never-ending, so everyone can help. Yeah.
01:02:47 Sounds like a really great project to get involved in, if people are looking for an open source thing to work on.
01:02:52 And we're willing to onboard you, which is also something that's sometimes missing in open source projects. I mean, it's hard, and there are hundreds of guides out there on how to contribute to open source projects. But yeah, we want people, so if you're willing to help, we'll help you.
01:03:10 Yeah, cool. I encourage people to do so. All right, one final quick question before we wrap up. Brian Hermsen says: I know this might be a big ask, but I would love to see this as a built-in profiler for CPU-intensive ML libraries.
01:03:24 Like we just said, the CPU side is super hard. Yeah, I think this is going to go beyond what we know and can do right now, but I agree with you: it should be part of more decisions in computer science in general, in software engineering, hardware engineering, and everything. All right.
01:03:43 Let me ask you all the final two questions really quickly. Since there are three of you, and you all write some Python code: what editor do you use?
01:03:50 VS Code. Me too. Sublime, sorry guys; I used it just yesterday, and it's still good. Yeah,
01:03:57 I feel like a lot of the Sublime people have moved on to VS Code, but Sublime is still popular as well. Notable PyPI package: maybe some cool library that works with some of the things you're all interested in, that people maybe haven't heard of?
01:04:11 I like Rich, which is a super flexible, colorful, versatile tool for printing stuff to the terminal, instead of writing the same quirky print functions again and again.
01:04:25 They call it a TUI, right? A TUI, that's a terminal UI. Yeah.
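For anyone who hasn't seen Rich, a small taste using its documented Console and Table APIs; the run names and numbers are made up:

```python
# Styled terminal output with Rich: markup, emoji shortcodes, and tables.
from rich.console import Console
from rich.table import Table

console = Console()
console.print("Training [bold green]finished[/bold green] :rocket:")

table = Table(title="Emissions by run")  # made-up example data
table.add_column("Run")
table.add_column("kg CO2eq", justify="right")
table.add_row("baseline", "0.42")
table.add_row("early-stopped", "0.11")
console.print(table)
```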
01:04:31 It's so good. It's cool, it's incredible, yeah. And I just want to give a shout out to the computational open source libraries like NumPy, scikit-learn, matplotlib, pandas, and so on, because those things run the data science world, and it's all open source nonprofits with a few maintainers. They deserve a lot of the credit for the recent advances.
01:04:57 They definitely do. Boris or John, either of you want to give a quick shout out to anything?
01:05:00 If I were to do some shameless marketing, I would say the Comet Python SDK. But instead, I'll say FastAPI, or some of the base libraries that I use, like Requests, and the Python standard library. I know some of the CPython core developers; they're doing a tremendous job, and it's a thankless job. So thank you to them, yeah, for sure.
01:05:23 Everything Boris and Victor have recommended is great stuff.
01:05:26 Yeah, fantastic. All right, final call to action. People out there are listening, they're doing machine learning, and they want to use this to measure their work and maybe make some change. What do you say?
01:05:38 Use it, contribute, evangelize, share. I think the biggest thing you can do about climate change is spreading awareness, discussing these things, and challenging the status quo. Yeah.
01:05:51 Fantastic. All right, Boris, Victor, John, thank you all for being here. It's been really great to have you. Thanks so much for your time. Thanks for having us. Have a great day, or evening. Exactly. Bye bye. This has been another episode of Talk Python to Me. Our guests on this episode have been Victor Schmidt, Jonathan Wilson, and Boris Feld. It's been brought to you by Square and us over at Talk Python Training. With Square, your web app can easily take payments: seamlessly accept debit and credit cards as well as digital wallet payments. Get started building your own online payment form in three steps with Square's Python SDK at "talkpython.fm/square". Want to level up your Python? We have one of the largest catalogues of Python video courses over at Talk Python. Our content ranges from true beginners to deeply advanced topics like memory and async. And best of all, there's not a subscription in sight. Check it out for yourself at "training.talkpython.fm". Be sure to subscribe to the show: open your favorite podcast app and search for Python; we should be right at the top. You can also find the iTunes feed at '/itunes', the Google Play feed at '/play', and the direct RSS feed at '/rss' on "talkpython.fm". We're live streaming most of our recordings these days. If you want to be part of the show and have your comments featured on the air, be sure to subscribe to our YouTube channel at "talkpython.fm/youtube". This is your host, Michael Kennedy. Thanks so much for listening. I really appreciate it. Now get out there and write some Python code!