#351: Machine Learning Ethics and Laws Panel Transcript
00:00 The world of AI is changing fast, and the AI/ML space is a bit out of the ordinary for software developers. Typically in software, we can prove that given a certain situation, the code will always behave the same. We can point to where and why a decision is made. ML isn't like that. We set it up and it takes on a life of its own. Regulators and governments are starting to step in and make rules about AI. The EU is one of the first to do so. That's why it's great to have Ines Montani and Katharine Jarmul, both awesome data scientists and EU residents, here to give us an overview of the coming regulations and other benefits and pitfalls of the AI/ML space. This is Talk Python to Me episode 351, recorded December 17, 2021.
00:54 Welcome to Talk Python to Me, a weekly podcast on Python. This is your host, Michael Kennedy. Follow me on Twitter where I'm @mkennedy and keep up with the show and listen to past episodes at talkpython.fm and follow the show on Twitter via @talkpython.
01:10 This episode is brought to you by Sentry and SignalWire. Please check out what they're offering during their segments. It really helps support the show.
01:17 Katharine, Ines, welcome to Talk Python to Me.
01:20 Hey, it's great to be back.
01:22 Hi, Michael.
01:23 Yeah, it's great to have you here, even if you've been here a bunch of times. Katharine, have I had you on before?
01:27 Yeah, I think so. A while ago now.
01:29 I think so as well. But it's been a really long time, hasn't it?
01:32 Yeah.
01:33 It's great to have you back. This is a very Berlin focused podcast today.
01:38 Just unrelated, both of you happen to be there. So that's really cool. Thank you for taking time out of your evening to be part of the show, of course. All right. Well, we're going to talk about machine learning, some of the rules and regulations coming around there, especially in Europe. We're going to talk about fairness. We're going to talk even a little bit about interesting indirect implications like GitHub Copilot and these types of things. We'll sort of go through the whole ML space and talk about some of these ideas. But you both are doing very interesting things in the machine learning space. Let's just get a little bit of your background. Katharine, tell people a bit about what you're doing these days.
02:20 Yeah, I'm here in Berlin and focused on how do we think about data privacy and data security concerns in machine learning. So for the past five years, I've worked in the space of how do we think about problems like anonymisation and differential privacy, as well as how do we think about solutions like encrypted learning and building ways that we can learn from encrypted data. So it's been really fun, and I'm excited, and also here to publicly announce for the first time that I'll be joining ThoughtWorks in January.
02:51 Yes, as a principal data scientist. They're focused exactly on this problem, which they've been noticing here in Europe: how do we think about data privacy and data security problems when we think about machine learning? It's a growing concern, so it should be pretty exciting.
03:07 Yeah. A company like ThoughtWorks is one of these companies that work with other companies a lot, sort of the consulting side of things. And I feel like you can have a pretty wide-ranging impact through those channels.
03:20 Yeah.
03:21 Do you think that being in Germany... there's more focus on these kinds of things in Europe, but especially in Germany, it seems, than is apparent in the US?
03:33 Does the US have more of a laissez-faire attitude towards privacy and machine learning stuff?
03:39 Yeah, I think just from a regulatory aspect, since we saw the passage of the GDPR, which is the big European privacy law that went into effect in 2018, we definitely saw kind of a growing trend here in Europe. But overall, I would say, like, actually France and the Netherlands have done quite a lot of good work, even Ireland, at questioning, let's say, larger tech usage. But the on-the-ground activism here in Germany, from the Chaos Computer Club and other types of activists that are here, has been quite strong, which is exciting to see. And therefore, I think it kind of ends up being in the headlines maybe a bit more internationally.
04:23 Yeah. Also, actually, a fun fact I always like to tell Americans is that if you go on Google Street View here in Berlin, it's an awesome time travel, because the data is like over ten years old now. Berlin is heavily gentrified by now, so you can really say, wow, how did my neighborhood look 10, 12 years ago? Because Google did it once and never came back, because everyone wanted their buildings pixelated. And they were like, okay, Germany is too difficult, we're never going to send our cars again.
04:51 I definitely encourage you to use Google Street View if you are in Berlin. It's really fun from a historical perspective.
04:57 How funny. So you can go and basically say, I want my house fuzzed out so you can't see details about my personal building, and it will look like that. Yeah. If I went to my place on Google Maps, you could see it evolve over time, like, oh, that's when I still had that other car, before it broke down or crashed or whatever.
05:16 And I could sort of judge how old the pictures are by what season is it, what's in the driveway or what's the porch look like? What kind of chairs do we have? There's all sorts of detail, none of it's obscured. Right.
05:32 There's a fun fact that some researchers worked on in the US of could they do the census just via Google Street View? And they found there was a heavy correlation between census data and the makes of cars that people had in their driveway.
05:47 Oh, my goodness.
05:48 It's an interesting paper. Yeah. I think actually Timnit Gebru might have been on that paper as well, the very well known machine learning ethics researcher who is now running her own organization in the space. Anyways, it's a really cool paper. I'll see if I can find it and send it to you for the show notes, Michael.
06:04 Yeah. Put it in the show notes. Awesome. All right. Well, congratulations on the ThoughtWorks thing. That's really cool. Ines, tell people about yourself as well. It's been almost a year, I think, maybe, since I had you on Talk Python.
06:16 Yeah, I think it was the year in review. I was in Australia at the time; it was summer. In Berlin now it's winter.
06:24 I'm still the co-founder of Explosion. We're probably most known for our open source library, spaCy, which is an open source library for natural language processing. And one of the main things people do with our stack is build NLP and machine learning applications. We also published an annotation tool called Prodigy, which allows creating training data for machine learning models. And all our work and everything we do is very focused on running things on your own hardware and data privacy. And that's also something that's very important and something that we see our users and customers do. So people want to train their own models and actually think about: how do I create my data? What do I do to make my model good? What do I do to make my application work? And so this all ties in quite well with other questions like, okay, what should I do? How should I reason about my data? That's something we also see as a very important point. And I actually think this can even prevent a lot of the problems that we see, if you just look at your data and think about: what do I want my system and my pipeline to do? How do I break down my problem? And that's exactly what the tools we're building hopefully help people to do.
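To make the pipeline idea concrete, here is a minimal sketch of loading a pretrained spaCy pipeline and inspecting its predictions. It assumes the small English model has been downloaded; a real project would typically train or fine-tune its own components on its own annotated data, which is where a tool like Prodigy comes in.

```python
import spacy

# Assumes the small English pipeline was installed first:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("Explosion is a software company based in Berlin.")

# Named entities predicted by the statistical model
for ent in doc.ents:
    print(ent.text, ent.label_)

# Token-level predictions from the tagger and parser
for token in doc:
    print(token.text, token.pos_, token.dep_)
```

Everything here runs locally; nothing leaves your machine, which is the point Ines makes about keeping data on your own hardware.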
07:30 Yeah, fantastic. So you have spaCy, that's the open source thing, and then you also have some products on top of that, right?
07:36 Yeah, exactly. So we have Prodigy, if you scroll down a bit here.
07:42 Exactly. Prodigy, that's an annotation tool, and we're currently working on an extension product for it that is a bit more like a SaaS cloud tool, but has a private cloud aspect. So you can bring your own hardware; you can run your code and your data on your own cloud resources. So no data, nothing has to leave your servers. And that's something that people already find very appealing about Prodigy: you can just download it, run it, and it doesn't send anything over the Internet. And that's also what we want to keep doing. Yes.
08:12 I love that you all are embracing that, because there's such... we'll get into this later. Not the first topic, but it's related, so I'll just talk a bit about it. I really like that you're not sending people's data back, because if they're going to trust your tools, they need to know that you're not sharing data that is either part of their competitive advantage or that they have to protect for privacy reasons. I recently got into the GitHub Copilot trial, or preview, whatever it's called, and I installed it, and it said, oh, you just have to accept this agreement where it says if you generate code from us, we're going to get some analytics. And I'm like, all right, that's fine, whatever. I ask it how to connect to SQLAlchemy because I forgot, and it'll just tell me. Oh, and if you make any edits, we're going to send those back. I'm like, wait a minute, what if one of the edits is putting my AWS access key in there because it needs to be there? Not things that I'm going to publish, but it's still going back, right? So there's a lot of things, and I just uninstalled it. I'm like, you know what? No, this is just too much risk for too little benefit for me in my world.
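The worry here is easy to make concrete. AWS access key IDs follow a well-known pattern (AKIA followed by 16 uppercase letters and digits), so a local check could refuse to send an edit that looks like it contains one. This is purely an illustrative sketch, not something Copilot or any particular tool actually does; real secret scanners check many more patterns.

```python
import re

# AWS access key IDs look like "AKIA" plus 16 uppercase letters/digits.
# Illustrative only -- real secret scanners check many more patterns.
AWS_KEY_PATTERN = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def contains_likely_secret(text: str) -> bool:
    """Return True if the text looks like it contains an AWS access key ID."""
    return bool(AWS_KEY_PATTERN.search(text))

edit = "engine = create_engine(url)  # key: AKIAIOSFODNN7EXAMPLE"
if contains_likely_secret(edit):
    print("Refusing to send this edit anywhere.")
```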
09:17 Yeah. I think we also see that a lot of it is kind of pointless. I think there used to be this idea that, oh, you can just collect all the data and then at some point you can do some magical AI with it. And I think for years this used to be the classic pitch of every startup. It almost used to be that we were pitched in some weird way; it's usually like, oh, we do X, and then we collect data, and then it's AI, and then it's profit.
09:43 People, like, map out their business. And I think this has changed a bit, but you can still see some of the leftovers where companies are like, we might as well get as much data as possible because maybe there's something we can do with it. And we've always said, working in the field: I don't want your annotations; there's literally no advantage I get from that. So I might as well set up the tools so that you can keep them.
10:07 That's perfect. And Katharine, Mr. Hypermagnetic says, oh, I thought kjamistan was a hip new tech stack.
10:15 No, it's my company name.
10:20 It's a good inside joke that lives on many decades later.
10:24 I love it. All right, well, let's kick things off with some regulation, and then we can go and talk more things about maybe some other laws. We can talk about things like copilot and other stuff. But I did say this was European focused. At least kick off to the show here. So I think one of the biggest tech stories out of the EU in the last couple of years, it's got to be GDPR. Right.
10:50 Just yesterday I still heard people talking about, well, we're doing this because of GDPR, and this company is not.
10:57 Right.
10:57 There's still just so much awareness of Privacy because of GDPR. And I do think there are some benefits and I think there are some drawbacks to it, but it certainly is making a difference. Right. And so now there's something that might be sort of an equivalent on the AI side of things. Right. I mean, not exactly the same, but that kind of regulation.
11:18 Yeah.
11:19 It's interesting because it's been a work in progress for some years, like the GDPR was. So the initial talks for the GDPR started, I think, in 2014, 2015, didn't get written until 2016, went into effect in 2018. And still a topic of conversation now, many years later, some pluses some minuses.
11:39 Right.
11:40 We can talk about how GDPR was intended versus how we've seen it rolled out across the web, which is quite different than what was intended, obviously.
11:48 I think that's always a problem with a lot of regulations. And in general, I'm pro regulation; I think the idea of GDPR is great. But of course, once a large organization like the EU rolls things out, it can kind of go a bit wrong here and there.
12:05 Let me set a little foundation about why I said I think there are some negatives. I think the privacy stuff is all great. I think the ability to export your data is great; the ability to have it erased, to know what's being done with it, these are all good things. I feel, though, that it was disproportionately difficult for small businesses, but was aimed at places like Facebook and Google and stuff. So, for example, for my courses and stuff, I had to stop doing anything else for almost two weeks and rewrite a whole bunch of different things to comply. To the best of my knowledge, I'm doing it right, but who knows? Whereas Facebook didn't shut down for two weeks; they had a small team who did it, right?
12:47 Yeah. Well, no, they had quite a bit of internal engineering work.
12:51 Only as a percentage. Only as a percentage of total employees. I mean, small.
12:54 But they actually had to shut down several products that are no longer available in Europe, that are available in other jurisdictions. And also, when we look at who's been fined, it's been predominantly the faangs and other large operators.
13:08 I do think the enforcement is focused on the Faang side of things.
13:11 Yes.
13:13 Which is basically what most folks said when it went into enforcement: yes, we believe these are things that everybody should be doing to better look after the security of sensitive data, regardless of the provenance, so to speak. But also, we intend to employ this legislation to look after these problems.
13:34 Right.
13:34 And everything that Max Schrems has been doing. He's been filing quite a lot of amazing lawsuits against a variety of the FAANGs and been getting some interesting rulings, let's say, from the European courts.
13:50 Yeah.
13:50 Good. Ines, how was the GDPR for you at Explosion, before we get to the next law? Was it a big deal?
13:56 Not so much, because I think our standards were already such that we weren't really doing anything that violated, or tended to violate, what then became GDPR. I think it was just that we had to go through some things to make sure. I don't know.
14:11 We've always intended to not have any cookies on our site, so we don't in the first place. And then there is actually a lot of work to make sure that nothing you use tries to sneak some cookies in there, and then you're like, oh, it's the wrong URL here; now I have all these YouTube cookies again. But in general, even before GDPR really came out, we realized that we were actually quite compliant, or we already aimed to be compliant. So we didn't have to do very much.
14:41 I think I was too, in terms of principle, but not exactly in practice. There were those types of things. For example, I had a Disqus comment section at the bottom of all the pages, and then I realized they were putting DoubleClick cookies and Facebook cookies and all sorts of stuff. Like, wait a minute, I don't want people who come to my page to get that, but I'm not trying to use it. It's this cascading chain, like embedding YouTube videos. We go to a lot of work to make sure that it doesn't actually embed YouTube; it does a thing that then allows you to get to the YouTube video without that kind of stuff, right? But still, I think it's good. I think it's pretty good in general. But let's talk about machine learning and AI stuff. So I pulled out some highlights here. Let me maybe throw them out and you all can react to them. So we've got this article on techmonitor.ai called "The EU's leaked AI regulation is ambitious but disappointingly vague: new rules for AI in Europe are simultaneously bold and underwhelming." I think they interviewed different people who probably have those opinions; as you can see through the article, it's not necessarily the same person who holds both at once. But this was leaked on April 15 of this year, and I think seven days later or something the actual thing was published. So it's not so much about the leak; just that the article kind of covers the details, right? This isn't still unknown, is it?
16:07 No.
16:07 The full text is available and there's been a lot of good kind of deeper analysis from a variety of perspectives.
16:17 This portion of Talk Python to Me is brought to you by Sentry. How would you like to remove a little stress from your life? Do you worry that users may be encountering errors, slowdowns, or crashes with your app right now? Would you even know it until they sent you that support email? How much better would it be to have the error or performance details immediately sent to you, including the call stack and values of local variables and the active user recorded in the report? With Sentry, this is not only possible, it's simple. In fact, we use Sentry on all the Talk Python web properties. We've actually fixed a bug triggered by a user and had the upgrade ready to roll out as we got the support email. That was a great email to write back: hey, we already saw your error and have already rolled out the fix. Imagine their surprise. Surprise and delight your users. Create your Sentry account at talkpython.fm/sentry, and if you sign up with the code talkpython, all one word, it's good for two free months of Sentry's business plan, which will give you up to 20 times as many monthly events as well as other features. Create better software, delight your users, and support the podcast. Visit talkpython.fm/sentry and use the coupon code 'talkpython'. Katharine, you want to give us a quick summary of what the goal of this is?
17:36 Yeah. So I think when I first got wind that this was going to be happening, I was talking with some folks at the Bundesministerium des Innern, which is basically the German Ministry of the Interior. Sorry, US-centric: if in the US we had an Office of Homeland Security and the Interior, and they were all together and they also did FTC-like things, that's what it would be, anyways. And they have a group called the Data Ethics Commission, and they had built several large reports on thinking about and analyzing the risk of algorithm-based systems and algorithmic decision making, which has been a topic of conversation, obviously, for a long time. Eventually, what I found out was that they were talking with other groups in the EU about forming a regulation like this. And if anybody wants to read the German Data Ethics Commission report, which is also available in English, you can see that a lot of the ideas are kind of taken and transferred there, which is basically: when we think about AI systems, can we analyze the level of risk that they would have, let's say, in use in society? So you can think of very high risk being something like bombing people, like drones or self-flying planes.
19:06 Absolutely. We have drones that bomb people. That's a thing that happens in the world. But what is less common is that you just send the drone out and say, go find the, quote, bad people and take care of them. There's still usually a person somewhere that makes a decision.
19:22 And so I don't think we want a world where we just send out robots to take care of stuff and just tell them to go.
19:27 Some people want that, because it can be very nice to absolve yourself of that responsibility. If you're the one pressing the button, you have to answer for that and you have to take accountability. If the machine did it... well, I mean, it's kind of like the problem of, okay, whose fault is it if a self-driving car kills a pedestrian?
19:48 Yeah.
19:49 There's a really great psychological theory around that, too, called the moral crumple zone, which basically talks about how the nearest human to an automated system gets blamed.
20:03 It's like, well, why didn't you do something? I don't know, the computer said yes. It's interesting psychology that we use to judge people. Like, you should have done something.
20:15 Yeah. And I do think, actually, in surveys targeting Python developers, a lot of people would actually say, yeah, the developer who built that system that made the decision to drive forward and not stop is to blame. So that does check out.
20:28 It could be. I really like the crumple zone analogy. I don't know if I'm receiving it correctly, but if you're in a car crash, the radiator is going to get smashed straight away; that's the first thing when it caves in. But the driver back in the middle might be the one who did it. In the software world, maybe the equivalent is: yeah, the developer made that choice, but they made that choice because the CEO and the manager said, we are optimizing for this and we don't care, we want to ship faster, or we want to make sure this scenario is perfect. And the developer is like, you know what, that's going to have a problem when it gets snowy and we can't see the lane lines. And they're like, you know what, this is what we're aiming for. They don't necessarily make them do it, but they say, this is really where you've got to go. So, crumple zone. I like it.
21:10 Yeah.
21:11 And actually, just to make it clear, the crumple zone idea was that the driver would get blamed rather than the company that produced the software, or the operator of the radiology machine would get blamed rather than the producers, because you kind of create this inherent trust. Like, well, they're building a self-driving car, clearly it's not their fault. It's the driver's fault; why weren't you paying attention, or something like this.
21:37 Right, yeah, absolutely. So really quickly, I just want to say none of us are lawyers, so don't take any of this advice and go do legal things; talk to real lawyers. But I do want to talk about the law. And so one of the things the article points out is these rules apply to EU companies and those that operate in the EU, and then, what is way broader for tech companies, those that impact EU citizens. Right. So if you have a website and EU citizens use it, or you have an app and EU citizens use your API and it makes decisions, probably this applies to you as well, I'm guessing.
22:13 Yeah. We'll see in practice how it gets rolled out, but yeah, it's always about the case law afterwards, but in theory, yes.
22:21 And it's mainly about documenting risk, is I guess what I would say: documenting and addressing risks. So one interesting thing about it, and I'd be curious to hear both of your thoughts around it, is that it kind of brings to the forefront the idea of auditing AI systems, and what should be done to better audit and document problems in automated systems like AI systems or machine learning systems. I find that quite interesting and would be curious to hear your take.
22:53 Yeah.
22:54 Even fundamentally, and I think that's also something pointed out in the article, there's already this problem of: how do you even define where an AI system starts, where it ends, what the system is? Is it just the component? Is it the model by itself? The same model can be used in all kinds of different use cases, and some of them can be bad and some of them can be good. Or does it have to be the larger component? But then, even: where does AI end? You could have a rule-based system that does the exact same thing but isn't covered, even if the outcomes are pretty much identical. I think that's already where it gets pretty problematic and where I think we'll see a lot of people being able to get away with things.
23:34 Yeah. The law seems to try to characterize the behavior of the software rather than the implementation or the technology of it. Right. It doesn't say when a neural network comes up with an outcome or something along those lines.
23:49 And they talked about how the idea is to try to make it more long lived and more broadly applicable, but also that could result in ways to sneak around it.
24:01 Technically, it's still doing what the law is trying to cover, but somehow we built software that doesn't quite line up. But to get back to this auditing question.
24:09 I do think this is definitely a very interesting part of it, and I do think it also shows that stuff like interpretability will become much more relevant and interesting, even more relevant going forward, because if we end up with laws like that, companies will have to be able to explain, at least internally, why did my system make a certain decision? Or maybe this even has to be communicated to the user or to the citizen, in a similar way to how, with GDPR, you can request your data. Maybe you'll be able to request more information on why a certain outcome was produced for you, or which features of yours the system is using. And I think that does all require quite a lot.
24:52 The features part seems completely doable. It seems entirely reasonable to say, well, we used your age, your gender, your income, and your education level to make this decision. I think maybe more tricky is: why did it make that decision?
25:08 You know better than I do, but it seems like I can sit down and read code and say, well, there's an if statement: if this number is greater than this, we do that. But it's really hard to do that with neural networks, right?
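For classical models there are at least rough tools for the "which features did we use" half of this, for example scikit-learn's permutation importance. The sketch below uses made-up feature names and a toy dataset; it tells you which inputs the model leans on overall, which is much weaker than a real "why" for an individual decision.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy stand-ins for features like age, income, and education level.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["age", "income", "education_years", "employment_years"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the score drops:
# a rough, model-agnostic answer to "which features mattered?"
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```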
25:18 Yeah. Machine learning gets tricky because machine learning is always code and data. It's this software 2.0 idea, where even testing a system is much more difficult. If you're just writing straightforward Python code, you can write a test and say, oh, if this goes in, this should come out, and if that's true, then, yeah, you have a green test.
25:36 Exactly.
25:37 But I mean, it kind of is a testament to the fact that we don't really test machine learning systems very well today. We're very early in the whole MLOps side of the equation. And I think one of the things, first off, is a lot of these audits: people think they're going to be self-assessments, so leave a question mark over how a self-assessment is going to work at any type of scale. But one thing that I really liked about this is they actually put forward things that people should be testing for, like the security of the models, the privacy of the models, the interpretability of the models, and so forth. And I would say that most places that are throwing machine learning into production today do not test for any of those things.
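A rough sketch of the contrast being described here: deterministic code gets exact assertions, while a trained model only gets property-style checks, such as an aggregate metric against a threshold, plus behavioral tests like "changing an attribute that shouldn't matter shouldn't flip the decision." The model, feature layout, and thresholds below are placeholders, not a recommended test suite.

```python
import numpy as np

# Deterministic code: exact input -> exact expected output.
def add_vat(net_price: float, rate: float = 0.19) -> float:
    return round(net_price * (1 + rate), 2)

def test_add_vat():
    assert add_vat(100.0) == 119.0  # true on every run, by construction

# A trained model has no single "correct" output, so we test properties instead.
def test_model_accuracy(model, X_test, y_test):
    # Aggregate metric against a threshold, not an exact value.
    assert model.score(X_test, y_test) >= 0.85

def test_model_invariance(model, x, protected_index=2):
    # Behavioral check: flipping a feature that *shouldn't* matter
    # (e.g. a protected attribute) shouldn't change the prediction.
    x = np.asarray(x, dtype=float)
    x_flipped = x.copy()
    x_flipped[protected_index] = 1.0 - x_flipped[protected_index]
    assert model.predict([x])[0] == model.predict([x_flipped])[0]
```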
26:25 No. I think it's going to be, like, some accuracy score and then maybe some feedback from users: is it a reasonable outcome?
26:33 Okay, good. It's working.
26:34 Yeah.
26:36 We see this kind of disconnect. If you go from academia to industry where academia works differently, you have very different objectives, and that's great. You're trying to explore new algorithms, you're trying to solve problems that are really hard and then see what works best on them and rank different technical solution. In industry, you're actually shipping something that affects people. And yeah, if you just apply the exact same metrics, that's just not enough.
27:00 What do you think about testing by coming up with scenarios that should pass or not pass? For example, if you're testing some kind of algorithm that says yes or no on a mortgage for a house or something like that, to say: okay, I would expect that a single mom would still be able to get a mortgage. So let's have a single mom apply to the model and see what it comes up with. These scenarios go like: if it fails any of these, it's unfair, and you try to give it examples. Is that possible?
27:31 I don't want to call it cute.
27:33 It's a very idealistic view and it's all very nice, but I see two problems with this. One is that a lot of AI systems are usually quite different. Not everything is as straightforward as, oh, I have a pipeline here that predicts whether you should get a mortgage; there are often lots of different components, and every company tries to solve very different problems. So you can't easily develop a framework where you have one input and one output. Usually, predicting the thing is one tiny part of a much larger application. And then also, if you have something like: should someone get a mortgage, should a private company give you a mortgage or not? I think a lot of companies would argue, well, maybe it's up to us whether you get a mortgage or not.
28:14 There's no general framework.
28:17 Maybe with the mortgage it's a bit different, but there are so many applications where, yes, you can say it's really unfair, but it's still in the realm of what a company would argue is up to their discretion. And I'm not defending that. I'm just saying it's very difficult to say, oh, you've been treated unfairly.
28:33 It's not as straightforward as my naive example.
28:37 At least in the US, there are actually laws for this: equal treatment, or disparate treatment, we would say. There's an actual mathematical, statistical relationship, like 70/30 or 80/20, I think, that you can use to show that there's disparate treatment. So, for example, if you could prove that there's that much of a difference if you're a single mother, let's say, versus other groups, you actually have a legal court case; you can take the bank to court and you can sue them. So there are some precedents for equal treatment, at least in some countries and some jurisdictions, I think. But from thinking about the mathematical problem of fairness, in all of the research that a lot of really amazing, intelligent researchers have done, they've shown that fairness definitions, and the choice of fairness and how you define it, can actually be mathematically diametrically opposed. So it depends on what definition you choose, and there's a whole bunch. Arvind Narayanan and his group in the US have been doing a ton of research on this. There's a bunch of folks that have been doing research for more than a decade on this. All the ML people, the Fairness, Accountability, and Transparency in ML conferences that run every year, have been doing, again, nearly two decades of research on this stuff. But it's not a solved problem. Even if, let's say, you choose a fairness definition mathematically, you measure the model, and you have met that requirement, it doesn't mean that what you're trying to show in the world, or what you're trying to do in the world from how we humans would define fairness, is what you've met, right?
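The 80/20-style relationship Katharine mentions is usually called the four-fifths rule: compare the approval rate of the group in question to the most favored group, and flag anything where the ratio drops below 0.8. A back-of-the-envelope version with made-up numbers:

```python
def disparate_impact_ratio(approved_a: int, total_a: int,
                           approved_b: int, total_b: int) -> float:
    """Ratio of approval rates: group A (e.g. single mothers) vs. group B."""
    return (approved_a / total_a) / (approved_b / total_b)

# Made-up numbers: 45 of 100 single mothers approved vs. 70 of 100 in the comparison group.
ratio = disparate_impact_ratio(45, 100, 70, 100)
print(f"ratio = {ratio:.2f}")  # 0.64 -- below the 0.8 threshold, so this would be flagged
```

As the discussion notes, passing one metric like this says nothing about other, sometimes mathematically incompatible, definitions of fairness.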
30:12 Yeah. Statistics and intuition are not necessarily the same for sure.
30:16 Yeah.
30:16 You had an interesting comment before we started recording about how fairness is not the only metric.
30:24 Oh, yeah.
30:26 The question that I like to ask people is, let's say you're building a computer vision system to bomb people, to identify people and bomb the right targets. If you said it performed fairly, let's say, in relation to gender or in relation to skin color, to the darkness of your skin color, would that be an ethical or fair system?
30:48 Yeah, it's hard. It's certainly not an easy answer. Yeah, there's more to it. So one of the things that's interesting about this law is that it talks about high risk AI systems, and it refers to those pretty frequently through there. So high risk AI systems include those used to manipulate human behavior, conduct social scoring, or for indiscriminate surveillance. Those are actually banned in the EU, according to this law. Right.
31:18 I guess just by reading it, you can read who this was written for and who they had in mind when they wrote it. I think it's quite clear what types of applications and what types of companies.
31:32 Yeah. The social scoring stuff is really creepy, but yeah, indiscriminate surveillance also. And then it also talks about how special authorization will be required for remote biometric identification. This is, I'm guessing, the type of biometric identification that you don't actively participate in. Right. You don't put your fingerprint on something; you just happen to be there. They call out specifically facial recognition, but I've also heard things like gait, the way that you walk, and weird stuff like that. So it's not banned, but special authorization will be required. Oh, yeah. Your typing, too, right?
32:08 Even your typing pattern is quite or is more unique than you think. Yeah.
32:14 Actually, even more sort of old school fingerprinting. I'm always, like, amazed at what can be done to uniquely identify you on the Internet, even without having any personally identifiable information.
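To make that concrete: even a handful of individually "harmless" browser attributes, hashed together, can be surprisingly identifying. This is a deliberately simplified sketch with made-up values; real fingerprinting uses many more signals (canvas rendering, installed fonts, audio stack, typing timings, and so on).

```python
import hashlib

def browser_fingerprint(attributes: dict) -> str:
    """Hash a few 'harmless' attributes into a stable identifier."""
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen": "2560x1440",
    "timezone": "Europe/Berlin",
    "language": "de-DE",
    "installed_fonts": 217,
}
print(browser_fingerprint(visitor))
# No name, email, or IP address involved, yet the combination is often close to unique.
```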
32:28 This portion of Talk Python to Me is brought to you by SignalWire. Let's kick this off with a question: do you need to add multiparty video calls to your website or app? I'm talking about live video conference rooms that host 500 active participants, run in the browser, work within your existing stack, and even support 1080p without devouring the bandwidth and CPU on your users' devices. SignalWire offers the APIs, the SDKs, and edge networks around the world for building the realest of real-time voice and video communication apps with less than 50 milliseconds of latency. Their core products use WebSockets to deliver 300% lower latency than APIs built on REST, making them ideal for apps where every millisecond of responsiveness makes a difference. Now, you may wonder how they get 500 active participants in a browser-based app. Most current approaches use a limited but more economical approach called SFU, or selective forwarding units, which leaves the work of mixing and decoding all those video and audio streams of every participant to each user's device. Browser-based apps built on SFU struggle to support more than 20 interactive participants, so SignalWire mixes all the video and audio feeds on the server and distributes a single, unified stream back to every participant. So you can build things like live-streaming fitness studios where instructors demonstrate every move from multiple angles, or even live shopping apps that highlight the charisma of the presenter and the products they're pitching at the same time. SignalWire comes from the team behind FreeSWITCH, the open source telecom infrastructure toolkit used by Amazon, Zoom, and tens of thousands more to build mass-scale telecom products. So sign up for your free account at talkpython.fm/signalwire and be sure to mention Talk Python to Me to receive an extra 5,000 video minutes. That's talkpython.fm/signalwire, and mention Talk Python to Me for all those credits.
34:18 Another thing that stood out to me I thought was fun is that people have to be told when they're interacting with an AI system. So you have to explicitly say, hey, this thing that you're talking to here, this one, you're not talking to a human right now. You're talking to a machine.
34:36 Yeah, we'll see if it gets rolled out like cookies.
34:41 It's like a big blob of text that says there may or may not be automated systems that you interact with on this product or something.
34:49 Yeah, I think it's almost like, I don't know, a classic disclaimer, because I think the way it's written probably makes people think more about conversational AI. But I do think this also covers everything else. If you use some component that does some arbitrary predictions somewhere, then you need this disclaimer on it. And then, unfortunately, I think the side effect will be that people are less likely to notice it, because it will have to be on everything, even really small features you might want to have on your website that use AI in some form or another.
35:22 It seems totally reasonable, maybe unnecessary, but certainly reasonable. But yeah, you're right. It's going to be like the cookie notices.
35:30 So if you go to, say, on Netflix and you go to watch a movie, well, that list of recommended for you. Do you have to? Okay this I'm not sure if I can find it.
35:41 It will just be in the Netflix-like terms and conditions. When you accept those terms and conditions that most people probably don't read, you will accept that, yes, everything you're interacting with is AI.
35:49 By the way, here's a very long 20 page document about how we may or may not use automated systems in your use of this website.
35:58 Exactly.
36:02 Have fun. Let me know how the reading goes.
36:06 I said I thought there was a lot of stuff that came out of the GDPR that was pretty good. "This website may use cookies", that to me is the worst. It's like: do you want to be able to log in, or do you want to not be able to log in? I want to be able to log in. Okay, we've got to use cookies. So I actually got this thing called "I don't care about cookies" as a browser extension, and if it sees that, it tries to agree to it automatically on every site, just to cut down on the cookie pop-ups. Okay.
36:35 By the way, this was by no means the intention of the law. Just to make it clear to everyone.
36:41 It's important to bring that up because especially, I think from the European perspective, I'm generally a fan of GDPR and then often people go like all these cookie pop ups. It's like, yeah, no, that's not GDPR.
36:55 Didn't the cookie pop-ups predate the GDPR?
36:57 Mainly, some people did it before, but it was really rolled out for GDPR, because it's all compliance. Now, a tip for folks: somebody got sued, it was Google or Facebook, because it was too hard to just do the least possible. So now, if you haven't installed this extension, there's usually a big button that says legitimate interest, and you can just press that one; it's the least amount. Yes. So does it usually now involve two clicks rather than one?
37:30 Yeah. I wonder if this extension does it, because as far as I know, if it offers to take you to a settings page, everything has to be unchecked by default.
37:40 Exactly.
37:41 Yeah. It's actually quite convenient. You just go to the button that's not accept all and then you accept whatever is there and then you get nothing.
37:49 It might be this big and the same color as the page, so it's just really hard to find.
37:58 Which just speaks to... I don't know if you've talked about this on the podcast recently, but I've been reading a lot about dark patterns. And dark patterns and privacy are in a very deep relationship on the Internet, like: no, you really want to give us all your data, you're going to be so sad if you don't.
38:17 Yes. The dark patterns and the lack of Privacy.
38:21 The same color foreground and background, that ties into another compliance thing, which is accessibility. At least in the US, you can get sued for having an inaccessible website, so even companies that don't care about anyone accessing their website, and only care about not getting sued, won't have buttons with the same foreground and background color anymore.
38:42 Yes, indeed. And I'm okay with the cookie little clicker thing because I also have network-level tracking blocking.
38:54 So if they say, sure, well, fine, here's your Facebook cookie, it's like: no, it's blocked. So that's my weird setup.
39:02 Vincent out in the audience just mentions that Rasa, a Python tool for open source chatbots, took the effort of writing down some ethical principles of good design, and one of those listed is that a conversational assistant should identify itself as one.
39:17 Hey, Vincent. Rasa is a great open source library as well. They're kind of friends with spaCy; it's kind of the same ecosystem.
39:27 I think this is a good principle, actually. Often when I use a bot or whatever and I'm not sure if it's a chatbot, there are a lot of things you can write to check if it's a human or not, because there are certain things these systems are usually quite bad at, like resolving references. So if you use a pronoun to refer to something you previously said, or a person or something, there are a lot of things that often these things are quite bad at. And if you're vague enough, a human will always get it.
39:55 But a machine might not. Use your text processing and ML skills and experience for good use. That's right.
40:04 Yeah. Because there was a case where I was like, this agent is so incompetent, it must be a machine. And then actually it would have been a pretty good chatbot, because the chatbot passed as an incompetent human. But no, it turned out it was just an incompetent human.
40:19 Oh, no.
40:21 Yeah, that's true. The chatbots are very bad at retaining, at building up a state of the conversation. They see the message and then they respond to it. You ask a question and then you say, what exactly is this about? Well, I said it was about this above, so what do you think? It's just those kinds of tie-ins that they don't carry on.
40:42 These systems are getting better at this, but if you really try to be as vague as possible, you can trick them, and then you find out if it's a bot or not.
40:52 Yeah, exactly.
40:54 So let's see, some more things about the law here. AI in the military is exempt, so that's not a surprise. I mean, there's probably top secret stuff; how are you going to submit that? I don't know.
41:07 Yeah. But then it's like, oh, a lot of the worst things that happen happen in this context.
41:14 For example, for a long time at Explosion, we've had this policy that we do not sell to organizations who are primarily engaged in government military or intelligence national security work, because our reasoning for that has always been that, well, in the free market, you have a lot of other ways that companies and applications can be regulated: by regulations like this, but also just by market pressures and by things being more public. All of these things you do not have when the work is military, intelligence, or certain government work. So we see that as very problematic, because you have absolutely no idea what the software is used for, and there's absolutely no way to regulate it, ever. And we'd say, okay, that's not who we want to sell our product to. But for other use cases, some government things are fine. We'd happily sell to the IRS and equivalents, or the Federal Reserve. There are a lot of things that are not terrible that are government adjacent, or just a lot of research labs as well.
42:12 But military, that's quite obvious when you think how many companies that work on machine learning today focus on selling explicitly into the military. And it's like, well, are they exempt? Basically, is Palantir exempt from this?
42:30 Interesting. Right. Because the law would otherwise apply to them, but sort of indirectly. So you're asking about the transitive property, basically.
42:40 Yeah.
42:41 Well, it's only in military use, so it's probably okay. It's probably exempt or whatever.
42:47 Yeah. Well, I guess if you could make the case that it's classified, that's probably what companies like that will do; they have the means. They would make sure that every project they're taking on is classified in some way, and then they can get around that.
43:00 Yeah, that's probably true. All right. Another thing that I thought was interesting: all the stuff we talked about so far is sort of laying out the details. On the imprecision and subjectivity side, one of the quotes, one area that raised eyebrows, was part of the report, which reads: AI systems designed or used in a manner that exploits information or prediction about a person or group of persons in order to target their vulnerabilities or special circumstances, causing a person to behave or form an opinion or take a decision to their detriment. Yeah, that sounds like a lot of big tech, honestly, like a lot of the social networks. Maybe that's even like Amazon shopping recommendations, right? Encouraging you to buy something that you don't need or whatever. What do you think about that?
43:47 Yeah, I guess it's quite vague, and it's like, okay, how do you define it? We have to wait for the actual cases to come up, and someone making the case that, I don't know, my wife divorced me because of it, and that was the outcome, and it's clearly that company's fault. And then someone can decide whether that's true or not.
44:07 And of course, these are not the cases that this was designed or written for, but it is vague to this extent, where, yes, that would probably be a legit case that a judge has to decide over, and maybe the person would win.
44:19 Wouldn't it be great if they gave examples?
44:21 I didn't want to accept the cookies. So I'm suing under the new law.
44:26 Exactly.
44:26 Yeah. That made me feel bad.
44:28 But I think some of it, I really feel like the conversation here, and I'm curious about the conversation in the US, is around kind of the political ad manipulation, and the amount, let's say, when we think about topics like disinformation and misinformation, of algorithmic use of, let's say, opinion pieces to push particular agendas.
44:55 When I read this, I'm guessing that's, like one of the things they had in mind.
45:00 Misinformation and fake news and all that kind of stuff is what popped into my mind when I saw this.
45:07 Yeah. And I was also thinking of recommendation systems and I don't know, not even fake news, but like, okay, you can manipulate people into joining certain groups.
45:19 Exactly.
45:20 You're a relatively normal person, and then you read some posts, they suggest you join a group, and three months later you're in the wilderness training with a gun or something. It's so easy to send people down these holes. I think on a much more relatable note, even though I really love YouTube, one of the sayings, I think I heard it somewhere, I don't know where it came from so I can't attribute it, is that you're never extreme enough for YouTube. If you watch three YouTube videos on some topic, let's suppose your dishwasher broke, and so you need to figure out how does my dishwasher work, and you watch several videos to try to fix it, well, your feed is full of dishwasher stuff, and if you watch a few more, it's nothing but dishwashers. There are a lot of other videos besides dishwashers. So any little thing, it's almost like the butterfly effect, the chaos theory effect: I watch a little bit of this, and then you end up down that channel.
46:19 One of the interesting things I think about that is I've been talking with a few folks where a friend's family has been kind of like radicalized around some of the topics that are very radical online right now. And they're like, I just don't know how it happens. And it's kind of like, well, the Internet that they're experiencing is incredibly different than the Internet that you're experiencing. And so kind of like when we think about Lockdown or where the Internet is going to be like a major source of people's life, and then their Internet is just a completely different experience than yours, based off of some related search terms across maybe four or five different sites that have been linked via cookies or other types of information.
47:06 Yeah. You can say, well, I have this experience, but if your entire world online was different, maybe you wouldn't have the same experience. I think it would be very hard to say how you would think and feel if your entire information experience was completely different.
47:23 Don't make me think about weird alternate realities of myself. What if one decision was made differently? What world would you be in? It could be really different.
47:32 No, you wouldn't even necessarily know. But I think that's also kind of a problem in that sense. I do like that it's relatively vague, and I think laws can be vague, because you don't know what's going to happen. And you might have people who are in a situation where they don't necessarily feel like, oh, I've been tricked or treated badly here. And maybe the outcomes of their behavior are bad, but maybe what the platform did wasn't necessarily illegal. That's also the problem. A lot of the content you can watch on YouTube is legal, and it's your right as a free citizen, especially in the US, where people take this even more seriously to some degree than people in Europe. You can watch, like, anti-vax videos all day, and that's your right, and nobody can stop you.
48:19 You can do it.
48:21 No, it's not good for you.
48:24 Otherwise, with terms that are maybe less vague in that respect, I think it would be much harder to actually go after cases where, yes, the platform is clearly to blame or the platform should be held accountable, which, obviously, is very clearly what they had in mind.
48:39 Right? Absolutely.
48:40 I really like the part here that's like, exploit information, target vulnerabilities, because it's kind of like, okay, what we saw with Cambridge Analytica, and then a bunch of the targeted stuff after that, was: we can figure out exactly how to target undecided voters of these different racial groups in these counties, and we can feed them as many Facebook ads as possible. And it's just like, wow, okay. I don't think people realize that that was so easy to put together and do, given a fairly small amount of information about a person.
49:16 And it's not personal information, right? Because usually it's what we would call profiles of individuals. So you fit a profile because you like these three brands on Facebook, and you live in these districts, and you're this age and this race, or you report this race, or we can infer your race because of these other things that you've liked. It adds up to a lot of information. I don't think most people know that you can get that specific in the advertising world.
49:48 Yeah.
49:49 How do you ladies feel about the whole FLoC thing Chrome was doing to replace cookies? I mean, we wouldn't even have to have those little buttons, or my ad blocking, in that world.
50:02 Yeah. There's been a lot of important writing about FLoC and its vulnerabilities.
50:09 Sorry, maybe explain it for folks who don't know.
50:16 Federated learning is essentially a tool that can be privacy preserving, but doesn't have to be. And it basically means that the data stays on device, and the things that are sent to a centralized location, or several centralized locations, are usually gradient updates, so these are small updates to the model. The model isn't shared amongst participants, and the process repeats. The exact design of how FLoC was rolled out, and is rolled out, is, I think, not fully clear. And in general, I'm a fan of some parts of federated learning, but there are a lot of loopholes in FLoC's design that would still allow people both to reverse engineer the models and to fingerprint people. So if you take your cohort plus your browser fingerprint and combine the two, it becomes fairly easy to reidentify individuals.
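A toy sketch of the federated learning pattern Katharine describes (plain federated averaging with a linear model, not how FLoC itself works): each client computes an update against its own local data, only the updated weights travel to the server, and the server averages them back into the shared model. All of the data, model, and hyperparameters here are made up for illustration.

```python
import numpy as np

def local_update(global_weights, X, y, lr=0.5):
    """One gradient step on a client's *local* data (linear model, squared loss)."""
    gradient = X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * gradient  # only this update leaves the device

def federated_average(client_weights):
    """Server side: average the clients' updated weights into the new global model."""
    return np.mean(client_weights, axis=0)

rng = np.random.default_rng(0)
true_weights = np.array([1.0, -2.0, 0.5])
global_weights = np.zeros(3)

for round_number in range(20):
    updates = []
    for _ in range(4):  # four clients, each with private data that never leaves them
        X = rng.normal(size=(20, 3))
        y = X @ true_weights + rng.normal(scale=0.1, size=20)
        updates.append(local_update(global_weights, X, y))
    global_weights = federated_average(updates)

print(global_weights)  # ends up close to [1.0, -2.0, 0.5] without pooling the raw data
```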
51:13 Yeah. And I think the more underlying problem is also: well, are you going to trust something that comes out of Google that's marketed as, oh, we will preserve your privacy, this will be really great for you and the Internet? I mean, that just screams red flags.
51:29 Yes. To me, it feels like we've been presented a false dichotomy. We could either have this creepy cookie world, because we must still have tracking, or we could have this FLoC. It's like: or we could just not have tracking. That's also a possible future. We don't have to have tracking. "And here's a better tracking mechanism." We can just not have tracking. How about that?
51:53 I was reading a wonderful article about IE6.
51:57 Okay.
51:58 I don't know if that's something the young children will know.
52:01 I'm sorry, I'm bringing up ancient history, but there was a browser once, and it was called Internet Explorer 6. It was the bane of every web developer's existence for a long time. But one thing that I didn't know about it until recently is it actually had privacy standards built into the browser. You could set up certain privacy preferences, and it would block cookies and websites and stuff for you automatically.
52:30 There was this whole standard called P3P that the W3C put together around, like, everybody's going to have their locally stored privacy preferences, and then when you browse the web, it's just going to automatically block stuff, and all this. And I was like, this existed back during IE6?
52:48 What happened?
52:50 Yeah.
52:50 Just let you know a little bit of history. Look up P3P.
52:55 Absolutely.
52:56 All right. I guess to close out the FLoC thing, the thing that scares me about this is: if I really wanted to, I could open up a private window and I could even potentially fire up a VPN in a different location and go visit a place. And when I show up there, no matter how creepy the tracking that place happens to do, I am basically an unknown to that location. Whereas with this stuff, if your browser constantly puts you into this category, well, you show up already in that category. There's literally no way to sort of have a fresh start, I guess. All right. So one thing, maybe you could speak to this since you're right in the middle of it, is they talk about how one of the things that's not mentioned in here... basically they say the regulation does little to incentivize or support EU innovation and entrepreneurship in this space.
53:43 There's nothing in here in this law that specifically is to promote EU based ML companies, I guess. I think, well, does it even belong there or is it okay or what do you think?
53:56 I don't know. Actually, I was a bit confused by that. It does remind me of, well, in general, for a long time, a lot of people have said the EU is a bad place for startups, and I think regulation is actually a big part of that, which sort of goes full circle. A lot of people find that, well, the EU is more difficult: you have to stick to all of these rules, and people actually enforce them, and you're less free and you can't do whatever the fuck you want. So you should go to the US, where people are a bit more chill and it's a bit more common to, I don't know, ask for forgiveness later.
54:26 I think that is definitely kind of a mentality that people have. So, honestly, I'm not sure what an incentive for EU entrepreneurship could even be, for me personally.
54:40 For me, it was a very conscious decision for us to start a company in Berlin, and the EU was a big part of that. I know that maybe I'm not the typical entrepreneur, and we're doing things quite differently with our company as well; we're not your typical startup. But being in the EU was actually very attractive to us. And even recently, as we sold some shares in the company, it was incredibly important to us to stay a German company and be a company paying taxes to the country that we actually incorporated in, and not just become a US company. But I know that's not necessarily true for everyone.
55:14 But are you maximizing shareholder value? That leads to so many wrongs, that short-sightedness, I think. And I think that's great that you have principles about this stuff.
55:25 Yes, but capitalism, capitalism.
55:29 I say that as someone who is also participating in capitalism, sure.
55:36 I don't know. I do think Europe is becoming more attractive as a location for companies, as a location to be based in and start a company. But it is also true that there are a lot of more general things that make it harder to actually run a business here, especially if you compare it directly to the US. And yes, a lot of that is the bureaucracy; it's a lot of the structures not being as developed. It's also that, if you are looking to get investment in your company, it often makes a lot more sense to look in the US for that, which then causes other difficulties, especially if you're a young company and you can't make as many demands. In our case, we could be like, okay, here's what we've got to do; if you're not in that position, you can't do that. I agree with the problems here, but I don't know how this law, this proposal...
56:28 Yeah. What was it supposed to do? Right?
56:31 Yeah. I mean, what would it say? Oh, you're exempt from some of the things if you are, like, coming to the EU?
56:40 Here's how it advantages the EU: all companies that are not EU-based have to follow this law, and there are no rules for the EU-based ones.
56:48 That wouldn't be in line with the principles of it.
56:53 You'd only have to follow half of these things, and then everyone's back to, I don't know, having their mailbox companies all over Europe.
57:02 Yeah, exactly. I guess one other thing I just want to touch on with this law, speaking of what is absent, and this also surprised me a little bit, is that there's nothing in here about climate change and model training and sort of the cost of operating these things. Does that surprise you? Would it belong here? What do you all think?
57:24 I mean, is it high risk? That's what I asked when I saw it wasn't in there at all, not even lightly mentioned. I was like, how many carbon emissions do we have to get to until it's high risk? But evidently they're thinking of the human side of high risk. Although, actually, climate change is also the human side: is the AI going to kill me eventually, or tomorrow?
57:51 Exactly.
57:54 Is it like just 30 years from now when it floods or something?
57:59 Yeah, I was definitely curious to see that they didn't include it, despite all of the work here from the Greens and other parties like them on climate change awareness. When we talk about what is a risk, right? Obviously, it is a huge risk for the entire world, right?
58:17 Yeah. But I guess it also seems like maybe it was too difficult to implement, in terms of how do we police that. What would this then imply? I don't know, would AWS have to report to the EU about who's using what compute?
58:32 Or, I don't know, if the compute exceeds a certain limit, then you can be audited. These could all be potential implications, which again tie into other privacy concerns, because I wouldn't necessarily want AWS to snoop around my compute, but maybe they'd have to deal with it.
58:53 Wait a minute. We just had to reveal that this company did $2 million worth of GPU training, and we thought they were just a little small company. What's going on? Right. Like something like that could come out. But I don't know. Something I had in mind is maybe if you create ML models for European citizens, those models must be trained with renewable energy or something to that effect. Right. You don't have to report it, but that has to be the case. I don't know.
59:21 I don't know.
59:22 It's an interesting question, because the thing is, if you had too many restrictions around this, it would encourage people to, I don't know, train less, which in turn is quite bad. I think what's actually quite important is that if you are developing these systems, you should train, and you should care about what you're training. You shouldn't constantly train these large language models for no reason, just so you can say, oh, look at my model, it's bigger than yours. But on a smaller scale, it's very important to keep training your model, keep collecting data, keep improving it, and to train models that are really specific to your problems, and not just find something you download off the internet, or that someone gives you via an API, that kind of sort of does what you need, and then go, oh, that's good enough, because that's how you end up with a lot more problems. Being able to create data and train the system for your really specific use case, that's an advantage. It's not a disadvantage that you're trying to avoid.
01:00:23 Yeah, that's a really good point.
01:00:25 It could absolutely be in conflict with some of the other things. Like, it has to be fair, but if it uses too much training, that's going to violate the other requirement. So let's do less training, and it's kind of close enough to being fair, right?
01:00:39 Yeah. And then that encourages people to use, I don't know, just some arbitrary API that they can find, which again is also not great. I think a very important takeaway from these really large language models, in my opinion, is not necessarily, wow, if we just make it bigger and bigger, we can get a system that is pretty good at pretty much everything, considering it's never explicitly learned anything about any of these things. I think many people are still seeing it that way. The more reasonable takeaway is: if a model that was just trained on tons of text can do pretty good things with stuff it's never seen before, how well could a much smaller, more specific system do if we actually trained it on a small subset of only what we want to do? And that will be more efficient. I think we should stop hoping that there'll be one model that can magically do, I don't know, your arbitrary accounting task and also decide whether Michael should get a mortgage or not. I think that's a weird idea. You want a specific system, and that requires training. I think training is good.
01:01:47 Yes. Put a good word in with the mortgage AI for me, will you?
01:01:52 Catherine, I think you wanted to make a quick comment on this, and maybe we should wrap it up after that.
01:01:56 Yeah.
01:01:56 I mean, I guess I was just going to reference the "On the Opportunities and Risks of Foundation Models" paper, which I think touches on some of these things. It's this mega paper, and some of the sections are about exactly this problem: why do we believe that we need these foundation models, these extremely large, even-larger-than-the-last-largest models, to do all of the things with? It also has all these other implications, environmental factors being one of them, because obviously when you train one of these models, it's like driving your car around for ten years or something like that. So there are big implications. And I think the point of, can you build a smaller, targeted model to do the same thing? And then the other point of, if we need these big models, are there ways for us to hook in and do small bits of training rather than retraining from the very beginning?
01:02:51 These are the hard problems that I think need solving, maybe not always building a better recommendation engine. So, yeah, if you're looking for a problem, solve some of these problems.
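To make that "hook in and do small bits of training" idea a bit more concrete, here is a minimal sketch using the Hugging Face transformers library: the large pretrained body of the model is frozen and only the small, freshly initialized classification head gets trained. The checkpoint name and the tiny two-example dataset are placeholders for illustration, not anything from the episode or the paper.

```python
# A minimal sketch of reusing a pretrained model and training only a small
# head on top, instead of retraining everything from scratch. The checkpoint
# name and the tiny two-example dataset are placeholders for illustration.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Freeze the large pretrained body; only the freshly initialized
# classification head will receive gradient updates.
for param in model.base_model.parameters():
    param.requires_grad = False

texts = ["great product", "terrible experience"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, return_tensors="pt")

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3
)
model.train()
for _ in range(3):  # a few toy steps, just to show the loop
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The point of the sketch is simply that the expensive part, the pretrained weights, is reused as-is, and only a comparatively tiny number of parameters are updated, which is the kind of training you can do without a data center.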
01:03:02 Yes. Fantastic. This is a big article. The PDF is published. People can check it out. We'll link to it in the show notes.
01:03:08 Yeah.
01:03:11 I just wanted to say, sorry, I've been referring to these as language models. I've been trying to train myself to use the more explicit term, because I think foundation models is a much better way to express this. And I'm so happy this term was introduced, because it finally solves a lot of these problems of everything being "a model", which I think causes a lot of confusion in talking about machine learning.
01:03:30 Yeah, excellent. Haley in the audience asked, what does climate change have to do with this? The reason I brought it up: one, because Europe seems to be leading, at least on the consensus side of things, in trying to address climate change. I feel like there are a lot of citizens there where it's on their minds, and they want the government to do something about it, and the governments do a lot there. So as a law, I thought maybe it would touch on that. Because, Catherine, you pointed out some crazy numbers. Do you want to just reemphasize the cost of some of these things? It's not just like, oh, well, it's like leaving a few lights on.
01:04:05 It's a lot. Huge.
01:04:08 Yeah. And they keep getting bigger. I forget who released the newest one, I don't know if you remember, but they keep getting bigger and bigger. So some of these have billions and billions and billions of parameters. They sometimes have extremely large amounts of data, either as external reference or in the model itself. And Timnit Gebru's paper, the one she was essentially fired from Google for researching, was around, or one part of the paper was around, how much carbon emissions come from training these models. They've only gotten bigger since that paper. And yeah, I may have the statistics wrong, but it's almost as bad as driving a car around with the motor on every day for your normal commute for ten plus years, just to train one model. And it's really absurd, because some of these models are just trained to prove that we can train them.
01:05:09 The artifact isn't even that useful. Whereas with a lot of the BERT models, at least, I think it's good that we just reuse these weights. And I think often in practice, that's what's done.
01:05:19 You take some of these weights that someone else has trained or use these embeddings, and then you train something else on top of that.
01:05:26 Like transfer learning or something like that.
01:05:27 Yeah. Even just, you use these embeddings to initialize your model, and then you train different components using these embeddings. And that is efficient. But it also means that, okay, we're kind of stuck with a lot of these artifacts that are getting stale over time.
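As a rough illustration of the pattern being described, here's a minimal PyTorch sketch: embeddings that someone else has already trained are loaded frozen, and only a small component on top of them is trained. The random tensor stands in for real pretrained vectors, and the vocabulary size and labels are made up for the example.

```python
# A minimal PyTorch sketch of initializing with reused embeddings and
# training only a small component on top. The random tensor stands in for
# vectors someone else has already trained; vocab size and labels are made up.
import torch
import torch.nn as nn

pretrained_vectors = torch.randn(10_000, 300)  # placeholder for real embeddings

class TinyClassifier(nn.Module):
    def __init__(self, vectors: torch.Tensor, n_classes: int = 2):
        super().__init__()
        # freeze=True: the embeddings are reused as-is, not retrained
        self.embed = nn.Embedding.from_pretrained(vectors, freeze=True)
        self.head = nn.Linear(vectors.shape[1], n_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # average the token vectors, then classify with the small trained head
        return self.head(self.embed(token_ids).mean(dim=1))

model = TinyClassifier(pretrained_vectors)
batch_ids = torch.randint(0, 10_000, (4, 12))  # a fake batch of token ids
targets = torch.tensor([0, 1, 1, 0])

# Only the head's parameters get updated; the frozen embeddings stay fixed.
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
loss = nn.functional.cross_entropy(model(batch_ids), targets)
loss.backward()
optimizer.step()
```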
01:05:44 Yeah. So the comment in the audience was, I could train one on my laptop and it uses electricity. True. But it's like 50,000 laptops.
01:05:54 Exactly. And I think training on a laptop is great. For example, we recently did some work to be able to hook into the Accelerate library on the new M1 MacBooks, which made things a lot faster in spaCy. And that was quite cool to see. And we want to do a bit more there, because if we optimize this further, you can actually train a model on your MacBook, and it can be really accurate. And you don't necessarily need all this compute power.
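For that kind of laptop-scale training, a minimal spaCy v3 sketch might look like the following. The two-example dataset and the labels are obviously placeholders; and on Apple silicon, the separate, optional "spacy[apple]" extra is what lets the underlying library use Apple's Accelerate framework, as mentioned above.

```python
# A small, specific text classifier trained locally with spaCy v3.
# The two training examples and labels are placeholders; a real project
# would use its own data. On an M1 MacBook, installing the optional
# "spacy[apple]" extra enables the Accelerate-backed ops.
import random
import spacy
from spacy.training import Example

nlp = spacy.blank("en")
textcat = nlp.add_pipe("textcat")
textcat.add_label("POSITIVE")
textcat.add_label("NEGATIVE")

train_data = [
    ("I love this", {"cats": {"POSITIVE": 1.0, "NEGATIVE": 0.0}}),
    ("I hate this", {"cats": {"POSITIVE": 0.0, "NEGATIVE": 1.0}}),
]

optimizer = nlp.initialize()
for epoch in range(20):
    random.shuffle(train_data)
    losses = {}
    for text, annotations in train_data:
        example = Example.from_dict(nlp.make_doc(text), annotations)
        nlp.update([example], sgd=optimizer, losses=losses)

print(nlp("I love this").cats)  # per-label scores for the trained classifier
```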
01:06:21 Laptop is good.
01:06:23 It is, if you can do it. But a lot of the ones we're actually talking about use these huge models that take a lot. So you can say you don't really care about climate change or whatever, but if you do, the ML training side has a pretty significant impact. And I was unsure whether or not we'd see it in there. But, yeah, I guess it makes sense that it's not there. Who knows?
01:06:44 They did say this is a foundation for potential future AI laws in Europe.
01:06:50 Yes. And I can also appreciate that, okay, they didn't want to tie everything together.
01:06:54 Even from a political perspective, I think, if you are proposing this pretty bold framework for regulation, tying it into too many other topics can easily, I don't know, distract from the core point that they want to make. So I think it might actually have been a big decision.
01:07:10 Yeah, absolutely. All right, ladies, this has been a fantastic conversation. I've learned a lot and really enjoyed having you here. Now, before we get out of here, since there are two of you and we're sort of over time, I'll just ask one question of the two of you. So if you're going to write some Python code, what editor are you using these days? Katherine?
01:07:28 First, I'm still in Vim. Am I old? I think I'm old now.
01:07:33 You're classic. Come on.
01:07:37 Classic model.
01:07:41 No, I'm quite boring. Visual Studio Code. I've been using that for years.
01:07:46 It's very nice.
01:07:47 I think it's probably the most common answer you get, and certainly using Vim is a lot edgier and cooler.
01:07:55 Maybe for that reason alone, actually.
01:07:59 Not even a window. It just appears on this black surface.
01:08:05 He programs that way, and I'm like, okay, if it makes you happy, you do you. Some people just like to suffer.
01:08:13 That's okay.
01:08:17 No offense. Like, I don't know, this was a joke. No offense to anyone who's listening.
01:08:29 All right, Catherine, Ines, thanks for coming back on the show and sharing your thoughts.
01:08:34 Yeah, thanks for having me.
01:08:36 Thanks for having me. Bye.
01:08:39 This has been another episode of Talk Python to me. Thank you to our sponsors. Be sure to check out what they're offering. It really helps support the show.
01:08:47 Take some stress out of your life. Get notified immediately about errors and performance issues in your web or mobile applications with Sentry. Just visit talkpython.fm/sentry and get started for free, and be sure to use the promo code 'talkpython', all one word. Add high-performance multiparty video calls to any app or website with SignalWire. Visit talkpython.fm/signalwire and mention that you came from Talk Python to Me to get started and grab those free credits. Want to level up your Python? We have one of the largest catalogs of Python video courses over at Talk Python. Our content ranges from true beginners to deeply advanced topics like memory and async. And best of all, there's not a subscription in sight. Check it out for yourself at training.talkpython.fm. Be sure to subscribe to the show: open your favorite podcast app and search for Python. We should be right at the top. You can also find the iTunes feed at /itunes, the Google Play feed at /play, and the direct RSS feed at /rss on talkpython.fm.
01:09:49 We're live streaming most of our recordings these days. If you want to be part of the show and have your comments featured on the air, be sure to subscribe to our YouTube channel at talkpython.fm/youtube. This is your host, Michael Kennedy. Thanks so much for listening. I really appreciate it. Now get out there and write some Python code.