How can we reach international consensus on AI regulation?
In this episode of International Horizons, RBI director John Torpey interviews Gabriele Mazzini, a lawyer and officer of the European Commission and an expert in AI regulation. Mazzini discusses how European countries have found agreement on the definition of AI and on how to regulate it. He also stresses that fears of an apocalyptic AI revolution taking over humankind are not well grounded. Finally, he comments on the United States and how it differs from Europe when it comes to regulating AI, while acknowledging that there has been significant progress in legislation in this area.
John Torpey
Artificial intelligence, sometimes called artificial general intelligence, or AI, seems to be in the news everywhere. It's awash with both triumphant claims of new capabilities and fears of novel threats to our well-being. Indeed, the current issue of The New Yorker magazine asks whether AI will "extinguish the human race." These apocalyptic worries have, of course, led to calls for government regulation. But many people doubt that our elected representatives know enough about AI to regulate it effectively. The Europeans, however, have begun to articulate a policy. But what is to be done? My name is John Torpey, and I'm director of the Ralph Bunche Institute for International Studies at the Graduate Center of the City University of New York. Welcome to International Horizons, a podcast of the Ralph Bunche Institute that brings scholarly and diplomatic expertise to bear on our understanding of a wide range of international issues. We're fortunate to have with us today Gabriele Mazzini, an officer of the European Commission since August 2017, who has focused on legal and policy questions raised by new technologies. In that role, he has contributed to shaping and implementing the policy and regulatory initiatives of the European Commission on artificial intelligence since their inception, with the adoption of the Communication on Artificial Intelligence for Europe in April 2018. He co-authored the White Paper on Artificial Intelligence in February 2020, and designed and co-drafted the proposal for the Artificial Intelligence Act in April 2021. He holds an LLM from Harvard Law School, a PhD in Italian and comparative criminal law from the University of Pavia, and a law degree from the Catholic University in Milan. And he's qualified to practice law in both Italy and New York. So thanks for joining us today, Gabriele Mazzini.
Gabriele Mazzini
Good morning. And thank you so much, John, for having me. It’s a pleasure to be with you.
John Torpey
Great, thanks. Thanks for doing this. So maybe you could just begin by telling us what we should all understand under the term artificial intelligence. I mean, everybody's heard it by now, but who knows how many people really get what it's all about? Could you clarify that a bit for us?
Gabriele Mazzini
Yeah, that's certainly one of the most challenging questions that we had to deal with when we started thinking at the European Commission about regulating AI. As you may know, any piece of legislation fundamentally has to define its scope of application, so that's a really key choice to be made, and definitions are essential. And of course, when we ventured into this process, we realized how difficult it is to define artificial intelligence. There are tons of definitions, and many people have different views and different perspectives. Actually, even within the scientific community there is not really a consensus on what is AI and what is not. And as you know, the concept has been around for a long time; the term was used for the first time in the fifties. Since then the technology has evolved tremendously, so much so that the concepts referred to by this term back then cannot be the same as today. So this was the background against which we had to think about definition number one: artificial intelligence, how do we define it? And there we made a clear choice: any definition in the legal framework is a legal definition for the purposes of the Act. It is not meant to be a definition that would hold true for the scientific community or even the general public; it is a definition for the purposes of regulating certain uses of this technology. At the same time, although it serves this fairly limited objective, given the impact of the legislation, and notably its impact outside the EU, we wanted to make sure that we took, as much as possible, a concept and definition of AI that would be aligned with, or gather, some sort of international consensus. That is why we started by looking at the work of the OECD. The OECD in 2019 had developed principles for artificial intelligence, and it had a definition there. That definition, although meant primarily for a policy document, which is not binding, had already received consensus beyond the EU. So that's where we started, but we made some tweaks to that definition. Notably, for instance, we added a reference to content as a specific output of an AI system. The initial OECD definition said that the output of an AI system should be recommendations, decisions, or predictions. We felt, for instance, that we needed to make sure we included generative AI, so that things like ChatGPT that just produce text should also be considered AI. We also realized that that definition lacked the legal certainty needed for a legal text. So we added, for instance, an annex referring to certain types of techniques that need to be used for a system or a piece of software to be considered AI: notably, we refer to machine learning, to logic- and knowledge-based approaches, and to statistical approaches. To make a long story short, the objective we had was to classify as AI those systems that were not developed according to traditional programming techniques. This was very clear for us: we didn't want to regulate through the Act any sort of automated system, but only those systems that present a certain degree of complexity.
Because of that complexity, and because of the difficulty of understanding and tracing how such a system behaves, we needed more rules. So while the regulation focuses on certain types of automated systems, it does not intend to cover all of them. And ultimately, the definition we ended up with in the final text remains very much aligned with the OECD definition, which was itself revised just a few weeks ago to align with the text of the Act.
John Torpey
So, as I said in my little introduction, there is this kind of apocalyptic reaction to a lot of this technology. And what you just referred to, sort of statistical outputs and text, all sounds pretty mundane. So, is AI going to, what did I say, extinguish human life on Earth, or whatever The New Yorker is asking? I mean, I can't think of a technology in the recent past that was seen as so dangerous, other than, like, nuclear weapons. So maybe you could just describe what has to be regulated. What are the fears about this technology that are leading people to be so concerned?
Gabriele Mazzini
So I have to say, in this respect, I'm fairly skeptical, actually. I'm very skeptical about these claims around the risk of extinction. And this is something we never really considered as we went through the process of the Act. The Act was born out of the need to address very concrete risks for individuals around safety, around health, around fundamental rights. Just to be very concrete: we want to make sure that if we deploy an AI system in a car, or in a drone, or in a medical device, that system performs consistently, properly, and safely for the patient, for the driver, and so on. Here the legally protected interest is really about making sure that the system is safe, that people don't get into accidents, and that their health is protected. The same goes for fundamental rights. We want to make sure that as banks, for instance, deploy systems for assessing the creditworthiness of individuals, or cities deploy algorithms to determine which school kids should be sent to, or AI systems are used in the workplace to select the resumes or applications of candidates, those systems, again, perform consistently and do not discriminate against people on protected grounds. Ultimately these systems are assisting human decision-making, and in some cases they may actually be performing certain functions completely autonomously, but we know they may not be working very well. That is where we wanted to come in and ensure that these systems are properly trained, properly documented, traceable, and transparent. So in that sense, we never considered the risk of these systems getting out of control or leading to some sort of existential threat. It is true that this matter has at some point entered the debate around the AI Act, I would say mostly in relation to the latest developments around foundation models, what we actually call general purpose AI models. These are the models that have increasingly become known after the development of ChatGPT: big models that almost seem to show a level of intelligence that has not been seen before, that can express themselves in a human-like fashion and have coherent conversations with people. This certainly attracts a lot of fears about these systems being able to do more than they probably can. But in fact, we still believe that these are tools that by themselves do not lead to this sort of existential risk. Maybe some controls are needed as the systems become more powerful, but I find that pushing the discussion, especially in the general public, toward existential risks prevents us from having a more grounded conversation around the actual risks, and indeed the potential benefits. And I think we need more AI to solve some of the challenges that we have, rather than too much fear around it.
John Torpey
So, I don't want to spend all our time talking about this, but I'm sort of curious why you think these apocalyptic fears have emerged. I mean, what is it about the technology that makes people think that these machines can be smarter than we are?
Gabriele Mazzini
I think there may be, to be frank, two camps on this topic. Certainly, there are some scientists: you may be aware there was a letter signed by scientists, very reputable scientists, who expressed concerns about this technology becoming very powerful and called for a halt on the development of those technologies. I think those claims need to be taken seriously, although certainly other scientists disagreed. So again, we don't have an exact consensus on what should be done, but certainly those concerns deserve some attention. On the other side, I think there was also some positioning by industry leaders around those same topics, saying that, indeed, this technology can be extremely dangerous and therefore we need very tight controls. And this is where I'm personally a bit more skeptical, in the sense that it's a bit like saying, "You know, really, trust me that I will be able to control this technology, and we should not actually disseminate this technology too much, because this may lead to risks that we cannot control." So when I hear some of these claims, I am a bit afraid that certain companies want to use these fears to create some sort of regulatory moat around the technology: we need more rules, or even an ex-ante certification system, for instance licensing of these models before they are released on the market, which essentially creates more difficulty for other players to come in. So this is a bit my feeling; I feel we had a bit of both. And ultimately, the claims about whether this technology really represents an existential threat are not really clear to everybody. It's something that we should certainly explore further, but it should not prevent us from taking action now about the risks that we do see, and from keeping monitoring what's happening, while not taking excessive action or preventing ourselves from developing a technology that we think can also be beneficial.
John Torpey
Right. So, in response to some of these fears, and in response to the general concerns that you outlined at the beginning, the EU just this past week has adopted this AI Act, which, if I understand correctly, you had a role in drafting earlier in its life. So you may know more about it than just about anybody. So maybe you could tell us: what has the EU done? What's in this AI Act? And what does it tell us about what we should be doing in the future?
Gabriele Mazzini
Yeah, so the Act is a really quite complex piece of legislation, in the sense that it is a horizontal piece of legislation, meaning that it doesn't concern only one sector; it concerns basically all sectors of society and the economy, from law enforcement to financial services, to employment, to education, to products. So it's really broad. In that sense, beyond creating substantive rules on what companies should be doing, it also creates a governance system: how those rules should be enforced and who should be responsible, taking into account, of course, the fact that the EU is a supranational, regional organization that also relies on the legal orders of the member states for enforcement. I would say one of the main ideas behind the Act is the risk-based approach. The risk-based approach means that the law doesn't want to regulate the technology as such, AI as a technology, but wants to regulate the specific uses of the technology. Depending on how the technology is deployed and used, the regulatory response should be adapted to the type of risk that the technology poses. That's why we developed three layers of risk and three sets of rules. The first type of risk is when the use of the technology generates risks that can be considered unacceptable in the context of that use, considering, of course, our values and principles. An example would be a social scoring system: systems used to create social scores for individuals should be banned, because we don't think that kind of system has any benefit for society. So essentially, when the risks created by certain systems are considered unacceptable, those systems should be prohibited. One notable example is the use of biometric systems for the real-time identification of persons in publicly accessible spaces by law enforcement authorities. That is a specific use case that is prohibited, but with some exceptions: there are cases where the use of real-time facial recognition by law enforcement authorities in publicly accessible spaces will be allowed, for instance in the case of a terrorist attack or things like that. But it cannot be used outside those kinds of restrictions. Then there are situations where the risk is considered high; that's why we talk about high-risk AI systems. In those cases, the use of the system, while creating risks, also leads to some benefits in terms of efficiency and of enabling or supporting human beings in performing certain functions. In those situations the systems are allowed, so it's possible to use them, but they are subject to certain requirements. Those requirements relate, for instance, to the datasets (we need good-quality datasets), to the transparency of the system, to human oversight, to cybersecurity, and to robustness. These are generally requirements on which there's quite a large consensus in the community for AI to be trustworthy. So basically, what we're asking is that when a system is classified as high risk (an example would be a system that is part of a medical device, or a system used by banks to give loans, or by schools to assign kids to educational institutions), that system must comply with those requirements. And at the same time, it must be certified.
So before they are placed on the market, there is an ex-ante test, or certification, which guarantees that once the system is deployed and used, it is safe. And then we have a third category of risk, related to the transparency, or I would say lack of transparency, of certain systems. Here the obligations are much lighter, essentially disclosure obligations. Deployers have to inform people, for instance, that they are interacting with a chatbot and not with another human being. It's a matter of human dignity: I need to know if I'm interacting with a human or a chatbot. It's quite simple, but we need to put disclosure obligations in place. Another example is generated content. As we know, in the last few months and years, with the advent of generative AI and this ability of AI to reproduce content that closely resembles real content, it has become important to distinguish what is real and what is not. So we want to make sure that such content is specifically labeled, and maybe even, in certain cases, that tools are developed to detect whether content is artificially generated or not. So essentially, this goes beyond a labeling exercise: although the technology there is not 100% reliable, we also want to encourage tools that make it possible to verify through technical means whether, for instance, a certain image has been artificially generated or not.
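[Editor's aside: to make the three risk tiers concrete, here is a minimal, purely illustrative sketch in Python of the kind of classification logic Mazzini describes. The tier names follow the conversation, but the use-case mapping and the obligation labels are hypothetical simplifications for illustration, not the Act's actual legal taxonomy.]

```python
# Illustrative sketch only: a toy mapping from example use cases (as mentioned
# in the interview) to the three risk tiers Mazzini describes. It is NOT the
# AI Act's legal classification.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"                                   # e.g. social scoring
    HIGH = "allowed, subject to requirements and ex-ante certification"
    TRANSPARENCY = "allowed, subject to disclosure obligations"   # e.g. chatbots, generated content
    MINIMAL = "no additional obligations under the Act"           # e.g. spam filters


# Hypothetical example mapping, drawn from use cases named in the conversation.
EXAMPLE_USE_CASES = {
    "social scoring of individuals": RiskTier.UNACCEPTABLE,
    "credit scoring by a bank": RiskTier.HIGH,
    "assigning kids to schools": RiskTier.HIGH,
    "AI component of a medical device": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.TRANSPARENCY,
    "AI-generated images": RiskTier.TRANSPARENCY,
    "spam filter": RiskTier.MINIMAL,
    "predictive maintenance": RiskTier.MINIMAL,
}


def obligations_for(use_case: str) -> str:
    """Return the (simplified) regulatory response for a given use case."""
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"


if __name__ == "__main__":
    for case in EXAMPLE_USE_CASES:
        print(obligations_for(case))
```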
John Torpey
So you used a term early on in that answer, I think you said this was a horizontal piece of legislation, and that it basically affects AI across the board of the economy and people's lives, however you would put it exactly. And it reminds me of an economic historian named Robert Gordon, who was writing about growth in modern society, in the United States in particular, and made the point that growth has, relatively speaking, slowed since the creation of what I think he called general purpose technologies like electricity and fossil fuels, which really are what powered the industrial revolution. So I'm sort of curious: do you see AI as having that kind of all-pervasive significance for our lives, and in that sense as itself a kind of general purpose technology that's going to transform the world we live in?
Gabriele Mazzini
Absolutely. I think AI is a general purpose technology, no doubt about that. And this is why, among other reasons (there are a number of technical reasons, but also from this angle), we wanted to take a horizontal approach: we want to make sure that, as much as possible, this technology is regulated equally across sectors, precisely because it can impact so many sectors. Why should we regulate, for instance, AI in education differently than AI in the workplace, if our goal is the same: ensuring, for instance, that AI does not discriminate, or that AI leads to the same productivity gains? So absolutely, I agree with that statement that AI is a general purpose technology. But also for that reason, it was quite clear for us that we don't want to focus on the technology per se, but on the use case, and only tackle those cases where AI creates a risk. For instance, if I think about systems for predictive maintenance, or spam filters, this is also AI. But from our perspective, these types of systems didn't require any specific rules beyond what we already have, and probably most of these systems are already quite regulated today; if I think about predictive maintenance, I can imagine there are already rules applicable to it. We believe this is the majority of use cases: AI that is, in fact, not regulated by the Act. And indeed, we are also talking about uses that we probably can't even imagine today.
John Torpey
Fascinating. So now I'd like to ask you a sort of comparative question. I mean, I know that as a European Union official you're not eager to tell the United States what to do. But I'm curious what you would say about the differences in approach so far. I'm more or less a layperson with regard to all this, and one hears stories about the United States as a kind of Wild West, unregulated, that sort of thing. And I wonder what you think you're doing and how it differs from what's going on in the States. You don't have to recommend to us what to do, but just explain the differences as you see them.
Gabriele Mazzini
Yeah. Thanks, John. I appreciate that caveat. You may be aware that the US adopted, in October or November last year, an executive order on AI, which was certainly a very interesting development in the space of AI regulation, not only in the US; I think it really attracted a lot of attention worldwide. At the time, we ourselves were also in touch with colleagues in the US to better understand what they were thinking, because we were still in the process of finalizing the Act. If I have to contrast the AI Act and the executive order, I would definitely say that the first and most important difference is that the AI Act is a law passed by our legislative authorities in the EU, and it is a binding law, as I said before, with substantive rules, a whole governance system, and an effective sanctioning mechanism to ensure that the rules are complied with. The executive order, while of course expressing the authority of the President, and in that sense binding, is not a law of Congress. I'm not an expert in US constitutional law, but certainly its binding force is not the same as that of a law of Congress. Let's say the soul of this order is essentially a policy direction given to federal agencies when it comes to dealing with AI within the context of their mandates and limits. Essentially, my understanding is that this executive order is an encouragement to federal agencies to explore how they can leverage existing US legislation, how it applies to AI systems. There is, of course, also something more than that: for instance, there is an instruction to NIST to issue guidance around the testing of AI systems. But there is also an interesting additional element, which certainly influenced our thinking as we were discussing the Act. This is the part of the executive order which relies on the Defense Production Act to compel companies developing certain types of foundation models, so-called dual-use foundation models, that is, models that can have an impact on national security, national economic security, and public health and safety, to disclose to the government that they are performing training runs on these powerful models, and to disclose the results of the testing done on those models, so red-teaming or safety tests. This is an area where the executive order, if you want, goes beyond directing action to the federal agencies and indeed directs action also at private actors. And I mention this because, in fact, during the final stages of the negotiations of the Act, a new chapter was added to the Commission's initial risk-based approach, which I just discussed, about establishing rules for general purpose AI models. These are exactly the same type of technological developments that in the executive order are called foundation models. And we set similar types of rules, in part similar, in part a little bit different, in the sense that we created a more articulated set of rules. Instead of just requiring model providers to inform authorities about training runs and disclose test results, we established two tiers of rules, or two sets of rules: one for all foundation models, essentially around transparency.
And then a second set of rules for those models that are trained beyond a certain compute capability, which have to engage in more extensive risk assessment and risk management. So I think it's interesting to contrast not only the overall approach, where in one case we have a law and in the other case we have executive action, but also to go a bit deeper into those areas where there may be some overlaps, and to contrast the EU approach, which also in the case of foundation models tends to be a more comprehensive type of legislation, vis-à-vis the US approach, which is, I would say, more limited in some respects. In closing this point, I would say this also reflects a bit the different spirit with which the two jurisdictions and continents look at regulation, and notably digital regulation. I think the EU is not shy about coming up with fairly comprehensive pieces of legislation, whereas in the US this seems to take more time, with maybe more of a wait-and-see approach. I think both have advantages and disadvantages in some respects.
John Torpey
Very diplomatic. So let me just wrap up with a final question, which is simply: what's going to be the next big challenge in this area? What do you think is the next thing that's really going to require a lot of our attention?
Gabriele Mazzini
From the point of view of the EU, I think we need to focus on implementation. As I just said, the EU in the last five years has come up with quite a large number of pieces of legislation in the digital field; some industry representatives talk about a tsunami of laws in the last five years. On top of the AI Act, which we just talked about, we have the Digital Services Act, the Digital Markets Act, legislation on data sharing, legislation on cybersecurity. Not to mention, of course, what we had before, which is also a fairly comprehensive and very impactful piece of legislation for the digital sphere, the General Data Protection Regulation. As you have probably noticed, in the EU we have come quite a long way in putting all these pieces together. Now it's time for us, in my view, to really look at how all these pieces of legislation interact with each other and how they should be properly implemented. Because ultimately, as a lawyer, I'm very keen on rules, certainly, but I'm also keen on very clear rules. Why? Because the moment rules are many, and they may overlap in some respects, the fundamental risk is that we create a degree of uncertainty that is not useful, certainly not useful for obtaining the type of protections that we want to achieve. So of course it's important to have rules, but in my view it's equally important to have rules of quality. This is where I think we should focus when it comes to the EU in the next few years. At the same time, more generally about AI, I think we probably also need to focus on more international alignment. This is a process that is already happening: the G7 has been working a lot on AI as well, thinking about developing codes of conduct, notably on generative AI. And as we look at the next big challenge, I think it's important that governments keep talking to each other and really find common lines on trying to govern this technology, which, going back to one of your initial questions, certainly may raise some concerns, even serious concerns, for certain actors. But it is also a reality we have to deal with, and I think we don't want just to run away from it or be too scared to be realistic. And the best way to be realistic is to engage with those who develop these technologies, to study and understand what the best solutions are, to look at the evidence, and ideally to do this collectively, in an aligned fashion.
John Torpey
Right. Well, thank you. It sounds like the old adage "the only way around is through" applies here as it does in so many areas. So that's it for today's episode. I want to thank Gabriele Mazzini for his insights about the European regulation of AI. I also want to thank Oswaldo Mena Aguilar for his technical assistance and to acknowledge Duncan Mackay for letting us use his song "International Horizons" as the theme music for the show. This is John Torpey, saying thanks for joining us, and we look forward to having you with us for the next episode of International Horizons.