Artificial Intelligence Is Here Series: The Global Effort to Regulate Artificial Intelligence (DDN2-V25)

Description

This event recording maps the notable steps taken around the world in regulating the use of artificial intelligence, from the European Union's introduction of the world's first comprehensive proposed legislation on AI, to efforts made by global standard-setting bodies to develop guidance on its responsible use.

Duration: 01:29:29
Published: July 12, 2022
Type: Video

Event: Artificial Intelligence Is Here Series: The Global Effort to Regulate Artificial Intelligence



Transcript

Transcript: Artificial Intelligence Is Here Series: The Global Effort to Regulate Artificial Intelligence

[The CSPS logo appears on screen.]

[Neil Bouwer appears in a video chat panel.]

Neil Bouwer, Canada School of Public Service: Good afternoon, colleagues. I'd like to give you a warm welcome to this learning event today on behalf of the Canada School of Public Service. My name is Neil Bouwer. I am a vice president at the school. It's my pleasure today to be moderating this learning event. My deputy minister sends his regrets. Before I do anything else, though, I am in Burton, New Brunswick today, which I challenge you to find on a map, but one thing I can tell you is that I am on the traditional and unceded territory of the Wabanaki peoples, the Miꞌkmaq and the Wolastoqey, and so I think it's important to reflect on that. Normally I'm in other parts of the country, but today that's where I am, and so I just wanted to take a moment to recognize that. I'm sure many of you are in different places in the country, also on traditional Indigenous territory, so please take the time to acknowledge that.

Today's event is actually part of a series, our Artificial Intelligence is Here series. It's not a series about artificial intelligence is coming or artificial intelligence is cool. It is artificial intelligence is here because the reality is that artificial intelligence is coming into our lives whether we like it or not, our personal lives, our professional lives, so artificial intelligence really is here. We are offering this series with the Schwartz Reisman Institute for Technology and Society. It is a research and solutions hub based at the University of Toronto and it is dedicated to ensuring that technologies, like artificial intelligence, are safe, responsible, and harnessed for good, so we're very privileged to have the support of Schwartz Reisman.

The events so far in the series have covered the basics of artificial intelligence, citizen consent, decisions about when to use AI in government, the economic impacts and issues around artificial intelligence, as well as, importantly, bias, fairness, and transparency. All of those are available on the YouTube channel of the Canada School of Public Service, so I encourage you to look for them there. You can also find them on the learning platform of the Canada School.

We're going to start the event today, like some of the others, with an overview presentation, and that's going to come to us from Phil Dawson. He is a policy advisor at the Schwartz Reisman Institute, and he is an expert in artificial intelligence. He's going to map out and introduce us to some of the steps being taken around the world to regulate artificial intelligence, as well as the issues at stake, both positive and negative, and some of the instrument design and compliance challenges around artificial intelligence, so I'm really looking forward to that.

Before we go there, I just want to say that simultaneous interpretation is available, so for those of you that would like to take advantage of that, please do so. You can do that by following the instructions that were in the reminder email for this event and that includes a conference number so you can listen to the event in the official language of your choice.

Also, we will be soliciting your input. After Phil gives his introductory remarks, we're going to have a panel, and we're going to open it up for audience questions. Throughout the event, you'll see on your web interface a button in the top right-hand corner with a little hand on it. Don't worry, you won't be identified or called out or put on the spot, but you can click on that button to raise the little hand, and then you'll have a text box where you can type your question. That will go to the organizers of the event, and I will get to your question, so I'm hoping we'll be able to do that.

Let's start it off with that overview from Phil. Let's take a look at that. It's only about 20 minutes long, so let's let it roll.

[The video panel fades into a video titled "The global effort to regulate AI."]

[The words fade to Phil Dawson, Policy Lead, Schwartz Reisman Institute. He stands in front of a blue background, with representative images and graphics appearing at his left.]

Philip Dawson: Hello. My name is Philip Dawson and I'm the policy lead at the Schwartz Reisman Institute for Technology and Society.

[Text reads "How should governments regulate AI?"]

I'm pleased to be here to talk to you about the global effort to regulate AI. Today we will begin to unpack some of the leading initiatives by international organizations, multilateral bodies, and more recently, by governments, that seek to better understand AI, the opportunities and challenges it brings, and to design ways of ensuring its responsible use in society. So, why did this become a global effort?

[An image shows a sheet of computer chips. Text reads "AI can create significant benefits for social and economic progress."]

Well, we've come to realize that artificial intelligence can create significant benefits, both for social and economic progress.

[As Phil lists AI's uses, they appear on screen.]

AI's predictive power can be used to enhance drug discovery, it can be used in precision agriculture to improve farming, and some companies are using AI to decarbonize the manufacturing of cement and steel. AI can also help create more efficient energy systems for buildings or sustainable transit in cities. AI-based personalization of financial services can enhance financial inclusion, yielding new opportunities for the unbanked. International organizations believe that AI can play a critical role in accelerating progress on the UN Sustainable Development Goals. These are just some of the positive developments associated with the commercial use of AI.

[Text reads "AI advances are likely to produce a 26% increase in global GDP, or $16 trillion USD, by 2030."]

In fact, a recent study by PricewaterhouseCoopers noted that AI advances are likely to produce a 26% increase in global GDP, or $16 trillion US dollars, by 2030. So, as you can see, there's a tremendous opportunity associated with investments in AI research and commercialization.

[A new image shows a precarious stack of geometric shapes. Text reads "AI can also create and perpetuate harms." As Phil lists harms, they appear as text.]

At the same time, we've also become more aware of AI harms. Some of the most prominent harms include the proliferation of harassment, disinformation, and divisive and harmful content online. AI also has the potential to reinforce discriminatory outcomes in society, where biased data or models risk excluding protected groups from financial services, hiring processes, access to public services, or even the administration of justice. AI is also changing the future of work and the skills that individuals will need to participate meaningfully in the digital economy. Some companies have made progress developing large, powerful language models, such as GPT-3, which can behave and evolve in unpredictable ways, while controls for these systems lag behind.

There are a number of larger-scale harms and dangers of global importance that we will not spend a lot of time discussing today, related to state use of AI, such as state-sponsored surveillance or the use of lethal autonomous weapons. These are some of the very serious concerns that the malign use of AI also triggers.

[An image shows dots and lines connected in a large frame around the globe. Text reads "Principles and frameworks for AI regulation."]

Global efforts to regulate AI seek to address one fundamental question.

[Text reads: "How can we guide the design, development, and use of AI systems to harness its benefits while eliminating harms and minimizing risks?"]

How can we guide the design, development, and use of AI systems to harness its potential benefits while eliminating harms and minimizing risks? This will be a familiar question for some of you. It is the subject of many workshops, conferences, and academic papers that regulators and stakeholders across the world have been trying to unpack for years now.

[A dense graphic shows 50 countries' flags, and tiny text beside each. It's titled "50 National Artificial Intelligence Policies."]

To give you a sense of the scale of this effort, since 2017, over 50 countries have developed national AI strategies. Many of these strategies aim to drive responsible development of AI through the coordination of policies, investments into research, talent development and training, and public discussion about the ethical impacts of AI on society. Canada was actually the first country to develop a national AI strategy, with the Pan-Canadian AI Strategy in 2017. More recently, countries developing national AI strategies have also focused on what regulatory approaches for AI might be necessary to protect the considerable investments that both the public and private sectors have made into AI research and commercialization.

[A new graphic reads "International and multilateral AI initiatives." It shows logos for Unesco, OECD, GPAI, digital cooperation, and Freedom Online Coalition.]

At the same time, there has been a growing number of international and multilateral AI initiatives. These are just a few of the key international organizations that have played leadership roles in developing international norms and soft law that will help guide domestic efforts by governments.

[As each organization is mentioned, its logo fills the screen.]

UNESCO, a specialized agency of the United Nations comprised of 193 countries, recently adopted a recommendation on AI ethics. The Freedom Online Coalition, a multilateral alliance of states, which Canada currently chairs, has published a number of statements and calls to action to guide the responsible development of AI systems, both on social media and in the marketplace at large. In 2019, the OECD developed its own recommendation on AI to ensure its trustworthy design and development. Through the G7, the governments of Canada and of France spearheaded the founding of the Global Partnership on AI, a new multilateral alliance of 25 countries now committed to implementing the OECD AI principles through large scale applied research projects.

[A new graphic shows the OECD's values-based principles. Beside each principle sits a representative icon. They read: "Inclusive growth, sustainable development and well-being, Human-centred values and fairness, Transparency and explainability, Robustness, security and safety, Accountability." As each is mentioned, it highlights.]

Let's take a closer look at the OECD AI principles. They talk about the need to ensure inclusive growth, sustainable development, and wellbeing; human-centred values and fairness, which are described as including privacy and data protection, respect for human rights, the rule of law, and democracy; and principles of transparency and explainability, such as the right for individuals to know when they're interacting with an AI system, or to understand how an AI system has been used in a decision that impacts their rights, financial wellbeing, or health. Other principles highlight the importance of the robustness, security, and safety of an AI system. Can it withstand adversarial attacks? How well does it perform across different data sets and domains? What are the potential impacts of the system on physical safety? The OECD AI principles were the first series of principles signed onto by governments.

[Text fades in, reading: "First series of principles signed onto by governments. Endorsed by the G20, OECD member states, and other countries."]

They were endorsed by the G20, and beyond OECD member states, a number of other countries across the world have adhered to these principles, including Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania.

[A new graphic shows "AI" sitting in a circle of stars. It reads "Independent High-Level Expert Group On Artificial Intelligence, set up by the European Commission. Ethics Guidelines for Trustworthy AI."]

In parallel with the development of the OECD AI principles, the European Commission established a high-level expert group to develop a series of ethical guidelines for trustworthy AI. The ethics guidelines for trustworthy AI set out three components for trustworthy AI.

[A new graphic shows a complex framework for trustworthy AI under three main categories: Lawful AI, Ethical AI, and Robust AI. As each element of the framework is mentioned, it highlights.]

AI must be lawful, it must be ethical, and it must be technically robust. The guidelines are founded on four ethical principles and seven key requirements, which really mirror the content of the OECD AI principles. But the European Commission effort went a little bit further, developing an assessment list designed to enable organizations to implement the guidelines in their internal governance policies and procedures.

[A new graphic shows a series of colourful dots forming a word map.]

We've just reviewed two of the most significant AI principles and ethics framework efforts undertaken internationally to date, from the OECD and from the European Commission High-Level Expert Group.

[Text reads "Since 2019, there have been over 170 sets of AI principles developed by governments, academia, industry and civil society."]

Since 2019, though, there have been over 170 such efforts from governments, academia, industry, and civil society.

[A complex chart shows concentric circles, each representing categories of AI principles. Colour coded dots of varying sizes sit all around the circles, each line of them representing an organization's declaration of principles.]

Many of these are reflected on this figure from the Berkman Klein Center for Internet and Society, which demonstrates how, despite the proliferation of AI principles, there has actually been remarkable consistency across approaches. In some ways, the development of AI principles was, in fact, the easy part. The real challenge is for governments to take these principles and translate them into effective regulatory systems for AI.

[Text reads: "The real challenge is for governments to take translate these principles into effective regulatory systems for AI." The text fades away to a chalkboard full of complex equations. Text fades in reading "Translating principles into regulation: Lessons from Europe, the UK and US."]

Next, we're going to look at some of the regulatory efforts coming out of the European Commission, the United Kingdom, and the United States.

[A graphic with a binary code background shows the European Commission logo.]

Since the General Data Protection Regulation, or GDPR, came into force in 2018, the European Commission has pursued an aggressive regulatory agenda. Since December 2020, the Commission has tabled four distinct acts that have interrelated impacts on the use of AI systems.

[The acts list out: "General Data Protection Regulation, Data Governance Act, Digital Markets Act, Digital Services Act, Artificial Intelligence Act."]

We're going to focus on aspects of the Digital Services Act and the Artificial Intelligence Act. However, both the GDPR, which includes rules for the processing of personal data by automated decision-making systems and the Data Governance Act, which seeks to open up access to high value data sets in the EU, should also be noted as part of the Brussels effect on the global digital regulatory landscape.

Let's take a closer look at the EU's proposed Artificial Intelligence Act, which is the first comprehensive regulatory proposal for AI in the world.

[As he lists elements, key phrases are listed in text.]

The EU AI Act takes a risk-based approach aimed at protecting the safety, health, and fundamental rights of European citizens. The obligations of AI developers and users are defined by a particular system's risk category, and there are three broad categories defined in the act.

[Text reads "EU AI Act: Risk Categories. 1. Low/Minimal risk, 2. High Risk, 3. Unacceptable risk." As he details each category, key spoken details list under each numbered heading.]

Developers or users of low and minimal risk systems will have basic transparency reporting obligations. These include AI applications currently used throughout the business world, such as AI chatbots and AI-powered inventory management. High-risk systems must undergo pre-market conformity assessments, attesting to compliance with all of the act's obligations. High-risk systems include AI systems incorporated into critical infrastructure or medical devices, those used in education, or those which pose a risk to health, safety, and fundamental rights more broadly, such as remote biometric identification, credit scoring, or even AI systems used to make hiring decisions. There is also a category of systems that are prohibited outright. These AI systems are deemed to pose unacceptable risks to society. They include systems that manipulate through subliminal techniques, systems used for social scoring, and real-time remote biometric identification systems used in public spaces by law enforcement.
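To make the tiering concrete, here is a minimal sketch in Python of how the act's three categories might map to obligations. The tier names, use-case labels, and mapping are illustrative assumptions drawn only from the examples above, not text from the act.

```python
from enum import Enum

class RiskTier(Enum):
    LOW_MINIMAL = "low/minimal risk"    # basic transparency obligations
    HIGH = "high risk"                  # pre-market conformity assessment
    UNACCEPTABLE = "unacceptable risk"  # prohibited outright

# Hypothetical use-case labels mapped to the act's broad tiers,
# following the examples given in the talk.
EXAMPLE_TIERS = {
    "chatbot": RiskTier.LOW_MINIMAL,
    "inventory_management": RiskTier.LOW_MINIMAL,
    "credit_scoring": RiskTier.HIGH,
    "hiring_decisions": RiskTier.HIGH,
    "medical_device": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
    "subliminal_manipulation": RiskTier.UNACCEPTABLE,
}

def obligations(use_case: str) -> str:
    """Return the headline obligation attached to a use case's tier."""
    tier = EXAMPLE_TIERS.get(use_case)
    if tier is RiskTier.UNACCEPTABLE:
        return "prohibited: may not be placed on the market"
    if tier is RiskTier.HIGH:
        return "pre-market conformity assessment + post-market monitoring"
    if tier is RiskTier.LOW_MINIMAL:
        return "basic transparency reporting"
    return "unmapped use case: requires legal analysis"

print(obligations("credit_scoring"))
# pre-market conformity assessment + post-market monitoring
```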

[A graphic title reads "EU AI Act: High-risk systems." As each requirement is spoken, it lists out in text.]

Now let's take a closer look at high-risk AI systems. Under the act, a high-risk system would be subject to the following requirements: risk assessment and mitigation measures; data quality and data governance rules; logging of activity to ensure traceability of results; detailed technical documentation; transparency requirements; appropriate human oversight to minimize risk; as well as high levels of robustness, security, and accuracy. In addition to pre-market conformity assessments, providers of high-risk AI systems must undertake post-market monitoring and submit reports on their efforts to regulators.

The EU AI Act is also explicit in the way that it intends to leverage the development of technical standards and conformity assessments to support compliance with the Act's requirements. For high-risk systems, conformity with international standards will, in certain cases, create a presumption of conformity with the Act.

[A table shows Act requirements and various ID codes within.]

This table outlines some of the key requirements of the Act and maps them against the development of technical standards by international standards bodies, such as the ISO and the IEEE, including in areas such as privacy and data governance, transparency, quality management, risk management, and human oversight. Many of these standards' development processes have only just begun.

[Text reads "EU Digital Services Act."]

Next we'll look at the EU's proposed Digital Services Act. The Digital Services Act introduces new rules for online platforms to ensure the safety and protection of fundamental rights of European citizens online. The Act includes a number of measures related to the regulation of AI systems. Specifically, Article 26 of the Act requires that large online platforms, those with 45 million or more active monthly users in the EU, conduct annual risk assessments that take into account the impact of AI-driven content moderation.

[As requirements are mentioned, they list out as text.]

It also requires the platforms to take appropriate mitigation measures to manage risks. Article 31 of the Act requires the platforms to provide independent researchers with access to platform data and models for the purpose of conducting studies on the impacts of recommender systems that curate and deliver content in order to foster a better understanding of the systemic risks that they pose and to scrutinize compliance.

[A graphic shows the title page of the UK's Office for Artificial Intelligence's National AI strategy. On it, the shape of the UK is made up of many tiny icons and emojis, coloured in a blue and pink gradient.]

The United Kingdom recently published their national AI strategy, which articulates an emerging approach to regulating AI in the UK.

[A title reads "Emergent UK regulatory approach:" Key points from this approach are listed as they are mentioned.]

The UK government intends to publish a white paper outlining a pro-innovation approach to regulating AI sometime in early 2022. This has come after some reflection regarding the inadequacies of taking a sectoral approach to regulating AI, and consideration of the need for a horizontal framework across the marketplace to avoid confusion and the duplication of overlapping rules in different sectors. The need for horizontal rules implies that the regulatory landscape related to AI is actually much broader than the privacy and data protection context. The UK national strategy also cites the need to ensure that national commercial interests are not supplanted by global regulatory efforts.

[The list clears. A logo for the Centre for Data Ethics and Innovation pops up, and new details list under the title as Phil mentions them.]

The UK Centre for Data Ethics and Innovation, a government agency, recently published an AI assurance roadmap clarifying the set of activities needed to build a competitive market for AI assurance services that leverage technical standards, conformity assessments, and certification. The UK approach reflects the important investments needed in technical standards development, the testing of AI systems, and pilot programs as a basis for building AI regulatory and assurance frameworks from the ground up. A partnership between the British Standards Institution, the UK's national standards body, and the Alan Turing Institute, the UK's national AI institute, was announced in January 2022 to develop an AI standards hub to help accelerate progress on building out AI assurance and compliance tools.

[A headline reads "Americans Need a Bill of Rights for an AI-Powered world."]

In October 2021, the White House Office of Science and Technology Policy opened a consultation on the development of a bill of rights for an AI-powered world.

[Two American official seals sit in the top right-hand corner as a title reads "US Bill of Rights for an Automated Society." As rights are mentioned, they list below it.]

As part of the consultation, the White House released a sample of potential rights to be considered, including the right to know when and how AI is influencing a decision that affects civil rights or civil liberties, the freedom from being subjected to AI that hasn't been carefully audited, freedom from pervasive or discriminatory surveillance and monitoring in one's home, community, and workplace, as well as the right to meaningful recourse in the event that an AI system has contributed to harm. These are just some of the examples of new protections and derivative rights that we might need to incorporate across a broad scope of laws and regulations related to the development and use of AI systems.

[A headline reads "Lawmakers come for Facebook algorithm with 'filter bubble' bill." The article shows a shattered Facebook logo with a heart at its center.]

In November 2021, a bipartisan group of US lawmakers introduced the Filter Bubble Transparency Act.

[A title reads "US Filter Bubble Transparency Act." As Phil mentions requirements, they're listed.]

The act would require large platforms to let people use a version of their services where content is not selected by so-called opaque algorithms. Opaque algorithms are defined in the proposed act as "algorithms that make inferences based on user-specific data to select the content a user sees." In other words, the act would require large social media companies to make available a version of their services that does not rely on recommender systems driven by user-specific data. The bill would also require social media services employing such algorithms to inform users of their use and of how they operate.
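As a rough illustration of what the bill contemplates, here is a sketch contrasting an "opaque" ranking, which infers relevance from user-specific data, with an input-transparent alternative, here simple reverse-chronological order. The names and toggle mechanics are hypothetical; the bill itself does not prescribe an implementation.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created_at: datetime
    text: str

def opaque_ranking(posts: list[Post], user_profile: dict[str, float]) -> list[Post]:
    """An 'opaque algorithm' in the bill's sense: it infers relevance
    from user-specific data (here, per-author affinity scores)."""
    return sorted(posts, key=lambda p: user_profile.get(p.author, 0.0), reverse=True)

def input_transparent_ranking(posts: list[Post]) -> list[Post]:
    """The alternative the bill would require platforms to offer:
    no inference from user-specific data; newest posts first."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def build_feed(posts: list[Post], user_profile: dict[str, float],
               opaque_enabled: bool) -> list[Post]:
    # The user-facing toggle the act contemplates: the user chooses
    # whether personalized ranking is applied at all.
    if opaque_enabled:
        return opaque_ranking(posts, user_profile)
    return input_transparent_ranking(posts)
```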

[A headline reads "This senate bill would force companies to audit AI used for housing and loans." Over it, text fades in, reading "US Algorithmic Accountability Act."]

More recently in January 2022, a group of US senators tabled the Algorithmic Accountability Act. This bill would require companies to conduct ongoing impact assessments of AI systems to detect and mitigate bias and to evaluate overall performance as well as a host of other factors.

[An image shows blank puzzle pieces fit crudely together. Text reads "What are the challenges of regulating AI?"]

This is just a brief overview of only a handful of the leading initiatives to regulate AI globally, and you can be sure it is only the beginning.

To end our time, I would like to close with a few comments on the challenge of regulating AI to help animate broader discussion.

[As he lists his comments, they appear as text.]

First, confronting the speed, complexity, and scale of AI operations with traditional regulatory tools is going to be tough. Manual processes, such as human inspection and risk and impact assessments, simply cannot keep pace with the challenges of governing a highly dynamic, unpredictable, and rapidly evolving technology like AI. Instead, we will almost certainly need to invest in the development of new regulatory technologies and tools that can help operationalize standards and automate aspects of conformity assessments for AI companies at scale.
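One way to picture that kind of regulatory technology: an automated pre-check that verifies a system's technical documentation covers required fields before human conformity review begins. A minimal sketch follows; the field names are invented for illustration and are not drawn from any actual standard.

```python
# Hypothetical documentation fields a conformity scheme might demand.
REQUIRED_FIELDS = {
    "intended_purpose",
    "training_data_provenance",
    "risk_assessment",
    "human_oversight_measures",
    "accuracy_metrics",
    "logging_enabled",
}

def conformity_precheck(technical_documentation: dict) -> list[str]:
    """Return the fields still missing; an empty list means the
    automated portion passes and human review can begin."""
    return sorted(REQUIRED_FIELDS - technical_documentation.keys())

doc = {"intended_purpose": "credit scoring", "risk_assessment": "v1.2"}
print(conformity_precheck(doc))
# ['accuracy_metrics', 'human_oversight_measures',
#  'logging_enabled', 'training_data_provenance']
```

Automating checks like this is what would let a regulator or an assurance provider handle conformity at the scale described here, reserving human attention for the judgment calls.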

Second, it's hard to regulate what you haven't tested and measured. We need greater transparency into the practices of companies and their use of AI systems. In addition, we need to have a better understanding of the impacts of the AI systems, which are often unique across domains, applications, and use cases.

Third, international coordination on the development of technical standards, conformity assessments, and other means of compliance, such as regulatory technologies, is needed to ensure global interoperability of AI regulatory frameworks to facilitate international trade and investments.

[The list fades away to the image of the globe surrounded by dots and lines in a vast, connected net.]       

In summary, we've come a long way from developing AI principles. The challenge now is to translate those principles into practice, and to incentivize the development of innovative compliance tools that will enable effective AI regulation and governance at scale. Thank you.

[The video fades away to a title screen reading "Artificial Intelligence is Here Series."]

[Neil Bouwer appears in a video chat.]

Neil Bouwer: Well, thank you for that, Phil. What a fascinating overview from an international perspective of the regulatory issues around AI. That's a great presentation.

Well, we're going to switch gears now. We're actually going to bring Phil in live along with two others. So, you've already heard from Phil Dawson. As I mentioned earlier, he is a lawyer and public policy advisor for the Schwartz Reisman Institute. He specializes in the area of governance for digital technologies and artificial intelligence.

We're also joined by Monique Crichlow. She is the executive director of the Schwartz Reisman Institute. She has expertise in technological innovation and healthcare, and private sector experience. She's also the chair of the Canadian National Committee on Data for Science and Technology. Welcome, Monique.

We're joined as well by Craig Shank. And Craig is a strategic advisor, a consultant and speaker with experience in ethics, organizational governance and practices, and AI and data, and other emerging technologies. He's worked with Microsoft and other private sector organizations and comes with a vast array of experience.

So, welcome to all three of you. It's great to have you here. And the first question I really want to ask, actually to Monique and Craig, is: well, what did you think of what Phil just said? Is there anything you would want to add as areas for consideration from that albeit broad overview of AI in terms of international regulation? Monique, maybe I could start with you.

Monique Crichlow, University of Toronto: Thanks, Neil. I think that Phil has done a great job of outlining a primer on the current landscape and some of the considerations. And what I'm really hoping we're able to probe a little more into today as we talk is mechanisms for collaborating to resolve these issues, and to think about them perhaps in ways that we haven't historically done.

Neil Bouwer: Fantastic. All right. We will definitely get to that. Craig, what about you?

Craig Shank, Artificial Intelligence and Emerging Tech Ethics: Thanks so much, Neil. And thanks Phil. Phil and Monique, I'd really like to build on both of your comments and identify just a couple of things. I think there are two areas that struck me. One is, Phil's observation about the need for innovation in regulation, in policy. And I think we have long experience with many of us here on this call, having thought about policy for innovation. The tricky part here is going to be making the crossover to innovation in policy. And I think this is going to take a number of new tools. I think we're going to have to try some things, and we're going to have to try some things that I think are prompted by the public-private partnerships that are out there, and by leveraging what the market will do while we still recognize that a completely unconstrained market may not be in our collective interests. That's one.
The second observation that I would share is, Phil articulated a global regulatory ecosystem. And of course, there are limited ways of really managing the global regulatory ecosystem. At the same time, we know that there are such issues with these new technologies that everybody who is affected is going to have a voice, and we're all going to be affected. And so, fundamentally, there actually is a role for people at all tiers. And I think many of those roles are not as evident at the outset, but the power of different kinds of policymakers to start making progress on this innovation journey, I think exists. And I think we're all ready to start seeing that develop.

Neil Bouwer: I love it. Okay. We're going to unpack that. But first, Phil and Monique and Craig, the first question I want to start with is, why has AI not been regulated so far, and why should AI be regulated differently than other emerging technologies? So, Phil, maybe you could start us off by answering the question, why does it feel like the Wild West when we talk about regulating AI? Why doesn't it feel more organized?

Philip Dawson: Thanks, Neil, it's a great question. I think the first thing I'd say is that in many contexts, AI actually is regulated. Or at least there are laws in place that should enable us to regulate and enforce different standards that we'd like to see applied with respect to AI. I think the big gap is that we don't really know how those existing legislative or regulatory frameworks do apply.

If you think of some of the earlier conversations, they may have touched on parts of this in the lecture. But at the beginning of the conversations on a kind of global AI policy and principles, there were a lot of folks across the spectrum of stakeholders, but mainly maybe in civil society and academia, saying, "The question isn't, do we need new laws? It's, how do we apply existing laws better?" And maybe the human rights framework, for example, is somewhere we could look now to understand some of the gaps in existing legislation and regulation. What are some of the things that we'd like to see out of this technology, looking at, for instance, the international human rights framework? We can identify gaps and then appropriate regulatory responses based on what we'd like to see in terms of respect for rights. Whether it's access to remedy or certain accountability principles, we can then try to devise the right kind of new legislative provisions or regulatory developments to plug those gaps, if necessary. In financial services, for example, there are existing regulations on risk management, model risk management, and the sectoral regulators are looking at how to apply those in the AI context. Do they need to be updated? So, in different sectors, that regulation, I think, is fairly sophisticated. It's a question of understanding how it applies in new contexts, and what, if any, new changes are necessary.

Why it seems like AI is completely unregulated is because some of the most visible or most publicly discussed instances of algorithmic or AI developments and consequences are really taking place in social media, whether it's platforms like Facebook or YouTube or other places. That is really its own sector in itself, but it just has incredible impact on people. And that's a space that is definitively under-regulated; it's a new space. That's one of the reasons. And it has existed this way for some time, for a number of reasons, including that we don't really have a good idea of how these systems operate. For a long time, I don't think most people even knew they were operating. We were in the dark about this. So, in short, in a lot of contexts, especially sectorally, there are frameworks in place, and we're trying to understand how to better use them and how to adapt them. And then in a much larger context, one that impacts most of us every single day, AI is under-regulated, and countries are looking to address that; some of those pieces of legislation came up in the lecture.

Neil Bouwer: Okay, excellent. Now, Monique, you used the C word of collaboration. So, I'd be interested to know sort of who is working together on finding solutions versus who maybe is resisting solutions. And if we are going to move to more regulatory oversight, or at least clearer, more defined regulatory oversight, what does this ecosystem look like? What do the players look like when it comes to collaboration and cooperation?

Monique Crichlow: Thanks, Neil. I think that in the lecture, Phil does a great job of identifying, internationally, some of the movers and shakers that are really driving the discussion on AI regulation, and I think that's a great place to start. So, we know that internationally, this conversation is happening. And then, as you start to peel back the layers and look, folks, whether in civil society, in industry, or even in academia, have really demonstrated an interest in having this conversation. Coming together and chatting about: what does regulation look like? What are we doing well? What do we not understand?

I really think it's not so much about who's not cooperating, or who's not participating or not interested in the conversation. I would say generally, there seems to be an interest in the conversation. I think folks are looking for someone to lead and signal, particularly in unknown areas like social media, which Phil has spoken about. But if you look at other sectors that are well known, like financial services and even health, I would say that there's a lot of collaboration coming and happening. Folks want to know globally, internationally, what this could look like. And to Phil's point, how can our sectors adapt to this infusion of AI in a more intertwined way? So, I would say that the collaboration is happening. The players are at the table, and it's really about how we start to convene in an actionable way.

Neil Bouwer: I mean, it seems like AI will be harder to regulate. If you talk about trains, planes, or automobiles, it's a thing: you can regulate it, you can inspect it, there are known professional standards. But when I hear you and Phil talking about regulating Facebook or others, it just seems more theoretical to me. Do you think it's possible to regulate Facebook and these other AI applications, Monique? Do you think in five years' time this is going to be obvious, or do you think this is going to be really hard?

Monique Crichlow: I think that we're seeing from folks that they want rules in place. We're seeing, even from companies, that they're looking for rules. So, I think the regulation is happening. There are people taking their governments to court on topics related to AI, data protection, and information security. So, we see that there is this interest, and there is a need, I think, for us to think about how we respond in a way that protects the public interest. Because it's clearly being signalled from the public that they have a position, and an expectation of governments to act in their best interest.
And I think we're seeing from companies some willingness to do this. And we're seeing from groups like civil society and research that they're willing to come to the table to help define the how and the what of this situation. So, I think it's happening. I think that we're going to see more of it. And what it looks like for each sector and group, I think, is really what we need to start focusing on, because I don't think it'll be the same for everybody.

Neil Bouwer: Interesting. And I remember when Mark Zuckerberg famously told the U.S. Congress to regulate him if they were so upset with what they were doing. So, Craig, you mentioned innovation, I think you... To paraphrase you, said that, "to get there, we're going to have to try different things." So, can you describe a little bit what you would see as sort of innovative regulatory tools? And I'll open that up to the others as well, but do you want to lead us off on sort of what some of those innovative approaches might entail?

Craig Shank: Sure. In fact, I'm eager to hear from both Monique and Phil on this. And there are many here in the audience that I think would be able to give us an entire schooling on the topic, but I'd like to offer a few thoughts. I think some areas that I would centre on would be first, what can we do? What do we know that we can do? And that means paying some attention to what we can't do, and just isolating that and acknowledging it. There are a few quagmires that are really tricky, dealing with the fact that this is a predictive technology, not a deterministic technology. That's a really important aspect of it, and we don't yet have doctrine from a policy making perspective as to how to handle that. But let's pause it and think about it in risk-based terms rather than in specific regulatory terms. So, that's one. Another would be-

Neil Bouwer: Just before you go on, can you explain that a little more? Tell us a little bit, what do you mean by-

Craig Shank: Thank you. Thank you. Yeah, so when we look at AI tools, I am not going to get into defining AI, but I will give us some attributes of AI. One of them is that these are systems that have the capacity to learn and change. That means that the answer tomorrow may be different from the answer today.

A second thing is that they typically use models that are predictive, usually making predictions about us and our own behaviour, or about the behaviour of things in the universe. They make predictions about that behaviour, but they are making predictions based on what they learn, and as a consequence, their predictions themselves may vary over time. That combination of learning over time and using predictive values, rather than a deterministic A plus B plus C equals D, gives us a system where you actually can't reproduce the outcome every time. The traceability of the build is very difficult to manage. So, you have to really think through those elements, but for the time being, what I would say is, it's really important to set that aside and recognize that's an area that will need some work.
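A toy example of that learning-and-drift point: the sketch below shows a predictor whose answer to the same input changes after it learns from new data. It is deliberately simplistic, a stand-in for the retraining behaviour Craig describes, not a model of any real system.

```python
class OnlineMeanPredictor:
    """Predicts whether a value is 'high' relative to everything it
    has seen so far; its threshold drifts as it keeps learning."""

    def __init__(self) -> None:
        self.count = 0
        self.mean = 0.0

    def learn(self, value: float) -> None:
        self.count += 1
        self.mean += (value - self.mean) / self.count  # running mean

    def predict(self, value: float) -> bool:
        return value > self.mean

model = OnlineMeanPredictor()
for v in [1.0, 2.0, 3.0]:
    model.learn(v)
print(model.predict(2.5))  # True: 2.5 is above today's threshold of 2.0

model.learn(10.0)          # new data arrives overnight
print(model.predict(2.5))  # False: the same input now gets a different answer
```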

There are things that we can do right now. One of them is risk-based models. Another is really being thoughtful about tools that are in our hands today, impact assessments. If I were advising a team that is about to go implement or about to go purchase, I would say, go really get good at impact assessments, borrowing from data impact assessments and human rights impact assessments, really give that some thought.

The second bucket would be: really give some thought to your launch plan and how you're managing change in your stakeholder community. And the third would be monitoring, and knowing where the off button is. I think those are some areas that might give some hooks to those who are in policy-making roles.
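A minimal sketch of that third bucket, monitoring plus a known off switch, might look like the following. The class, threshold, and fallback pattern are hypothetical illustrations of the idea, not recommended values or a real framework.

```python
from typing import Callable

class MonitoredModel:
    """Wraps an AI component with outcome monitoring and a kill
    switch that routes traffic to a simple, well-understood fallback."""

    def __init__(self, model: Callable, fallback: Callable,
                 error_rate_threshold: float = 0.05) -> None:
        self.model = model
        self.fallback = fallback
        self.threshold = error_rate_threshold
        self.errors = 0
        self.calls = 0
        self.disabled = False  # the "off button"

    def record_outcome(self, was_error: bool) -> None:
        """Feed back observed outcomes; trip the switch automatically
        once the error rate drifts past the threshold."""
        self.calls += 1
        self.errors += int(was_error)
        if self.calls >= 100 and self.errors / self.calls > self.threshold:
            self.disabled = True

    def predict(self, x):
        if self.disabled:
            return self.fallback(x)  # degrade gracefully, don't go dark
        return self.model(x)
```

The design choice worth noticing is that the off button routes to a fallback rather than shutting the service down entirely, which anticipates Craig's later point about pulling a component out and patching something in to hold its place.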

Neil Bouwer: Amazing. Monique and Phil, do you guys want to add to that list?

Monique Crichlow: Phil, I'll let you go ahead first and-

Philip Dawson: Sure. Thanks, Neil. I think Craig did such a great job unpacking why AI is unique, or at least some of the unique properties of AI, and some of the ways that it may require new approaches, just from the standpoint of the technology and some of the tools we have available in enterprises. I think governments, and this came up a little bit in the lecture, in responding to the unique challenges of AI in terms of its complexity and its scalability, are also looking at ways of trying to optimize or tap into private sector resources for aspects of AI oversight in a regulatory regime. What I mean by that, very concretely, is that in the Digital Services Act, for instance, the EU legislation that looks at the digital services provided by very large online platforms, social media companies in particular, there's a provision that looks at how we can enable accredited independent academic researchers to collaborate with social media companies to advance the purposes of the Act. Whether from a compliance perspective, or just to study the impacts of these systems whose development and impacts are shielded from us, how do we leverage existing trusted resources, in this case academic resources, to further our understanding of the technology in ways that will help us better regulate it or understand its consequences in the future?

That, I think, is a pretty innovative approach to looking at how the regulatory process can be supported by the appropriate resource. And that's a piece of that legislation. There's going to have to be guidance around how that actually works in practice. I imagine that's going to be a complicated feat, but it's worthwhile to pursue. And if we think of the sustainability of AI regulation and oversight and enforcement over time, I think it makes a lot of sense to try and leverage extra resources outside of government, while maintaining robust public oversight and accountability, and defining aspects of the approach to ensure that we're getting what we want out of that collaboration and that end product, and so it's accountable.

But I just think, thinking of Canada, we have some of the... One of the things we've talked about since the Pan-Canadian AI Strategy in 2017 is Canada's strength in AI research and academic research. So, there are other jurisdictions out there that are looking at resources outside of government as opportunities to support a prospective regime. I think that's really interesting.

Neil Bouwer: So, it sounds less like a ministry of magic that we need, but rather we need some collaborations with trusted third parties to do a data analysis, which is not magic. It's something that we do now for multi-intersectional data analysis. And so, it sounds a lot more pragmatic maybe than at first blush. Monique, did you want to add to some of the innovative approaches to regulation?

Monique Crichlow: The only thing that I would add is, I intimated this at the beginning around collaboration, and I think we also need to think about not just what's in our tool kit and how can we leverage the expertise of academics or industry, but what is their particular expertise, what is not their particular expertise, and making sure that is balanced in our approaches. We need to make sure that we're thoughtful about who comes to the table and in which ways, because I think with a lot of things related to AI and data, trust is a big part of the conversation. And so, even from a governance collaboration and approach to regulation, we need to think about, are we being clear in our governance approaches? Are we being accountable in our governance approaches? Are we being transparent in our governance approaches? I think that is an expectation that folks will have.

Neil Bouwer: Interesting- [crosstalk 00:44:41] Yeah, go ahead.

Craig Shank: I want to underscore something that both Phil and Monique articulated here, and Monique's comment is such a powerfully important point to bring up. I think we're trying to be innovative in the policy making, but we're going to have to be innovative in how we draw in stakeholders. And in that innovation to draw in stakeholders, we're going to need to be innovative about the value proposition for those stakeholders to participate. Because you could participate in a lifetime of meetings and not ever get your point fully absorbed in the developing policy, and that would be a bad outcome. It would be a bad outcome as we are seeking to draw more diverse voices into this work. So, I think that's a crucial point.

A second observation I would just share is that Canada has a strength already in the impact assessment world. And I bet there would be great data that could come out of a five-year look, as that develops, at those impact assessments against what actually developed and what any harms might have been. I think you have a unique system there that allows for that.

Neil Bouwer: Well, I can't wait to see when citizens are called into focus groups or user groups to check out an algorithm, or a citizen assembly to look at how an algorithm might affect important public services. How much of this do you think, and maybe, Craig, starting with you, is a domestic effort versus an international one? I mean, what Phil described really was an international ecosystem; there are different regulatory ecosystems developing across the world. How much of the work of innovating, and also of inclusion and collaboration as you've all articulated, is international?

Craig Shank: I- we need both. We will need both fundamentally, and both are happening, but we really learn a lot as things start to develop in more modest-sized environments. I don't mean to call New York City a modest-sized environment, but New York City has an existing AI procurement law that says that by 2023, things that are acquired by the city for AI use will have to be audited. Now, that's valuable for a couple of reasons. One is that it means somebody is going to have to figure out how to audit these systems. It's actually a provocation that drives innovation. It's a market signal that is really helpful, to Monique's point about market signals, and I think we will gain a lot from that.

On the other end of the US, in the state of Washington, there was an effort to try to regulate facial recognition. That effort failed. It didn't fail because the technology companies ran ahead of it; they were actually the proposers of it. It failed because there were deep concerns from law enforcement about constraining their ability to use a tool that they perceived as important and valuable. They recognized that there were situations in which that usage was inappropriate, but nobody could agree swiftly enough to get through the legislature something that would actually be supportable by all sides, and so ultimately it has failed for the time being. But those experiments are crucial as we learn how these things actually operate and, sectorally, what are the kinds of issues, what are the stakeholders, and what are the interests involved.

Neil Bouwer: Well, I for one hope that a social liberal democracy is the first country to regulate facial recognition and figure out how that's done. Phil, what about you? I mean, you covered a lot of ground internationally in terms of the solution set and addressing the challenges that you laid out. How much of this is an international effort in your mind, versus how much of it is domestic?

Philip Dawson: Yeah, so I think I agree with Craig; it's for sure going to require both. For smaller countries or middle economies, maybe like Canada and other places similar to Canada, I think there's going to be strong reliance on international standards to help businesses working in Canada secure market access. As one of the primary tools for doing that, international standards and conformity assessments against international standards will be pretty key. Typically those are not always developed by Canadians or the Canadian mirror committees, as they're known, to the international standards bodies, but Canada does in some cases tend to punch above its weight and has had key input into some of these processes. And there's actually an initiative underway right now in Canada, where the Standards Council of Canada has done just that.

They've capitalized, so to speak, on some success they've had in shaping an international standard, a quality management standard for AI, which is likely one of the key international standards that will enable compliance internationally. Still too early to tell, but very likely. And there was a Budget 2021 commitment to develop an accredited conformity assessment for that standard, through some pilots with different businesses in Canada. So, that's a really great example of Canada looking to some of the international standards and then putting some resources in to help shore up our strength on that particular effort. But yeah, I think that's probably a good model for Canada: to look at international standards and what to do with them, because typically, in the world, big market economies are not necessarily looking at Canadian national standards as precedents.

We don't really need them to be that, but when we have a good sense of where there's a good international rule for us to follow that maps onto our values and the needs of our businesses, then we can develop those domestic programs to really support our technology ecosystem. So, that's an example. I think countries will have different approaches to regulation. Without this being negative or pejorative, I think the lowest common denominator for interoperability will have to come through a harmonized approach to international standards. Otherwise, I suspect there'll be a lot of different approaches to regulation. If you look at how detailed and even prescriptive the EU AI Act looks, we're very unlikely to see something like that coming out of the United States. And Craig can correct me if I'm wrong, but I don't think we'll see that kind of approach in the US.

It looks like we might see a middle ground in the UK, with the white paper on AI regulation that they're forecasting to put out in 2022. So, I suspect we'll see diverging national approaches, but there'll have to be some kind of common baseline at the international standards level for interoperability and trade.

Neil Bouwer: Fantastic. Well, that's a good segue, actually, to our first question from the audience, and maybe, Monique, I'll start with you on this one. It's a very pointed question with two parts. The first part is: is regulation in Canada robust or comprehensive enough to meet the proven challenges of AI systems? And the second, pointed part is: what should Canada do to catch up with AI regulatory initiatives in the EU and in the US? So, Monique, I'll start with you, but I know that both Craig and Phil will probably want to add to it. Go ahead.

Monique Crichlow: So, on the first question, about whether we are robust or comprehensive enough to meet the proven challenges of AI systems: I think Craig and Phil have both pointed to some of the advantages that Canada has in this area. We have a lot of impact assessments. We're good at doing risk-based assessment work. We have expertise in this area. We have strong regulatory systems, with standards already in place. So, I'd say the foundation is there. The question really is about where we want to go with this, and to what degree we want to influence not just things in Canada, but perhaps internationally. And so, on that latter part of the question, around catching up to the EU and the US, I actually do have a strong opinion.

I think we have a real advantage right now: we are able to pull together the groups where we have made investments in artificial intelligence and to trial standards right now. There are key sectors of our economy where we know that we have a competitive advantage. We know that we want to make sure that we can maintain that competitive advantage. And perhaps those are the areas where we should be focusing on standards. But I think in terms of expertise, tools, and opportunity, we are well positioned to close those gaps. That's just my soapbox moment on this topic, but I'll turn it over to Craig and Phil for comment.

Neil Bouwer: Super, Craig?

Craig Shank: So, I'll jump in. I agree very much, Monique; that's a great assessment. I think there are a couple of things that Canada can do in order to take advantage of that situation. One of them is that I would pull off to the side one of the most difficult of all of these, and that is the social media topic, because I think that takes a different kind of thought and regulation from the horizontal view of risk-based regulation of artificial intelligence, or sectoral regulation of artificial intelligence.

The second thing that I would really think about is strong signal. Canada is a trusted participant in the international ecosystem: a trusted participant in the standards community, a trusted participant through its Five Eyes relationship, a trusted participant in cybersecurity. And it can indeed play a very powerful role in this. I think strong signal about the requirements that Canada would like to have in place, really driven by procurement, matters. I will come back to this again and again: that market signal is incredibly valuable, and it will turn the heads of tech companies and of the implementers in that partner ecosystem who actually build things on top of what the tech companies put together. So, I think that is potentially very, very useful. I'm a cynical American; I think the possibility that something is actually going to occur here in the US that isn't just driven by anger at the social media companies is modest at best. We've got a very tough legislative environment right now for that horizontal work. And we can have the social media conversation on a different day, or in a different part of this discussion; I think that's a very different beast.

Neil Bouwer: Well, I'm going to turn it to Phil, but first I just want to highlight something you said around procurement. Because in the Canadian context, of course, the departments of PSPC and Treasury Board Secretariat, as well as ISED and others, have really been trying to advance an agenda for departments and agencies, ones we're all members of, to actually start using procurement strategically to procure AI, including from a lot of the small vendors that are in the ecosystem, and that could be really important. So, I just wanted to underscore that point. But Phil, to the question: is Canada doing enough, and what more should it be doing to close this potential gap with the EU and the US?

Philip Dawson: That's a great question, and I'm not sure there's a huge gap with what the US is doing, first of all. But in terms of the gap with the EU, I'm not sure that's the right question, for a couple of reasons. I mean, they're fundamentally different approaches. Look at the EU: this came up in the lecture, but in relation to data and AI, the EU has tabled, I think, five pieces of legislation in the last 18 months, and I don't think I even had the Data Act on the screen there, which came out, I think, after this was done. So, five different pieces of legislation, very comprehensive, all interlocking, very complex, in 18 months. In one sense, we have to ask ourselves: are we going to do that? Just in terms of capacity, can we do that? Not sure.

And then two, is it desirable? If you have a legislative function that can develop that type of legislation in such a short period of time, then you can probably revise it in a short period of time too; you can probably continue to update it almost iteratively. From our perspective, or at least from the US perspective, for jurisdictions looking at that legislative function, it almost looks like our regulations, which can be adapted with far greater agility. So if we're asking what Canada should do to catch up, I think we want to be careful what we wish for and what we put into actual legislation, which, as we've seen, is difficult to adjust and modify. And that's not to say there are not real risks and harms that we want to protect the public against.

So I think there is a need for some action, but it might look a little more like what Craig was saying: some strategic areas, such as procurement, or, at a horizontal level, a framework that sends a signal to the market that these types of rules are coming, maybe through regulations and standards with robust public oversight. But I'm not sure that we'd want to catch up, or that we'd be positioning ourselves well by trying to catch up, given just how difficult it is to change laws in our country, and given that we're still talking about a pretty nascent field in terms of the technology. So anyway, I'll just leave it there, but I think it's a great question, and it's something we constantly ask ourselves because we see a lot of activity in the EU in particular.

Neil Bouwer: Excellent.

Monique Crichlow: Sorry Neil. I just want to [crosstalk 01:00:11]. It sounds like you're saying that standardization and regulation of AI may look different here; it doesn't all need to happen through legal mechanisms like we've seen in the EU. I think that's what I'm hearing from you.

Philip Dawson: Yeah, thanks Monique. Exactly. I'm saying that there may be many features of what we see in other jurisdictions that, for us, will make sense to develop through regulation or through standards, with hooks back into a legislative framework.

Neil Bouwer: Well, I really like this distinction between what you put in law and what you put in regulation. And I note that the legislative branch has to trust the executive branch if it gives it regulation-making power, which in this dynamic area is very interesting when it comes to trust. There is another question I want to take from the audience, but I also want to remind people that you can use the raise hand button in the top right-hand corner of your screen. So, please feel free. We've got a few questions queued up, but please send your question in if you would like me to pose it. The next one, and Craig, I think this might have come from something you said, so I'll direct it to you first. But bear with me, I'll read the question.

It says "if AI is more and more useful, needed, or inevitable to tackle the daily tasks done by governments and companies, then if one finds that an AI actually does harm and the organization would be better off without it, is there room to simply pull the plug or will we always add layers of AI over potentially harmful AI?" I think you mentioned the advice of always being able to pull the plug. So, do you want to expand on that as you start with this question?

Craig Shank: Sure. It's a great question and it's a great point, and it's a point that is more about human nature than it is about technology: the ease with which we accept things into an organisation and then choose to patch or fix them rather than, if something is really not fit for purpose, pull it out and start over. I think that is certainly part of the point of the question. And I think the key underlying this has a few different parts. The componentization of the construction of these systems really needs to be handled in a way that things that don't belong in the system can be pulled out of it. Then you can do a system integrity check, and you understand what you have left and how you can continue the operation of the system. It depends on what the feature or role is.

The question also, I think, recognizes that so many of the AI systems built over the coming years will be pieced together from many different systems. With that kind of composite, you will need to be very alert to what goes on with each of the components, and to what happens when you stop operating one of them. As with any system, you still have to be able to pull things that are creating hazards offline; you've got to be able to stop operating them in some sense. If you are building a mission-critical system that relies on some of these components and has to operate at five nines of reliability, then you have to have a method of pulling a component out and patching something back in to hold its place. That actually has to be part of the impact assessment, the launch plan, and the monitoring plan. I'm not sure that I've been very [inaudible 01:04:02] that, but it's a really good question.

Neil Bouwer: So, let me just-

Craig Shank: Yeah.

Neil Bouwer: Let me just check my understanding. Let's say I have a self-driving car that's powered by AI, but the high beam component doesn't work, or the blind-spot detector doesn't work. What you're saying is that we should be designing AI as a sum of many components, so that you can turn a component off if it stops working. Is that right?

Craig Shank: I would share two thoughts. First, we know that the systems will be built in that componentized way, just because of the complexity of the systems and the number of people involved. But what we'll have to do is be able to turn those things off. And if we are turning something off that the system should not run without, we should either have the ability to patch something back in, or we should have the ability to stop the system. That means the system has to be something that can be stopped, and the system has to be something that isn't going to stop a human heartbeat if you stop it, as an example. It's an important design attribute: really thinking through how we design these systems.

Neil Bouwer: Okay, that's cool. And I guess there's a responsibility then to back up versions that maybe served a purpose at a point in time.

Craig Shank: Back up versions, and actually know your versioning and know your traceability. When we start talking about this and start peeling back the layers, we realize that legislating for that would be very nearly impossible. Legislation is too brittle to handle that kind of dynamic system development. Regulating for it, maybe: regulating for risk analysis, for traceability, for the ability to pull a system offline. You could actually do that without regulating the details of how you manage each component. Only in China do they have the resources available to regulate each of those pieces.
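[Editor's note: to make the componentization point above concrete, here is a minimal sketch in Python. It is purely illustrative; the class names, methods, and logging scheme are invented for this note and are not drawn from the discussion. It shows a system assembled from swappable components, where a hazardous component can be pulled offline and a fallback patched in to hold its place, with version metadata recorded for traceability.]

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Component:
        """One swappable piece of a larger AI system, with version metadata for traceability."""
        name: str
        version: str
        run: Callable[[dict], dict]              # the component's behaviour
        fallback: Optional["Component"] = None   # what to patch in if this component is pulled

    class ComponentizedSystem:
        """A composite system whose parts can be pulled offline without losing system integrity."""

        def __init__(self) -> None:
            self.components: dict[str, Component] = {}
            self.audit_log: list[str] = []       # traceability: every registration and swap is recorded

        def register(self, component: Component) -> None:
            self.components[component.name] = component
            self.audit_log.append(f"registered {component.name} v{component.version}")

        def pull_offline(self, name: str) -> None:
            """Remove a hazardous component: patch in its fallback, or stop the whole system."""
            failing = self.components[name]
            if failing.fallback is not None:
                self.components[name] = failing.fallback
                self.audit_log.append(
                    f"pulled {name} v{failing.version}, patched in fallback v{failing.fallback.version}")
            else:
                # No safe replacement exists, so the system itself must be stoppable.
                self.audit_log.append(f"halted system: {name} pulled with no fallback")
                raise SystemExit(f"system halted: component {name!r} has no fallback")

    # Usage, echoing Neil's example: a high-beam controller with a conservative fallback.
    safe_default = Component("high_beam", "0.9-fallback", run=lambda obs: {"high_beam": False})
    system = ComponentizedSystem()
    system.register(Component("high_beam", "1.2",
                              run=lambda obs: {"high_beam": obs.get("dark", False)},
                              fallback=safe_default))
    system.pull_offline("high_beam")  # hazard detected: fallback patched in, swap logged
    print(system.audit_log)

[The design choice mirrors Craig's point: what a regulator can plausibly require is the audit log and the pull-offline capability, not the internals of each component.]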

Neil Bouwer: Interesting. Monique and Phil, did you want to add to that discussion on turning off parts that aren't working? No?

Monique Crichlow: I was going to say, I think Craig summed that up really well. And the example of the self-driving car is a good one that you threw out there, Neil. In all areas, when it comes to standards, it really is about the ability to respond if we see something unsafe.

Philip Dawson: And I'd just like to highlight something I liked from Craig too, which is the point about the law being brittle; it's a bit of what I was trying to unpack before. The law is just really not the right mechanism. It's not desirable to do all this legislation, because it's just not going to be fit for purpose over time or respond to the challenges that we're facing. So this is not in any way an abdication of responsibility at the legislative level, or even the regulatory level. It's just recognizing that the right tool to address this type of dynamism is at the industry standards level. Then we can have a whole other question about whether that is legitimate from a democratic and accountability perspective, and how we make it more legitimate if it isn't. But responding to the on-the-ground, enterprise-level challenges with the technology is going to be done at the standards level, not at the first two.

Neil Bouwer: And actually, one of the audience members is asking a question that might be right on this point, Phil. You used the term soft law in your opening lecture in relation to international norms. Can you give an example, or expand a little on the sort of soft law that would govern this area?

Philip Dawson: Sure. So, in the lecture, I think we talked about some of the international organizations that have been active in developing guidance and principles related to AI governance. I mentioned earlier in today's discussion that legal scholars and civil society groups have been discussing how international human rights law is a useful framework for understanding AI's impacts and for helping articulate responses consistent with the imperatives of the human rights framework. International human rights law could be described as soft law; it's part of international legal frameworks. It's when it becomes incorporated into domestic law, for instance in Canada through our human rights legislation at the provincial or federal level, that we would call it hard law. At the domestic level it is binding, and it is justiciable, I think is the word, by courts. That's the distinction.

The Universal Declaration of Human Rights is an example of soft law, as is some of the guidance under it, such as the UN Guiding Principles on Business and Human Rights, the OECD AI Principles, and some of the guidance that the OECD develops. This is the realm of soft law, which is often developed, particularly for technology but in other spaces as well, before countries develop their own black-letter hard law. These are tools that really help guide legislative efforts and policy discussions at the domestic level. In some cases they may actually be incorporated with great specificity into actual legislation, but they're non-binding.

Neil Bouwer: Now, can I just ask you the pointed question, though? Do you think the industry can regulate itself through standards, or do you think we need hard law in this area?

Philip Dawson: So, that's a great point because, Neil, before the panel today we were just discussing this: the great series that the Canada School of Public Service did with the Standards Council of Canada on the data governance standards roadmap that came out last summer, and the series of data governance standards that could be implemented to support different commercial activities and future compliance efforts.

I think in the data and AI space thus far, and by thus far I mean up until the last 18 months or so, standards were in a lot of cases viewed as optional, voluntary. They were a nice-to-have, giving common benchmarks to businesses wanting to do business together and reducing some of the friction to the commercialization of AI. More and more, as these standards clearly become pathways to compliance with legislation globally, there's a transition between the use of standards as a form of soft law, industry-led self-regulation, and the use of them to support hard law regulatory frameworks. That grey space is a really interesting one to discuss, because whether countries will directly reference specific standards, or whether they'll just reference standards produced by certain recognized international bodies, is I think where a lot of the discussion is going to be had. And then you'll see that some of these soft law creatures may be leveraged for hard law purposes, if that makes any sense.

Neil Bouwer: Go ahead, Craig, please.

Craig Shank: Just to build on that a little bit: Phil has identified one of the key uses of standards, as a potential Rosetta stone between regulatory regimes. That is an incredibly powerful use of standards, and it can become a role that standards play that hardens up into a body of justiciable law. But there are two more pathways to hardening up soft law, making it more than just self-regulation, that I think this group in particular will be interested in. One of those is through the process of companies saying, "We are compliant," and then whatever your fair-trade regime is saying, "If you say you are compliant, you have to be compliant, or we are going to sanction you in different ways," through your fair trade commission or whatever else it is.

The second is through procurement, and we're back to the procurement story. Many of the people in this audience have the capacity to influence that procurement system and to use it strategically as a vehicle to help identify the need not just for the standards, but for third-party assurance that those standards have been met, whether through certification, through audit, or whatever the particular vehicle is. If I had a leave-behind, it would be that those are some specific tools. And I might even think about some specific timelines, to signal to industry: "Here's what we want, and here is when we are going to want it, so you had better start piecing it together." Because fundamentally we are trying to nudge a set of non-regulatory regulatory tools to happen more quickly than the normal multi-decade cycle they would otherwise take. So, anyway, that's...

Neil Bouwer: Fantastic. Monique, do you want to weigh in on this?

[Monique speaks silently.]

Neil Bouwer: You're on mute though, Monique.

Monique Crichlow: No, I think that Craig has laid out a good position on standards. I know we often think of standards as, "Oh, well, we don't know what to do, so let's start there." I wouldn't say that's necessarily the case when it comes to standards and the regulation of AI. Standards have a role to play in how we think about regulation, oversight, and governance of AI systems and AI tools. But we should think of all of these things as distinct tools that we can use to our advantage. It's not that one is better than another; they serve different purposes, and together they'll give us the agility that's needed to manage the environment that's emerging. So that's perhaps the only thing I would add.

Neil Bouwer: Yeah. And for me, it goes back to something you said earlier about the maturity of Canada's regulatory regime as well. If you have trusted players in the ecosystem, if you have a tradition of regulation with verification and so on, and you have that kind of institutional capital, then I think standards make more sense as a policy instrument. For those reasons as well, maybe we should not look at standards with cynicism. We should look at them as an opportunity, along with procurement and other instruments.

Okay, great. I've got to ask this China question here, and I'm going to put it out there for any of you to answer. The question is: do you have any concerns about North America and Europe being economically left behind due to being too cautious on artificial intelligence, as we've seen with the development of regulations, in comparison with other countries? We mentioned countries like China that may have a different, maybe faster, approach to regulations around artificial intelligence. So maybe I can just open that up and see if any of you want to take that bait.

Craig Shank: I'll jump in very quickly. First, I do have concerns, but they don't start with overregulation. The reason is that, of course, we don't have any regulation in place yet, so there's nothing being prevented by regulation today. The concerns I have are around fundamental investment. Canada has done a beautiful job of fundamental investment in AI, and I really wish we could see the U.S. side matching that at a comparable, proportionate level. And I would really like to see Europe... They're making good national investments, but I'd like to see them free some of that up and really allow it to play a more innovative role in a market ecosystem. I think from that we would have a much more powerful, robust, multinational consortium that would, I think, be very effective.

And then there's a key inflexion point that we might miss in the shift: if data centres make a big move to quantum computing and that all happens in China first, that would be a big change in the relative digital power, the intellectual power, that could be brought to bear by the two different regimes.

Neil Bouwer: Phil, did you want to add to that?

Philip Dawson: Yeah, sure, Neil, just a little bit. I'm not sure I'd be worried about Canada and its trading partners being left behind. There are quite a few of those countries we're talking about, and more and more, foreign policy is being used to try to leverage that kind of trading bloc in a way that can help protect some of the values that Canadians and other countries identify as important, while also opening up trade as much as we can. This is an innovation in and of itself. Look at the EU-US Trade and Technology Council, a bilateral initiative which I think should be expanded, where the EU and the U.S. are looking at trying to harmonize their approaches to trade with respect to AI, to ensure that they're enabling export as much as possible while maintaining shared values.

Look at the Global Partnership on AI as another multilateral alliance driving at the same purpose. So I don't know that we'll be left behind because of a cautious approach. Another challenge will be to identify some of the grey areas around the countries we'd like to become closer with that are also doing business with China. We don't want to turn this into too polarizing a conversation, which some people would like to see happen and which I think is problematic. But given all the discussions on trade between Canada and different countries, and on how to promote trade on AI as much as possible while respecting our values, I don't think Canada and other countries have to be left behind.

Neil Bouwer: All right. Fantastic. We're getting close to the end of the event here, so what I'm going to do now is ask each of you to answer a question, starting with you, Monique. If you could advocate for a single action by Canada in the regulation of AI... We've touched on a number of areas, but if you could make the pitch for Canada to do one thing in the area of regulating AI, what would it be? So, Monique, maybe I'll start with you, and then Craig, and then we can end with the person we began with and go to Phil. Monique, to you first.

Monique Crichlow: Oh, this is a tough one. My mind is on the economics and globalism because Phil and I were having that chat earlier. If I were to pick one thing, I think it's what I said before: perhaps in a very traditional Canadian way, we've sold ourselves short in this area in terms of our expertise, the value we have to bring to the conversation, and the investments we've made to date. We've made significant investments in our science policies around artificial intelligence. We've made significant investments in making sure that our governments remain digital. We are really trying to demonstrate expertise in, and the need for, standardization and regulation. So from my perspective, I don't know if I have a single wish that will do all of this, but we really need to take stock of all the things we've done well, look at them in sum, and say, "How can we start to connect the dots and build that baseline?"

I think there's a lot of opportunity there, based on the investments and skills we do have, to leave us a country that is true to its values but also remains competitive in the global market. If I were to make one wish, it would be: let's pull those together and take action in the areas where we do have strong investments.

Neil Bouwer: Fantastic. I hear that as, "Get it together, Canada." Craig, how about you?

Craig Shank: So, I think Monique's is, "Get it together and walk with pride." I really like that, Monique. I'll go with a much more tactical, broken-record answer: I would continue to build toward using the power of strategic procurement in ways that will benefit this ecosystem, and that I actually think will create an exportable model for Canada. Frankly, I would start signalling now that by 2024 we are going to want third-party certifications from somebody who has been properly accredited by the SCC, the Standards Council of Canada. We're going to leave it to industry to sort out the hard parts in the first instance, but industry is going to have to listen to our collective voice on what that's going to mean and what's necessary. I think from that you'll see some scrambling, but you'd also see some positive outcomes and some experiments from which you can draw high-value conclusions.

Neil Bouwer: Fantastic. Well, I like that, because I think every forward-looking public servant should be thinking about how AI is going to transform the business of government in their domain. Procurement is one way to come at that, and having the assurance service also procured is really practical advice. Thank you. Phil, how about you?

Philip Dawson: Yeah, happy to add a couple more. I think Canada could go a long way by doubling down on some of the areas where it's really competitive globally, for instance in AI research. I think we mentioned it earlier, but look at the partnership between the Alan Turing Institute in the UK and their standards body, the British Standards Institution, BSI, as a way to augment standardization capacity and expertise on AI standards. I think that's super interesting. We have a terrific national standards body, the Standards Council of Canada, which has received some additional resources to support its AI work, and I would mention the pilot conformity assessment that's underway. These are great things, and I would love to see them even more heavily resourced and leveraged because of the advantages we have there.

Then one thing that we haven't talked about today, something often discussed at the Schwartz Reisman Institute, is the role of technology in helping to operationalize standards and conformity assessments: tools and technologies that turn what used to be manual assessments and certifications into automated processes that greatly reduce the time involved and increase the effectiveness of the operation. So if we know, or think, that in some cases technologies are going to be instrumental in operationalizing the soft law instruments we've talked about today, standards and certifications, what does more strategic coordination look like between some of our R & D spending programs, our innovation spending programs on AI, and research funding at national AI institutes, for instance? Are there testing hubs or measurement hubs for AI systems in particular domains that we should invest in? Could we give companies developing these types of certification technologies an opportunity to really accelerate progress? I think that could be really interesting. I'll just end there, but I think those would be two really interesting things for us to explore.
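[Editor's note: as a rough illustration of the automation Phil describes, here is a minimal sketch in Python. The requirement names and evidence structure are invented for this note and do not come from any actual standard or conformity scheme. It shows a manual certification checklist recast as an automated pass over machine-readable evidence.]

    # Hypothetical, simplified requirements; real conformity assessment schemes are far richer.
    REQUIREMENTS = {
        "risk_assessment_on_file": lambda evidence: evidence.get("risk_assessment") is not None,
        "human_oversight_defined": lambda evidence: bool(evidence.get("oversight_contact")),
        "model_version_traceable": lambda evidence: "model_version" in evidence,
    }

    def automated_conformity_check(evidence: dict) -> dict:
        """Replace a manual checklist with an automated pass over submitted evidence."""
        return {requirement: check(evidence) for requirement, check in REQUIREMENTS.items()}

    # Usage: a system submits its evidence package and gets a pass/fail per requirement.
    evidence = {
        "risk_assessment": "doc-042",
        "oversight_contact": "ops@example.ca",
        "model_version": "2.3.1",
    }
    print(automated_conformity_check(evidence))
    # {'risk_assessment_on_file': True, 'human_oversight_defined': True, 'model_version_traceable': True}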

Neil Bouwer: Fantastic. Thank you for that. And as you're all speaking, I'm thinking of our colleagues at Innovation, Science and Economic Development Canada who are thinking about how to build on Canada's strengths and bring the pieces together, including procurement and standards and spending on science and technology. So, really fantastic. Well, look, thank you so much to our amazing guest speakers here, Monique Crichlow, Craig Shank, and Phil Dawson. Thank you to these tremendous experts who have shared their time and their insights with us on the future of AI regulation around the world. Thank you so much for being with us here today.

If you enjoyed this event, you won't want to miss the next one in the series, coming up on April 19th. It will focus on artificial intelligence, machine learning, and foreign intelligence, featuring Janice Stein from U of T's Munk School of Global Affairs & Public Policy and Jon Lindsay from the Georgia Tech Nunn School of International Affairs. We hope you can join us for that event. Also, you will receive an email after this event that includes a survey. We'd love to get your feedback so that we can improve our events as we go forward with this series and with all of our events. So, on behalf of U of T and the Schwartz Reisman Institute, as well as the Canada School of Public Service, thank you all for joining us. Have a great rest of the day. Take care.

[The video chat fades to CSPS logo.]

[The Government of Canada logo appears and fades to black.]
