Alarms sounded on growing AI risks; positive shift on Ukraine diplomacy

ACTION NEEDED NOW TO ADDRESS GROWING AI DANGERS

In our 24 February 2023 blog post, we considered serious concerns about the use of Artificial Intelligence (AI) in nuclear command systems.

We quoted the opening paragraph of an article by Peter Rautenbach, which stated in part:

Artificial Intelligence (AI) systems suffer from a myriad of unique technical problems that could directly raise the risk of inadvertent nuclear weapons use.

Now numerous leading AI experts and tech industry figures, including Apple co-founder Steve Wozniak and Tesla CEO Elon Musk, have sounded the alarm about the risks inherent in the technology more broadly, in a 22 March 2023 document entitled Pause Giant AI Experiments: An Open Letter.

It begins:

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.

It goes on to cite the widely endorsed Asilomar AI Principles, which state:

Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

The letter continues:

Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.

The Open Letter’s signatories believe that:

Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.

Accordingly, the Open Letter issues the following call:

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors.

If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

What needs to be done during the pause

The Open Letter outlines the steps that need to be taken during the 6-month pause:

  • joint development and implementation of a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts;
  • refocusing AI research and development on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal; and
  • dramatic acceleration, by AI developers working with policymakers, of the development of robust AI governance systems.

The letter is accompanied by a background paper, Policymaking in the Pause, that includes a set of policy recommendations.

AI Open Letter sparks controversy

Sadly, but unsurprisingly, the Open Letter’s call for a pause on the development of advanced AI systems has divided researchers.

In an 11 April 2023 article in Science, Laurie Clarke writes:

The pause itself seems unlikely to happen. OpenAI CEO Sam Altman didn’t sign the letter, telling The Wall Street Journal that the company has always taken safety seriously, and regularly collaborates with the industry on safety standards. Microsoft co-founder Bill Gates told Reuters the proposed pause won’t “solve the challenges” ahead.

Letter signatory Michael Osborne, a machine learning researcher and co-founder of the AI company Mind Foundry, echoes the Open Letter’s alternative proposal, arguing that governments need to step in:

We can’t rely on the tech giants to self-regulate.

Laurie Clarke contrasts the Biden administration’s “voluntary and non-binding” AI Bill of Rights with the European Union’s AI Act:

expected to come into force this year, [the EU’s AI Act] will apply different levels of regulation depending on the level of risk.

A specific example is given:

policing systems that aim to predict individual crimes are considered unacceptably risky, and are therefore banned.

More steps by the US administration

On 4 May 2023, Vice-President Kamala Harris met with the chief executives of companies at the forefront of the industry’s rapid advances to announce new measures to address the risks posed by “generative AI”, asserting that

AI has the potential to dramatically increase threats to safety and security, infringe civil rights and privacy, and erode public trust and faith in democracy.

Harris continued:

Government, private companies, and others in society must tackle these challenges together.

President Biden and I are committed to doing our part — including by advancing potential new regulations and supporting new legislation — so that everyone can safely benefit from technological innovations.

Dan Milmo, global technology editor for the Guardian, provides more detail in an article entitled US aims to tackle risk of uncontrolled race to develop AI:

The US government said on Thursday it would invest $140m (£111m) in seven new national AI research institutes, to pursue AI advances that are “ethical, trustworthy, responsible and serve the public good.”

Initiatives include:

  • an agreement by leading AI developers to have their systems publicly evaluated at this year’s Defcon 31 cybersecurity conference; and
  • the planned release by the President’s Office of Management and Budget of draft guidance on the use of AI by the US government.

G7 Digital Ministers endorse AI action plan

After a two-day meeting in Takasaki, Japan, at the end of April, the G7 digital ministers issued a declaration indicating that among the topics discussed were

responsible AI and global AI governance.

Ministers endorsed an AI action plan for “promoting global interoperability between tools for trustworthy AI” and committed to future meetings on generative AI, covering governance, intellectual property rights, transparency and misinformation.

The latest warning comes from the Canadian “Godfather of AI”

The latest warning of the manifold dangers posed by AI comes from Canadian AI pioneer Geoffrey Hinton, who has resigned as a Google computer scientist so that he can speak freely about those dangers.

In addition to the potential for massive disinformation campaigns, Hinton, often billed as the Godfather of AI, said he was also concerned about the

existential risk of what happens when these things get more intelligent than us.

Asserting that this eventuality was “fairly close”, he continued:

What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial.

But we need to try and do that in a world where there’s bad actors who want to build robot soldiers that kill people and it seems very hard to me.

Valérie Pisano, the chief executive of Mila, the Quebec Artificial Intelligence Institute, recently commented to the Guardian on the slapdash approach to safety in AI systems:

The technology is put out there, and as the system interacts with humankind, its developers wait to see what happens and make adjustments based on that.

We would never, as a collective, accept this kind of mindset in any other industrial field.

In Hinton’s view, artificial general intelligence, or AGI, has now joined climate change and nuclear Armageddon as ways for humans to extinguish themselves.

Open Letter from Canadian researchers

In Canada, 75 researchers and startups say in an Open Letter released in mid-April:

The pace at which AI is developing now requires timely action…. In short, the window is rapidly closing, and further postponing of action would be drastically out-of-sync with the speed at which the technology is being developed and deployed.

In light of the dangers, they make the following request:

We ask our political representatives to strongly and urgently support AIDA (the Artificial Intelligence and Data Act).

For more on the letter, see Canadian experts urge Parliament to pass AI law fast (Howard Solomon, itworldcanada.com).

Proposed Canadian AI legislation has big problems

The legislation referenced in the Canadian open letter is Bill C-27, the Digital Charter Implementation Act, an omnibus bill comprising the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act (AIDA).

On 19 April 2023, leading Canadian technology law expert Professor Michael Geist wrote:

AIDA may be well-meaning and the issue of AI regulation critically important, but the bill is limited in principles and severely lacking in detail, leaving virtually all of the heavy lifting to a regulation-making process that will take years to unfold.

He continues:

While no one should doubt the importance of AI regulation, Canadians deserve better than virtue signalling on the issue with a bill that never received a full public consultation.

Geist warns:

Our [parliamentary] committee system is not designed for good policy outcomes when dealing with what amounts to an omnibus bill on privacy and AI.

Geist argues that the government should “scrap it [Bill C-27] altogether” and instead make privacy reform the urgent priority.

As for AI policy, in Geist’s view:

the government should do what it should have done from the start: launch a real consultation and public discussion on what we think AI regulation should prioritize, what principles should serve as the foundation for such regulation, how to develop effective administration and oversight, and how to ensure that the law keeps pace with a rapidly changing technology environment that has huge human rights and economic implications.

Whither Canada?

Bearing in mind the rapidly changing technology environment and its attendant human rights and economic implications, we call on the Government of Canada to withdraw Bill C-27 in favour of meaningful public consultation on the broad parameters of AI regulation, including:

  • what AI regulation should prioritize;
  • what principles should serve as a foundation for AI regulation;
  • how to develop effective administration and oversight; and
  • how to ensure the law keeps pace with the rapidly changing technology environment.

For more on the dangers that AI poses to the very fabric of our democracy, see AI and Politics: How Will We Know What—and Who—Is Real? (Colin Horgan, thewalrus.ca, 27 April 2023).

UKRAINE UPDATE: UNITED STATES WARMS TO CHINA MEDIATION ROLE

The Washington Post headline The U.S. warms to a role for China in resolving the Ukraine war (David Ignatius, 3 May 2023) signals a marked change from past Biden administration statements, which minimized or even ridiculed the idea of a possible Chinese diplomatic role.

In response to a direct question from Ignatius about the US working with China to achieve a stable outcome in Ukraine, US Secretary of State Antony Blinken stated:

In principle, there’s nothing wrong with that if we have a country, whether it’s China or other countries that have significant influence that are prepared to pursue a just and durable peace. … We would welcome that, and it’s certainly possible that China would have a role to play in that effort. And that could be very beneficial.

In another notable change from earlier administration statements, the Washington Post reports:

Blinken said there were some “positive” items in the 12-point peace plan that China announced in February. The Chinese proposal includes respecting “the sovereignty, independence and territorial integrity of all countries,” which implies a Russian troop withdrawal; “reducing strategic risks” and agreeing that “nuclear weapons must not be used”; and taking steps “to gradually de-escalate the situation and ultimately reach a comprehensive cease fire.”

Blinken also characterized the Xi-Zelenskyy telephone call as “a positive thing” without repeating past expressions of “skepticism”.

China as a peace plan guarantor

The Washington Post also reported the view of several administration officials that Russia has been “unhappy” with the Chinese mediation effort — but that Moscow cannot easily resist China’s wishes.

Ignatius writes:

That’s one reason administration officials are intrigued by Chinese peace efforts; they believe they might prevent Russia from trying to renew the war later — after a pause.

According to Ignatius, one official told him:

The only stability is China as a guarantor.

Former Canadian Ambassador to NATO talks diplomacy

You know that change is afoot when Canada’s former Ambassador to NATO, in recent testimony before the House of Commons National Defence Committee on the Ukraine conflict, starts talking about diplomacy.

In response to a direct question from NDP Defence critic Lindsay Mathyssen on “where we need to go on that diplomatic side,” Kerry Buck said, in part:

we have to talk to some of the countries that have leverage with Russia. That is going to be key to bringing about some kind of peace at some point, when President Zelenskyy calls the time for a peace settlement.

She continued:

We need China experts and people who are close to India and other places who can help to apply some pressure to Russia. You need a full-court press to convince President Putin that it’s time to either lay down arms and come to a table or…. I can’t even start to guess where this war will go in its next steps.

The Rideau Institute comments:

Given the recent remarks by US Secretary of State Blinken on potential American and Chinese cooperation to mediate the Ukraine conflict, it is time for the Canadian government to weigh in on the importance of a diplomatic track.

Whither Canada?

We call upon the Minister of Foreign Affairs to clearly signal Canada’s support for discussions within NATO on a political/diplomatic track toward peace negotiations between Ukraine and Russia.

NOTABLE NOTES: NEW BOOK ON OUTER SPACE

We are extremely pleased to announce that a new book by Michael Byers and Aaron Boley, both of the University of British Columbia, Vancouver, entitled Who Owns Outer Space? International Law, Astrophysics, and the Sustainable Development of Space (Cambridge University Press, April 2023), has been published “open access” and is therefore freely available to everyone.

Topics addressed include space debris, anti-satellite weapons and many other environmental, safety and security challenges raised by humanity’s rapid expansion into space.

To access the book in PDF format, click HERE.

URGENT REMINDER REGARDING THE FOREIGN INFLUENCE REGISTRY PETITION

In our 23 April 2023 blog post entitled Foreign Influence Registry is a danger to Canadian democracy, we urged readers to sign a petition opposing the proposed registry and/or to contact their parliamentary representatives with their concerns.

We repeat those calls to action here.

TO SIGN THE PETITION, CLICK HERE.

We also urge readers to email the following parliamentarians to convey support for the petition and its request for the Government of Canada to withdraw its proposal for a Foreign Influence Transparency Registry:

Prime Minister Justin Trudeau: < justin.trudeau@parl.gc.ca >; < Alana.Kitely@pmo-cpm.gc.ca >;

Public Safety Minister Marco Mendicino: < Marco.Mendicino@parl.gc.ca >; < kelly.murdock@ps-sp.gc.ca >;

Leader of the NDP Jagmeet Singh: < Jagmeet.Singh@parl.gc.ca >;

NDP Critic for Public Safety Peter Julian: < Peter.Julian@parl.gc.ca >;

Conservative Critic for Public Safety Raquel Dancho: < Raquel.Dancho@parl.gc.ca >;

Bloc Québécois Critic for Public Safety Kristina Michaud: < Kristina.Michaud@parl.gc.ca >;

Green Party Critic for Public Safety Elizabeth May: < Elizabeth.May@parl.gc.ca >;

And find your local Member of Parliament HERE.

Whither Canada?

We reiterate our call on the Government of Canada to withdraw its proposal for a Foreign Influence Transparency Registry and instead concentrate on strengthening its laws against harmful interference, including refining the legal definitions of prohibited activities and ensuring effective enforcement.

Photo credit: mikemacmarketing, www.vpnsrus.com, Creative Commons license

 
