Both the UK and US governments have begun to circle warily around the recent emergence of powerful AI technologies, and are taking the first steps towards reining in the sector. The UK's Competition and Markets Authority (CMA), fresh from pulling the rug out from under Microsoft's proposed Activision Blizzard acquisition, has begun a review of the underlying systems behind various AI tools. The US government joined in by issuing a statement saying AI companies have a "fundamental responsibility to make sure their products are safe before they are deployed or made public."

This all comes shortly after Dr. Geoffrey Hinton, sometimes called "the Godfather of deep learning", resigned from Google and warned that the industry needs to stop scaling AI technology and ask "whether they can control it." Google is one of several major tech companies, along with Microsoft and OpenAI, that have invested enormously in AI technologies, and that investment may well be part of the problem: such companies eventually want to see where the returns are coming from.

Dr. Hinton's resignation comes amid wider fears about the sector. Last month saw an open letter with 30,000 signatories, including prominent tech figures like Elon Musk, warning about AI's potential effects on jobs, its use in fraud, and of course good old misinformation. The UK government's chief scientific adviser, Sir Patrick Vallance, has urged the government to "get ahead" of these issues, comparing the emergence of the technology to the Industrial Revolution.

“AI has burst into the public consciousness over the past few months but has been on our radar for some time,” the CMA’s chief executive Sarah Cardell told the Guardian. “It’s crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information.”

The CMA review will report in September, and aims to establish some "guiding principles" for the sector's future. The UK is arguably one of the leaders in the field, home to DeepMind (owned by Google parent company Alphabet) as well as other large AI firms, including Stability AI (maker of Stable Diffusion).

In the US, meanwhile, Vice President Kamala Harris met executives from Alphabet, Microsoft and OpenAI at the White House, afterwards issuing a statement saying that “the private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products”.

This feels a bit like closing the stable door after the horse has bolted, but the Biden administration also announced it is to spend $140m on seven new national AI research institutes, focused on creating technologies that are “ethical, trustworthy, responsible, and serve the public good.” AI development at the moment is almost entirely within the private sector. 

I suppose they're finally paying attention, at least, even though you do wonder what capacity we have to put the brakes on this stuff. A notable point made by Dr. Hinton is that, regardless of what direction future advances take, "It is hard to see how you can prevent the bad actors from using it for bad things", before comparing the technology to a backhoe.

“As soon as you have good mechanical technology, you can make things like backhoes that can dig holes in the road. But of course a backhoe can knock your head off,” Hinton said. “But you don’t want to not develop a backhoe because it can knock your head off, that would be regarded as silly.”
