The UK Government Wants To Test AI: Report


Why do we have to take exams or tests? So that someone higher up, someone more qualified, can judge how good we are in a certain subject or field, and whether we can do a good job in the position we were tested for. In life, you have to go through several tests: some academic, some sports-related, others mental or emotional. These tests are everywhere. Some have even called life a series of never-ending tests (not taking up that thread of reasoning now). Speaking of tests, the UK government is also considering a unique series of tests for artificial intelligence (AI) before passing new AI laws, according to a report by The Financial Times. The report is based on inputs from multiple people claiming to have knowledge of the matter. Can an AI model pass the tests to be considered worthy of the job it was meant to do?

The UK government plans to test the AI programs and models developed (and being developed) by some of the world leaders in the domain, namely ChatGPT maker OpenAI, Google, Microsoft, and Meta.

AI Test Criteria Could Be In The Works

As per the report, UK ministers are readying to publish the test criteria in the coming weeks. It would be a series of tests, the results of which would decide the government’s future stance on AI models from companies like OpenAI, Google, and others.

At the UK government’s global AI Safety Summit in November, firms like OpenAI, Google’s DeepMind, Microsoft, and Meta signed voluntary commitments on the safety of their products. They agreed to let the UK’s AI Safety Institute inspect their AI models before they are approved for users.

The report says that though such inspections may already be underway, it is not clear how the tests are conducted, or whether the AI companies would allow the UK government to examine every part of the AI models they have built.

The Important Tests AI Would Have To Face

One of the key tests involves checking whether the systems developed by the UK’s AI Safety Institute succeed in identifying the risks posed by artificial intelligence technology. If the systems fail, that would not bode well for AI and could push the government towards curbs.

Another trigger that could further embroil AI in legislative procedures is if “AI companies fail to uphold voluntary commitments to avoid harm.”

The UK’s Stance On Artificial Intelligence

Unlike the US and China, where governments are largely in favour of keeping tight control over AI, the UK reportedly does not want to create “specific AI legislation in the short term in favour of a light-touch regime, over fears that tough regulation would inhibit industry growth.”

According to the report by The Financial Times, the UK government’s tests on AI are part of “its response to a consultation on its white paper, published in March, that proposed splitting responsibility for AI regulation among existing regulators, such as Ofcom and the Financial Conduct Authority.”

As per inputs from one of the sources, the report says that before tabling any legislation, the UK government has to show that its move will lessen the risks posed by AI without hampering innovation.

The UK government has reportedly said that it’s “working closely with regulators to make sure we have the necessary guardrails in place — many of whom have started to proactively take action in line with our proposed framework”.

Recently, the UK Supreme Court refused to grant a patent to an AI machine. The court reasoned that a patent can only be granted to a human or an organisation, not a machine. An appeal over the same patent was earlier denied by the US Supreme Court.