The Bletchley Declaration: further evidence of the global legislative focus on Artificial Intelligence

At the start of November, the UK government hosted an AI Safety Summit at Bletchley Park, intended to provide a forum for in-depth discussion of the challenges and opportunities presented by AI technology. It attracted a wide array of senior stakeholders from across policy, business, academia and government, and appears to signal a UK desire to lead in this area.

The opening day of the Summit saw the unveiling of the Bletchley Declaration (the Declaration) – a non-binding commitment agreed by the 28 countries whose delegates attended the Summit, including the USA, China, India and the EU.

Although a voluntary declaration, lacking any real “legislative teeth”, the Declaration does signify an important step towards greater international collaboration on both AI development and its regulation.

It is a prevailing sentiment among policy stakeholders that, for any regulatory framework for AI to be effective, some form of international regulatory cooperation will be needed. The Declaration lays the groundwork for that further collaboration between countries.

The substance of the Declaration includes much of what might be expected in such a high-level policy document. It notes the enormous opportunities which AI could create across a number of areas of human life (including transport, education, health and justice), while also recognising the significant risks and potential harm to those same areas if the technology is mishandled or developed without proper levels of transparency and human rights protections. Equally, however, the Declaration stresses the need for pro-innovation and proportionate governance so that the benefits of AI can be maximised.

Though not yet finalised, the EU’s proposed AI Act (see our recent overview of the Act here) is one of the most developed AI regulatory frameworks currently under consideration and it is interesting to see its possible influence on the Declaration. This may indicate a willingness by the global community to follow the EU’s lead in this area.

For example, the latest version of the EU’s AI Act inserted a new focus on the development of “foundation models” of AI. Foundation models are large, highly capable AI models, trained on broad data, which can be adapted to a wide variety of tasks. The Declaration seems to echo this focus, specifically highlighting the unique safety risks arising in respect of such foundation models.

The EU’s AI Act also proposes a risk-based approach to regulation – the greater the risk attaching to a particular use of AI, the more intensive the regulatory obligations. Although stopping short of proposing a similar system, the Declaration repeatedly underscores the importance of risk analysis in respect of different AI uses and promotes the formulation of risk-based policies.

However, in contrast to these indications that the EU’s legislative approach may be followed more widely, the UK’s technology minister, Michelle Donelan, speaking after the Summit on 9 November, rejected calls for the UK to introduce AI-specific legislation. She explicitly noted that the UK government did not intend to implement a “copycat of EU legislation” in respect of AI, asserting instead that the UK’s existing regulators in the areas of data privacy, competition, communications and financial services were well placed to address emerging AI issues.

Accordingly, achieving international consensus on a more detailed regulatory framework beyond the Declaration remains, for now, a somewhat distant prospect.

