SUPPORTING THE DEVELOPMENT AND CONTAINING THE DANGER OF ARTIFICIAL INTELLIGENCE: A GUIDE TO STATE POLICY
“Mark my words: AI is far more dangerous than nukes.” (Elon Musk)
Experts in the digital economy have been talking about AI (artificial intelligence — S.K.) for several years. In the first months of 2023, however, the number of people taking a direct interest in artificial intelligence quickly surpassed 100 million. This coincided with the market launch of the newest tool of its kind, ChatGPT.
ChatGPT proved to be an incredibly powerful conversational tool, a groundbreaking model in AI and natural language processing developed by the startup OpenAI. One important detail deserves mention: the core features of this and similar tools are free of charge for consumers.
AI had barely entered consumer use when analysts drew attention to its tremendous opportunities for business development, particularly in domains such as rapid creation of texts of almost any kind, marketing and sales, engineering, auditing and law, and R&D, for example accelerating drug discovery through a better understanding of diseases and of chemical structures.
British experts add to this list the quiet automation of many aspects of daily life. AI has already transformed nuclear fusion control and accelerated scientific progress, including the discovery of new technologies needed to combat climate change.
Given this rapid development, technologically advanced states are hurrying to introduce regulation at the national and international levels, particularly in view of the risks AI has created. It is already clear that some uses of AI can harm our physical and mental health, violate people's privacy, and undermine human rights, up to and including quite palpable apocalyptic scenarios.
By the beginning of 2024, the database of the Organisation for Economic Co-operation and Development (OECD) already contained more than 1,000 policy and strategy documents from 69 countries and territories, the EU included, which form the basis of AI regulation.
Specifically, there are 295 national strategies in various spheres, 67 initiatives to create AI monitoring bodies, 152 documents based on the results of expert discussions, etc.
These documents define AI regulation policy across 20 major areas: corporate and public administration, the digital economy, science and technology, industry and entrepreneurship, education and innovation, and others. The leading countries by number of such documents are the US (82), the UK (61), Germany (37), France (34), Japan (25) and China (22); a total of 62 have been adopted within the EU. As of early January 2024, the OECD website listed one document from Ukraine.
Among individual countries, let us first look at the US and the UK, along with China and the EU members, which set the key trends in AI regulation.
While the US takes a more decentralized approach focused on specific applications of AI, the UK exercises political leverage through the adoption and operation of policy strategies and other regulatory instruments.
How do they do it?
United States
Tens of billions of dollars invested by American investors in AI systems, intensifying technological competition with China, and potential problems for consumers and national security have inspired resolute bipartisan support among top US politicians for the regulation of AI platforms (compare this with the two parties' political conflicts over the particulars of support for Ukraine). In 2023 alone, House and Senate committees held almost three dozen hearings and introduced more than 30 AI-oriented bills, on top of the 50 laws and bills introduced over the past four years at the state level.
In June 2023, Senate Majority Leader Chuck Schumer (D), along with Senators Martin Heinrich (D), Todd Young (R) and Mike Rounds (R), released a new bipartisan proposal to develop legislation regulating AI.
The plan to increase US global competitiveness in AI development provided, at the same time, for appropriate protection of consumers and workers. To facilitate comprehensive AI legislation, a new procedural approach outside the normal legislative process was announced.
For this purpose, five policy principles were agreed upon: security, accountability, foundations, explainability and innovation (the “SAFE Innovation” framework).
In addition to bipartisan political agreements, several bipartisan AI bills were introduced related to: promoting leadership in AI research and development; protecting national security; promoting transparency; protecting election integrity; training the workforce; and promoting the use of AI by federal agencies.
On October 30, 2023, President Joe Biden, building on these joint political efforts, issued a comprehensive (60 pages long!) Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to address a range of risks associated with AI.
The order provides clear guidance for federal agencies on the appropriate and responsible use of AI, including ethical and legal considerations, as well as the potential impact of AI on government operations.
The order also provides for streamlining procurement of AI products, standardizing best practices, and partnering with industry experts to ensure responsible implementation of AI technologies. Special attention is paid to ethical considerations in government deployment of AI: public institutions must use AI in ways that meet ethical standards and respect human rights. The administration has also taken steps to promote the responsible development of AI systems, including securing voluntary commitments on AI safety and transparency from 15 of the largest technology companies.
Deadlines for compliance with Biden's guidance range from 30 to 365 days, with the latest falling just before the 2024 presidential election.
Notably, after the order was published, hundreds of follow-up publications appeared in the US, from the Congressional Research Service to the websites of influential law firms, on the practical aspects of its implementation.
United Kingdom
Today, the UK can be said to trail perhaps only the US in the number of documents defining AI policy, strategy and tactics.
Unlike the US, however, where legislation to manage AI risks has been proposed and passed, the UK government is currently in no rush to do so. Instead, in recent years the executive branch has developed and published a series of logically interconnected AI strategies, manuals and guidelines.
The Information Commissioner's Office (ICO) addressed the use of AI in 2020, publishing a draft Guidance on the AI auditing framework, which identifies risks to rights and freedoms and suggests mitigation strategies. The same office, in collaboration with the Alan Turing Institute, published the guide Explaining decisions made with AI.
That same year, the ICO published Guidance on AI and data protection, aimed at both compliance professionals and technology developers and users. In turn, the Department for Digital, Culture, Media and Sport brought out the National Data Strategy, describing best practices for the use of personal data.
In September 2021, the National AI Strategy was published, outlining a ten-year plan to build “the most trusted and pro-innovation system for AI governance in the world.” Among other things, the strategy called for launching an AI standards hub to coordinate UK engagement in AI standardization worldwide, and for working with leading researchers in national security and defense to prevent catastrophic risks from AI.
Later, the Algorithmic Transparency Recording Standard was released to help public sector organizations be more transparent about the algorithmic tools they use. At the end of the year, the Roadmap to an Effective AI Assurance Ecosystem was published; part of the ten-year plan set out in the National AI Strategy, it outlined the shape of the AI assurance ecosystem, including, among other things, the introduction of new legislation.
In line with the National AI Strategy, the AI Action Plan was published on July 18, 2022. The plan rests on three principles: investing in the long-term needs of the AI ecosystem, ensuring that it benefits all sectors and regions, and governing it effectively.
In its White Paper on AI regulation, published in March 2023, the UK government set out “a pro-innovation approach to regulating AI” and explained how it plans to deliver on that ambition.
In this document, the government states that it does not currently intend to introduce new legislation, so as to avoid excessive burdens on business. Instead, it emphasizes a cross-sector approach involving a number of regulators, built on five principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
It is worth noting a peculiarity of the political decision-making procedure in the United Kingdom: before a state body makes such a decision, a thorough report must be prepared that sets out, as clearly as possible (as we would say, clear even to a wedding photographer), the topic, the main problems the document is meant to solve, and the consequences of its approval. As an example, we recommend the House of Lords Library briefing of July 18, 2023, Artificial intelligence: Development, risks and regulation. In extremely straightforward language, it not only explains what artificial intelligence is but also gives thorough arguments about its potential impact on the economy, science and education, explains the possible risks, and notes the experience of other countries.
The key differences in the AI regulation policies of the other leading players (China and EU members) and of Ukraine, which now has a chance to start alongside the civilized world instead of playing catch-up as usual, will be outlined in the second part of this article.
This article was prepared with support from Cara's Fellowship Program