
SUPPORTING THE DEVELOPMENT AND CONTAINING THE DANGER OF ARTIFICIAL INTELLIGENCE: A GUIDE TO STATE POLICY. PART TWO

In the first part, we considered how the US and the UK set the leading trends in AI regulation. While the US takes a more decentralized approach, focusing on specific areas of AI application, the UK relies on an influence-based method, acting through policy strategies and other regulatory instruments.

The question at hand is how the same is done in the People's Republic of China, in the EU (the world's largest integration association) and in Ukraine.

 

China

“Scientific and technical ethics are the values and norms of behavior that must be followed when conducting scientific research, developing technologies and other scientific and technical activities.”

Communist Party of China

In recent years, China's steady investment in the AI industry and technology has driven the rapid growth of its AI economy. According to Stanford University's 2022 report, China continued to lead confidently in the total number of AI patent applications, conference and journal publications, and publications in artificial intelligence repositories. In addition, China took second place in the number of AI startups.

Five years prior, on July 20, 2017, the State Council, referring to the materials of the congress and subsequent plenums of the Central Committee of the Communist Party of China (proceeding without them would be a non-starter), proposed the New Generation Artificial Intelligence Development Plan. It outlined an ambitious strategy of bringing Chinese AI theory, technology and applications to the world's leading standards by 2030, “making China the world’s primary AI innovation center, achieving visible results in intelligent economy and intelligent society applications, and laying an important foundation for becoming a leading innovation-style nation and an economic power.”

On September 17, 2021, the Guiding Opinions on Strengthening Overall Governance of Internet Information Service Algorithms entered into force, aimed at the gradual establishment of an integrated management model for algorithm security, including a well-established regulatory system and a standardized algorithm ecosystem.

On September 25, 2021, the General Office of the State Council and the General Office of the Central Committee of the Communist Party of China (CPC) proposed the Ethical Standards for New Generation AI, which established six main principles: improving people's well-being; promoting fairness and impartiality; protecting privacy and security; ensuring manageability and reliability; strengthening accountability; and increasing ethical literacy. AI activities must also comply with 18 ethical standards covering governance, research and development, supply and use.

On March 20, 2022, the General Office of the State Council and the General Office of the Central Committee of the CPC proposed the Conclusions on Strengthening the Ethical Management of Science and Technology.

Based on these and other strategic documents, laws have been passed that enable China to effectively compete for first place in the global race for the development and application of AI.

In its What's Next series of predictions for 2024, MIT Technology Review notes that China's fragmented, piecemeal approach to AI regulation is gradually changing. Under that approach, the state introduces, for example, one set of rules for algorithmic recommendation services of TikTok-like systems, another for deepfakes, and yet another for generative AI.

But in June 2023, aiming at a more long-term and comprehensive perspective, the State Council of China announced that work would begin on a comprehensive AI law, following the example of the EU AI Act. Given the ambition of the draft law, it is difficult to predict whether it will be adopted in 2024.

On January 17, 2024, Reuters reported that China's Ministry of Industry had published draft guidelines for standardizing the AI industry, proposing to form more than 50 national and industry standards by 2026 and to participate in forming more than 20 international standards. This is planned in addition to the current requirement to register basic AI models.

 

European Union

The process of developing a common EU policy on AI started on April 10, 2018, when 25 European countries signed the Declaration of Cooperation on Artificial Intelligence. The member states agreed to work together “on the most important issues raised by Artificial Intelligence, from ensuring Europe's competitiveness in the research and deployment of AI, to dealing with social, economic, ethical and legal questions.” As early as April 25, the European Commission published its Communication “Artificial Intelligence for Europe.” It was this document that defined the EU's goals in the field and introduced an initiative to create a European AI Alliance.

The AI Alliance was originally created to manage the work of the High Level Expert Group on AI. The group's Ethics Guidelines, as well as its Policy and Investment Recommendations, were the first important documents that shaped the concept of sound AI regulation. This work was based on a combination of input from experts and feedback from professionals.

At the first Assembly of the European AI Alliance in 2018, 500 participants were involved in the formation of EU policy on AI. And in October 2020, more than 1,900 participants joined the second Assembly of the European AI Alliance online.

By 2023, a number of other important acts of a political and strategic nature had been adopted. The most important among them is the White Paper on Artificial Intelligence: a European approach to excellence and trust.

The work on the regulation of AI in the EU is also characterized by large-scale coordination of the state's efforts with the expert environment, business associations and the public. Among the results of such efforts, one can name, in particular, the document Inception Impact Assessment: Ethical and Legal Requirements for AI.

In July 2023, 150 civil society organizations called on the European Parliament, the European Commission and the Council of the EU to put people and their fundamental rights first in the AI Act ahead of the trilogue negotiations. In particular, civil society organizations are pushing for a complete ban on predictive and profiling systems in the context of law enforcement and criminal justice, which they believe significantly undermine the right to non-discrimination.

And finally, after three years of political and expert discussions, the Council of the EU approved its position (“general approach”) on the comprehensive Artificial Intelligence Act. The proposed regulation aims to ensure that AI systems used in the EU are safe and respect the Union's existing law, fundamental rights and values. It was not until mid-December 2023 that the European Commission welcomed a political agreement on the text of the Act, whose key principle is a risk-based classification of AI systems: minimal risk, high risk, unacceptable risk, and specific transparency risk.

This document deserves a separate mention, with a detailed analysis of its content and the consequences of its adoption for Europe and the world. Indeed, after additional political battles, in the last days of January 2024, the final draft was published under the full title “Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts.” Published by Euractiv editor Luca Bertuzzi, the full text of the comparative table with the draft agreed in the trilogue (representatives of the European Commission, the European Parliament and the Council of the EU) on January 21, 2024 at 17:00 local time runs to 892 pages; the short, “clean” version has 258 pages.

European experts warn against excessive optimism regarding both the rapid impact of the content of the AI Act and the timing of its entry into force. In particular, the director of the Europe and Transatlantic Partnerships Department of the Open Markets Institute, an antitrust think tank, Max von Thun warned on January 30 this year: “Recent reports have raised the prospect that last year’s deal could still unravel due to opposition from a few stubborn member states. But putting this worrying possibility aside, the AI Act will still take several years to have a meaningful impact. The Act’s obligations for General Purpose AI Systems will not take effect until the middle of next year at the earliest, while most of its provisions will only apply from mid-2026 onwards.”

Meanwhile, Reuters reported on February 2 from Brussels: “Europe on Friday moved a step closer to adopting rules governing the use of artificial intelligence and AI models such as Microsoft-backed ChatGPT after EU countries endorsed a political deal reached in December.”

 

Ukraine

According to foreign experts, AI is being developed in Ukraine faster than in such technological giants as the USA and China. This is especially true of the military uses of AI.

Ukraine has also not remained aloof from global trends in AI regulation. On December 2, 2020 (a year and almost three months before the start of the full-scale war), the Cabinet of Ministers of Ukraine adopted the Order “On the approval of the Concept of the development of artificial intelligence in Ukraine.”

This document, as of the date of its adoption, looks quite modern and defines the purpose, principles and tasks of the development of artificial intelligence technologies in Ukraine as one of the priority directions in the field of scientific and technological research. At the same time, in the part on interstate and intergovernmental cooperation, the text contains references to only three international organizations, the experience of which may be used by Ukraine.

The zakon.rada.gov.ua portal contains no links to other government documents that may have been adopted to implement the Concept, nor any interim reports on its implementation, even though it is the key document defining state policy on AI regulation. However, the rapid development of national AI technologies will likely lead to a national AI regulation strategy that takes into account the best world experience.

Meanwhile, in the other leading AI countries (there are already dozens of them), large-scale joint organizational efforts and huge public and private investments go into developing AI and applying it in industry, agriculture, health care, science and education. A legal system of AI regulation is rapidly taking shape at both the national and international levels.

Taking into account its technological lag behind advanced countries in these fields at the start of the AI race, Ukraine could compete not only on the principle that “the main thing is not to win, but to participate,” but also for an honorable place among the leading countries. An informal, proactive state approach is the key to a technological leap into the future. Even during the war.

 

This article was prepared with support from the Cara Fellowship Program.
