
Virtual State: How AI Is Rewriting Society and Will Humans Retain the Right to Veto?

ZN.UA
© Generated at the author's request

The virtual state is already here—and you may already be part of it. Avatars, digital twins, smart assistants and virtual influencers—yesterday they seemed like science fiction, but today they shape our decisions, moods and even political choices. Increasingly, technologies are not only imitating humans but also making decisions on their behalf, forming a new reality—simulated, algorithmically controlled, yet resembling life. We are entering a phase where the boundary between virtual and real is becoming blurred. Thus, the question arises: do we control technology or does it control us?

Let us try to understand how artificial intelligence and new actors of the digital world (avatars, electronic personalities, digital simulacra) model society, transforming the Internet into a space of governance, experimentation and manipulation.

The concept of the avatar has deeper roots than it might seem at first glance. In Sanskrit, avatar means “incarnation,” and in Hindu mythology it is the earthly form of the god Vishnu, who comes to solve humanity’s problems. In the digital age, this term has taken on new life: it now denotes users’ digital self-presentations in a virtual environment, i.e., their online incarnations. Depending on technical sophistication, it may be a symbolic image or a detailed digital copy with rights recognized by law.

Alongside avatars, a more complex phenomenon is emerging: the electronic personality. This is no longer just a graphic shell but a system capable of modeling the behavior of a person or even a group. In the European Union, debates are ongoing about whether such “personalities” should be granted legal status similar to that of humans. After all, they influence society through the effects of the “echo chamber”—when one hears only the echo of their own thoughts, amplified by algorithms—and the “digital hive,” where thousands of users act as a collective mind in digital space. People feel like part of a large group and begin acting not as individuals but like a “buzzing hive,” similar to a flash mob on social media. This is how collective moods and mass consciousness are formed.
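The echo-chamber mechanics described above can be sketched in a few lines of Python. This is a toy model under obvious simplifying assumptions: opinions are numbers on a [-1, 1] scale, and the recommender simply shows the user the items nearest their current view. The `recommend` and `simulate` functions are illustrative names, not any real platform's API.

```python
import random

def recommend(opinion, items, k=3):
    """Pick the k items closest to the user's current opinion (the filter bubble)."""
    return sorted(items, key=lambda x: abs(x - opinion))[:k]

def simulate(opinion, items, steps=50, lr=0.1):
    """Each step, the user sees algorithmically selected items and drifts toward their mean."""
    for _ in range(steps):
        shown = recommend(opinion, items)
        opinion += lr * (sum(shown) / len(shown) - opinion)
    return opinion

random.seed(0)
items = [random.uniform(-1, 1) for _ in range(100)]  # opinions of available posts
final = simulate(0.5, items)
print(final)  # stays near 0.5: the user mostly hears the echo of their own view
```

Because the selection step always filters for similarity, the simulated user's opinion barely moves no matter how diverse the underlying pool of posts is, which is the "echo of one's own thoughts, amplified by algorithms" in miniature.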

Another dimension of digital society is humanoids. These are robots with human appearance, physical or virtual, that reproduce verbal and nonverbal interactions. They combine the features of avatars and conversational agents (such as Siri, Alexa or Google Assistant) and are increasingly used in medicine and social services to accompany elderly people, provide psychological support and even teach. In this way, the humanization of machines is gradually entering our daily lives, altering the dynamics of traditional human relationships.


Humanoid (Generated at the author's request)

The phenomenon of digital influencers (e-VI) deserves special attention. They are created by corporations using artificial intelligence and computer graphics, run accounts on social networks, collaborate with brands and advertise goods or services. Their function is to influence audiences on social networks, often more precisely and effectively than real bloggers. These digital objects are fully controlled by teams of marketers, designers and analysts and form “digital echo chambers” capable of reaching tens of millions of followers.

A new type of electronic personality is the artificial moral agent (AMA). These are AI systems designed to make decisions based on moral principles. Their goal is to act not only effectively but also ethically. Today this is more of a laboratory concept than a reality, but the discussion of whether machines can be carriers of morality and legal obligations is already paving the way for a new paradigm: digital ethics integrated into the legal sphere.

And finally, there are simulacra. Today this is one of the most intriguing phenomena of digital civilization. These are highly realistic digital models of people, capable not only of reproducing appearance or voice but also of imitating behavioral characteristics, reactions and even psychology. No longer limited to a simple “mirror reflection,” they interact with their environment. Essentially, they are a kind of “digital twin” that opens up new opportunities for experimenting with social, economic and legal processes.

People and simulacra (Generated at the author's request)

Thanks to simulacra, it is possible to test new governance models, evaluate draft laws, and predict society’s reactions to crises. Imagine a state that, before making an important decision, runs it through a digital model of society and obtains several scenarios of how events might unfold. Such an approach would make politics and law much more evidence-based and governance more scientifically grounded.

However, there are also serious risks. Creating highly realistic models requires enormous amounts of personal data—from medical indicators to emotional reactions on social media. This opens the way for total control, manipulation and abuse. There is a danger that simulacra may become not a forecasting tool but a means of shaping public opinion. In other words, instead of a laboratory of the future, it could turn into a factory of illusions.

The first attempt to model human behavior using AI was the Wuhan Experiment (2023), during which hundreds of algorithmic copies (simulacra) of citizens of different countries were created, and a technology called the “Chinese room of increased complexity” was applied. This is a multi-level system that allows the reproduction of cultural, political and social characteristics of specific societies. As a result, a virtual model of the US electoral process was built, which predicted the outcome of the 2024 presidential election with 99 percent accuracy. In effect, this meant that digital twins became “electoral simulacra” capable of predicting collective choice better than traditional opinion polls.

Equally revealing is the Stanford Experiment (2024), conducted jointly by Stanford University and Google DeepMind. Researchers selected 1,000 real Americans, conducted in-depth interviews with them and used this data to create individual digital simulacra of their consciousness. This was followed by a comparative test: the same individuals and their digital doubles completed sociological questionnaires and behavioral experiments. The results are striking: human and twin responses matched in 85 to 98 percent of cases, which elevates the simulacrum to the level of a tool for psychological and political forecasting.
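The headline figure in such comparisons boils down to a response-agreement rate: the share of questionnaire items on which a person and their digital double gave the same answer. The sketch below, with made-up answers, shows how such a rate is computed; it is not the study's actual code or data.

```python
def agreement_rate(human_answers, twin_answers):
    """Share of survey items on which a person and their digital twin answered the same."""
    if len(human_answers) != len(twin_answers):
        raise ValueError("answer lists must be the same length")
    matches = sum(h == t for h, t in zip(human_answers, twin_answers))
    return matches / len(human_answers)

# Hypothetical 10-item questionnaire: the twin reproduces 9 of 10 answers.
human = ["yes", "no", "no", "yes", "yes", "no", "yes", "yes", "no", "yes"]
twin  = ["yes", "no", "no", "yes", "no",  "no", "yes", "yes", "no", "yes"]
print(agreement_rate(human, twin))  # 0.9
```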

A new step in this direction is the Centaur model, which combines language technologies with psychological knowledge bases. It is capable not only of conducting dialogue but also of modeling complex behavioral scenarios. This opens the way for the use of simulacra in strategic planning, education and even policy development.

One of the platforms where such mechanisms of the future are already being tested today is the Metaverse—a global virtual space where millions of users can interact with each other and with digital objects in a format as close to reality as possible. It is not merely a video game or social network but an entire universe with its own economy, culture, legal norms, and even the possibility of creating new forms of statehood and identity. The Metaverse is now viewed as the next stage in the development of the Internet—a kind of Web 3.0/4.0, where virtual environments are combined with the physical world through virtual reality, artificial intelligence, blockchain and Internet of Things technologies. In the Metaverse, people can study, work, buy and sell goods, create art and even take part in political processes.

Virtual and augmented reality (Generated at the author's request)

The concept of the Metaverse was first introduced by American writer and futurist Neal Stephenson in his 1992 science fiction novel Snow Crash. In the book, he described a virtual world where people interact through avatars, and the Metaverse itself functions as a global virtual reality with its own rules and architecture.

It was Stephenson who laid the foundation for the concept: a three-dimensional space in which digital bodies (avatars) replace physical presence, and the boundary between the virtual and real worlds gradually disappears. His artistic vision proved so compelling that it inspired subsequent generations of engineers, entrepreneurs and researchers.

The Metaverse is increasingly populated by digital actors and objects (avatars, electronic personalities evolving into simulacra) that interact not only with virtual space but also with the real world. The platform for this interaction is the exchange of big data, which becomes the “information fabric” of digital society. This process is fueled by billions of “smart” Internet of Things devices—from sensors in “smart cities” to personal gadgets. They can continuously collect data, which, through transit hubs, is transmitted to powerful data centers. There, AI algorithms structure the information, identify patterns and create behavior models. It is from this data that simulacra are formed—digital representations of individuals, groups or even state institutions.


A person learns in the Metaverse (Generated at the author's request)

The information fabric of digital society consists of two major components—identification and personal data, existing in the form of static and streaming information. Identification data are attributes of actors and objects in both physical and digital environments. Personal data, in turn, are a subtype of identification data but belong exclusively to natural persons. These attributes are often static, immutable and unique. Examples include a car’s VIN code, IMEI (International Mobile Equipment Identity)—a unique international identifier for mobile devices, the DNA of a living organism, the iris pattern or palm vein structure and the shape of the skull or skeleton.

Static information is data recorded on physical media, from clay tablets and parchments to books, newspapers and electronic registers; it remains unchanged over time. Streaming information is dynamic data constantly generated, transmitted and recorded by sensors, services or platforms and then passed through communication channels for processing, analysis, use, storage or destruction, depending on purpose and context. It includes, for example, Internet of Things telemetry (automatic real-time data transmission from smart devices such as sensors, meters and appliances), log files (electronic journals recording actions, errors and events in system operation), video camera recordings, network traffic and event streams from platforms. In the digital age, identification and personal data are becoming the foundation for simulation constructs and algorithmic influence, turning into the new "digital raw material" of the modern world. They therefore require transparent and accountable processing, and fair access to infrastructure, in order to minimize risks.
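The static/streaming distinction can be made concrete with a minimal Python sketch: a fixed identification attribute (an IMEI-style device identifier) anchors a continuous stream of telemetry events, which an algorithm then aggregates. The record types, the identifier value and the temperature readings are all hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class StaticRecord:
    """Identification data: fixed once written (e.g., a device's IMEI)."""
    device_imei: str

@dataclass
class TelemetryEvent:
    """Streaming data: continuously generated readings tied to the same device."""
    device_imei: str
    timestamp: datetime
    temperature_c: float

def mean_temperature(events):
    """Aggregate a stream of events into a single analytic value."""
    readings = [e.temperature_c for e in events]
    return sum(readings) / len(readings)

device = StaticRecord(device_imei="000000000000000")  # placeholder identifier
stream = [
    TelemetryEvent(device.device_imei, datetime.now(timezone.utc), t)
    for t in (20.5, 21.0, 21.5)
]
print(mean_temperature(stream))  # 21.0
```

The immutable record never changes; the event list grows indefinitely and only becomes useful once processed, which is exactly the asymmetry the paragraph above describes.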

In the Metaverse, data-driven simulacra can represent a specific person, party, government body or global entity—from State Metaworld to Megametaverse. Their hierarchy allows the reproduction of multi-level social and political interactions: from everyday conflicts to international crises.

At the core of this architecture is the Situational Modeling Center—a kind of analytical hub of the Metaverse. This is where the behavior of various social groups is simulated in modeled scenarios: from reforms and elections to crises. For public administration, this opens up new opportunities: testing laws and policies before their implementation, assessing public reactions and balancing security with human rights.
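At its simplest, what such a center does is run agent-based scenarios. The toy simulation below, a crude majority-influence model with invented parameters, shows the general idea of replaying the same draft policy against different initial levels of public support; it makes no claim to match any real modeling platform.

```python
import random

def simulate_reform(support_share, n_agents=1000, steps=20, seed=1):
    """Toy agent-based run: each step, every agent adopts the majority view
    of a small random sample of peers (a crude model of social influence)."""
    rng = random.Random(seed)
    agents = [rng.random() < support_share for _ in range(n_agents)]
    for _ in range(steps):
        new = []
        for _ in agents:
            sample = rng.sample(agents, 5)
            new.append(sum(sample) >= 3)  # adopt the sampled majority view
        agents = new
    return sum(agents) / n_agents

# Run the same draft policy through several initial support levels.
for share in (0.3, 0.5, 0.7):
    print(share, "->", simulate_reform(share))
```

Even this crude model reproduces a known dynamic: majority influence amplifies whichever side starts ahead, so a reform with 30 percent initial support collapses while one with 70 percent consolidates. A real situational model would add demographics, networks and issue-specific behavior on top of the same scenario-replay loop.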

At the same time, these tools present serious challenges. These include protecting privacy, preventing data abuse and developing legal standards that will define the limits of using simulacra for political or economic purposes. Without such safeguards, simulacra may turn from a forecasting instrument into a tool of manipulation.

The main questions are: who will control the Metaverse? Will it be corporations building the technological digital core, developers of virtual products or the state, which could act as the customer of the country’s virtual representation? What role will simulacra play in society? Will they remain auxiliary analytical tools supporting decision-making, or will they eventually become independent players—actors in public life influencing politics, economics or culture independently of real people?

Digital civilizations open the possibility of modeling policies, crises and social behavior, making governance more evidence-based. But the same capability cuts both ways: a model that predicts society's trajectory can also be used to nudge society toward a prescribed "future."

However, simulation carries critical threats: mass collection and potential leaks of identification and personal data; manipulation of public opinion through microtargeting and controlled simulations; substitution of reality with algorithmically constructed events; concentration of power in the platforms that hold the data, models and infrastructure; erosion of privacy and personal autonomy; unaccountable, opaque models operating without logging and auditing; and technological inequality of access between states and citizens.

To avoid such a scenario, humanity needs institutional “brakes and steering wheels”: transparent electronic jurisdiction, mandatory oversight of all types of AI and labeling of digital synthetics—a kind of “tag” showing that what you are looking at is artificially created content, not reality. Independent audits of artificial intelligence models are also required (so they do not become black boxes controlling human behavior), along with effective mechanisms for appeal and compensation.
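A "tag" for digital synthetics could, at its simplest, take the form of a signed provenance label attached to generated content. Real standards (for example, C2PA content credentials) use certificate-based signatures and much richer metadata; the sketch below is only a toy HMAC-based illustration, and every name and key in it is invented.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"registry-demo-key"  # hypothetical; in practice a platform-held signing key

def label_synthetic(content: bytes, generator: str) -> dict:
    """Attach a machine-verifiable 'synthetic content' tag to a media payload."""
    tag = {"synthetic": True, "generator": generator,
           "digest": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(tag, sort_keys=True).encode()
    tag["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return tag

def verify_label(content: bytes, tag: dict) -> bool:
    """Check that the tag is authentic and matches the content it claims to describe."""
    claimed = {k: v for k, v in tag.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, tag["signature"])
            and claimed["digest"] == hashlib.sha256(content).hexdigest())

image = b"...generated image bytes..."
tag = label_synthetic(image, generator="demo-model-v1")
print(verify_label(image, tag))        # True: authentic label, untampered content
print(verify_label(b"tampered", tag))  # False: content no longer matches the label
```

The point of such a mechanism is that the label travels with the content and any alteration, of either the content or the label, is detectable, which is what makes "this is synthetic" an enforceable statement rather than a voluntary caption.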

And most importantly, the human being must remain the decisive safeguard and center of legitimacy.

It is humans who define the goals of systems, preserve the right of human veto over automated decisions, and guarantee the priority of dignity and responsibility over efficiency. They also ensure inclusive access to digital goods. Only in this way will digital technologies become allies of democracy, rather than its substitute.

But if an "authoritarian will", like that of Hitler or Putin, is placed at the center of decisions, the very foundation of accountability and human rights collapses. In such a case, technology must institutionally and technically reinforce the mechanisms of checks and balances. Firstly, it can help cultivate ethically desirable personal qualities. AI models can support the development of "Einstein qualities" (scientific integrity, critical thinking, openness to testing hypotheses), "Exupéry qualities" (empathy, responsibility for "one's own planet", the ability to see the person behind the role) and "Reagan qualities" (sociability, institutional optimism, the ability to seek agreement within democratic processes). This is not an apology for these individuals but an emphasis on sets of competencies that reduce the likelihood of violence, the cult of power and manipulation. In my view, it is precisely these sets of competencies that need to be systematically integrated into digital learning and civic education environments, a kind of "digital career guidance" with the help of AI.

Secondly, simulacra (digital representations of personalities) can become an ethical “training ground” for character. How will this work? Immersive XR scenarios teach us to see the consequences of violence and propaganda. AI tutors reinforce critical thinking, scientific integrity and logical reasoning. “Sandboxes” of democratic governance allow the practice of argumentation, discussion, compromise and procedural justice. Simulations of ethical dilemmas in public policy with transparent decision-making models foster a culture of responsibility. This is not about control; it is about constructively redirecting potential.

If the system detects markers of risky attitudes (aggression, blending facts with propaganda, susceptibility to cults) at an early stage, the appropriate response should be mentoring, ethical simulations, cognitive training and media literacy, not a digital “ban.” It is better to have “reality engineers” focused on truth, compassion, and dialogue than “architects of violence” who reproduce historical catastrophes.

The strategic goal of the digital state is to create conditions under which simulations and AI enhance humanity, competence and democratic capacity, rather than nurturing a cult of violence.

Read this article in Ukrainian and Russian.
