ARTIFICIAL INTELLIGENCE
SECRETARY GENERAL OF THE COUNCIL OF EUROPE AT THE BLETCHLEY PARK AI SUMMIT


Even those not inclined towards anglicisms will be forced to learn the word ‘deepfake’. Coined in 2017, this compound of ‘deep’ (as in deep learning) and ‘fake’ denotes a technique for synthesizing human images with artificial intelligence. In essence, it makes it possible to create photos and videos of people who look real but do not exist, or false footage of celebrities and ordinary people doing things they never dreamed of doing. The technique is also being perfected for voices: it is already possible to reproduce anyone's voice saying anything, in a language of your choice. We cannot therefore rule out the imminent release of a "new" recording of Elvis Presley singing an Italian song in Malayalam, or a speech by US President Joe Biden on offside in the VAR era. Inevitably, the imagination of the more "astute" has already been put to work creating fake videos that depict celebrities, or ex-girlfriends, in intimate moments that never happened.
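To give a concrete, deliberately simplified idea of what "synthesis of the human image" means in practice, here is a minimal sketch, assuming the PyTorch library: the skeleton of a generator network of the kind used in generative adversarial networks (GANs). Trained on large collections of photographs, such a network turns random numbers into faces of people who have never existed; untrained, as below, it only produces structured noise. The architecture is purely illustrative and does not come from any summit material.

```python
# Minimal, untrained sketch of a GAN-style generator (PyTorch assumed).
# Trained on a large face dataset, networks of this shape can produce
# photorealistic faces of people who do not exist; untrained, as here,
# the output is just noise.
import torch
import torch.nn as nn


class Generator(nn.Module):
    def __init__(self, latent_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            # 100-dimensional noise vector -> 4x4 feature map
            nn.ConvTranspose2d(latent_dim, 256, kernel_size=4, stride=1, padding=0),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            # 4x4 -> 8x8
            nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            # 8x8 -> 16x16
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            # 16x16 -> 32x32 RGB image with values in [-1, 1]
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


if __name__ == "__main__":
    generator = Generator()
    z = torch.randn(1, 100, 1, 1)   # a random "identity" vector
    fake_image = generator(z)       # tensor of shape (1, 3, 32, 32)
    print(fake_image.shape)
```

The point of the sketch is only this: the "person" in the output is determined by nothing more than a vector of random numbers, which is why such images can be produced at will and at scale.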

From satire to cyberbullying, from fake news to scams, from cybercrime to hoaxes, everything can pass through the deepfake filter, which does not make things true but does make them look credible. And it was precisely to underline the need to define and recognize the boundary between the authentic and the false that the Secretary General of the Council of Europe, Marija Pejčinović Burić, took part in the AI Safety Summit at Bletchley Park, in the United Kingdom. The Secretary General reiterated her intention to continue working with member and non-member states, as well as with civil society and private-sector organizations around the world, to overcome cross-border challenges and prevent discrimination. Marija Pejčinović Burić also highlighted the dangers of deepfakes being used in political campaigns as tools of manipulation and disinformation.

The meeting resulted in a joint declaration, the Bletchley Declaration, signed by countries determined to understand and manage potential risks collectively, through a global effort to ensure that artificial intelligence (AI) is developed and deployed safely and responsibly for the benefit of the global community. First, strong emphasis was placed on international cooperation to navigate a complex security landscape. The declaration also calls for high safety standards in the design, development and deployment of AI systems. Participants further highlighted the importance of transparency and accountability in AI systems, and the need for a climate of shared research to accelerate the global understanding and mitigation of risks. But the most important point, and at the same time the greatest difficulty, remains, as ever, the moral compass that must guide humanity's actions: in essence, the absolute need for AI technologies to respect human rights, privacy and democratic values.

The same could be said of any creation of the human mind, from the wheel to the Space Shuttle. And, as always, there will be someone who wants to control the latest discovery or invention. The clash has already begun, and here is the most striking example: just as an AI laboratory that seems to follow in OpenAI's footsteps is being set up in Europe, the Californian company fired its CEO, Sam Altman, on the spot. Founded in 2015, OpenAI was one of many non-profit experiments around the world. After it shot to fame late last year with the public release of ChatGPT, a powerful and versatile natural-language tool that uses machine-learning models to generate human-like responses, the first cracks began to appear in its handling of global success. Altman was removed on the grounds that he had not always been «candid in his communications with the board of directors, hindering its ability to exercise its responsibilities». Whatever actually happened, it is clear that managing such powerful tools is never simple.
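For readers who have never seen it from the developer's side, this is roughly how an application asks a ChatGPT-style model for a "human-like response". It is a minimal sketch assuming the official openai Python package (version 1 or later) and an OPENAI_API_KEY environment variable; the model name and the prompt are only examples.

```python
# Minimal sketch of querying a ChatGPT-style model via the OpenAI API.
# Assumes: `pip install openai` (v1+) and an OPENAI_API_KEY environment
# variable; the model name below is only an example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain in one sentence what a deepfake is."},
    ],
)

print(response.choices[0].message.content)  # the model's human-like reply
```

A few lines of code are enough to put this generative power into any product, which is exactly why the question of who governs it, at OpenAI or anywhere else, matters so much.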

Artificial intelligence will certainly be regulated, and sanctions will be put in place for those who do not respect the agreements. The question that remains open, as with human rights, is always the same: who will enforce those agreements?
