Technology giants join forces to safeguard global elections from AI manipulation – 2024-02-21 01:58:19

By Times News CR


On the first day of the Munich Security Conference (MSC) in Germany, an unprecedented agreement was announced among some of the world’s largest technology companies. Google, Meta, OpenAI, Microsoft, TikTok and 17 other companies signed the ‘Tech Accord to Combat Deceptive Use of AI in 2024 Elections’. This voluntary agreement aims to establish joint actions to prevent AI from being used to manipulate elections.

The pact, announced by Christoph Heusgen, chairman of the Munich Security Conference, focuses on containing AI-generated content, such as images, audio and video, that falsifies or deceptively alters the appearance, voice or actions of political candidates and election officials. Although the agreement does not explicitly prohibit deepfakes, the signatories commit to developing technologies and methods to detect, monitor and label this type of content.

The agreement comes at a critical moment, in a year of historic global electoral participation. According to a report by The Economist, around 4 billion people will take part in democratic processes this year, yet only 43 of the 71 countries holding elections are expected to have fully free and fair ones.

Experts warn of the risks of using generative AI for electoral manipulation. The World Economic Forum has identified misinformation and political polarization as significant risks this year. Given these concerns, authorities in several countries have urged large technology companies to establish safeguards against the malicious use of AI. Under the accord, the signatories commit to the following eight actions:

  1. Develop and implement technology to mitigate risks related to deceptive election content created with AI systems, including open-source ones.
  2. Assess AI models within the scope of the agreement to understand the risks they may pose in producing deceptive election content.
  3. Detect the distribution of such material on their platforms.
  4. Take appropriate action to address deceptive content distributed on their services.
  5. Foster cross-industry resilience to deceptive election content.
  6. Provide the public with transparency about these mitigation efforts.
  7. Work continuously with academic and civil-society organizations to develop safeguards.
  8. Support efforts to foster public awareness, media literacy and society-wide resilience.

Although most major technology companies have announced measures to prevent the malicious use of AI, critics point out the lack of specific implementation deadlines. Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center, acknowledges the companies’ interest in protecting electoral processes, but cautions that the agreement is voluntary and that ensuring compliance will be crucial.

In short, this agreement among leading technology companies is an important step toward protecting the integrity of electoral processes against digital manipulation. Its effectiveness, however, will depend on how the agreed measures are implemented and on constant monitoring to ensure compliance.
