【Editorial】Europe’s AI Act Carries Great Significance in Protecting Democracy

【Original: March 15, 2024】

The European Parliament has approved the final draft of regulatory legislation concerning the development and use of artificial intelligence (AI) by a majority vote. This marks the world’s first comprehensive AI regulatory law, which is set to come into effect in 2026 following the agreement of member states.

While the use of AI is highly anticipated to drive economic growth, concerns such as the spread of election misinformation have been raised. The enactment of AI legislation in Europe based on democratic values carries significant importance.

Prohibit AI Use That Leads to Human Rights Violations

The AI Act aims to ensure the safety of AI systems to prevent the infringement of fundamental human rights and democracy, while also fostering the growth of European companies in this sector. AI risks are categorized into four levels and obligations are imposed on operators accordingly. AI use that could result in human rights violations, such as in social scoring systems or biometric authentication systems based on personal information like race and political beliefs, is prohibited.

When it comes to generative AI that automates text and image creation, the act prohibits the dissemination of content that runs contrary to democratic values. Businesses found in violation may face fines of up to €35 million (approximately ¥5.6 billion) or 7% of their global revenue. These substantial fines underscore Europe’s unwavering commitment to upholding democracy.

In the United States, the ongoing expansion of AI-related companies has spurred a surge in high-tech stock investments, with ripple effects reaching the Tokyo stock market where the Nikkei average recently hit a 34-year high. Nevertheless, there are apprehensions that the exploitation of AI by authoritarian states like China and Russia could pose substantial threats to national security.

During Taiwan’s presidential election in January 2024, a wide-scale misinformation campaign employing AI portrayed the eventual winner, then Vice President Lai Ching-te, as a dangerous separatist for advocating Taiwan independence. It is widely believed this was orchestrated by China in an attempt to defeat Lai, who has taken a tough stance against Beijing.

AI disinformation has also become a problem in the U.S. presidential election. Similarly in Japan, concerns about AI have persisted, exemplified by the spread of a fake video on social media depicting Prime Minister Fumio Kishida making sexually explicit remarks in November 2023.

At the May 2023 G7 Summit in Hiroshima, there was an emphasis on governing AI based on democratic values to achieve “trustworthy AI.” The following December, in a video conference, G7 leaders reached a final agreement on the “Hiroshima AI Process,” which serves as a framework for establishing international rules on generative AI and defining the responsibilities of developers and users. With this, the G7 is expected to take the lead in preventing the misuse of AI and addressing arbitrary use by countries such as China and Russia.

In February 2024, the Japanese government launched the AI Safety Institute (AISI) to investigate methods for evaluating the safety of generative AI. It aims to build evaluation methods in collaboration with relevant agencies in the United States and the United Kingdom to counter risks associated with misinformation generated by AI.

Legal Regulations Must Also Be Examined in Japan

In Japan, on the other hand, there are no laws or regulations specifically governing AI. Rather, the main approach has been corporate self-regulation.

This is due to prioritizing the development and spread of AI. However, given that threats posed by the misuse of AI are expected to proliferate in the future, the government needs to examine regulatory measures.
