Legal

EU’s AI Act to impact foreign companies with compliance costs

The European Union has published the final text of the Artificial Intelligence Act, outlining the critical deadlines for complying with the world’s first comprehensive AI rulebook. Following approval from the EU Council in May, the legislation will come into force on August 1, 2024.

The AI Act aims to prevent threats from ‘high-risk’ AI to democracy, human rights, the environment, and the rule of law. However, Chinese companies anticipate spending more time and money to comply with the new AI regulations. Patrick Tu, co-founder and CEO of Hong Kong-based Dayta AI, anticipates that his company’s costs will go up by 20% to 40%.

EU mandates AI regulatory sandboxes

AI developers based in the EU must implement the Act’s provisions by August 2, 2026. By the same date, each EU member state must establish at least one AI regulatory sandbox at the national level. These sandboxes will allow developers to test AI systems within a defined legal framework, so that regulation does not hinder the development of the technology.

An earlier deadline of February 2, 2025 applies to developers, providers, and users of biometric systems. From that date, “unacceptable risk” AI applications are prohibited, including biometric categorisation based on sensitive traits, emotion recognition in workplaces and educational institutions, and the mass scraping of facial images to build facial recognition databases. Limited exemptions apply to law enforcement under certain conditions.

The newly created AI Office must issue codes of practice for AI providers by May 2, 2025. These codes will illustrate how providers can demonstrate their compliance with the Act. Providers of general-purpose AI systems, such as ChatGPT, will be required to observe copyright and transparency rules starting in August 2025.

Additionally, developers of high-risk AI systems covered by Annex I of the AI Act have until August 2, 2027, three years after the Act enters into force, to meet the additional requirements it sets out. High-risk AI applications such as remote biometric identification will remain permitted on the EU market, provided those requirements are met.

Act imposes steep penalties for violations

Any company that fails to comply with the Act’s provisions will face stiff penalties. Fines can reach €35 million (US$38 million) or up to 7% of the company’s total revenue for the preceding year, whichever is higher.

Emma Wright of the law firm Harbottle & Lewis commented on the Act’s introduction, saying: “The EU AI Act is the first significant attempt to regulate AI in the world – it remains to be seen whether the cost of compliance stifles innovation or whether the AI governance model that it establishes is a flagship export for the EU.”

The rate of AI advancement, particularly with the recent release of generative AI such as ChatGPT, has far exceeded the rate of regulation. Several countries and trade blocs have been working on legal frameworks to regulate the use of AI.

