European Union’s AI Act
The rapid development of artificial intelligence (AI) has created both opportunities and challenges across various sectors, including intellectual property (IP). In response to these technological advancements, the European Commission proposed the Artificial Intelligence Act (AI Act) in April 2021, aiming to strike a balance between fostering innovation and safeguarding fundamental rights and values. The European Parliament and the Council reached political agreement on the proposal in December 2023, the Act was formally adopted in 2024, and it entered into force on 1 August 2024, with its obligations applying in phases thereafter.
The EU’s AI Act is the world’s first comprehensive regulation specifically designed to govern AI systems. Its primary goal is to ensure that AI is developed and used ethically and responsibly within the EU’s internal market, which it pursues through a risk-based approach: AI systems are classified according to the risks they pose. The Act distinguishes four risk levels: unacceptable risk, high risk, limited risk, and minimal risk. AI systems deemed to pose an unacceptable risk, such as government-led "social scoring" systems, are banned outright. High-risk systems, such as AI used in medical devices or critical infrastructure, are subject to stringent conformity assessments and oversight.
While the AI Act focuses on the safety, accountability, and transparency of AI, it also carries significant implications for IP law. AI is capable of generating original output, from art and music to technical inventions, which challenges traditional concepts of authorship, patents, and other IP rights. The intersection between AI and IP law is particularly complex, as traditional IP regulations are rooted in the idea of human creativity. Modern AI algorithms, however, can autonomously generate content, raising fundamental questions about who owns AI-produced creations and who is responsible for potential infringements of existing IP rights.
Copyright law, for instance, traditionally protects creative works such as literature, music, and art on the premise that the creator is a human. Yet AI can now independently compose music, create visual art, and generate literary texts. This raises significant questions about who should hold the copyright in such works. Some legal scholars argue that the rights should belong to the programmer or owner of the AI, while others advocate revising copyright law to recognize AI as a "creative entity."
In Europe, there are currently no prominent cases in which copyright has been denied solely because a work was generated by AI, as the legal framework for AI-generated works remains underdeveloped. European jurisdictions, including the European Union, generally recognize only human authors as copyright holders.
A notable case that indirectly addresses non-human authorship is the “Monkey Selfie” case, which exemplified the limitations of copyright for non-human creations. Specific to AI is the case of Dr Stephen Thaler's "Creativity Machine." In 2018, Thaler filed a copyright application in the United States for a work autonomously generated by this AI system, designating the AI as the author. The U.S. Copyright Office refused registration, ruling that the Copyright Act protects only works of human authorship, meaning the AI-generated work did not qualify for copyright protection.
Inventions generated by AI raise even more pressing questions concerning ownership. A prominent example of a patent being denied because the invention was developed by AI is the case surrounding the "DABUS" system (Device for the Autonomous Bootstrapping of Unified Sentience), an AI system also designed by Dr Stephen Thaler. DABUS independently created two inventions: a food container with a fractal-shaped design and a warning system based on light pulses. Thaler filed patent applications in several jurisdictions, including at the European Patent Office (EPO) and the United States Patent and Trademark Office (USPTO), listing DABUS as the inventor. Both the EPO and the USPTO rejected the applications, ruling that patent law requires inventors to be natural persons and that AI systems therefore cannot be recognized as inventors.
The AI Act does not include explicit provisions addressing IP law, but its rules could indirectly influence how IP rights are granted and enforced. A key aspect of the Act is its transparency requirements: high-risk AI systems are subject to documentation and oversight obligations, providers must disclose when content has been artificially generated or manipulated, and providers of general-purpose AI models must adopt a policy to comply with EU copyright law and publish a summary of the content used to train their models. Developers of systems that generate creations eligible for IP protection may therefore be required to disclose the extent of the AI’s involvement in a work or invention, which could assist in establishing ownership rights.
Additionally, should certain AI applications used in the arts and media fall within the Act's stricter regulatory categories, even tighter rules would apply. This would place greater responsibility on AI developers and owners to respect IP rights, for example by ensuring that AI-generated content does not infringe upon existing works without proper permission.
The European Union's AI Act marks a significant step in the regulation of AI technologies and their impact on society. Although the legislation primarily addresses ethical and safety concerns, it also carries implications for intellectual property law. As AI systems become increasingly autonomous and capable of generating new creations, fresh legal questions about IP, authorship, and patent rights arise. While the AI Act does not provide direct answers to these challenges, it encourages reflection on the need to adapt IP law to account for AI's growing role in the creative and innovative process. The coming years will be critical in shaping how both the legislation and IP frameworks evolve to accommodate this new reality.
For expert guidance on the most effective strategies to safeguard your innovations, please contact us at info@dcp-ip.com. We are here to guide you towards the best ways to protect your valuable IP.