
CLASS 46


Now in its twelfth year, Class 46 is dedicated to European trade mark law and practice. This weblog is written by a team of enthusiasts who want to spread the word and share their thoughts with others.

MONDAY, 14 DECEMBER 2020
Trade marks and AI: Liability and damages

In the second article in a series looking at AI and trade marks, Gabriele Engels discusses who is liable when an AI gets something wrong.

Whilst once, "every revolution was first a thought in one man’s mind" (Ralph Waldo Emerson), this may no longer hold true for what we are currently experiencing: the “Fourth Industrial Revolution”.

With the arrival of smart technology and artificial intelligence (AI), certain developments and decision processes are no longer the product of “one man’s mind”, but the result of ever-adapting algorithms, whose technological decision-making processes seem incomprehensible to the average person.

This has led to issues surrounding causation and proof, as well as the emergence of new, corresponding risks and questions about how they should be handled. In short, one of the most pressing questions today is: who is liable when AI gets something wrong? Can responsibility simply be assigned to the programmer or developer of the technology, or to the manufacturer or deployer? Or is the user, or even the AI itself, to blame?

Liability issues

Causation: Concerns arise in cases where there is spatial or temporal distance between the AI process and the damage done, as well as where multiple causes – at least one of which is AI – come together. However, most cases involving AI only appear to pose specific, novel causation issues; the majority could be resolved by applying the existing rules of causation, under which the fault or defect must have caused the damage.

Where rights are infringed through the incorrect use of AI, it is often theoretically possible to trace the infringement back to a responsible party, even if these results only become apparent at a much later stage. In practice, however, the series of complex causal events which lead to the specific infringement is difficult to reconstruct and therefore difficult to prove.

Additionally, as with non-AI related cases, causation becomes increasingly complex where a single infringement has several possible causes. For example, in addition to a faulty AI process or programming errors, the damage may also have been caused by the input of incorrect data by other people or products. Where it cannot be determined whether these actions caused the infringement alternatively or cumulatively, the ambiguity might lead to a negation of liability for each of them, as each cause must be proven and none can be.

More often than not, the crucial issue is therefore evidentiary, rather than one of pure causation.

Evidentiary issues: Evidentiary issues arise due to the opaqueness of autonomous systems and artificial neural networks. The processes behind the self-learning, probabilistic and sometimes unpredictable behaviour, as well as the lack of transparency which results from different AI systems communicating, networking and interacting with one another, often make it extremely difficult, if not impossible, to identify where the fault or defect lies.

Possible solutions could include collecting more data, from which one could glean the factors and causal progressions that lead to certain outcomes of AI applications. From a legal perspective, the introduction of presumption rules to ease or even reverse the burden of proof, or the establishment of a duty of care regime, could alleviate the struggle of injured parties to prove that an AI was responsible for the damage done.

Risk allocation: In theory, various existing models of liability could be applied to allocate responsibility for AI. These models range from the general duty of care standard, through the strict liability of either the user or the producer, to an insurance-based model. Naturally, in practice one must differentiate according to the kind of AI being used, as the user's ability to influence the process decreases with increasing automation and independent learning capability.

Applying the general duty of care standard could shift liability for errors and consequent damage caused by AI to the humans “behind” the machine. For instance, using AI without the necessary diligence could be seen as a breach of duty by the user, where the damage then created, seemingly independently, by the AI is a foreseeable consequence. Similarly, where an AI system is found to be insufficiently controllable (i.e. it truly makes independent, unforeseeable decisions), the mere act of releasing it onto the market could constitute a breach of the producer's duty of care. In practice, this last scenario may be comparable to the strict liability producers face for defective products.

Trade marks and AI

Read the previous posts in this series on the Class 46 blog:

Trade marks and AI: Is Alexa the new “average consumer”? (30 November 2020)

Introduction: What AI means for brands (16 November 2020)

This raises the question of whether one could not simply resort to the Product Liability Directive. However, irrespective of whether software can be classified as a product, the strict producer liability provided for in the Directive cannot take account of the ever-changing nature and varying standards of AI. The Directive only covers infringements rooted in attributes the product had when it was first put into circulation. The self-learning features of autonomous systems, as well as frequent software updates, mean that the very nature of the product changes continuously and new risks arise later, making the Directive inapplicable.

Another possible solution could lie in an insurance-based approach, i.e. the introduction of compulsory insurance for all – or at least certain forms of – AI systems. Once it is clearly defined which damage is presumed to have been caused by AI and is therefore covered by the insurance, the evidentiary and causation issues would become irrelevant. This is the approach taken by the European Parliament in its recent Regulation Proposal with regard to the liability of deployers of high-risk AI technologies.

EU and national approaches

In various publications, such as the White Paper on Artificial Intelligence, the “Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics” and the “Draft Report with recommendations on a Civil liability regime for artificial intelligence”, the EU institutions have attempted to develop a comprehensive strategy for tackling these uncharted challenges.

In its White Paper, the Commission concludes that, whilst already providing basic protection, current product safety legislation on an EU and national level could be amended to better accommodate new risks presented by emerging digital technologies.

The European Parliament responds to this call for legislation with a draft Regulation focusing on the liability of the deployer of AI technology. It does so by introducing strict liability for “high-risk” AI systems (those whose “autonomous operation involves a significant potential to cause harm to one or more persons in a manner that is random and impossible to predict”) and by establishing a reversal of the burden of proof for the rest.

By stipulating that the deployer shall not be able to escape liability by arguing that damage was caused autonomously by an AI system, the Regulation effectively cements the allocation of liability firmly in the camp of the deployer.

Similarly, in a statement responding to the Commission’s White Paper, Germany concludes that current liability law is already capable of adequately allocating responsibility and compensating for infringements caused by AI. However, where these technologies present new legal challenges, modifications may be necessary.

In particular, a selective revision of the Product Liability Directive is recommended. On the other hand, harmonisation of national liability laws, specifically with regard to the introduction of strict liability provisions and burdens of proof, is explicitly discouraged, as this could lead to inconsistencies with non-harmonised national law.

Gabriele Engels is Counsel at DLA Piper in Cologne and Co-Vice-Chair of the Cyberspace Team

Image by Okan Caliskan from Pixabay

Posted by: Blog Administrator @ 10.29
Tags: AI, Cyberspace, liability, product liability
Perm-A-Link: https://www.marques.org/blogs/class46?XID=BHA4946


MARQUES does not guarantee the accuracy of the information in this blog. The views are those of the individual contributors and do not necessarily reflect those of MARQUES. Seek professional advice before acting on any information included here.

