A.I. has already entered our daily lives, even if many of us are still unaware of it.
This may be because the A.I. systems currently on the market and aimed at the general public belong to so-called ‘weak’ A.I., the type of artificial intelligence that imitates the faculties of the human intellect in a simplified and often ‘single-purpose’ mode, where the purpose is pre-set by the trainer.
However, a few examples of so-called ‘strong’ A.I., which have attracted the attention of the media and some onlookers, are already available.
Take, for instance, the satellite-photo transformation algorithm developed by a Stanford University spin-off in collaboration with a group of Google engineers, which learned to cheat: after training the machine, the researchers realized that some elements the machine had learned to classify as irrelevant were automatically removed during the transformation of satellite photos into cartographic maps, only to reappear, as if by magic, during the reverse process from cartographic map back to satellite image.
The Nautilus project, carried out by the University of Tennessee, is another interesting example. Nautilus used artificial intelligence to analyse a multitude of local news reports, partly taken from the archives of the BBC and the New York Times, spanning several post-war decades up to the present day. In this case, the machine had been trained to detect certain ‘mood words’, namely words indicative of the emotional state (e.g. horrible, nice) of the people or facts described in the articles. Based on the usage frequency of those words, the researchers found that the system was capable of predicting growing dissatisfaction among large groups of people and its possible consequences (e.g. riots). In particular, it provided clear indications of ‘critical’ situations in relation to what would later become known as the ‘Arab Spring’ in North Africa.
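For readers curious about the mechanics, the frequency-based analysis described above can be rendered as a deliberately simplified sketch. The word lists, threshold and function names below are purely illustrative assumptions; the actual Nautilus vocabulary and scoring method were far more sophisticated and are not reproduced here.

    from collections import Counter
    import re

    # Illustrative mood-word lists; these are assumptions, not the Nautilus vocabulary.
    NEGATIVE = {"horrible", "terrible", "riot", "anger", "crisis"}
    POSITIVE = {"nice", "calm", "hope", "stable", "peaceful"}

    def mood_score(article):
        """Naive score: (positive minus negative mentions) divided by total words."""
        words = re.findall(r"[a-z']+", article.lower())
        counts = Counter(words)
        pos = sum(counts[w] for w in POSITIVE)
        neg = sum(counts[w] for w in NEGATIVE)
        return (pos - neg) / max(len(words), 1)

    def flag_critical(articles_by_month, threshold=-0.01):
        """Flag months whose average mood falls below an assumed threshold."""
        for month, articles in sorted(articles_by_month.items()):
            avg = sum(mood_score(a) for a in articles) / len(articles)
            if avg < threshold:
                print(f"{month}: average mood {avg:.4f} -> potentially critical")

    # Toy usage with two months of invented headlines.
    flag_critical({
        "2010-12": ["Anger in the streets as a horrible crisis deepens"],
        "2011-01": ["A calm and stable month brings hope of peaceful reform"],
    })

Run on this toy data, the first month is flagged and the second is not; the research described above applied the same basic idea, at scale, to decades of archived news.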
Not to mention that humanoid robots show up as guest stars in numerous public events and television shows (just type ‘humanoid robot’ or the like into YouTube and you will find several examples).
This spread of A.I., probably received as a wake-up call in legal and business circles, has caused real upheaval through its impact on privacy, corporate governance, labor law and, last but not least, intellectual property.
For now, however, as far as IP is concerned, legal and political systems do not seem ready (yet?) to recognize A.I. machines as having a legal personality of their own.
There is a major ethical obstacle to acknowledging that A.I. systems can enjoy rights or be subject to obligations like humans.
In terms of obligations and civil liability, some key rules of our system could probably, with the necessary amendments, be extended and used to ensure compensation for losses and damages caused by system-driven tools. Take, for example, self-driving vehicles and road traffic regulations, specifically Article 2054 of the Italian Civil Code, which ultimately places liability on the owner of the vehicle unless they prove they have done their utmost to avoid the damage. Take also Article 2050 of the Italian Civil Code, on the performance of (inherently) dangerous activities, which places responsibility on those who carry out the activity unless they prove they have adopted all suitable measures to avoid the damage (a rule that could work well in the field of robot-assisted surgery).
On the other hand, it seems more complex to recognize legal rights in favor of A.I. systems, despite the fact that, in terms of ‘creativity’, machines are certainly capable of equaling some human works of art. Take the portrait of Edmond Belamy (obviously a fictitious name), painted by an A.I. system after analysing about 15,000 portraits made between the fourteenth and twentieth centuries (used as input data for the machine): a work entirely new and creative, auctioned for a record sum of approximately 400,000 US dollars.
Yet the regulatory approach currently in place in Italy, Europe and several non-European countries is based on a formalistic and human-centered concept of authorship, leading in some cases to ‘makeshift’ solutions, such as the one adopted by the UK Copyright, Designs and Patents Act, according to which there can be a ‘first-level’ creator and ‘owner’, i.e. all-human, and a ‘second-level’ creator (the A.I. system), which, however, enjoys no legal rights.
For the same reason, to date, the European Patent Office denies that an A.I. can be named ‘inventor’ in a patent application and, consequently, denies patentability in such cases, despite the fact that an invention developed by artificial intelligence may possess, in its own right, an inventive character.
The patent applications filed in the DABUS cases shed light on this approach.
Two patent applications were filed in 2019 by Mr. Thaler, who designated himself as the ‘employer’ and owner of the A.I., while the A.I. itself was indicated as the ‘inventor’. The first application referred to a food container and the other to a light-signaling device to be used in rescue or search operations.
These applications were rejected on formal grounds. According to the Office, the inventor must have a name, a surname and a mailing address, and must therefore be a human with full legal personality. As a consequence, the A.I. system can be neither the inventor nor the owner of the economic IP rights in such an invention.
These solutions manifestly show their limits and, like a transparent screen, barely conceal the ethical and substantive question looming behind them.
Now more than ever, it is advisable for the political debate to focus seriously on the issue of A.I. systems’ legal personality: technology runs at an infinitely faster pace than our parliamentary halls, and the issue needs to be addressed urgently.
So, for now, legal formalism 1 – A.I. 0. But that is just the end of the first round.
© BUGNION S.p.A. – March 2021