Academic year: 2022

“ROBO SAPIENS”

NO HOMO SAPIENS IS INFALLIBLE, NOR IS THE ROBO SAPIENS: NEW LIABILITY ISSUES IN ROBOTIZATION

Katalin Csekő

Abstract

Although robotic technologies have already been used in several industries for the past four decades, the term “robotization” has arisen only in the last ten years. Due to the implications of the 4th generation of robots – machines with or without a physical body that use highly sophisticated IT technology such as AI (artificial intelligence) – in businesses and in the global economy, there are new legal and organisational consequences that require appraisal. The “intelligent” technologies, in terms of their capacity and capability to make “autonomous” decisions, have triggered debates among philosophers and resulted in scrutiny of the adequacy of the regulations on the liability of economic operators, such as strict product liability for damages caused by defective algorithms. This paper aims to give an overview of the initiatives of different nations regarding the principles of the development and use of robots equipped with AI in B2B transactions and B2C interactions. Furthermore, by classifying robotic technologies it attempts to identify the fields where the rules of consumer protection legislation, and thereby the product liability regulations, need reform.

Given the true nature of these machines, namely the use of their “cognitive capability” (strong or weak AI technology) in their operations, they can be considered neither as “goods” nor as “services” in the traditional sense. Hence, there is a need to create an international agreement which lays down the fundamental principles and criteria for the international trade of robots showing simultaneously tangible and intangible features. By describing the inherent risks of trade deals including the sales of robots, this paper endeavours to contribute to the forthcoming international legislation.

Keywords: changes in product liability and warranty, responsibility of robot producers, lack of proper insurance products

1. Robots through the Lens of Trade

International trade includes two cardinal principles regarding the delivery of goods. First, the contracting parties must define and precisely describe the subject of the deal. Secondly, they must reach consensus on the place and time of the passage of risks. In a dispute, the seller’s duty is to prove that the goods and his performance were in full compliance with the agreement. If the risks have been passed on,1 the buyer bears the burden of proof and has to provide evidence that the defects already existed at the time of the passing of risks.

1 when the delivery has ended


The adoption of new IT technology, in terms of AI systems built into machines, seems to undermine this fundamental rule of international trade: since neither sellers nor buyers can determine the nature and the time of appearance of a fault, each would like to limit or to be discharged of their respective responsibilities.

Every technological advance creates new risks and problems while satisfying existing or latent societal and economic needs. This holds true especially amidst the ongoing IT revolution of the 21st century, which prompts business actors as well as policymakers to rethink the processes and regulation of international trade.

One of the key drivers of change is “smart” robots, which are eliminating jobs and creating new ones by redefining the cooperation between machines and humans.

The sales of robots have shot up worldwide in recent years. According to the 2018 report of the International Federation of Robotics (IFR), the number of robots sold rose by 30% in 2017 compared with 2016 and reached 381,335 in total. Sales increased significantly in metal and electronics manufacturing, although the automobile industry has preserved its first place in industrial robot investments with a 33% share of the total market. 73% of the robots sold in 2017 were installed in five countries: China, Japan, South Korea, the United States and Germany. Since 2013 the most dynamically developing and biggest market for industrial robots has been the Chinese market. Although there have been promising developments in European markets, the integration of robots into production is still at an early stage. Italy, for instance, set a record by acquiring 7,700 robots in 2018, a 19% increase compared with the previous year.

Since robots2 as unique “goods” have created special duties and responsibilities in their sale, distribution and after-sale services, traders must understand their functions and features precisely. It is vital for them, in both national and international transactions, to define these goods and the related services exactly, because their respective contractual tasks will ultimately be reflected in the price of the robots.

In a publication by Herbert Zech (2016) there is a reference to the first definition of autonomous robots, which was provided by G. A. Bekey in 2005, according to which: “[…] we define a robot as a machine that senses, thinks and acts. Thus, a robot must have sensors, processing ability that emulates some aspects of cognition, and actuators.”

According to Zech four development stages can be classified in robotic technologies.

­ The first technologies, which comprise electronic control units, are more than forty years old.

­ In the second stage, the control units became more complex and were equipped with sensors.

­ The period which followed these early innovations is characterized by robots capable of movement.

­ Finally, in the fourth stage autonomous robots appear, and they start to revolutionize “robotics.”

The companies which use robotic technologies can be assigned to three categories.

­ In the first group, there are those that apply the “standard” robotic technologies in production, such as control units of various complexities.

­ The companies that utilize automated robots capable of physical movement but controlled fully by humans (such as drones used in transportation) belong to the second group.

­ Finally, the third group consists of pioneering companies that integrate autonomous robots as “equal partners” into their production.

2 especially the autonomous ones


2. A Clear Definition of Robots is Needed

The legislative and operative bodies of the European Union have recognized the rapid and enormous spread of robots in industry and in commerce and pointed to the necessity of an accurate definition.

The European Parliament resolution of February 16, 2017 with recommendations to the Commission on Civil Law Rules in Robotics (2015/2103(INL)) attributed the following characteristics to smart robots:

­ “the acquisition of autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) and the trading and analysing of those data;”

­ “self-learning from experience and by interaction (optional criterion);”

­ “at least a minor physical support;”

­ “the adaptation of its behaviour and actions to the environment;”

Before analysing robot-related liability issues, it is indispensable to depict the following capabilities of robots, according to the European Parliament resolution of 16 February 2017 quoted above:

­ “… these agents “interact with their environment and are able to alter it significantly;”

­ “a robot's autonomy can be defined as the ability to take decisions and implement them in the outside world, independently of external control or influence;”

Industrial operators and traders have been using the definition laid down by ISO for industrial robots in 2012 – ISO 8373:2012(en), Robots and robotic devices – which bears resemblance to the definition set by the European Parliament and highlights the following characteristics of robots. They are:

­ “automatically controlled, i.e. it controls itself through automated mechanisms;

­ reprogrammable (2.4), i.e. designed so that the programmed motions or auxiliary functions can be changed without physical alteration;

­ multipurpose (2.5), i.e. capable of being adapted to a different application with physical alteration;

­ capable of physical alteration, i.e. it can undergo physical alteration without change in its software;

­ has axes, which can be either fixed in place or mobile for use in industrial automation applications.”

The impact of the ISO definition on international trade is twofold. First, a clearly defined term plays an important role in quality assurance and in the fulfilment of the contractual obligations of economic operators. Secondly, an obsolete or partly ambiguous definition will create hurdles and uncertainties in legislation and in trade deals as well.3

The nature and thereby the legal status of robots was clear while the technology was still in its first and second phase, because these goods were deemed to be automated machines controlled by humans. Robots of an autonomous nature, in terms of those equipped with the “intelligent” technology of Artificial Intelligence (AI), however, are reshaping the responsibilities of industrial actors and users and raise several questions of an ethical and legal nature (civil and criminal law). Trade in them is likely to affect customs laws (e.g. duty payable on cross-border online services) and trade and security regulations.

The term AI was first coined in 1956 by John McCarthy to denote the simulation of human intelligence with software.

3 For instance, “automated guided vehicles” are not considered by ISO as robots, as the term has long been superseded by the term “autonomous vehicles.”


In a European Union report published in 2018, “traditionally Artificial Intelligence (AI) refers to machines and agents that are capable of observing their environment, learning, and based on the knowledge and experience gained, taking intelligent action and proposing decisions” (AI, a European Perspective, 2018, p. 19).

In 2019, due to innovations, experts had to modify the 2018 definition.

According to their opinion, published in a study by the European Commission, “Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and facial recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications)” (Definition of AI, 2019, p. 1).

The term AI includes two words which have induced fierce debate among researchers and academics of different fields of science; these are “artificial” and “intelligence”. In its general meaning, a thing is deemed artificial if it does not exist in nature, that is, if it is made by humans. The word “intelligence” refers to and describes the capacity of human beings to make judgements and decisions of their own and to adapt their behaviour to the environment by perceiving and reasoning.

Although AI-based systems are commonly referred to as being “intelligent” and “autonomous”, it might be dangerous for traders to describe the characteristics of these “goods” with these two words because of their manifold meanings and interpretations.

Observing the functioning of AI-systems, “rationality” seems to be the best word to capture the unique nature of these applications, notably that they can select the best option from possible decision and action alternatives by elaborating on the available information and data and relying upon the set of criteria they have established. AI-systems perceive their environment by using sensors that measure temperature, distance, weight, pressure, resonance etc., and utilize built-in cameras, microphones, keyboards, websites, smart phones etc. AI-systems can work effectively if the environment (where they are embedded) is perceived and analysed in a technically and legally right and fair (non-discriminatory) manner, so that essential and proper data can be gained and elaborated upon.4
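
The “rational” selection described above – choosing the best option from decision alternatives against a set of criteria applied to perceived data – can be sketched as a short utility-maximizing routine. The sensor values, actions and scoring rule below are hypothetical illustrations (echoing the vacuum-cleaner example of footnote 4), not part of any real robot’s software.

```python
# Illustrative sketch only: "rationality" as picking the highest-scoring
# option from a set of alternatives, given perceived sensor data and a
# pre-set criterion. All names and values below are hypothetical.

def choose_action(readings, actions, score):
    """Return the action with the highest score for the perceived state."""
    return max(actions, key=lambda action: score(action, readings))

def dust_removed(action, readings):
    # Criterion: prefer the dustiest room; dock when the battery runs low.
    if action == "dock":
        return 0.5 if readings["battery"] < 0.1 else 0.1
    if readings["battery"] < 0.1:
        return 0.0  # cleaning is pointless on an almost empty battery
    return readings["dust_level"].get(action, 0.0)

readings = {"dust_level": {"kitchen": 0.8, "hall": 0.2}, "battery": 0.9}
print(choose_action(readings, ["kitchen", "hall", "dock"], dust_removed))  # kitchen
```

The “decision” here is entirely determined by the scoring criterion supplied to the routine, which anticipates the point made below about where the real decision-maker sits.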

The term “rationality” is applied to depict AI-systems’ capability to reason about the data obtained and to convert them into decisions. The data can be structured, which means that they will be elaborated on, built up and classified in accordance with pre-defined criteria. After having analysed the data sets, a decision will be made and carried out by the system. This decision cannot be considered an autonomous one, since the AI-system follows instructions, or it is based upon the decision tree which was programmed in by the software engineer; at the end of the day, a human being was the actor who made the decision at hand.

According to the terminology used in academic literature, these systems are called “weak” or “narrow” AI-systems, since they have only “limited” capacity: namely, they can achieve a pre-defined aim (or a set of aims) with technology designed by engineers (for example, machine translators or facial recognition applications).

4 for example, when the robot vacuum cleaner realizes that the surface is dusty and cleans it
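
The point that a “weak” AI system merely executes a decision tree written in advance by an engineer – so that the decision was ultimately made by a human – can be shown with a minimal sketch. The rule-based filter below is a hypothetical example, not a real product.

```python
# A "weak"/"narrow" AI in miniature: a hard-coded decision tree.
# Every branch was authored by a human engineer, so the machine only
# replays decisions a person already made.

def classify_message(message: str) -> str:
    """Hypothetical rule-based filter with a pre-defined decision tree."""
    text = message.lower()
    if "free money" in text:          # rule 1, chosen by the engineer
        return "spam"
    if text.count("!") > 3:           # rule 2, chosen by the engineer
        return "spam"
    return "inbox"                    # default branch

print(classify_message("FREE MONEY inside!!!"))    # spam
print(classify_message("Meeting at 10 tomorrow"))  # inbox
```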


By contrast, the “strong” or “general” AI is intended to be able to accomplish most of the tasks that humans can. General AI is capable of setting goals for itself and acting under uncertain conditions. These capabilities bear a strong resemblance to human intelligence (the ability to reason, decide and act). There is no strict boundary between weak and strong AI. Over the course of time, weak AI will become more and more capable of autonomous evaluation and reaction, which brings it closer to strong AI. In doing so, both types of AI-systems will have an impact on their environment and will cause changes and, in some circumstances, risks.

3. A Clear Specification of Robot-Related Risks is Needed

In the past centuries, buyers and sellers in international trade considered it self-evident that a profitable trade deal cannot be made without a thorough and comprehensive knowledge of the inherent risks of the agreement. Furthermore, it was generally accepted and imperative to apply appropriate risk mitigation techniques.

There are specific operational risks stemming from the use of robot technology for which both legislators and economic operators must find adequate risk mitigation solutions. The risks of using robots can be classified according to the technological level of the robot in operation.

3.1. Complexity Risk

The category of “complexity risk” refers to the dangerous situations and potential damages which are triggered by the malfunction of software built into robots. These risks are attributable to defects in the algorithm.

To mitigate complexity risks, robot producers have already adopted stricter security measures and run regular updates and clean-ups of algorithms. At the same time, they have contacted insurance companies which provide special products such as “robotics errors and omissions insurance” or “specialized robotics risk management services.”5 Since robot producers have the technological knowledge that enables them to detect any malfunction in time, and they have the means to properly assess and mitigate consequential damages, the liability of the producers both for intangible and tangible damages is indisputable. The fact that after-sale activities are no longer conducted at the place where robots physically operate, since these warranty and guarantee obligations are commonly cloud-based services rendered from a distance, only strengthens the producers’ liability.

3.2. Mobility Risks

The category of “mobility risks” includes those perils which are linked with the physical actions of robots and their movements (for example when they deliver or produce things). Mobility risk arises if the connection of robots with IT networks is interrupted, or when the actuators of a robot cause the damage or hinder it from preventing or reducing an accident. The interruption of IT networks or errors in the functioning of a robot’s actuators might result in tangible damage, physical injury or the death of human beings. To define the reasons for a damaging activity, the circumstances must be analysed in depth.

5 See AIG products: Robotics Shield Professional, general and product liability.

Two different situations can be clearly distinguished:

­ If a robot works in the operational buildings (in enclosed spaces) of the company, then the danger of its operation can be effectively reduced by traditional work protection means and measures (training, protection tools etc.). There is no need to modify the liability of producers and employers in these cases.

­ If a robot moves – either autonomously or directed – in open space, it will bring about new costs and perils. Producers will be required to put appropriate safety and security equipment into operation and to guarantee its perfect functioning. States must finance the establishment of international arrangements, databases, platforms and cross-border collaborations by which the movements of robots can be controlled and the infringement of international regulations can be prevented. The growth of hazardous factors requires legislators to scrutinize whether there is a necessity to revise the responsibilities of users when they operate the robot. The costs and expenses of these additional activities will be imposed on users (private persons as well), who will pay increased prices and a higher tax content for these machines.

3.3. Intelligence Risk

AI-equipped robots, which can infer from experiences and interactions with their environment, can significantly alter their original behaviour. Their way of “thinking” and acting can be supervised neither by producers (sellers) nor by users.
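
Read together, sections 3.1–3.3 suggest that the risk categories accumulate with the development stages listed in Section 1. The mapping below is an illustrative reading of the text, not a normative classification.

```python
# Sketch: the three robot-related risk categories of this section, mapped
# to the four development stages of robotic technologies (Section 1).
from enum import Enum

class Risk(Enum):
    COMPLEXITY = "malfunction of the software built into the robot"
    MOBILITY = "perils linked to physical movement and network interruptions"
    INTELLIGENCE = "behaviour altered by self-learning, beyond supervision"

def risks_for(stage: int) -> list:
    """Risks accumulating with the robot's technological stage (1-4)."""
    risks = [Risk.COMPLEXITY]            # any software-controlled unit
    if stage >= 3:
        risks.append(Risk.MOBILITY)      # robots capable of movement
    if stage >= 4:
        risks.append(Risk.INTELLIGENCE)  # autonomous, AI-equipped robots
    return risks

print([r.name for r in risks_for(4)])
# ['COMPLEXITY', 'MOBILITY', 'INTELLIGENCE']
```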

4. The Revision of Civil Law Rules is Needed

Suppliers of robotic technologies, which have simultaneously the role of a seller and of a service provider, should provide detailed, accurate and verifiable information about the robot as a thing and about its activities, including the expected and measurable results. The lack of proper information on the performance and efficiency of the work the robot does has serious implications: neither can sellers prove that they performed perfectly at the time of the passage of risk, nor can buyers establish the defectiveness of these goods or the inadequate performance of the seller.

The exact definition of robots is a necessary but not a sufficient condition in trade. Traders need much more accurate descriptions than those constructed so far. They must invoke the respective provisions of national civil law and international regulations to give a profound and detailed quality description of a good that is called a robot.

The presently effective provisions of the Civil Code of Hungary – in accordance with Article 35 of the United Nations Convention on Contracts for the International Sale of Goods (CISG) – require in paragraph (1) of section 6:123 that “at the time of performance, the service shall be fit for its designated use, hence

­ it shall be fit for the purpose specified by the obligee, if the obligee informed the obligor of it prior to the conclusion of the contract;

­ it shall be fit for purposes for which other services having the same purpose are normally used; […]

­ it shall have the characteristics that are typical for the service as set out in the description handed over by the obligor or presented by him as a sample to the obligee; and

­ it shall comply with the quality requirements set out by law.”


Regarding the suppliers’ responsibility to furnish proper information, two categories of robots must be distinguished: robots for industrial use and robots for private use. In the case of industrial robots, the users – typically economic operators – are expected to have sufficient knowledge to operate the robots with reasonable care and in safety. By contrast, if a robot is designed for households (for example a vacuum cleaner or lawnmower), the legislator cannot expect the users (private persons) to have expertise and knowledge beyond what is reasonable and proportional to the planned purpose of use. In addition, private persons cannot be required to be prepared to handle a crisis; therefore, the producers must provide more detailed information (e.g. videos, manuals, symbols etc.) and assume stricter liability for robots which have the capability to make autonomous decisions in their interactions with human beings.

Since producers of robotic technologies commonly use samples to illustrate the functionality and efficiency of their product, it is worth looking at the rules of the so-called “purchase on sample” deals. According to paragraph (1) of section 6:230 of the presently effective Civil Code of Hungary, in compliance with Article 35(2)(c)6 of the CISG, if the parties specify the characteristics of a thing that is the subject of the contract by referring to a sample, the seller shall be required to provide a thing which corresponds with the features of the sample that was held out or presented to the buyer. Neither lawmakers nor traders can be content with the list of attributes that usually describe robots, such as the ability to change position, to behave “smartly” when interacting with the external environment, to create things, to provide information or opinions, or to make autonomous decisions. If the sample at hand is an AI-powered robot which can alter its functions independently of the producer’s original will and knowledge, the producer and all other actors in the distribution chain can hardly use the particular sample as evidence of their proper conduct.

Furthermore, traders need to go one step further and agree on the applicable law which will govern their disputes. This aspect is especially important since the physical appearance is not an inevitable component of robots and therefore robots can be defined as a set of software and services. Whereas for the trade of physical goods there are generally accepted agreements, which have been ratified in national laws, there have only been attempts in the European Union at the standardization of the services and the terms and conditions thereof (rights and obligations of the parties) in international trade.

The quality assurance of the robot as a product and service by an independent third party is not only a technical requirement, but it is a global and national trade compliance issue which needs international legislation. Besides the legal considerations, the future competitiveness and sustainability of global supply chains are dependent on a reliable international inspection and verification system which can produce certificates upon the technical, legal and ethical compliance of robots.

From a moral point of view, a person can be held liable if they are able to keep control of their behaviour and are aware of the consequences and the implications thereof. Moral responsibility is a state (of mind) in which the acting person can assess whether his/her behaviour is right or wrong.

Legal responsibility, on the other hand, means that a person can be held accountable for their behaviour and the consequences thereof under the applicable law. The present liability of robot producers and sellers needs to be revisited in both aspects. This study focuses on the legal aspect, especially on liability under civil law, but touches upon responsibility under criminal law as well (murderous robots).

6 (c) Possess the qualities of goods which the seller has held out to the buyer as a sample or model


Regarding the liability for AI-powered robots regulated by civil law, a distinction must be made between contractual and non-contractual relationships. Provisions for contractual obligations under civil law stipulate clear rules for defective performance and warranty for material defects. Defective performance is a special case of the violation of contractual obligations; it covers all contractual duties, ranging from delivery through the transfer of ownership to the furnishing of all relevant information in respect of the use of a product. According to paragraph (1) of section 6:157 of the presently effective Civil Code of Hungary, the “obligor performs defectively if, at the time of performance, the service does not comply with the quality requirements laid down in the contract or by law.”

The obligor (the supplier) is liable for defective performance. Defective performance is not limited to the violation of a contract where the ownership of a physical good is planned to transfer, but also includes cases where an intangible asset (such as a piece of computer software) does not meet the expectations of the buyer.

In general, defective performance refers to any performance which does not meet the requirements stipulated in the contract. In the case of robots – including those equipped with AI – deficiencies either in the body of the robot or in the algorithms that control it can lead to defective performance. However, warranty obligations cannot be enforced if a shortcoming arises subsequent to the performance, with the exception of hidden defects.

Regarding the nature of robots, the first question to answer is whether they are goods or services. First- and second-generation robots are clearly goods or can be considered as goods, whereas the classification of cloud-based AI systems, as well as all devices connected to the internet (IoT), is problematic and controversial.

In the case of defective performance, the burden of proof lies upon the injured party. The injured party needs to prove that the defect was already present at the time of performance. If the defect is proven, product warranty applies (replacement, repair, price reduction or ultimately withdrawal) and, moreover, the producer needs to reimburse the injured party for the damages. In the case of AI-equipped products, the information asymmetry between the buyer and the producer – usually an IT company – raises concerns. Buyers cannot be expected to prove a defect and the fact that it already existed at the time of performance. For instance, surgeons are not expected to be able to prove the failure of a surgical robot, farm producers are not expected to be able to prove the defect of a device that measures water content in trees.

Producers might also get in trouble when selling AI-equipped robots if those robots make autonomous decisions (see “strong” or “general” AI) because there are currently no generally accepted standards or quality assurance procedures which could demonstrate the reliability and the fitness of the product. If a consumer directly buys an AI-powered product from the producer, it creates a direct legal relationship between them, and product warranty rules apply.

In this case the product is considered defective if it does not meet the applicable quality standards at the time of the delivery, and furthermore, it does not possess the features set out in the product description. In the case of warranty, it is the obligation of the consumer to prove the defect in the goods. Consumers will only be capable of proving any type of defectiveness if they receive a comprehensive and easily understandable product description. That is the reason why consumers may not be satisfied with the present “hedonistic” terms of robots. In addition, the consumers of robots have the right to safety in use, therefore their trust can only be established by the statements of producers which have been verified by state-run inspection authorities.

However, there are a few grounds on which producers can be exempted. The producer is exempted from the warranty obligation if the product was not sold or produced in the course of its regular and customary business activity. Furthermore, producers are exempted in the case of so-called “innovation risk” as well, i.e. if the defect could not be anticipated at the time of production based on existing scientific and technological knowledge.


It does not matter whether the producer was aware of this knowledge. In a legal procedure, not the subjective knowledge of a producer will be examined, but the court will be interested in whether the knowledge was available and accessible anywhere (e.g. at standard setting organisations) at the time.

The key point of product warranty is the time when a product is put into circulation. This does not refer to the time when the product entered the market; rather, putting the product into circulation denotes the point at which the product left the company’s control (for example, when an import customs procedure has been finalized and the appointed distributor has taken physical possession of the product and acquired its ownership). Since producers typically have no direct contractual relationship with consumers, the rules of product warranty will therefore also govern the activity and the liability of distributors, including those companies which import the product.

In order to establish universally applicable norms for AI-powered systems, and international surveillance, inspection and certification bodies for their control, the ethical standards must first be formulated. The European Parliament incentivizes the standardization of the existing ethical standards7 that must be obeyed during the development and use of robots and artificial intelligence. Producers’ liability must be in alignment with their involvement in the development and with the autonomy of the robot. Producers think that the greater the autonomous learning capacity of the robot, the milder the rules of product liability should be. On the other hand, lawmakers, representing the common interest of the public, take the standpoint that the longer and more complex the training of the robot, the greater the responsibility of producers. As a first step in clarifying these norms, in 2015 the European Parliament initiated the issuance of a code of conduct for robotics engineers. There are a few principles and requirements in the recommendation of the European Parliament which robotics engineers must adhere to.

­ “Fundamental Rights: Robotics research activities should respect fundamental rights and be conducted in the interests of the well-being and self-determination of the individual and society at large in their design, implementation, dissemination and use. Human dignity and autonomy – both physical and psychological – is always to be respected.”

­ “Accountability”: Robotics engineers should remain accountable for the social, environmental and human health impacts that robotics may impose on present and future generations.

­ “Safety”: Robot designers should consider and respect people’s physical wellbeing, safety, health and rights. A robotics engineer must preserve human wellbeing, while also respecting human rights, and disclose promptly factors that might endanger the public or the environment” (Civil Law Rules on Robotics, p. 20).

Although the legal concept of product warranty largely resembles the rules of product liability, with special respect to the parties involved and the lack of a direct contractual relationship between them, there are significant differences. The aim of product warranty is to provide remedies for damages which have been caused by defectiveness; product liability, on the other hand, ensures compensation for losses and damages to the health or assets of private persons. The legal institution of product liability lays down rules for damaging conduct in non-contractual relationships. Product liability law – as part of the complex consumer protection regulations – is in force in the European Union, the USA, China and Japan as well.

7 set by IT-companies such as Microsoft

(10)

19

The Directive on product liability issued by the European Council and in compliance therewith the national laws of Member States includes the following principles:

­ the producer has full liability for the damage caused by the defectiveness of its product;

­ the producer can be the company which has produced the raw materials used or manufactured the semi-finished or the final product;

­ the company which has put its name (brand name or sign) or any distinguishing feature on the product and thereby has indicated itself as the producer, must assume full liability;

­ in case of an import deal, the importer is to be considered as the producer;

­ if the true producer cannot be identified, all participants in the supply chain of the product will be jointly and severally liable until any of them can name the producer.

The damage caused by the product implies the matters of death, physical or health injury of a private person and covers the losses to the assets in his private property.

The producer must assume strict liability for the defectiveness of its product. The consumer bears the burden of proof, but it only requires the consumer to prove the fact (the existence) of the damages and the causal link between the actual damage and the defect. It is not hard for a consumer to prove the presence of

“ordinary” damage. The consumer does not need any sophisticated knowledge to realize the malfunctioning or the stoppage of working of a machine and can prove the defect of the product with ease. By contrast, it is almost impossible for a consumer to prove the presence of a defect in a robot equipped by AI-application (e.g. robot prothesis or implant) with special respect to the robots which are being connected to clouds and are permanently updated. The injured party (the consumer, the claimant) must prove the damage, the physical injury and tangible loss suffered such as disability for working or the burn-out of a flat. The aggrieved party (the consumer) is obligated to prove that he has suffered intangible losses as well, such as the loss of their private documents and data.

There several exemptions from liability ensured for producers; above all a producer is discharged from the liability if it proves that his conduct was not wrongful. The producer will be free of any claim if it can prove that the product had no defect when it was put into circulation, and the defect at hand has appeared later.

Producers are not expected to reveal a defectiveness which was not recognisable as to the current stage of the science. Producers are exempted from the liability if the product conforms with applicable mandatory rules issued by authorized governmental agencies.

These rules put hurdles in front of consumers and make it hardly possible for them to present their interests;

since today there are no applicable norms and rules by which the current stage of this technology can be defined with certainty.

The consumer does not have to prove that the producer was neglect or faulty in its action, but if the consumer is at fault, the producers’ liability will be reduced proportionally. Regarding the fast IT-technological development, it is unavoidable to put the following question: should a producer assume strict liability for its AI-product which was held (thought) to be safe at the time of putting it into circulation, but which later – using its autonomous decision-making capacity – has caused damage.

Applying the rules of the European Product liability directive, a robot is to be deemed flawless if it provides the safety that a consumer is entitled to expect considering all circumstances of the actual or planned use. On

(11)

20

the other hand, a robot equipped by AI-application can meet the expectation, if it is able to learn, to develop by elaborating its previous experiences, that means if it can acquire such capabilities which it never had before.

(Klein, 2018). A robot is expected to make decisions which could have not been foreseen; it is the true nature of this machine. Besides the serious concerns a simple question arises, namely: is it possible to expect a consumer with average knowledge to know which level of safety must be reasonable and expectable? It holds especially true for children and elderly persons.

The modification of the product liability rules has become necessary and topical for many reasons. The principle of burden of proof must be first revised, since it is no longer reasonable to require a consumer to determine the safety he is entitled to expect from a robot equipped by AI.

Furthermore, it is a difficult technical and legal problem to define and to prove responsibility for the situations where the damage is attributable to more than one person or where it is not caused by the actual user or owner. It is hard for legislators to determine the limits of individual responsibility if for example a drone causes damage by tearing down an electricity line or flying into an airport, because its IT-network connection has stopped working or it has been hacked. The extent of liability must be shared between the producer and the user depending on their true contribution to the damage. If the damage was caused by a defective built-in algorithm of the directing software, then the producer must be accountable. By contrast, the person (user) who is in charge of keeping control of the robots’ activity, must assume full liability for its negligent conduct.

If the injured party contributed to the damage, the liability for damage might be shared or eventually the producer might be exempted from liability. Considering the limited knowledge of consumers on robots (which cannot be enhanced quickly and in great masses) and given the complexity of defining the liability rules for damage caused by robots (AI), the European Parliament has proposed the creation of a “user license” and the establishment of new insurance coverage.

According to the recommended principles, the user must acknowledge that he/she can use the robot without putting other people’s physical and mental health at risk and that he/she adheres to the applicable rules and ethical norms, which also includes the prohibition of collecting any personal data with the robot unless the owner of the personal data has given their consent.

Since robots are used in cross-border transactions, there will be a need to overhaul current international regulations (for example rules on use of airspace, or on transportation), which necessarily leads to the modification of International Private Law.

Today, consumers must already face the consequences of their missing knowledge in respect of AI, and due to the growing asymmetry in this field they have less chance to cope with the technical defects. There is a strong and clear temptation for producers to get rid of the strict liability in respect of general AI-robots relying upon the current stage of the science and technology and referring to the unique self-learning capability of these machines. From the point of view of producers, the recognition of robots as legal entities will be the best and most convenient solution. Besides this pressure on legislators there is another phenomenon which gives rise for concerns; notably the high concentration of AI-technology in a few companies. The concentrated technological power might hurt the rules of the applicable competition law and enforce the alteration of international trade policy regulations. A robot with AI-application is to be deemed a dual-use product since it can be used both for military and civil purposes. Taking the autonomous vehicle in transportation by air or by sea as an illustrative example, it becomes understandable that the use of autonomous (duel-use) vehicles requires the reconstruction of the rules of international transportation and customs law.

(12)

21

Evidently, the export, import, delivery or related brokerage activity of these machines must be licence-related, but this requirement is not only necessary, but a vital condition. The classification standards of these robots and their respective international registration and tracking process8 must be established in order to guarantee the safe trade and the compliance with present anti-proliferation agreements. Nevertheless, insurance companies ought to work out suitable coverages for the potential damages caused by “strong AI-robots” both in industrial and private use. The insurance sector should offer a “mass product” to indemnify the injured persons who have suffered damages due to negligent behaviour of private users of robots.

This “mass coverage” must comply with the limited financial resources of private users and must be proportional to their technological knowledge (or better to say to the shortage in it). Insurance companies need to innovate and develop appropriate coverage for “general AI-robots” producers against “mass damages” and thereby to encourage technological development.

8 including private person users as well

(13)

22

References

Act V of 2013 on the Civil Code of Hungary (2018): [online] available:

http://njt.hu/translated/doc/J2013T0005P_20180808_FIN.pdf

Directive 85/374/European Council on Product Liability (1985): [online] available:

https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=LEGISSUM%3Al32012

European Commission (2018): Joint Research Centre: Artificial Intelligence; A European Perspective. 2018 EUR 29425 EN. [online] available: https://publications.jrc.ec.europa.eu/repository/bitstream/JRC113826/ai-

flagship-report-online.pdf

European Commission (2019): Definition of AI: main capabilities and disciplines, made by “High-level Expert Group of Artificial intelligence” 2019. [online] available: https://ec.europa.eu/digital-single-market/en/news/

definition-artificial-intelligence-main-capabilities-and-scientific-disciplines

EuropeanParliament (2016): European CivilLawRulesin Robotics;StudyforJURI Committee.2016,PE 571.379.

[online] available: https://www.europarl.europa.eu/RegData/etudes/STUD/2016/571379/IPOL_STU (2016)571379_EN.pdf

INBOTS (2020): White Paper on Interactive Robotics; Legal, Ethics & Socioeconomic Aspects, 2019 made in European Union’s Horizon 2020. [online] available:

http://inbots.eu/wp-content/uploads/2019/07/Attachment_0.pdf

Klein, T., – Tóth, A. (2018): Technológia jog - Robotjog- Cyberjog. [Technology Law - Robot Law - Cyber Law]. Wolter Kluwer Hungary; ISBN 978 963 295 750 0

TransLex (n.d.): Principle on European Contract Law. [online] available:

https://www.trans-lex.org/400200/_/pecl/

Turner, J. (2019): Robot Rules – Regulating Artificial Intelligence. Springer Nature Switzerland AG;

ISBN 978-3-319-96234-4

Zech, H. (2016): Zivilrechtliche Haftung für den Einsatz von Robotern- Zuweisung von Automatisierung-und Autonomierisiken.Chapterofthe Book:“IntelligenteAgentenund das Recht.“[IntelligentAgentsandtheLaw].

[online]available:https://www.jstor.org/stable/j.ctv941vxh.11?seq=1#metadata_info_tab_contents

Hivatkozások

KAPCSOLÓDÓ DOKUMENTUMOK

Instead it was considered the business of the community to solve its own disputes’ (p. 7 It should be noted, however, that reconciliation was not always sought in cases where

Although its contributors come from all over Hungary, it is undoubtedly the product of the great "Szeged School" of Medieval Hungarian History which could, and perhaps

Regarding the elements of the statement of claim, it was important from the point of view of this study that it had to contain the action which the plaintiff wanted to present at

Thermal stability of electrical insulators * is one of the basic problems in electrotechnics. There are methods for measuring and classification of thermal stability

In this paper a two-channel, digital storage instrument for analogue signals developed at the Department of Electric Machines of the Technical University,

The Maastricht Treaty (1992) Article 109j states that the Commission and the EMI shall report to the Council on the fulfillment of the obligations of the Member

In adsorption tests determination is finished when the tension of the chamber is decreased by the water adsorption of the sample to the ERH leyp} (Fig. This is foUo'wed

Lady Macbeth is Shakespeare's most uncontrolled and uncontrollable transvestite hero ine, changing her gender with astonishing rapiditv - a protean Mercury who (and