
Some ethical and legal consequences of the application of artificial intelligence in the field of medicine

Michael Lupton

Bond University, Queensland, Australia


DOI: 10.15761/TiM.1000147


Abstract

Artificial Intelligence platforms are driven by sophisticated algorithms which have been incorporated into A.I. robots. These algorithms are also programmed to be self-teaching. This technology has produced ‘super intelligent’ robots, the current best example of which is IBM’s Watson. Watson is being applied to an increasing variety of tasks in the medical field, tasks which had formerly been the exclusive preserve of doctors.

A.I. is replacing doctors in fields such as the interpretation of X-rays and scans and the diagnosis of patients’ symptoms, in what can be described as a ‘consulting physician’ role. A.I. is also being used in psychology, where robots are programmed to speak to patients and counsel them. Robots have also been designed to perform sensitive surgical techniques. One is therefore able to predict with confidence that the role of robots in medicine will increase exponentially in the future.

Because medicine is not an exact science it is possible that Watson, to use one example of an existing robot, can make errors which result in injury to patients. The injured patient should then be entitled to sue for damages, as they would have been able to do if the injury had been caused by a real doctor. However, the problem which arises in this regard is that the law of torts has developed to regulate the actions of natural persons. Watson, and similar A.I. platforms, are not natural persons. This means that a patient seeking redress cannot rely on existing law relating to medical negligence or malpractice to recover damages.

It is therefore imperative that appropriate legislation is passed to bridge this gap and allow a patient to recover damages for injuries caused by the actions of an A.I. robot.

Definition of A.I. and some applications

A.I. is usually defined as ‘the capability of a computer program to perform tasks or reasoning processes that we usually associate with intelligence in a human being’ [1].

Artificial intelligence is inextricably linked to the ever-increasing capabilities of algorithms. A.I. has been insidiously infiltrating our lives for a number of years in the form of the GPS built into or attached to motor cars; from its humble beginnings as an animated map it has now evolved to the point where it can control or ‘drive’ the car. Spam filters are based on A.I. The Google Translate service, which is now capable of translating from and to more than 70 languages, is the product of statistical machine learning, which in turn is embedded in A.I. The face recognition technology employed for security purposes at airports and railway stations is also driven by A.I. The much-used iPhone app Siri, which understands us when we speak to it and mostly responds in an intelligent way, is based on A.I. algorithms developed to facilitate speech understanding. These are just a few examples of how A.I. is increasingly becoming an essential component of everyday life for the average citizen in developed countries. The examples above do not even include the so-called Internet of Things, which is linked to the application of cognitive computing capabilities [2]. Computing giant IBM continues to invest massive resources in order to apply its Watson cognitive computing system to finance, to personalised education and, of particular interest to this article, to the field of medicine [1].

Definitions of A.I. usually note that the field can be divided into so-called ‘strong’ A.I., which refers to the creation of computer systems whose behaviour at certain levels would be indistinguishable from that of humans, and ‘weak’ A.I., which examines human cognition and considers how it can be assisted and supported in multiple situations; modern fighter aircraft, for example, are filled with such ‘weak’ A.I. systems. ‘Weak’ A.I. systems will help pilots to maximise the potential of their sophisticated aircraft, but they will not be empowered to have an independent existence and decision-making process [3].

A.I. systems in medicine have been created with the goal of assisting and supporting healthcare workers in executing their normal duties more efficiently, especially in those areas which require the manipulation of data and knowledge [4].

This characteristic allows such a system to evaluate an electronic medical record system on an ongoing basis. This constant analysis of the records enables it to alert the clinician when it detects patterns in clinical data which suggest significant changes in a patient’s condition, or when it detects a probable contraindication to a planned treatment [5].
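As an illustration only, the kind of record surveillance described above can be sketched as a small rule-based routine. This is not any vendor's actual system: the field names, thresholds and the contraindication table below are hypothetical placeholders chosen purely for demonstration.

```python
# Minimal, illustrative sketch of automated record surveillance.
# Field names, thresholds and the drug table are hypothetical placeholders.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Observation:
    test: str       # e.g. "creatinine" (mg/dL) or "potassium" (mmol/L)
    value: float
    days_ago: int   # how long ago the value was recorded


# Hypothetical rule table: planned drug -> (lab test, upper limit above which to warn).
CONTRAINDICATIONS: Dict[str, tuple] = {
    "metformin": ("creatinine", 1.5),
    "spironolactone": ("potassium", 5.0),
}


def alerts_for(observations: List[Observation], planned_drugs: List[str]) -> List[str]:
    """Scan a patient's recent observations and planned treatment, return alerts."""
    alerts: List[str] = []

    # 1. Trend detection: flag a 50% or greater rise in creatinine within 7 days.
    recent = sorted(
        (o for o in observations if o.test == "creatinine" and o.days_ago <= 7),
        key=lambda o: o.days_ago, reverse=True)             # oldest first
    if len(recent) >= 2 and recent[-1].value >= 1.5 * recent[0].value:
        alerts.append("Significant change: creatinine has risen >=50% within 7 days.")

    # 2. Contraindication check: compare the latest value of each test against
    #    the threshold attached to each planned drug.
    latest: Dict[str, float] = {}
    for o in sorted(observations, key=lambda o: o.days_ago):
        latest.setdefault(o.test, o.value)                   # first seen = most recent
    for drug in planned_drugs:
        if drug in CONTRAINDICATIONS:
            test, limit = CONTRAINDICATIONS[drug]
            if latest.get(test, 0.0) > limit:
                alerts.append(f"Probable contraindication: {test} {latest[test]} with planned {drug}.")
    return alerts


# Example: a rising creatinine plus planned metformin triggers both kinds of alert.
history = [Observation("creatinine", 0.9, days_ago=6), Observation("creatinine", 1.6, days_ago=1)]
print(alerts_for(history, planned_drugs=["metformin"]))
```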

The fact that the algorithms in A.I. systems have the capacity to learn will lead to the discovery of new phenomena and thus the creation of new medical knowledge. On the other hand, A.I. is a form of automation that will reduce the number of current jobs in the medical field, and there is as yet no certainty that new jobs in sufficient quantities will be created to replace those lost [5].

Major concerns arising from A.I.

Humans owe their dominant position in the world to their intelligence, not their speed or strength. Therefore, the development of A.I. systems that are ‘super intelligent’, in that they exceed the ability of the best human brains in practically every field, could impact drastically on humanity, and we should proceed down this road with care [6].

It is human intelligence which allowed man to develop tools and the technology to control our environment. It is therefore not illogical to deduce that a super intelligent system would likewise be capable of developing its own tools and technology for exerting control [7]. The danger attached to this is that such A.I. systems would not share our evolutionary history, so there is no reason to believe that they would be driven by human characteristics such as a lust for power. Their default position is more likely to be to compete for and acquire the resources currently used by humans, given that such a system is devoid of the human sense of fairness, compassion or conservatism [8].

An onus therefore rests on the creators of A.I. systems to construct and train them in such a way that the systems are wired to develop ‘moral’ and ‘ethical’ behaviour patterns, so as to ensure that these super intelligent A.I. systems have a positive rather than a negative impact on society, or, to use the terminology of A.I. scientists, that these systems are ‘aligned with human interests’. To achieve this end, designers need to develop and employ agent architectures which avert the incentives of A.I. systems to manipulate and deceive their human operators, and which remain tolerant of programmer errors [9].

Just one example of the unexpected outcomes of a task allocated to an A.I. agent is described by the authors Bird and Layzell. It involved a genetic algorithm which was tasked with making an oscillator. The algorithm instead repurposed the tracks on its printed circuit board to act as a makeshift radio and amplify oscillating signals from nearby computers. Had the algorithm been run on a simulated (virtual) circuit board which possessed only the features that seemed relevant to the problem, it would have delivered an outcome closer to what its controllers had anticipated [4].

The above example clearly illustrates the ability of an A.I. agent, operating in the real world, to use resources in unexpected ways, for example by finding ‘shortcuts’ or ‘cheats’ not accounted for in a simplified model [10].

A.I. and medical diagnosis

The remarks above illustrate the scope and potential of A.I. systems. It is therefore not surprising that there is ample opportunity to employ A.I. systems in the field of medicine, some of which we will discuss below.

Introduction

As the mass of knowledge and the scope of technology continue to grow exponentially in the medical field, practitioners are finding it increasingly difficult to acquire, analyse and apply this enormous accumulation of knowledge to solve complex clinical problems.

This is where the development of A.I. programs has facilitated the practical use and application of this huge body of knowledge by individual practitioners [11]. Medically trained A.I. systems are designed to support healthcare workers in their everyday duties by assisting them with tasks that rely on the manipulation of data and knowledge. The most popular of these are artificial neural networks (ANNs). ANNs are computational analytical tools which consist of networks of highly interconnected computer processors called neurons. They have the ability to learn from historical examples, analyse non-linear data, handle imprecise information and generalise, thus enabling an ANN model to be applied to independent data. ANNs are also able to learn from their own experiences in a training environment [12].
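The kind of network described above can be sketched in a few lines. The example below is purely illustrative: it trains a small feed-forward ANN on synthetic data standing in for historical clinical examples and then measures how well it generalises to cases it has never seen. It is not any clinical system's actual model.

```python
# Illustrative ANN sketch using scikit-learn; the synthetic data is a stand-in
# for historical clinical examples and carries no medical meaning.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic "historical cases": 20 numeric features per patient,
# labelled 0 (benign outcome) or 1 (adverse outcome).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A small feed-forward network: interconnected "neurons" in two hidden layers.
ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
ann.fit(X_train, y_train)                      # learning from historical examples

# Generalisation: performance on independent data the network has never seen.
print("accuracy on unseen cases:", ann.score(X_test, y_test))
```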

ANNs are widely used in the real world and have found applications in almost every field of medicine. A recent example of such an application is Google obtaining the de-identified data of 216,221 adults from the University of California San Francisco Medical Centre (2012-2016) and from the University of Chicago Medical Centre (2009-2016). This information spanned 46 billion data points between them [13].

Google used this information to build models that predicted medical outcomes far more accurately than traditional models. By using the information in these records, Google claimed that its system had the ability to predict patient deaths 24-48 hours sooner than current methods. This would allow doctors a much larger window of opportunity to administer life-saving procedures [4].

A specific example of A.I.-driven diagnosis saving a life was reported from Japan, where doctors saved a woman’s life by using A.I. to diagnose a rare form of cancer that they had not detected after many tests. The 60-year-old woman had not responded to treatment for the cancer her doctors had diagnosed. In desperation they supplied an A.I. system with huge volumes of clinical cancer case data. The system took 10 minutes to diagnose a rare form of leukaemia that had eluded her doctors, who had relied on standard tests [14].

A.I. diagnosis in the fields of radiology and pathology

The heavily evidence-based nature of these two sub-fields of medicine makes them ideal for A.I.-generated diagnosis. As far back as 1960 Lusted predicted that the practice of radiology lent itself to the use of a form of electronic scanner/computer to separate normal chest films from abnormal chest films; the latter, he suggested, could be put aside for subsequent study by radiologists. Pathologists and radiologists execute their specialities in similar fashion. Both are information driven, because both specialities extract medical information from images. Pathologists have seen the time-saving advantage of using automated technologies to perform tasks such as assessing cell counts, typing and screening blood, and carrying out Papanicolaou tests. A recent study has also revealed that more complex tasks, such as predicting the grade and stage of lung cancer, can be performed with superior accuracy by A.I. [15].

Lusted’s insights were truly prescient and came into their own with the subsequent development of A.I. platforms like those of the Enlitic company, which are eminently suited to the pattern recognition required for interpreting radiographs (X-rays). Enlitic’s algorithms mined its database of images of normal radiographs as well as radiographs of fractures. Then, by employing deep learning, which amounts to a refined version of artificial neural networks, the computer was able to develop rules that not only identified radiographs with fractures but also highlighted the fractures [16]. The technology of imaging has also progressed rapidly and now includes computed tomography (CT scans) and cross-sectional imaging in the form of magnetic resonance imaging (MRI scans), which are able to reveal anatomy with great clarity and have made diagnosis simpler in many instances. For example, a ruptured aneurysm could formerly only be inferred from a chest X-ray, but it can actually be seen on a CT scan. To make this diagnosis, however, a radiologist will typically have to study many more images: a CT scan of a patient with multiple trauma could involve the radiologist studying as many as 4000 images [17]. Such a time-consuming task is well suited to the capacity of an A.I. platform to perform it more quickly than a human can.
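As a purely illustrative sketch, and not Enlitic's or any vendor's actual algorithm, the following shows the general shape of a small convolutional network of the kind used for radiograph classification. The input size, architecture and the 'normal'/'fracture' labels are assumptions made only for demonstration.

```python
# Tiny, illustrative convolutional network for radiograph classification.
# Architecture, input size and labels are assumptions for demonstration only.
import torch
import torch.nn as nn

class TinyRadiographCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # 1-channel (grayscale) X-ray
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 128x128 -> 64x64
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
        )
        self.classifier = nn.Linear(16 * 32 * 32, 2)     # two classes: normal / fracture

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyRadiographCNN()
dummy_xray = torch.randn(4, 1, 128, 128)   # a batch of 4 fake 128x128 radiographs
logits = model(dummy_xray)                 # scores for "normal" vs "fracture"
print(logits.shape)                        # torch.Size([4, 2])
```

In practice such a network would be trained on a labelled database of radiographs, in the manner the paragraph above describes; the sketch only shows the pattern-recognition structure involved.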

IBM’s prototype artificial intelligence platform, Watson, which has a boundless capacity for learning, is able to identify a pulmonary embolism on a CT scan and to detect abnormal wall motion in a patient’s heart on an echocardiogram. With a database of 30 billion images to review, Watson has the capacity to become the equivalent of a general radiologist with enhanced specialist skills in every domain of image analysis [18].

The current state of the A.I. scanning available to radiologists and pathologists has demonstrated that A.I. is adept at screening for lung and breast cancer. A.I. is indefatigable and could screen populations faster and much more cheaply than a radiologist could.

This would be of particular significance in Africa, for example, where a single A.I. platform could feasibly screen an entire town [4]. Embracing A.I. in radiology and pathology would mean that jobs are not lost but rather that roles are redefined: humans will focus on tasks needing a human element [19].

Ethics and trust

If A.I. systems are to play a useful and constructive role in our current and future society, then these systems should function in accordance with a set of values that is aligned to those of humans. A.I. systems are designed and built by humans, and this fact places an obligation on all A.I. designers and engineers to observe a set of ethical principles when embarking on the construction of such systems. The old adage about computer programs, viz that what you put in determines what you get out, is complicated and compounded when you are dealing with self-learning algorithms. The ethics of technology includes the ethics of artificial intelligence, which applies to robots and other artificially intelligent beings.

Roboethics is concerned with the morality of how humans design, construct and use robots. It also prescribes how artificially intelligent beings may be used to both harm and benefit humans [21]. To achieve this end, designers should be tasked with building elements of moral behaviour into their algorithms, in order to create artificial moral agents. This quality is particularly important where an A.I. system is being used in mental health care. These A.I. systems can be characterised as either implicit ethical agents, meaning machines which can do only what their programmers have constrained them to do, or explicit ethical agents, which are programmed to calculate what is ethical on their own by applying ethical principles to complex situations such as diagnosing depression in a patient, prescribing treatment and providing ongoing monitoring of the patient’s symptoms. This means that the system must have the ability to make decisions and select courses of action which are consistent with the codes of ethics prescribed by the Psychiatric Association for its practitioners. Professional ethics and machine ethics must thus operate in tandem and to the benefit of the patient [22].

Ethical relationships between doctor and patient

If an artificial agent assumes the role of a doctor, then the patient is joined with that agent in a therapeutic relationship which must duplicate the one that would have existed with a human doctor. The success of any doctor/patient relationship rests on trust. If there is a failure of trust, the patient will not feel comfortable communicating all the relevant clinical information which the algorithms require to compare against their database for diagnostic purposes [23].

The relationship between human psychiatrists and their patients gives rise to legal and ethical obligations because of the psychiatrist’s position of power vis-a-vis their patient. It follows that the potential exists for the care giver to harm or exploit his patient, hence the significance of the ethical codes applicable to mental health practitioners, be they human or machine.

There are two aspects of machine ethics that must be considered. The first concerns the moral behaviour of artificial intelligence agents: such agents must be constructed so as to ensure that they behave in a moral and ethical fashion towards both human users and other artificial moral agents. Second, there is a duty on humans to observe ethical standards when it comes to designing, creating and interacting with artificial intelligence entities. This is reflected in Isaac Asimov’s Three Laws of Robotics [25].

The imperatives described above have received a measure of formalisation via the publication of a set of ethical principles to be observed by the designers, builders and users of robots in the United Kingdom [26]. The following principles are taken from the EPSRC and AHRC report:

  1. Robots should not be designed solely or primarily to kill or harm humans.
  2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
  3. Robots should be designed in ways that assure their safety and security.
  4. Robots are artefacts; they should not be designed to exploit vulnerable users by working on emotional responses or dependency. It should always be possible to tell a robot from a human.
  5. It should always be possible to find out who is legally responsible for a robot [4].

The key to forming and maintaining a therapeutic relationship is empathetic understanding of what the patient is feeling and experiencing [27]. When the care provider communicates their feelings to the patient it is known as reflection [4]. Even when the patient is aware that he/she is interacting with a machine, it is still possible that this interaction will evoke intense emotions, which can, in an appropriate therapeutic context, be desirable. This direction of the patient’s feelings toward an object, albeit a machine, is known as transference. It can result in some patients becoming overly attached or attracted to an AICP system, and some patients may even believe that the machine is ‘alive’ [28].

The submissions above place a duty of care on psychotherapists to ensure that therapeutic relationships with their patients are concluded in such a way that they do not cause harm or distress to their patients. It follows that an AICP system must be designed to likewise end its relationship with a patient in a similarly empathetic fashion [29].

Summary

In order to be fully accepted into society, A.I. systems must embody values which generate trust in the societies in which they function. To achieve this end, their designers must build significant social capabilities into their artificial moral agents (AMAs) and robots. This is essential because their ever-increasing presence in our lives will have a major impact on our emotions and our decision-making capacities. A.I. systems with a deep learning capacity must be programmed in such a way that the new knowledge they learn is not deviant but is aligned with the accepted moral values and social norms of the society they serve [30].

This outcome can best be guaranteed if builders and designers of A.I. systems are not only expected to adhere to guidelines similar to those published by the EPSRC and the AHRC, but if those guidelines are elevated to the force of law and users, as well as builders and designers, are permitted to operate only on a licensed basis. Companies like IBM are to be congratulated on their initiative in creating an internal IBM Cognitive Ethics Board to guide and advise the company on the ethical development and deployment of A.I. systems. However, the damage and dangers inherent in rogue operators functioning in this burgeoning field are too great to rely on self-regulation [31].

Can A.I. systems be guilty of medical malpractice?

If we are rightly going to demand that ethical principles are programmed into all the A.I. platforms which interact with humans, then the next step is to determine under which circumstances they can be held legally responsible for their actions.

Definitions

Medical malpractice is a sub-category of general negligence law. Accordingly, if a physician who has a duty to a patient breaches that duty by failing to act as a reasonably prudent physician would, he is deemed to have been negligent. To be held liable for his negligence, a number of elements need to be satisfied, viz that the physician owed a duty of care to the patient who suffered the harm, that he breached that duty, that the breach caused the harm, and that the harm was not too remote a consequence of the breach [32]. The elements listed are well established in common law [33]. However, when an artificial intelligence system is linked to the doctor-patient relationship, matters become a great deal more complex [34], as we will discover in the rest of this article.

The path to accountability for A.I. systems

Can a patient sue a robot for malpractice? As this technology is still relatively new, litigation in cases of this nature constitutes a grey area; the law is justifiably lagging behind at the moment, and it is unlikely that courts will be able to find appropriate solutions to all the legal conundrums which this technology poses merely by adapting existing principles and precedents to the imminent new problems [4].


The ever-increasing public concern about the many risks to which individuals are exposed as a result of decisions made by computers, not humans, needs to be addressed. Policy makers, when seized with drafting legislation in this area, will perforce have to navigate between legislation that provides adequate protection to the public against the risks presented when computer judgement replaces that of humans, on the one hand, and not stifling A.I. innovation on the other. Applied to the field of medicine, a fine balance must be struck in the proposed legislation between the best interests of the patient and the cardinal rule of non-maleficence [4,35].

The concept of negligence (by a physician) implies an element of awareness, which is a quality that A.I. inherently lacks. While it is conceivable that robots could be held to performance standards of some kind, no such standards currently exist. It would be difficult and time-consuming for courts to create such standards, which means that the only other source of legally enforceable standards would be legislation. Therefore, if the A.I. robot cannot be held liable, who takes the blame? Could it be one or more of the following:

  • The human surgeon who oversees the robot.
  • The company which manufactures the robot.
  • The specific engineer who designed it [36].

The culpability of each of the above protagonists will be discussed below.

Can an A.I. system’s liability be linked to the role it plays in relation to patients?

Let us align our analysis of system liability to the capabilities of the IBM medical robot known as Watson, because it represents the most advanced of all A.I. systems used in the field of medical diagnosis. Liability in the doctor/patient relationship can be categorised into the following classes:

  • Medical malpractice usually applies to healthcare providers.
  • Vicarious Liability attaches to institutions like hospitals, which employ healthcare providers.
  • Product Liability is ascribed to defective equipment and medical devices which healthcare providers may use [37].

It is submitted that Watson’s capabilities place it partially into all three of the above categories. Were we to attempt to ascribe liability to Watson it soon becomes apparent that no single currently existing legal theory of recovery is adequate to the task of apportioning liability to a computer system capable of practicing medicine, so how is the law to proceed when faced with an alleged malpractice by Watson? [38]

Watson is cast in the role of a consulting physician

Watson is currently used by medical practitioners to assist them in making an accurate diagnosis of a patient’s illness. Whereas other A.I. platforms have to be programmed and trained for specific tasks, such as analysing X-rays or scans, Watson’s role is to provide information at the request/command of a doctor who has a normal doctor-patient relationship with his patient. Watson perforce does not, and cannot at this stage of its development, have such a relationship with a patient. Watson thus clearly performs the role of a consulting physician from the viewpoint of legal liability. Because Watson does not interact with the patient per se, Watson does not owe the patient a duty of care which it could breach and thus be guilty of malpractice [39].

The following practical examples of a consulting physician’s role will illustrate how liability is apportioned:

(A) In telemedicine. Telemedicine has been defined as the use of electronic information and communications technologies to provide support for health care practitioners when distance separates the participants [40].

In telemedicine the consulting doctor (analogous to Watson) is consulted by a ‘primary treating physician’ to obtain a diagnosis of his patient’s symptoms. The consulting doctor in telemedicine situations is not the assistant of the ‘primary treating physician’. No doctor-patient relationship is created by the consultation, and therefore there is no duty of care, such as normally exists between a patient and a treating doctor, which the consulting doctor (Watson) can breach [41].

The position is different where a patient is admitted to a hospital for treatment. Legal liability will arise according to the terms of the contract between the patient and the hospital. If the contract can be classed as a ‘total contract’, with non-delegable duties attaching to the hospital, then all doctors, including Watson, who are involved in treating the patient are deemed to be assistants of the hospital. The hospital will then be held liable, via the doctrine of vicarious liability, for any malpractice arising from implementing Watson’s diagnosis and suggested treatment [42]. In the event of a ‘split contract’, whereby the consulting doctor (Watson or its agent) has a separate contract with the patient, the consulting doctor (Watson) will also assume legal responsibility via its owner or agent [43].

It is clear from the above that the physician seeking advice must, first, satisfy himself that the source of the advice (Watson) has a high success rate in diagnosis; second, implement the advice obtained appropriately; and third, ensure that the questions put to Watson are appropriately formulated to encompass his patient’s symptoms [44].

(B) Strict liability. If Watson were to be categorised as a product and not a consulting physician, then any malpractice claim arising from a negligent diagnosis could be determined by applying a rule which differs from breach of a duty of care, namely strict liability, i.e. liability which arises without the injured patient having to prove fault or negligence on the part of the physician. Under this heading all that must be proved is that the product or activity was unreasonably dangerous and that it resulted in the patient’s injuries. Therefore, under a strict liability regime IBM, the manufacturer and developer of Watson, could be held liable for its product’s actions. This will in turn create a major incentive for IBM to ensure that its product, Watson, is safe to be used on the general public [45].

The primary physician who ‘consults’ Watson may not necessarily escape liability if the symptoms the physician poses to Watson are inaccurate or if some symptoms were omitted from the search. In a situation like this the physician and Watson could be held jointly and severally liable, unless evidence can be led which separates the physician’s individual acts of negligence in causing the injury from those of the consultant, Watson [46].

The above principle can be illustrated by referring to the modus operandi of pathologists. Their daily job does not merely involve consulting with physicians who refer samples to them for testing; it also involves making independent examinations of such blood and tissue samples and then personally making medical judgements based on their tests. Such a pathologist cannot escape liability for any negligence in their tests merely because the patient from whom the tissue sample was taken only consulted with his GP and did not enter into a formal doctor-patient relationship with the pathologist [47].

Strict liability is already entrenched in certain areas of product claims in Australian law. We could build on this basis in order to create a no-fault liability system for the A.I. industry and ease the load on manufacturers by legislating for a levy on A.I. products to create a claims pool, administered by an ombudsman, from which injured patients could seek redress for their Watson-induced injuries [48].

Medical malpractice and informed consent

We have seen supra that if Watson is cast in the role of a consulting physician it does not owe a legal duty of care to a patient and hence cannot incur liability on a medical malpractice claim [49].

Could Watson be held liable on a medical malpractice claim based on not providing sufficient information about the risks involved in a suggested therapy, which in turn means that the patient would not have provided a truly informed consent [50]?

Where a primary physician uses Watson as a consultant, an onus is placed on him to disclose to his patient that he is using Watson and what Watson’s diagnosis amounts to. This onus on the physician would also include a duty to inform his patient of the alternatives to Watson’s proposed treatment or diagnosis, as well as the reasonably foreseeable risks and benefits of such treatment. The patient, it is submitted, would require this information in order to arrive at an informed consent [51].

As physicians increasingly use Watson, it is likely that the criteria for determining their legal liability will also adapt to their changed practice conditions and that physicians who use Watson will be held to higher standards than those who do not. The reason for this likely change is that Watson-using physicians have access to vastly more information than non-users. Thus, in the same way as specialist physicians are held to a higher standard of care than GPs because of their higher levels of training and proficiency, it would be justifiable to demand higher standards from Watson-using physicians as well. The traditional action for medical malpractice will thus perforce have to evolve in order to adapt to the new breed of robots who practice medicine. Our courts, or our legislators, will have to respond and establish a cause of action for software malpractice [34,52].

Vicarious liability and Watson

Vicarious liability is most commonly applied to situations where one individual can be held legally responsible for the acts of another. Practically, it is mainly found in an employer-employee relationship, where an employer is held liable for the tortious actions of an employee.

The most common application of vicarious liability in the medical field is found in situations where hospitals can be held liable for the negligent acts of the doctors or nurses whom they employ. As this form of liability in the medical field is well established, vicarious liability for A.I. systems like Watson is a logical extension of the principle. Liability will only attach to the hospital if Watson can be categorised as an employee rather than a machine. Because of Watson’s diagnostic role within a medical team, a court may, it is submitted, find Watson to be analogous to a physician rather than a piece of equipment. If that were the case, it would require hospitals to extend their insurance policies to cover the risk of Watson causing an injury, in the same way they insure their other medical and nursing employees. Watson could not be held financially responsible in its own right for any claims or restitution, hence Watson’s vicarious liability will have to be shouldered by the hospital.

The availability of insurance to cover Watson is likely to encourage hospitals to invest in and use artificial intelligence systems.

How to determine liability

The current common law principles relating to negligence in the medical field do not translate well to regulating the injection of artificial intelligence into the practice of medicine, raising the following questions:

  1. Can Watson form a physician/patient relationship which could create an independent duty to the patient?
  2. Will guidelines for informed consent and the standard of care required have to change to accommodate Watson?
  3. How would the law properly assess causation and fault against Watson and his team of healthcare assistants? [58]

Thus, for artificial intelligence to flourish in delivering services in the field of medicine, legal clarity and certainty are essential before hospitals will adopt this emerging technology. Watson will most likely always be used within a team of physicians, and this fact will, in the light of current law, complicate the task of determining how fault and causation are to be allocated between the various actors. One possible solution would be that where a medical clinic comprising a number of doctors provides services to a patient, the clinic could be classified as a business or enterprise. If this is the case, then fault and causation are ascribed to the team of actors comprising the ‘enterprise’, rather than to the individuals who make up the team. The head of the enterprise will then make restitution. The downside of this approach is that it places a disproportionate burden on the enterprise and, despite simplifying the liability aspect, will create an economic disincentive to the expanded use of Watson.

The final solution – legislative intervention

Instead of the medical/healthcare industry enduring a long period of uncertainty regarding liability for Watson’s diagnosis and treatment recommendations while the courts grapple with adapting existing common law principles of negligence and malpractice to machines/robots like Watson, it is submitted that the best interests of all participants would be served if the legislature steps in and settles the issues described above via legislation.

It is suggested that the legislature should achieve this end by creating a unique cause of action for patients wanting to sue an artificial intelligence system. The following framework is suggested:

  1. The statute should require the court at first instance to determine the direct cause of the plaintiff’s injury.
  2. Arising out of the above determination, the case could either proceed under a claim for medical malpractice or product liability.

The advantage of this procedure is that because the court is forced to make an initial assessment of the direct cause of the plaintiff’s injury it will result in fewer actions against Watson.

Based on the proposed legislation, actions against an A.I. system like Watson should be confined to a single action but should still allow alternative remedies for recovery. This would entail the following (a simple decision routine sketching this triage appears after the list):

  1. Does the case arise out of a defect in the A.I. system’s hardware? If a panel of experts determines that there was a hardware failure and that the latter was the cause of the plaintiff’s injury, then the case should proceed against the manufacturer [63].
  2. If the plaintiff’s injury arises out of a failure to properly maintain Watson, then the indications are that Watson’s owner could incur liability on the basis of contributory negligence. It would also be wise for the legislature to mandate all owner/users of A.I. systems to carry insurance against any claim for negligence [64].
  3. If no hardware fault can be detected, then any prospective action should be lodged against the ‘enterprise’, be it a medical centre, private hospital or public hospital, because in most instances Watson, its owner and the cluster of physicians who use Watson would all be included in such an enterprise [4].
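For concreteness only, the triage proposed above can be expressed as a short decision routine. The category names, the parties and the 'panel of experts' findings are assumptions used to make the proposed statutory framework explicit, not a statement of existing law.

```python
# Hedged sketch of the proposed statutory triage: one action, alternative routes.
from enum import Enum, auto

class Route(Enum):
    PRODUCT_LIABILITY_MANUFACTURER = auto()   # hardware defect -> manufacturer
    NEGLIGENCE_OWNER = auto()                 # poor maintenance -> owner/operator
    ENTERPRISE_LIABILITY = auto()             # otherwise -> the treating enterprise

def route_claim(hardware_defect_found: bool, maintenance_failure_found: bool) -> Route:
    """Route a single statutory action to the appropriate avenue of recovery."""
    if hardware_defect_found:
        return Route.PRODUCT_LIABILITY_MANUFACTURER
    if maintenance_failure_found:
        return Route.NEGLIGENCE_OWNER
    return Route.ENTERPRISE_LIABILITY

# Example: the expert panel finds no hardware fault and no maintenance failure,
# so the action proceeds against the enterprise (clinic or hospital) as a whole.
print(route_claim(hardware_defect_found=False, maintenance_failure_found=False))
```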

As a result of the Ipp Report, all states and territories in Australia passed legislation which, amongst other things, placed a limit (or cap) on certain aspects of compensation which can be claimed for medical malpractice. The result is that the law of negligence is now governed by a mixture of legislative rules and the common law. The existing caps in the Civil Liability Act, in Queensland for example, could also be extended to A.I. platforms in order to act as a disincentive to excessive claims on the one hand, while not negating avenues of recovery for plaintiffs on the other.

Separating liability and restitution would be advisable in any action against Watson, because an A.I. entity cannot own property or earn wages. In an action for negligence a court first determines whether causation has been established before it assesses damages. However, because of the nature of Watson as a respondent, it is imperative that fault and causation be assessed against the enterprise as a whole rather than against the individuals comprising it. By following the course suggested above, separating the cause of harm from the source of the money which will pay for the harm will clarify the enterprise parties’ roles in the action against Watson.

It is submitted that employing enterprise liability will allow the courts to analyse the enterprise team’s fault without having to disentangle the interrelated actions of the individual participants.

Conclusion

Artificial intelligence will become the stethoscope of the 21st century if there is a high level of co-operation between the law, technology and the medical community. Legal systems around the world will soon be faced with fundamental decisions regarding A.I. systems. These systems will be prone to incurring liability for their decisions regarding the treatment of patients, in the same way as the doctors who treat those patients are.

Watson and other similar A.I. systems are already so well entrenched in, and relied upon by, medical practitioners that the option of imposing an outright ban on such systems has already passed. The only option now available to legislators around the world is to impose a sensible form of regulation of such systems to protect the rights of the millions of patients who will be exposed to the diagnoses and therapies suggested by these A.I. systems.

Therefore, it is submitted that to encourage the safe use of the skills these systems undoubtedly have, a new legal action based on enterprise liability should be created which encompasses all relevant aspects of medical malpractice, products liability and vicarious liability. Certainty and stability, achieved through far-sighted and appropriate legislation, will stimulate and entrench this vital new asset within the medical profession.

References

  1. Rossi F (2016) European Parliament Report ‘Artificial Intelligence: Potential Benefits and Ethical Considerations’. PE 571.380.
  2. Editorial ‘Artificial Intelligence as a Positive and Negative Factor in Global Risk.’ MIRI - Machine Intelligence Research Institute.
  3. Coiera E (2003) Artificial Intelligence in Medicine - An Introduction. (2nd edition), Chap 19.
  4. ibid
  5. Van Demark D ‘Artificial Intelligence in Health Care: Framework Needed’ in International News 14-15.
  6. Bostrom N (2014) Superintelligence: Paths, Dangers, Strategies. Oxford, New York.
  7. Muehlhauser L, Salamon A (2012) Intelligence explosion: evidence and import. In: Eden A, et al. (eds) Singularity Hypotheses: A Scientific and Philosophical Assessment. Berlin.
  8. de Blanc P (2011) Ontological crises in artificial agents’ value systems. arXiv:1105.
  9. de Blanc P, op cit; Ng AY, Russell S (2000) Algorithms for inverse reinforcement learning. In: Proceedings of the 17th International Conference on Machine Learning, pp: 663-670.
  10. Fallenstein B, Soares N (2014) Problems of Self-Reference in Self-improving Space-Time Embedded Intelligence. Artificial General Intelligence 17th International Conference, pp: 21-32.
  11. The Medical Futurist (2017) Editorial - Artificial Intelligence is the Stethoscope of the 21st Century.
  12. Bushenbacher K (2018) Can machines with artificial intelligence help look after China’s aging population? Global Times.
  13. Gershgorn D (2018) Google is using 46 billion data points to predict the medical outcomes of hospital patients. Quartz.
  14. Johnson O (2016) AI can excel at medical diagnosis, but the harder test is to win hearts and minds first. The Conversation.
  15. Yu KH, Zhang C, Berry GJ, et al. (2016) Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features. Nature Communications 7: 12474.
  16. Lusted LB (1960) Logical analysis in roentgen diagnosis. Radiology 74: 178-193. [Crossref] 
  17. Jha S (2016) Will computers replace radiologists? Medscape.
  18. McMillan R, Dwoskin E (2015) IBM crafts a role for artificial intelligence in medicine. Wall Street Journal.
  19. Jha S, Topol EJ (2016) Adapting to artificial intelligence: radiologists and pathologists as information specialists. Journal of the American Medical Association.
  20. Veruggio G (2007) The Roboethics Roadmap. Scuola di Robotica.
  21. McGee G (2007) A Robot Code of Ethics. 5: 30.
  22. Wallach W, Allen C (2010) Moral Machines: Teaching Robots Right from Wrong; Veruggio G, Operto F (2008) Roboethics: social and ethical implications of robotics. Springer Handbook of Robotics, pp: 1499-1524.
  23. Dugdale DC, Epstein R, Pantilat SZ (1999) Time and the patient-physician relationship. J Gen Intern Med 14: 34-40; Sullins JP (2011) When is a robot a moral agent? In: Anderson & Anderson (eds) Machine Ethics. CUP 2011.
  24. Veruggio G, Operto F (2008) Roboethics: social and ethical implications of robotics. Springer Handbook of Robotics, pp: 1499-1524.
  25. Asimov I (1942) Runaround. Street & Smith; Asimov I (1985) Robots and Empire. New York.
  26. Engineering and Physical Sciences Research Council (EPSRC) and Arts and Humanities Research Council (AHRC) (2011) Great Britain report titled Ethical Principles Regarding Robots.
  27. Rogers C (1959) A theory of therapy, personality and interpersonal relationships as developed in the client-centered framework. In: Koch S (ed) Psychology: A Study of a Science, Vol 3. New York.
  28. Reik LD, Watson RNW (2010) The age of Avatar realism. IEEE Robot Autom 17: 37-42.
  29. Miller KW (2010) It is not nice to fool humans. IT Professional.
  30. Sullins JP (2011) When is a robot a moral agent? Machine Ethics.
  31. European Parliament op cit at 5
  32. Allan J, Blake M (2014) The Patient and The Practitioner: Health Law and Ethics in Australia. Lexis Nexis pp: 187.
  33. Rogers v Whitaker [1992] HCA 58; Sidaway v Board of Governors of Bethlem Royal Hospital [1985] 1 All ER 643; Canterbury v Spence 464 F 2d 772 (1972).
  34. Tobey D (2018) Software Malpractice in the Age of A.I ‘A guide for the wary tech Company. AAAI/ACM Conference on Artificial Intelligence, Ethics and Society, New Orleans
  35. IBM Watson: ushering in a new era in computing, at http://www.03.ipm.com/innovation/us/watson/index.html
  36. Norman A (2018) Your future doctor may not be human: this is the rise of A.I. in medicine. Futurism. https://futurism.com/author/abbyon.
  37. McIlwraith J, Madden B (2017) Health Care and the Law 5th Thomson 247-254; Alan S, Blake M (2013) The Patient and the Practitioner, Health Law & Ethics in Australia. LexisNexis Australia pp: 188-217; Skene L (2017) Law & Medical Practice. Lexus Nexis pp: 217-236; Farrell AM, et al. (2017) Health Law Cambridge pp: 138-148.
  38. Thompson CL (1995) Imposing strict products liability on medical care providers. Modern Law Review 65: 115-718; McLean TR (2002) Cybersurgery - an argument for enterprise liability. J Legal Medicine 167: 181
  39. Moore TA (2004) Medical Malpractice: Discovery and Trial.
  40. Angaran D (1999) Telemedicine and Telepharmacy, current status and future implications. American Journal of Health Systems and Pharmacy 1405-1426. Berek B, Canna M (1994) Telemedicine on the Move: healthcare heads down the information super highway. Hospital Technological Services.
  41. Hill v Kokosky, 463 NW 2d 265, 266 (Mich. Ct. App. 1990). See also Ellis v Wallsend District Hospital (1989) 17 NSWLR 553
  42. Ellis v Wallsend District Hospital Supra.
  43. Irvin v Smith, 31 P 3d 934, 941 (Kan. 2001); Ulsenheimer K, Erlinger R (2001) Liability of the consulting physician. Z Arztl Fortbild Qualitatssich 95: 9
  44. Groves RH, et al. (2008) Intensive care telemedicine: evaluating a model for proactive remote monitoring intervention in the critical care setting. Stud Health Technol Inform pp: 131-146
  45. Ulsenheimer K, Erlinger R (2001) Liability of the consulting physician. Z Arztl Fortbild Qualitatssich 95: 609-615; Hill v Kokosky, 463 NW 2d 265, 266 (Mich. Ct. App. 1990)
  46. Irvin v Smith, supra at n 51; St John v Pope, 901 SW 2d 420, 424 (Tex. 1995); Kuznar v Raksha Corp, 750 NW 2d 121, 128 (Mich. 2008)
  47. Craig TA (2014) What is the Pathologists’ legal Liability? Archives of Pathology and Laboratory Medicine.
  48. McLean TR (2002) Cybersurgery-an argument for enterprise liability. J Leg Med 23: 167-210. [crossref] 
  49. Irvin v Smith, supra at n 51.
  50. Rogers v Whitaker [1992] HCA 58; Sidaway v Board of Governors of Bethlem Royal Hospital [1985] 1 All ER 643.
  51. Chappell v Hart (1998) HCA 55
  52. Diversified Graphics Ltd v Groves, 868 F 2d 293 (8th Cir. 1989).
  53. John James Memorial Hospital Ltd v Keys [1999] FCA 678
  54. Ellis v Wallsend District Hospital (1989) 17 NSWLR 553
  55. Pegalis SE (2005) American Law of Medical Malpractice 1:1.
  56. Sullivan T (2017) Half of Hospitals to adopt artificial intelligence within 5 years. Healthcare IT News.
  57. Susskind R, Susskind D (2016) Technology Will Replace Many Doctors, Lawyers and other Professionals. Harvard Business Review.
  58. Goertzel K (2016) Supply chain risks in critical infrastructure: legal liability for bad software. CrossTalk, Sep-Oct ed. See also Diversified Graphics Ltd v Groves, 868 F 2d 293 (8th Cir. 1989); contra Superior Edge Inc. v Monsanto Co., 44 F Supp 3d 890, 912 (D. Minn. 2014)
  59. McLean op cit 188.
  60. Tappan K (2005) Medical malpractice reform: is enterprise liability or no-fault a better reform? British Columbia Law Review 1095: 1104.
  61. Injury law negligence liability: who is responsible. http://injuryfindlaw.com
  62. Lea G (2015) Who’s to blame when artificial intelligence systems go wrong? The Conversation.
  63. Owen D, Madden MS, Davis MJ (2000) Products Liability.
  64. Pegalis SE (2005) American Law of Medical Malpractice.
  65. Civil Liability Act 2003 (Qld).
  66. Alain op cit 1076.
  67. Tappan op cit 1104.
  68. Allen TC (2014) Intradepartmental consultation: what is the pathologist's legal liability? Arch Pathol Lab Med 138: 589-591. [crossref] 
  69. Thomas S (2017) Artificial Intelligence, Medical Malpractice and the End of Defensive Medicine. Harvard Law in Bill of Health.

Editorial Information

Editor-in-Chief

Ying-Fu Chen
Kaohsiung Medical University, Taiwan

Article Type

Research Article

Publication history

Received date: June 19, 2018
Accepted date: July 04, 2018
Published date: July 09, 2018

Copyright

©2018 Lupton M. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Citation

Lupton M (2018) Some ethical and legal consequences of the application of artificial intelligence in the field of medicine. Trends Med 18: doi: 10.15761/TiM.1000147

Corresponding author

Michael Lupton

Professor, Bond University, Queensland, Australia
