Priti Nag

Covid-19 and Artificial Intelligence: Using AI to overcome Covid-19 in an ethical manner

The effective use of AI in pandemic management requires a principled approach.



In a crisis like the covid-19 pandemic, governments and health-care providers must respond rapidly and effectively to halt the disease's spread. Artificial intelligence (AI), which in this context primarily refers to increasingly efficient data-driven algorithms, may play a key role in the action, for example, by assisting in the tracking of a virus's progress or prioritizing scarce resources. It may be tempting to deploy these innovations at a rapid and large scale in order to save lives. However, AI deployment has the potential to impact a broad variety of fundamental principles, including autonomy, privacy, and fairness. Even in emergency situations, AI is far more likely to be useful if those commissioning, developing, and implementing it take a systematic ethical approach from the outset.


Ethics entails weighing the risks and benefits of a decision in a principled manner. This lays a foundation of trustworthiness on which a widely used technology can be built. Ethical deployment requires broad and open consultation, deep and wide consideration of possible consequences, and transparency about the goals sought, the trade-offs made, and the principles guiding these decisions. In a pandemic, these processes should be accelerated rather than abandoned. Otherwise, there are two major risks: first, the technology's advantages may be outweighed by harmful side effects, and second, public confidence may be lost.


On the first risk: the potential benefits increase the motivation to deploy AI systems quickly and at scale, but those same features make an ethical approach all the more valuable. The speed of development limits the time available to test and evaluate new technology, while the scale of implementation magnifies any negative consequences. Without forethought, this may lead to problems such as a one-size-fits-all solution that disadvantages already marginalized groups.


Second, public trust in AI is critical. Contact tracing apps, for example, depend on widespread adoption to be effective. However, both technology firms and policymakers are having difficulty persuading the public that AI and data can be used responsibly. Following the controversy surrounding, for example, the relationship between DeepMind and the Royal Free London NHS Foundation Trust, privacy advocates have warned against proposals to expand access to NHS data. Concerns have also been raised in China about the transfer of data and control of the Health QR code system to private companies. Overpromising on technological benefits, or relaxing ethical standards, as has happened at times during this crisis, risks undermining long-term confidence in the sector. Whether potential harms are apparent right away or take a long time to manifest, taking a consistent ethical approach from the start will put us in a much better position to reap the full benefits of AI now and in the future.


Bringing AI ethics and health ethics together


Artificial intelligence (AI) is a broad term that refers to digital systems that can make complex decisions or recommendations based on data inputs. Even this straightforward definition points to three reasons why such systems give rise to ethical issues.


First, AI applications, especially in healthcare, often require large amounts of personal data, which raises all of the issues surrounding responsible data management, including privacy, consent, security, and ownership.


Second, AI systems are being used to automate previously manual decision-making processes. This automation raises ethical issues such as who should be held responsible for these decisions and how stakeholders can determine which value judgments are guiding them. Is the system, for example, optimizing a commercial value, a government's interests, or an individual's health? Automation bias—the tendency for people to suspend their own judgment and over-rely on automated systems—can raise these concerns even when an AI system is only recommending a course of action.


Third, the workings of AI systems are often opaque, owing to the complexity of the data or of the algorithm (especially for many of the powerful and popular algorithms used in machine learning). This opacity, combined with the accountability issues above, can make it difficult to evaluate ethically relevant factors such as unintended biases in the system or the consistency of results across different populations.


Of course, ethical decision-making is already ingrained in healthcare, where it is often organized around the four pillars of biomedical ethics: beneficence, non-maleficence, autonomy, and justice. When considering the use of AI in a public health setting, such as a pandemic, it's important to think about how the unique challenges posed by AI relate to these four well-known principles.


Beneficence


It may appear self-evident that using AI to manage a pandemic is beneficial: it is intended to save lives. However, there is a risk that the vague promise that new technology will "save lives" will be used as a blanket justification for interventions we would not otherwise consider acceptable, such as the widespread use of facial recognition software. Those responsible for designing or implementing such a system must be clear about whom their intervention will benefit and how. Only by stating this explicitly can they ensure that the intervention is proportionate to its benefit. For example, if a data-driven contact tracing app does not need large quantities of location data to be collected and stored indefinitely in order to deliver its benefit, then collecting data on a scale we would normally consider excessive would not be proportionate. Even if that extra data might offer some additional benefit, one must ask whether the benefit is sufficient to justify creating such a database.


Non-maleficence


It is critical to consider carefully the potential implications of proposed interventions in order to prevent unintended harms from the use of AI in pandemic management. Some interventions, such as imposing self-isolation, may exacerbate mental health problems in those who are already vulnerable (for example, the elderly) or impose high financial costs on individuals. AI systems are built to optimize an objective function, a mathematical expression of the goals they were created to achieve. Any potential harms not represented in this function will not be taken into account by the system's predictions. Some hospital resource prioritization systems, for example, are optimized to predict death from covid-19, but not other potential patient harms (such as "long covid"). If these other harms do not correlate with the risk of death, allocating health resources solely on the basis of such a system could allow significant harm to accumulate (depending on the incidence and severity of those other harms). Furthermore, because these systems will be widely used, they must perform consistently across a variety of populations and potentially changing conditions. Developing AI systems quickly while our knowledge of the virus is still limited, and with less time than usual to ensure the quality and representativeness of the data used, risks producing systems built on simplified assumptions and datasets that do not cover all real-world scenarios. For example, a recent systematic review of 145 prediction models for covid-19 diagnosis (including 57 that used AI for image analysis) found that all of them were at high risk of bias. Inaccurate diagnoses or inappropriate interventions resulting from such models could cost more lives than the models save.
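To make the objective-function point concrete, the following is a minimal sketch in Python using entirely hypothetical patients, risk estimates, and weights (none of these figures come from the article or any real system). A ranking driven only by predicted mortality never registers other harms; a composite score at least makes the weighting of those harms an explicit, and therefore contestable, choice.

```python
# Hypothetical illustration: ranking patients for scarce resources by
# predicted mortality alone vs. a composite score that also weighs
# other harms such as long-term illness. All values are invented.

patients = [
    # (id, predicted mortality risk, predicted risk of long-term harm)
    ("A", 0.30, 0.05),
    ("B", 0.25, 0.60),
    ("C", 0.10, 0.70),
]

# Single-objective ranking: only mortality enters the objective function,
# so patient C's high risk of long-term harm is invisible to the system.
by_mortality = sorted(patients, key=lambda p: p[1], reverse=True)

# Composite ranking: an explicit (and debatable) weighting of both harms.
WEIGHT_MORTALITY, WEIGHT_LONG_TERM = 0.7, 0.3

def composite(p):
    return WEIGHT_MORTALITY * p[1] + WEIGHT_LONG_TERM * p[2]

by_composite = sorted(patients, key=composite, reverse=True)

print([p[0] for p in by_mortality])   # ['A', 'B', 'C']
print([p[0] for p in by_composite])   # ['B', 'C', 'A'] under these weights
```

The point is not that any particular weighting is correct, but that whatever harms are left out of the objective function simply do not exist as far as the system is concerned.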


Autonomy


The benefits of new technologies almost always depend on how they influence people's behavior and decision-making: from personal precautions, to healthcare professionals' treatment decisions, to politicians' prioritization of different policy responses. It is therefore critical to respect people's autonomy. Evidence shows that, across cultures and age groups, people need to feel in control in order to accept the use of a technology; otherwise its impact on their behavior is likely to be limited. A particular challenge for AI systems is that they may influence patients, healthcare professionals, and other stakeholders in more subtle and individualized ways than, say, a mask or a vaccine, where the desired behaviors are obvious. Designers should help users understand and trust AI systems so that they feel confident using them as part of their own decision-making. In a pandemic, for example, diagnostic support systems should give healthcare professionals enough information about the assumptions behind, and the uncertainty surrounding, a recommendation that they can integrate it into their professional judgment.
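As a rough illustration of that last point, here is a small Python sketch of what a diagnostic support output might look like if it surfaced its uncertainty and assumptions alongside the recommendation. The class, field names, and figures are all hypothetical, not drawn from any real system.

```python
# A minimal sketch (hypothetical model and field names) of a diagnostic
# support output that exposes its uncertainty and assumptions rather than
# returning a bare recommendation, so a clinician can weigh it against
# their own judgment.

from dataclasses import dataclass, field

@dataclass
class DiagnosticAdvice:
    recommendation: str                 # e.g. "refer for PCR test"
    probability: float                  # model's point estimate
    interval_95: tuple                  # uncertainty around that estimate
    assumptions: list = field(default_factory=list)  # known limits of the model

advice = DiagnosticAdvice(
    recommendation="refer for PCR test",
    probability=0.72,
    interval_95=(0.58, 0.83),
    assumptions=[
        "trained on adults presenting to hospital; less reliable for children",
        "imaging data from three hospitals only; performance may not transfer",
    ],
)

print(f"{advice.recommendation} (p={advice.probability:.2f}, "
      f"95% interval {advice.interval_95}); caveats: {advice.assumptions}")
```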


Justice


As is well known, data-driven AI systems can have different effects on different groups. When there is a lack of data of adequate quality for certain groups, AI systems may become biased, often in ways that discriminate against already marginalized groups, such as racial and ethnic minorities. Smartphone apps, for example, are increasingly being lauded as monitoring and diagnostic tools, such as the MIT-Harvard model for diagnosing covid-19 based on the sound of coughs. However, smartphone adoption is uneven across countries and demographics, with global smartphone penetration estimated to be 41.5 percent in 2019. This restricts who has access to the service as well as whose data is used to create such apps. Using AI for pandemic management could exacerbate health inequalities if care is not taken to detect and counteract any biases. The speed and scale with which systems might be deployed in response to a pandemic exacerbate these risks, making foresight and vigilance even more important.
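One routine safeguard against the kind of bias described above is to check a model's performance for each demographic subgroup separately, rather than reporting a single aggregate figure. The sketch below, with invented labels and predictions, illustrates the idea in Python; in practice the choice of subgroups, metrics (for example, false negative rates), and thresholds for concern would need to be made with the affected communities in mind.

```python
# Hypothetical illustration of a per-subgroup performance check: a single
# aggregate accuracy can hide the fact that one group is served far worse.

from collections import defaultdict

# (subgroup, true label, model prediction) -- invented values only
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, pred in records:
    total[group] += 1
    correct[group] += int(truth == pred)

for group in total:
    print(f"{group}: accuracy {correct[group] / total[group]:.2f}")
# group_a: 1.00, group_b: 0.50 -- a disparity the overall figure would mask.
```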


In general, when AI systems are proposed as part of a pandemic response, complicated value trade-offs may be involved. Leading UK public health officials, for example, argued for a centralized approach to data collection in the design of the NHS digital contact tracing app, on the grounds that machine learning could be applied to the resulting dataset to aid disease prediction. Legal and security experts, on the other hand, argued for a decentralized approach, citing privacy and data security concerns. The United Kingdom ultimately chose to use a decentralized app. These are inherently value-laden decisions on which reasonable people may disagree, and some groups may have much more reason for concern than others (for example, owing to worries about surveillance or historical discrimination). Such risks can be reduced by involving diverse groups in decision-making and being open about the values and trade-offs at stake.


A participatory approach to ethics in practice


Politicians and public health officials make the final decisions about AI deployment, so it falls to them to address these ethical concerns. However, they must rely on the expertise of designers, engineers, and healthcare professionals, as well as the perspectives of those who will be affected. There is no single checklist that these decision-makers can follow to ensure that AI is used ethically and responsibly. Especially during a crisis, there will be tensions between putting AI to beneficial use and mitigating its potential harms. To ensure that decisions about the use of AI are equitable, public decision-makers should not weigh these trade-offs alone, but should engage with a variety of stakeholder groups. To ensure that this can be done quickly and efficiently, even in the midst of a fast-moving crisis, procedures must be in place ahead of time detailing who should be consulted, and how, if a public health crisis occurs. Decision-makers can then be held accountable for following those processes and for making the reasoning behind their AI deployments transparent.


Broad stakeholder engagement entails consulting a wide range of experts and diverse groups from across society in order to understand the trade-offs involved in deploying a system and the appropriate ways to resolve them. This may mean talking to the engineers building AI systems, to better understand their weaknesses, constraints, and risks; to experts in fields such as human-centered design or value-sensitive design, to understand how a system's intended benefits may depend on human behavior and how to support adherence; and to ethicists, to identify where AI systems may embed value judgments in decision-making processes. Consultation with a variety of public groups can reveal blind spots, identify previously overlooked harms or benefits to different groups, and help decision-makers understand how different communities perceive the trade-offs.


AI has the potential to help us solve increasingly critical global problems, but deploying powerful new technologies for the first time in a crisis is always fraught with risk. The better prepared we are to deal with the ethical issues ahead of time, the easier it will be to maintain public trust and deploy the technology quickly for the greater good.



Key messages

  • AI-based technologies hold promise for combating pandemics such as covid-19, but they also pose ethical issues for developers and decision-makers.

  • Without an ethical approach, the chances of unintended negative consequences and a loss of stakeholder confidence rise.

  • Because AI systems often require large quantities of personal data, automate decisions previously made by humans, and can be extremely complex and opaque, they raise ethical concerns.

  • The four pillars of biomedical ethics—beneficence, nonmaleficence, autonomy, and justice—can help us understand how these issues arise in public health.

  • The best way to address these issues is to have open and transparent communication with various stakeholder groups during the development of AI systems.


 

To support their work, Newsmusk allows writers to use primary sources. White papers, government data, original reporting, and interviews with industry experts are just a few examples. Where relevant, we also cite original research from other respected publishers.




Source: The BMJ


