To ethicize or not to ethicize…


Ethical decision making isn’t just another form of problem solving. As artificial intelligence (AI) grows in capability and influence, experts are treading uncharted territory to develop International Standards for ethical AI that address its challenges from the outset.

 

 


Waymo began as the Google Self-Driving Car Project in 2009.

As algorithms become more sophisticated and autonomous, there is a risk that they will begin to make important decisions on our behalf. The technology is already capable of automating decisions – in medical diagnostics or smart manufacturing, for example – that would normally be made by human beings.

When it comes to artificial intelligence, automotive technology is ahead of the curve. Autonomous cars are a popular research domain in AI. Big names such as Google, Uber and Tesla have been investigating how to make cars learn to drive correctly using deep reinforcement learning, i.e. learning by trial and error. But self-learning machines are vulnerable to failure, bringing ethical considerations to the fore. This challenges the conventional conception of moral responsibility: Who is responsible? And who holds the key to best practice?
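At its core, the “trial and error” that the article mentions is conceptually simple: the system tries an action, observes a reward, and gradually prefers the actions that paid off. The toy Python sketch below illustrates that loop with tabular Q-learning on a made-up “stay in the lane” task. It is purely illustrative and is not how Waymo, Uber or Tesla actually train vehicles, which involves deep neural networks, simulators and vast amounts of driving data.

```python
import random

# Toy "lane keeping" task: the car sits in one of five lateral positions (0-4);
# position 2 is the lane centre. Actions: steer left (-1), hold (0), steer right (+1).
ACTIONS = [-1, 0, +1]

def step(pos, action):
    new_pos = pos + action
    if new_pos < 0 or new_pos > 4:
        return 2, -10.0                  # "crash": reset to the centre with a large penalty
    return new_pos, (1.0 if new_pos == 2 else -1.0)

# Tabular Q-learning: the agent is never told the rule "stay in the middle";
# it discovers it by trial and error, which is the essence of reinforcement learning.
q = {(p, a): 0.0 for p in range(5) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1    # learning rate, discount factor, exploration rate

pos = 2
for _ in range(20000):
    # Explore occasionally; otherwise pick the action currently believed to be best
    action = (random.choice(ACTIONS) if random.random() < epsilon
              else max(ACTIONS, key=lambda a: q[(pos, a)]))
    new_pos, reward = step(pos, action)
    best_next = max(q[(new_pos, a)] for a in ACTIONS)
    q[(pos, action)] += alpha * (reward + gamma * best_next - q[(pos, action)])
    pos = new_pos

for p in range(5):
    print(p, max(ACTIONS, key=lambda a: q[(p, a)]))   # learned steering per position
```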

In the absence of a global standard on AI, how do we bring more awareness about such issues? ISO and the International Electrotechnical Commission (IEC) have launched a range of work items through their joint technical subcommittee ISO/IEC JTC 1/SC 42, Artificial intelligence. Here, Mikael Hjalmarson of SIS, ISO’s member for Sweden, a leading expert in SC 42, explains how International Standards will help create an ethical foundation for building and using AI systems in the future.

ISOfocus: Techniques such as AI promise to be very transformative. Why are ethical and societal problems necessary considerations for AI?

Mikael Hjalmarson of SIS, ISO’s member for Sweden, is a leading expert in subcommittee ISO/IEC JTC 1/SC 42.

Mikael Hjalmarson: AI uses technologies that enable information to be collected and processed in new ways and more automatically. Today, we can handle far more data than in the past – a capability that is bound to have ethical and societal consequences. It is when the data is managed in the hidden layers of an AI network, such as a neural or machine-learning implementation, that ethical and societal issues – which need not always be negative! – have to be considered. That is to say, decisions and considerations that were previously handled outside the systems now have to be dealt with within them. It may also be that an AI application, no matter how “self-learning” it is, has preconceived biases that were inadvertently introduced when we developed and built the system.
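One common way such biases creep in is through the historical data a system is trained on. The minimal Python sketch below (with invented numbers, not a real dataset) shows a trivially simple “model” that only learns past approval rates per group: nobody writes a biased rule, yet the skew in the historical decisions becomes the system’s learned behaviour.

```python
# Hypothetical, invented records of past decisions: group "A" was approved far
# more often than group "B". No one intends to encode that skew in the system.
historical = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def fit_approval_rates(records):
    """A deliberately simple 'model': learn the approval rate observed per group."""
    rates = {}
    for group in sorted({g for g, _ in records}):
        outcomes = [approved for g, approved in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = fit_approval_rates(historical)
print(model)   # {'A': 0.75, 'B': 0.25} -- the historical skew becomes the learned rule
```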

It is imperative that we understand the ethical and societal considerations of the technology so that we can develop trustworthy systems that include assurances of transparency, explainability, bias mitigation, traceability, and so forth, as these are key to accelerating AI adoption and acceptance in the future. International Standards could play a role in identifying these ethical issues and in providing the necessary framework to address them.

What are the biggest challenges facing AI ethics and societal issues? What are some of the consensus areas?

Robot helpers keep Singapore’s Changi Airport spick and span.

AI presents new and unique challenges to ethics. The main challenge is that systems leveraging AI can be implemented by many different users in different ways across various application verticals – from healthcare to mobility – with completely different requirements, and sometimes with market and regional differences as well. An AI technology becomes a “black box” that can answer questions. But can it tell you why one option is better than another? Can it provide alternatives? Then, there are the different policies, directives and environmental aspects to consider, for example those governing how data can and should be collected and used.
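The gap between an answer and an explanation can be made concrete. In the hypothetical Python sketch below, both functions give the same decision from the same invented scoring weights, but only the second reports why. A deliberately transparent linear score is used purely to show the contrast; a real system built on deep neural networks cannot be opened up this easily, which is exactly what makes explainability a standards topic.

```python
# Invented feature names and weights for a deliberately transparent linear score.
WEIGHTS = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}

def black_box_decision(applicant):
    score = sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return "approve" if score > 0 else "decline"         # an answer, but no "why"

def explained_decision(applicant):
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    decision = "approve" if sum(contributions.values()) > 0 else "decline"
    return decision, contributions                        # the answer plus its rationale

applicant = {"income": 1.2, "debt": 1.5, "years_employed": 0.5}
print(black_box_decision(applicant))                       # 'decline'
print(explained_decision(applicant))                       # ('decline', per-feature contributions)
```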

Another challenge is ensuring that aspects such as accountability, responsibility, trust, traceability and human values are handled consistently so that they gain wide acceptance, even though we are not talking about creating value systems. An illustration of this might be that in one application domain it is permitted to capture and evaluate a given set of data while in another domain it is forbidden. For instance, a financial platform would be keen to avoid unintended bias rather than “AI eavesdropping”, while healthcare would likely put the emphasis on transparency about the types of data captured. The system needs to be able to manage these differences.
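How a system might “manage these differences” can be pictured as domain-specific policy configuration. The Python sketch below is purely hypothetical (the domain names, fields and policy labels are invented): each application domain declares what data may be captured and which trustworthiness property it emphasises, and the system checks requests against that policy.

```python
# Hypothetical per-domain policies: what may be captured and what is emphasised.
DOMAIN_POLICIES = {
    "finance":    {"may_capture": {"income", "repayment_history"},
                   "emphasis": "mitigating unintended bias"},
    "healthcare": {"may_capture": {"diagnosis", "medication"},
                   "emphasis": "transparency about the data captured"},
}

def check_capture(domain, field):
    """Return whether a field may be captured in a domain, and what that domain emphasises."""
    policy = DOMAIN_POLICIES[domain]
    return field in policy["may_capture"], policy["emphasis"]

print(check_capture("finance", "diagnosis"))      # (False, 'mitigating unintended bias')
print(check_capture("healthcare", "diagnosis"))   # (True, 'transparency about the data captured')
```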

What types of ethical and social standards is SC 42 working on?

SC 42’s working group WG 3, Trustworthiness, is currently busy with a newly approved project. The idea is to collect and identify the ethical and societal considerations related to AI and link back to the trustworthiness projects we are working on. These efforts will culminate in a future technical report, ISO/IEC TR 24368, which aims to highlight the specifics of ethics and societal concerns in relation to other more generic ongoing projects on trustworthiness, risk management and bias, among others.

The ethics and societal aspects are examined from an ecosystem point of view, which could lead to more work being carried out in SC 42 in the future, as well as provide guidance to other ISO and IEC technical committees developing standards for domain-specific applications that use AI.

What are some of the regulatory issues in this area and how does SC 42 plan to overcome them? One of the challenges with ethical standards is that they are often voluntary, which means some AI technology creators may not follow them. Any thoughts?
Close-up of a navigation tablet on a motorcycle.

ISO, IEC and JTC 1 develop voluntary consensus standards across the board, not just on ethics. Our concern right now is that the technology is changing faster than regulators can keep up. This results in a cat-and-mouse game between the increasing use of AI in various types of systems and environments and the development of rules and legislation to control it. Since we are looking at the entire ecosystem, we have cross-sector participation that represents a wide variety of viewpoints in the field, including regulatory requirements.

One illustration of this is the navigation system in your car. It is perfectly acceptable for a GPS giving directions on the best route from A to B to go wrong from time to time, as we will probably still reach our destination eventually. But is it OK if an AI application chooses to give a patient a more effective drug (A) with a higher risk of side effects rather than a less effective drug (B) with a lower risk of side effects? This may work well in a hospital where patients are closely monitored and doctors are on site, but it may be more inconvenient, not to mention risky, in an elderly care home. Had the drug been prescribed by a doctor, it would have been possible to ask why the more effective drug (A) was chosen, but an AI application that is only supposed to deliver a drug may not even be able to explain why drug (A) is more appropriate than drug (B).
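Part of what makes this a question of context rather than pure technology can be shown with a back-of-the-envelope calculation. In the hypothetical Python sketch below (all numbers invented), the “better” drug flips once the cost of an unmanaged side effect rises, as it would in a care home without doctors on site – which is exactly why the same automated choice can be acceptable in one setting and risky in another.

```python
# All numbers are invented for illustration: drug A is more effective but riskier.
DRUGS = {
    "A": {"benefit": 10.0, "side_effect_risk": 0.30},
    "B": {"benefit": 6.0,  "side_effect_risk": 0.05},
}

def best_drug(side_effect_cost):
    """Pick the drug with the highest expected value for a given cost of side effects."""
    def expected_value(drug):
        return drug["benefit"] - drug["side_effect_risk"] * side_effect_cost
    return max(DRUGS, key=lambda name: expected_value(DRUGS[name]))

print(best_drug(side_effect_cost=5.0))    # hospital: side effects are managed  -> 'A'
print(best_drug(side_effect_cost=40.0))   # care home: side effects are costly  -> 'B'
```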

International Standards, including those dealing with ethics, could serve as guidance to assist regulators in their work. They also increase the likelihood that new systems, built to connect with other new or existing systems, will be widely adopted and used. Standards, by their very nature, are developed for the long term; yet, today, the call is often for research and development, which means other types of documents may be needed on an ongoing basis. Alongside the new ethics project, a vocabulary establishing clear terms and definitions would therefore be a valuable asset to ensure a common understanding across the various parties involved and a good starting point for developing such documents.

When is it unethical not to use AI?

This is a difficult question because what is ethical or unethical very much depends on the context in which AI is used, which may also differ between regions. For instance, not using AI in the study of diseases could be deemed unethical, since using it might have increased the chances of finding a cure faster.

It is important to remember the potential of AI to help solve some of our biggest challenges, in particular when they relate to human safety. But it’s a difficult judgement call. For example, is it more “ethically acceptable” for a self-driving car to kill a certain number of people each year than for human drivers to do the same? Such is the dilemma of AI ethics.


Berlin tests its first driverless buses in August 2019.

Elizabeth Gasiorowski-Denis
Editor-in-Chief of ISOfocus