Softwiring – The role of Human Centred Design in Ethical Artificial Intelligence

I recently revised my earlier articles on how human centred design contributes to automated decision systems with ethical outcomes. I presented this as a keynote at our annual tech showcase this year, D61+Live.

Introduction

“To ensure autonomous and intelligent systems (A/IS) are aligned to benefit humanity A/IS research and design must be underpinned by ethical and legal norms as well as methods. We strongly believe that a value-based design methodology should become the essential focus for the modern A/IS organization. Value-based system design methods put human advancement at the core of A/IS development. Such methods recognize that machines should serve humans, and not the other way around. A/IS developers should employ value-based design methods to create sustainable systems that are thoroughly scrutinized for social costs and advantages that will also increase economic value for organizations. To create A/IS that enhances human well-being and freedom, system design methodologies should also be enriched by putting greater emphasis on internationally recognized human rights, as a primary form of human values.”
~ The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems – Methodologies to Guide Ethical Research and Design

“New ethical frameworks for AI need to move beyond individual responsibility to hold powerful industrial, governmental and military interests accountable as they design and employ AI… When tech giants build AI products, too often user consent, privacy and transparency are overlooked in favor of frictionless functionality that supports profit-driven business models based on aggregated data profiles… Meanwhile, AI systems are being introduced in policing, education, healthcare, and other environments where the misfiring of an algorithm could ruin a life.”
~ AI Now 2017 Report

In the HCD response to this topic, “ethical” means autonomous systems that are mindfully designed and developed to minimise harm, inequality and/or discrimination.

This response applies when an automated system (AI, ML etc) moves from applied research to commercialisation and wide uptake for purposes beyond knowledge creation and dissemination. This means anything looking to move into operational use for clients or customers. It is also not a binary decision; it includes both utilitarian and deontological approaches (or wider ones, eg “care ethics”).

There are at least three broad components to solving ethical AI questions. In each there are important considerations governing input (data, privacy constraints, purpose) and output (opportunity, impact, products/services).

Regulatory – Human Rights, International and National law, Compliance eg organisational code of conduct

Technological – Research and Development that seeks to resolve data collection/generation, privacy, encoded ethics, models, algorithms

Human Centred Design – Ecosystems of People, Data and Digital, Experience, Empathy, Context, Culture, Diversity

This article covers the last part: HCD, or the “softwiring”.

1.0 Ecosystems

Outside of academic research, digital technologies don’t exist in bubbles or vacuums for their own purpose. People make them with an intent, for someone else to find information that assists with making decisions – whether specialised groups with specific tasks (eg forensic investigation) or a broad range of less skilled users seeking to fulfil everyday tasks (eg dynamic mapping of a travel plan).

These systems interact with other systems, drawing on data and generating new data through their users. The point of this section isn’t to break down the ecosystems but to set the point of view that technology needs to be considered less as a tool and more as a network of systems with complex relationships.

Diagram 1: The ecosystem of roles and responsibilities for AI development

To date, AI has been created by data scientists, computer scientists and software engineers, but as we move towards broader application development we can call on the expertise of an ecosystem of experts with special roles and responsibilities. AI production becomes a collective activity, unburdening the engineers from carrying the ethical questions alone and introducing diverse knowledge and skills to reduce the incidence of unintended consequences. This can be broken down into the clear areas mentioned in the opening section (Regulatory, Technical, HCD).

Values

Questions regarding legal criteria, or “can we…?”, would be reviewed against the UN Universal Declaration of Human Rights, concerning human dignity, social cohesion and environmental sustainability. Existing Australian data and privacy laws, along with many international equivalents, are available to examine potential or unintentional violations or abuses of personal data.

This ought to assist in revising corporate values or creating new ones (including for SMEs and startups), which then guide the executive and their business decisions, or “should we…?”.

Like most things, the devil is in the detail: actual development work and the embodiment of values into production require further exploration particular to each project, program of work or product. This section only examines “who and why…?”, leaving “how and what” to the technical expert contributions.

As mentioned above, AI software is made by teams, not just computer or data scientists or software engineers, and is interacted with by a range of users, not just one operational kind. Regarding teams: an understanding of central (full time, dedicated project teams) and orbiting (consulting expertise, eg lawyers or ethics councils) team members can clarify responsibilities and provide wayfinding for critical points of decision making during the operationalisation or production of ethical AI.

Leadership

Leaders face several challenges, both in understanding the various kinds of AI and how they can be used (eg is it a prediction problem or a causation one?) and in resolving the ethical challenges these new services pose (eg is it low risk, like having to talk to a chatbot, or high risk, like “pre-crime”?).

An important point worth noting: there is evidence that leaders are more likely to respond to ethical challenges tacitly than through compliance or regulation. The evidence also shows that organisations that support information-sharing networks between leaders and/or other levels of staff resolve ethical dilemmas more successfully than those whose structures leave leaders isolated (eg due to status threats). Significant organisational change can also trigger ethical dilemmas: the absence or poor inclusion of appropriate new expertise, coupled with historical approaches, creates new situations without a familiar framework (introducing AI for business outcomes is a clear example, as it causes massive internal disruption in both finding new markets and acquiring required skills). This further supports an ecosystem and diversity approach to ethics, sharing the load of the decision.

1.2 Data Quality

Diagram 2: “Revealing Uncertainty for Information Visualisation” Skeels, Lee, Smith, Robertson

Data has levels of quality that a user can challenge if the outcomes don’t meet their mental models or expectations, and this can lead to a drop in trust in the results. This model not only helps with understanding how the layers of data relate to one another, but can also serve to ‘de-bug’ where the breakdown might have happened when a user challenges the results.

For example, if the data is poorly captured then all the following levels will amplify this problem. If the data is good and complete then the issues might be in the inferences. This is why user feedback using real content (ie not pretend, old or substituted data) is important when testing the system.
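To make the propagation idea concrete, below is a minimal Python sketch of layered quality checks, where problems flagged at the capture level block the inference step instead of silently flowing into it. The levels, field names and thresholds are hypothetical simplifications for illustration, not part of the Skeels et al. model itself.

    # Minimal sketch of layered data-quality checks for a toy pipeline.
    # Issues found at the capture level block inference, so upstream
    # errors are surfaced rather than silently amplified.
    from dataclasses import dataclass, field

    @dataclass
    class QualityReport:
        issues: list = field(default_factory=list)

        def flag(self, level, message):
            self.issues.append(f"[{level}] {message}")

    def check_measurement(records, report):
        # Level 1: was each value captured, and is it in a plausible range?
        for i, r in enumerate(records):
            if r.get("reading") is None:
                report.flag("measurement", f"record {i}: missing reading")
            elif not 0 <= r["reading"] <= 100:
                report.flag("measurement", f"record {i}: implausible value {r['reading']}")

    def check_completeness(records, expected, report):
        # Level 2: do we have the coverage we expected?
        if len(records) < expected:
            report.flag("completeness", f"only {len(records)}/{expected} records present")

    def run_inference(records, report):
        # Level 3: only infer from clean data; otherwise return nothing
        # and let the report explain where the breakdown happened.
        if report.issues:
            return None
        return sum(r["reading"] for r in records) / len(records)

    records = [{"reading": 42.0}, {"reading": None}, {"reading": 57.5}]
    report = QualityReport()
    check_measurement(records, report)
    check_completeness(records, expected=5, report=report)
    print(run_inference(records, report), report.issues)

Run against real content, a report like this gives a concrete starting point for the ‘de-bug’ conversation described above.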

1.3 Data Inclusion

Another point that provides context: a considerable barrier to ethical technology development is uneven internet connectivity, whether through infrastructure or affordability (IEEE Ethically Aligned Design – Economics and Humanitarian Issues; Australian Digital Inclusion Index 2017), resulting in data generation that reflects location and socio-economic bias. While connectivity in Australia is improving, there are groups who are not connected well enough to use digital services, and this correlates with low education, low income, mobile-only access, disability and region.

“Across the nation, digital inclusion follows some clear economic and social contours. In general, Australians with low levels of income, education, and employment are significantly less digitally included. There is consequently a substantial digital divide between richer and poorer Australians. The gap between people in low income households and high income households has widened since 2014, as has the gap between older and younger Australians, and those employed and those outside the labour force.”

~ Australian Digital Inclusion Index 2018

2.0 Operationalisation

Operationalising ethical AI development requires multidisciplinary approaches. As mentioned above, there are legal and technical constraints; below are details of the human centred component. Unlike the first two, which are either fixed by law or encoded and therefore “hard wired”, this is about soft skills and could be referred to as “soft wired”.

Soft wiring is applied during the later stages of applied research, when technologies are looking to move into early stage production, and balances utilitarian and deontological (duty of care) philosophies. There are four parts:

2.1 Empathy

Empathy is the ability to understand the point of view of the “other”. Alone it won’t ensure an ethical AI outcome; it forms part of a suite of approaches (mentioned in the opening section of this document).

Unfortunately, empathy isn’t natural, easy or even possible for some people, due to conditioning or biological reasons, stress, or other factors like perceptions of business needs.

However, the good news is that technology production already has experts in capturing and communicating empathy. Their work is entirely focussed on understanding people, their needs and motivations within context. They are:

    • User Experience Researchers and Designers
    • Behavioural scientists
    • Psychologists
    • Ethnographic researchers
    • Ethicists

These roles may already exist in house or could be contracted to assist with project planning as well as audits/reviews. In some cases, these skills are also a mindset, and non-practitioners can use them just as effectively.

2.2 Experience Led

Experience design starts with a clearly defined problem and examines all the people affected by it, relying on the empathy work done to capture the differing needs of the various participants in this “ecosystem of effect” – from highly influential stakeholders right through to passive recipients who are deeply affected by the AI but have little or no influence over the decisions made on their behalf.

Experience led places “Who” and “Why” before “How” and “What”. This work aims to sufficiently understand the motivations and expectations of the different user types, not just the project sponsor or technology makers.

These perspectives also provide context for clearly defined use cases – facial recognition surveillance might be acceptable for airport security but is it for citizens travelling in public places?

Diagram 3: Simplified, conceptual diagram illustrating how different people interact with a system

An ethical AI product also needs to be holistically considered in both strategy and solution design.

2.2.1 Strategy

The following questions can be used as a framework to assist in troubleshooting potential problems in advance:

  • User Research
    • Do you know who will use this system?
    • Do you know why they would use it?
    • Is the AI product “better” than a human or existing systems?
  • Harm reduction
    • How aware or sensitive are development teams and stakeholders about the “ethical quality” of the technology?
    • Is the trade-off between utility and privacy appropriate?
    • Who owns the “duty of care”?
    • What happens if an error is made, who is responsible?
  • Trade off
    • Who benefits from this AI service, and who is left behind?
    • Is this gap acceptable?
    • Who will answer for this gap if it needs defending?
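One hedged way to support the auditability discussed in section 2.3 is to record the answers to these questions as a structured artefact per project. The Python sketch below is a hypothetical illustration; the schema, field names and example answers are invented, not a standard.

    # Hypothetical sketch: recording strategy-checklist answers as a
    # structured, auditable artefact. Schema and names are invented.
    import json
    from dataclasses import dataclass, asdict
    from datetime import date

    @dataclass
    class ChecklistEntry:
        question: str
        answer: str
        owner: str          # who is accountable for this answer
        evidence: str = ""  # link to research notes, legal review, etc.

    @dataclass
    class StrategyReview:
        project: str
        reviewed_on: str
        entries: list

    review = StrategyReview(
        project="travel-planner-pilot",  # hypothetical project name
        reviewed_on=date.today().isoformat(),
        entries=[
            ChecklistEntry(
                question="Who benefits from this AI service, and who is left behind?",
                answer="Connected commuters benefit; offline users are excluded.",
                owner="product lead",
                evidence="user-research/round-2-notes",
            ),
        ],
    )

    # Persisting the review lets later audits trace who decided what, and when.
    print(json.dumps(asdict(review), indent=2))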

2.2.2 Solution

  • Assist comprehension
    • Is the application of perceived affordances helpful?
    • Apply “cultural conventions”, visual feedback or signifiers
    • Apply Nielsen’s heuristics
  • Validation
    • Are you testing with a variety of users?
    • Are you reviewing and applying revisions to the solution in response to testing?
    • Build – measure – learn

2.2.3 Trust

Trust is established by treating all users with dignity. Trust is also easily lost when users don’t understand the decisions made by a system.

  • Clarity
    • Is it clear to a person using the digital product/service why a decision has been made?
    • Are they sufficiently informed, at their level of comprehension?
  • Right of reply
    • Is there the feeling of a standard of due process they recognise or can understand?
    • Can that person participate/engage in that due process?
  • Accountability
    • Is there a feeling that the provider of the decision/service is taking responsibility for the consequences delivered by the system they have deployed?
    • Is there some form of algorithmic transparency provided (the end of “black box” AI)? A minimal sketch of one such form follows this list.
    • What happens if the system breaks down? (power, code, data failure)
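As a hedged illustration of the algorithmic transparency point above, the Python sketch below pairs an interpretable model with a plain-language reason for each decision. It uses scikit-learn’s LogisticRegression; the loan scenario, feature names and wording are hypothetical, and a real deployment would need a domain-appropriate explanation method.

    # Minimal sketch of one lightweight form of algorithmic transparency:
    # an interpretable linear model whose per-feature contributions can be
    # shown to the person affected by a decision. Data are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    feature_names = ["income_k", "years_at_address", "prior_defaults"]

    # Toy training data: hypothetical loan applications (income in $k).
    X = np.array([[60, 5, 0], [25, 1, 2], [48, 3, 1],
                  [90, 10, 0], [18, 0, 3], [55, 4, 0]])
    y = np.array([1, 0, 1, 1, 0, 1])  # 1 = approved, 0 = declined

    model = LogisticRegression().fit(X, y)

    def explain(applicant):
        # coefficient * value approximates each feature's pull on the
        # decision score; list the strongest drivers first.
        contributions = model.coef_[0] * applicant
        order = np.argsort(-np.abs(contributions))
        return [f"{feature_names[i]} {'raised' if contributions[i] > 0 else 'lowered'} "
                f"your score (contribution {contributions[i]:+.2f})" for i in order]

    applicant = np.array([30, 2, 1])
    print("approved" if model.predict([applicant])[0] else "declined")
    for reason in explain(applicant):
        print(" -", reason)

A linear model won’t suit every problem, but surfacing even an approximate, ranked reason gives the affected person something concrete to engage with under the “right of reply” above.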

2.3 Practice

Processes and checklists don’t catch ethical nuances; only a living practice of exploring, testing and validating can. Establishing a good ethical practice encourages this to be done on every project, to pursue the best possible outcomes and auditability.

2.3.1 Constructive Discourse

  • Top down
    • Leaders are empowered to seek input on ethical quandaries from various sources (personal networks and regulatory bodies)
    • Management welcomes advice from their teams
  • Bottom up
    • Development teams are encouraged to self educate and share knowledge
    • Members intelligently challenge the ethical proposition

2.4 Team Diversity

Diversity has been identified in multiple publications (IEEE Ethically Aligned Design, AI Now and other national AI frameworks, eg France’s) as critical to reducing the errors that biased AI can deliver. This applies to both development teams and users. Development teams need not only gender diversity but also cultural, cognitive, social, and skills and expertise diversity. This friction is deliberate, and we need to be mindful of “big” voices dominating. There are many conventional team communication techniques already in use to facilitate healthy discussion and input, so they won’t be listed here.

Credits and References

This version is a revision of http://hilarycinis.com/user-experience-design-guide-for-crafting-ethical-ai-products-and-services/, aimed at designers and product managers.