Ethical Technology Crafting: Part 2 Proposed Methods

The following people need thanking for their advice, support and input: Cam Grant, Ellen Broad, Bob Williamson, Guido Governatori, Lachlan McCalman, Mitch Harris, Liz Gilleran, Phil Grimmett.

Special thanks to Ellen for her input on power relationships and Lachlan for his on user communication.

Introduction

In attempting to set out how to establish an ethical AI mindset within the tech industry, and how to start approaching the production side of technology innovation that uses AI, machine learning, algorithms and the large and/or sensitive data sets they work across, I feel the role of user experience designers would be intensive early on. We are well placed to do this work, as we are already skilled in qualitative investigation comprising needs elicitation and empathy establishment.

The work would continue through the project, ramping up again during the solution build iterations.

The work falls into two areas: at the beginning, during “discovery”, to define context and surface power relationships; and later, when “solutions” are being implemented, to assist communication with the range of people using the systems.

The audience for this guide is User Experience Designers and Product Managers who have been tasked with assisting the development of products and systems using data (sensitive or public) and machine learning (algorithms that make predictions, assist with decision making, reveal insights from data, or act autonomously). These products are expected to deliver information to a range of users and provide the basis for contextually supported decisions.

This guide is not intended to “teach” anyone user experience methods but to support those working as professional user experience practitioners and product managers. It is therefore assumed the target audience is already familiar with the methodologies outlined throughout this document.

Machine Learning computer scientists, software engineers, data scientists, anthropologists and other highly skilled technical or social science professionals are very welcome to read this guide to increase and enhance their understanding of user experience concerns, and perhaps even refer to it.

I welcome feedback from trials.

Goals

The goal of this guide is to provide a method by which teams can ensure ethical impacts are considered as standard practice when engaged in working on any digital product, service or system development.

This guide is not intended to replace legal or corporate/institutional ethics frameworks, or Australian Government personal information privacy laws, but to work within them as part of a shared approach.

How User Experience Design Fits In

The set of methods proposed in this guide shouldn’t be an overhead; it should ensure best practice is applied, with an ethical lens. Each project needs its own definition of ethical implications, dependent on the people interacting with and affected by it, the data in use, and the context in which both of these sit.

Questions we intend to answer as user experience designers are:

  • How can we ensure an ethical approach is holistically considered in both product strategy and solution design?
  • How can we capture up front and measure the ethical implications (trade-offs and compromises)?
  • How can we provide “perceived affordances” for trust in outcomes delivered by the product, using “cultural conventions”, visual feedback or signifiers?

Along with the typical design constraints of balancing competing business priorities, user requirements for adoption and technology pushes for innovation, there is the additional lens of “understanding social expectations and accounting for different types of disadvantage”. We need to deliver outcomes that foster and reward trust among the various user groups interacting with the system.

This means UX Designers and Product Managers need to research and capture an understanding of the power relationships between people where discrimination or manipulation could occur, understand the contexts where abuses can happen (intentionally or unintentionally), and create mechanisms that ensure the risk of harm is not carried by the parties with the least power to control it.

The next section of this article proposes practical applications of UX methods.

Application of UX Methods

The techniques proposed are versions of existing methods and practices, aiming to include a specific ethical lens in the design discovery and solution exploration phases.

UX practitioners are tasked with representing a range of people interacting with digital systems in varying contexts. These systems are usually part of an ecosystem of digital solutions, and the UX practitioner’s influence may only extend to the immediate problem being tackled. Just as with ‘traditional’ digital products and services, it is vitally important to include project teams and external stakeholders throughout, as they bring their own specific ethical approaches to computer science and software engineering work at Data61.

UX work would assist teams with product strategy and user empathy where needed, while also informing the design of interactions and user interfaces for these systems. However, the insights gathered are not limited to graphical user interfaces (“human-in-the-loop” interactions). The user research and benchmarking can also inform a machine-to-machine interaction (eg. a report or set of constraints articulated in a product strategy) for a software engineer or specialised data governance expert to implement.

It’s also important to view this UX work from both the data and the algorithm angles. Data is historical, and predictions made with data attempt to accommodate certainty or confidence based on various factors. Unintentional biases occur within the data collection, and cultural norms can be unintentionally built into algorithms.

Data confidence hierarchy of dependency.
Diagram 1: from “Revealing Uncertainty for Information Visualization” by Meredith Skeels (Biomedical and Health Informatics, University of Washington), Bongshin Lee, Greg Smith and George G. Robertson (Microsoft Research)

This diagram helps to identify where issues can live. If any of the “levels” is in question for the person relying on the information delivered, the credibility of the insights is diminished or, in the case of ethics, the outcomes could be skewed.
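
To make the hierarchy operational in a project, the levels could be recorded against each insight a system delivers. The sketch below is a minimal Python illustration, assuming the measurement/completeness/inference levels and the cross-cutting credibility and disagreement concerns described in the cited paper; all names are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class UncertaintyLevel(Enum):
    # Levels from the cited paper; each level depends on those below it.
    MEASUREMENT = 1    # precision and error in how values were captured
    COMPLETENESS = 2   # sampling gaps, missing values, aggregation choices
    INFERENCE = 3      # models and predictions built on top of the data

@dataclass
class UncertaintyNote:
    level: UncertaintyLevel
    description: str
    credibility_concern: bool = False    # cross-cutting: is the source trusted?
    disagreement_concern: bool = False   # cross-cutting: do sources conflict?

def inherited_doubt(notes, level):
    """An insight delivered at `level` is only as credible as every
    unresolved note at or below that level."""
    return [n for n in notes if n.level.value <= level.value]
```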

The Humans We Are Designing With and For

User centred and ethnographic research starts with identifying and crafting questions whose answers become insights and design constraints for the various clusters of people sharing similar goals.

Currently we see four broad user groupings, sourced from various Data61 projects, papers, articles and observations. Further ethnographic user research is required to develop the broad descriptions below in detail, and will likely open up other clusters defined by common characteristics and objectives. While they should not be relied on as “personas”, they are listed here to help quickly communicate how different people have different roles and objectives.

Simplified, conceptual diagram illustrating how the different people described in this document might interact with a system

Enabler/sponsor (funder or client)

  • Owns the purpose/intent
  • Communicates the value proposition
  • Has ultimate accountability
  • Would be a trustee of public good
  • Has a higher level in the power relationship

Operator (primary/active): Tech expert

  • Algorithm author
  • Data set provider
  • Data ingestor
  • Output creator
  • Trustee of application of an ethical framework in the build

Operator (primary/active): Engaged professional

  • Data custodian/provider
  • Data interrogator/informed decision maker
  • Trustee of ethical practices

Passive recipient

  • Desires a feeling of control over their own data (as defined within regional legislative constraints)
  • Has a lesser role in the power relationship
  • Is impacted or directed by data and algorithmic decisions
  • Needs access to decision rationale, right of reply and evidence (e.g. data) supporting decision rationale

It is expected that the same group of people could be any combination of these within the same project, with time or context as the differentiator; or that the same project could have different applications for groups of people with quite different goals (eg data collection, analysis or consuming an output). This also implies there could be a power relationship between different groups.

Usual user discovery activities (eg generative “who and why”) should always be undertaken rather than relying on this taxonomy alone.
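
To make the taxonomy and its power relationships explicit during discovery, a team could record each cluster alongside its relative power. The following is a minimal sketch with hypothetical names, not a prescribed model; it simply enumerates the pairings where a power imbalance, and therefore potential discrimination or manipulation, warrants explicit research:

```python
from dataclasses import dataclass
from enum import Enum

class PowerLevel(Enum):
    HIGHER = 3   # enabler/sponsor: owns the purpose, ultimate accountability
    ACTIVE = 2   # operators: tech experts and engaged professionals
    LESSER = 1   # passive recipients: impacted or directed by decisions

@dataclass
class UserGroup:
    name: str
    goals: list
    power: PowerLevel

def power_gaps(groups):
    """Each (stronger, weaker) pair is a relationship where harm could
    pool with the weaker party, so each pair warrants explicit research."""
    return [(a.name, b.name)
            for a in groups for b in groups
            if a.power.value > b.power.value]

groups = [
    UserGroup("sponsor", ["communicate value"], PowerLevel.HIGHER),
    UserGroup("tech expert", ["build ethically"], PowerLevel.ACTIVE),
    UserGroup("passive recipient", ["control own data"], PowerLevel.LESSER),
]
print(power_gaps(groups))  # pairings to examine during discovery
```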

Methods In Detail

The outcomes from these activities, as outlined below, are intended to help a development team design solutions that serve people using the proposed product or service. (They could also inform customer discovery or marketing campaigns but those are secondary considerations once fit-for-purpose has been validated.)

The application of existing good user experience research and design practices can be employed or adapted to focus on the requirements for both active users and passive recipients of a proposed system:

  • User group and motivation generation
  • Contextual Inquiry questions specific to the topic
  • Problem definition and success
  • Use case/user stories/Jobs To Be Done
  • Risk register/Red Team (think negative, go hard)
  • Testing for impact (user acceptance/usability)

It is important to include all project and development team members in this work to ensure goals are aligned and the journey of user discovery is shared by the team. Good practice user experience discovery, exploration and validation methods support this involvement so no further notes will be added here on how to engage team members or stakeholders.

1. Discovery

As part of the problem definition, user research consultations would also aim to:

  • Sufficiently understand the motivations and expectations of the different user types, not just the project sponsor or technologists.
  • Capture the level of understanding about data sets that people desire for their decision making.
  • Gauge how aware or sensitive development teams and stakeholders are about appropriate diversity and completeness of data sets, and methods of collection.
  • Capture the level of concern about reduction of bias in the technology.
  • Gauge how aware or sensitive development teams and stakeholders are about the “quality” of the technology.
  • Establish the tolerance for compromise or risks of harm: what is an acceptable trade-off (within the legal parameters)?
  • Understand the positives and negatives of current state systems, so that any digital intervention can be compared back for improvements or unforeseen harm.

Some questions regarding trust building that we might need to measure:

  • Is it clear to a person using the digital product/service why a decision has been made?
  • Is there a feeling of a standard of due process they recognise or can understand?
  • Can that person participate/engage in that due process?
  • Is there a feeling that the provider of the decision is taking responsibility for the consequences?

Workshop/Interview/Contextual Inquiries

This section provides question templates to help focus on ethical data use topics while avoiding asking the question directly. Use typical interview guide questions within the context of the project; the answers are acquired through non-leading questions and observation.

Typically you could reframe these questions to not have a digital or data focus and include them alongside other ethnographic investigations.

    • How do we support the [operator’s] position of being a trusted party? eg How do you support trust from your clients when they interact with you?
    • How can we help you display your expertise? eg What is a typical or key activity within your range of expertise?
    • How do we help build trust? eg Why would you trust this [entity]? Why would you not?
    • When using this system, how can we ensure you act with respect for public duties/interests? eg What are your organisation’s/agency’s public duties?
    • What is the proposition/problem/opportunity enabled or enhanced by the technology? eg What pain points exist in your current workflow/system?
    • Who are the individuals affected by it? eg Who benefits from the decisions you make in your role? Who is left behind?

2. Solution Design

The insights collected would be folded into domain expertise for product/service design strategies.

Product strategy assumptions

Some questions used to define the strategy of the product or service, reframed as hypotheses for testing, could be:

    • How might people change their behaviour as a result of your technology? eg Decreased antisocial behaviour, increased paranoia, development of an alternative online personality
    • What world views could be changed? eg Govt dept reputation, beliefs about safety
    • How could relationships between groups of people be affected? eg Trust, communication
    • What would happen if the technology failed? eg Complete breakdown, partial breakdown, hacks
    • How can we avoid harm from the planned operations? eg Un/intentional discrimination, unmitigated production and processing of data, iterative use over time removed from the original intent

Product or service strategy

Using the UX research, set direction and benchmarks for validation. It would be highly recommended to work through the Data Ethics Canvas with the stakeholders and development team. User experience research is critical in capturing the perspectives of affected parties outside the project team. Ethics considerations for a culturally diverse group cannot and should not be made by people who are not part of that particular cultural group.

Context

As directed by user research or domain expert assumptions.

Baseline for validation activities eg usability testing or UAT

As directed by user research, domain expert assumptions and product/service strategy.

Heuristics

The following are adapted from (and in most cases still include) the 10 Nielsen heuristics for user interaction and interface design, and relate to any UI that a human operates. (Heuristics for machine-to-machine interactions are not included here.)

They would assist in the visual communication of the trust “signifiers” identified in the earlier research.

  1. Visibility of system status

    – The system should always keep people informed about what is going on, through appropriate feedback within reasonable time. Make the data in use visible (within privacy-preserving constraints); see the sketch below.
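
As a minimal sketch of what “visibility of the data in use” might look like in practice (the names here are hypothetical, not a prescribed pattern):

```python
def status_line(model_version, data_snapshot, updated):
    """Attach provenance to every output so people can judge what an
    insight rests on, without exposing the underlying private records."""
    return (f"Model {model_version} | data snapshot {data_snapshot} "
            f"(last updated {updated})")

print(status_line("2.3", "2019-Q2 de-identified extract", "2019-07-01"))
```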

  2. Match between system and the real world

    – The system should speak the language, with words, phrases and concepts familiar to the person using the system, rather than system-oriented terms.
    – Follow real-world conventions, making information appear in a natural and logical order.

  3. User control and freedom

    – People often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
    – People can request access to the methods used by algorithms, and the data that affects them, for explanations and rationale.
    – People can request a copy of their data, in a format and manner that is in line with data privacy and access laws.
    – People can withdraw their data, in line with data privacy and access laws.
    – People can edit or update their data, in line with data privacy and access laws. (A sketch of these controls follows this list.)
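
As a minimal sketch of how these controls might surface in a system’s service layer (all names are hypothetical, not an existing API), the rights above could be modelled as explicit, auditable request types:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class SubjectRequest(Enum):
    EXPLAIN = "explain"    # rationale and methods behind a decision
    EXPORT = "export"      # copy of the person's own data
    WITHDRAW = "withdraw"  # remove data from further use
    CORRECT = "correct"    # edit or update data

@dataclass
class RightsRequest:
    subject_id: str
    kind: SubjectRequest
    received: datetime
    lawful_basis_checked: bool = False  # confirmed against privacy/access laws?

def handle(request):
    """Log and answer every request, even if the answer is a lawful
    refusal; silence undermines the trust signifiers discussed above."""
    if not request.lawful_basis_checked:
        return "queued for legal review"
    return f"{request.kind.value} request for {request.subject_id} actioned"
```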

  4. Consistency and standards

    – Currently there are no global standards for ethical ML. Law, regulation and inclusive/empathetic practices ought to set standards particular to the project. Trade-offs are an important consideration, which would make standardising difficult. Other ‘local’ standards could be:
    – People should not have to wonder whether different words, situations, or actions mean the same thing. Establish a common vocabulary.
    – Provide glossaries and alternative explanations

  5. Error prevention

    – Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present people with a confirmation option before they commit to the action.
    – Request a revision of an outcome.
    – Run a test across a snapshot or subset for human-in-the-loop checks.
    – Describe the range of uncertainty in predictions, in the data the predictions are being enacted on, and the associated risks if acted upon; see the sketch below.
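
One hedged illustration of this confirmation-plus-uncertainty pattern, in Python with hypothetical names:

```python
from dataclasses import dataclass, field

@dataclass
class Prediction:
    value: float
    low: float                 # lower bound of the uncertainty range
    high: float                # upper bound of the uncertainty range
    data_caveats: list = field(default_factory=list)  # known gaps/biases in inputs

def confirm_before_acting(pred, max_width):
    """Error prevention: block silent action when uncertainty is wide,
    surfacing the range and data caveats for a human-in-the-loop check."""
    if (pred.high - pred.low) > max_width or pred.data_caveats:
        print(f"Predicted {pred.value:.2f} (range {pred.low:.2f} to {pred.high:.2f})")
        for caveat in pred.data_caveats:
            print(f"  caveat: {caveat}")
        return input("Proceed anyway? [y/N] ").strip().lower() == "y"
    return True
```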

  6. Recognition rather than recall

    – Minimize memory load by making objects, actions, and options visible.
    – People should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
    – Provide a proxy or synthetic alternative for private data sets; see the sketch below.
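
A very rough sketch of one proxy approach, independent column sampling, labelled clearly as not a privacy guarantee; formal methods (eg differential privacy) are needed for real releases:

```python
import random

def synthetic_proxy(rows, n, seed=0):
    """Rough stand-in for a private data set: sample each column
    independently from its observed values, breaking row-level linkage.
    NOT a privacy guarantee; real releases need formal methods."""
    if not rows:
        return []
    rng = random.Random(seed)
    pools = {col: [row[col] for row in rows] for col in rows[0]}
    return [{col: rng.choice(pool) for col, pool in pools.items()}
            for _ in range(n)]
```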

  7. Flexibility and efficiency of use

    – Accelerators, unseen by the novice skill set, may often speed up the interaction for the expert skill set, such that the system can cater to both inexperienced and experienced skill sets. Allow people to tailor frequent actions.
    – Provide alerts for any impacts that tailoring shortcuts may incur, eg skipping a feature matching step may result in mistakes if the schemas across two data sets aren’t identical, even though the expert user has set up shortcuts because the schemas usually match (see the sketch below).
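
A minimal sketch of such an alert, assuming schemas are represented as simple column-to-type mappings (hypothetical names):

```python
def schema_mismatch_alert(left, right):
    """Before an expert shortcut skips feature matching, list columns
    that differ in name or type across the two data sets."""
    warnings = []
    for col in sorted(set(left) | set(right)):
        if col not in left or col not in right:
            warnings.append(f"column '{col}' is missing from one data set")
        elif left[col] != right[col]:
            warnings.append(f"column '{col}' has mismatched types")
    return warnings

# Usage: surface these as a non-blocking alert beside the shortcut.
for warning in schema_mismatch_alert({"age": int, "postcode": str},
                                     {"age": float, "suburb": str}):
    print("warning:", warning)
```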

  8. Aesthetic and minimalist design

    – Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
    – Provide dialogues in context to the activity. This could include the system “understanding” the goal, rather than being a passive tool.
    – Use visualisation to reduce cognitive load and lift comprehension.

  9. Help people recognize, diagnose, and recover from errors

    – Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.

  10. Help and documentation

    – Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the task or purpose, and easily scanned.

Validation – Solution Design

Using standard usability methods, design and run tests against all assumptions made in the preceding steps:

  • Product or service strategy
  • Context
  • Baselines
  • Heuristics

References and Further Reading

Legal

Papers and Reports

  • Skeels, M., Lee, B., Smith, G. & Robertson, G. (2010). “Revealing Uncertainty for Information Visualization”, Information Visualization, 9(1), 70-81.

Emerging Practice

Tools

  • Open Data Institute, “The Data Ethics Canvas”
  • Nielsen, J., “10 Usability Heuristics for User Interface Design”, Nielsen Norman Group