Leading Design Conference 2019

In November this year I had the fortune of attending the Leading Design Conference held in London, UK. I hadn't attended this Clearleft-hosted conference before, so I was excited about what I might walk away with.

Take a look at the speakers and topics: https://leadingdesign.com/conferences/london-2019

The New Design Frontier Report

InVision had commissioned and provided “The New Design Frontier Report”, which has a useful breakdown of the current state and trends around:

  • People: Team, Key partners, Executives and employees
  • Practices: User Research, Design Strategy, Experimentation, UI Design
  • Platforms: Design Operations, Design Systems

I think the breakdowns in the report can definitely help with figuring out where a team is at and also how to measure what’s appropriate for an organisation. Download it and take a look: https://www.invisionapp.com/design-better/design-maturity-model/

Conference themes and takeaways

It might help if I share what I was hoping to get from the conference. At the time of booking the tickets I was heading up the UX practice at CSIRO’s Data61. By the time the conference happened I had already moved to an IC role at Salesforce in Experience Architecture. (My reasons for the role change are many and complex, and you’ll see some evidence of that in what I got from the conference.)

Initially I was looking for some well-informed insights about how to run a design team in a consulting organisation, especially one working in true innovation – making new technologies. I still think the conference content was relevant for my new role, as I’m setting up design capabilities for clients, leading designers on projects and establishing a design community of practice within Salesforce. As I have some residual anxiety from my time at Data61, I was also seeking some validation that all the hard decisions, initiatives and learning from failures had had some lasting impact for the business, my old team and myself. Lastly, I was looking for guidance on solid metrics to measure a design team’s success that weren’t NPS, CSAT or project-level usability and adoption measures.

The themes throughout the conference presentations echoed the report regarding people, practices and platforms. My takeaways are:

There is no design leadership playbook
We are all making it up as we go, dependent on the businesses we are in. Most presentations were from design leaders in product organisations and consulting agencies, with no R&D and little enterprise represented. This was eye-opening because you’d expect more mature leadership patterns from product and consulting. It appears design maturity is uneven across organisations and not necessarily tied to team size or the age of the business.

Design as facilitator of change
Either up front or behind the scenes, designers are the agitators of change. Design is a risk-taking activity, but if led well it can be a holistic and inclusive one. Leveraging the democratisation of design can help uplift the role of designers through awareness. It’s probably not news that silos were called out as innovation killers. When considering digital transformation, it’s easy to forget that digital and associated technologies are incidental and that transformation is an organisational and behavioural change. Transformational leadership may come from a non-digital or non-design background, eg Sales, if financials are key to the change. And if it’s driven by an external lead like a consultant, it needs to be very deliberately co-created with internal stakeholders. We have to be very careful of creating an “us and them” dynamic, as this can affect confidence in any high-risk work. We need to be wary of unproductive coalitions or our own cultural silos. If we think about transformation goals without considering the complex adaptive systems we work within, we can end up with unhappy people outside the delivery team – the very people we are aiming to transform the business for.

Design is still an unknown capability
Design suffers from a lack of confidence in selling its value. The key is remembering that design remains fluid: things are moving fast, problems can’t be solved as if they were fixed in time, and they need people who can see patterns. Designers have this capability, especially those who can make a call in the absence of information and take it forward iteratively.

Measuring success
Perhaps if we work on answering why we can’t articulate the value of design, this could lead to establishing that value, constrained by the organisation we are working in. Other tips were to start every new engagement with customer data to set business goals and benchmarks. It’s important to make sure design outputs and design management occur in tandem, as operation – meaning delivery – is the key to success. We need to be intentional in understanding what we are measuring and why, as there can be conflict between what we need to measure versus what we can measure. There is some evidence that companies that link non-financial measures to value creation are more successful.

Be patient, impact and changes take longer than we think
My biggest flaw is impatience: I underestimate how long change can take and often get demotivated as a result. I really needed to hear this tip. Related to this is another gem: we simply cannot communicate too much. Lastly on this theme, we were reminded that we can’t win everything and should be prepared to cut our losses. Two years into your leadership, expect a crash, because failure is inevitable and recovery is hard.

Intentional team design
Design teams need to be thoughtfully created, with a sense of purpose and longevity from the outset. Organic, unfocussed growth can lead to problems for leaders and organisations down the track. We saw many presentations of team structures and skills profiles, and how they change over time as teams grow. A clear pattern was establishing sub-teams and leadership roles earlier rather than later, and splitting capability broadly into design ops and strategy. This aligns with the delegation topic below. Well-structured collaboration was cited as a positive way to create team goals and rituals, including career paths.

This is the area where I have had my hardest lessons. Not having any kind of “playbook” means this knowledge is hard to come by without trial and error, and it’s made harder because most teams evolve in response to demand, and their leader is usually a strong performer with some natural relationship skills. I feel the design mentoring network could do with some work, now that there is an emerging class of experienced technology design leaders.

Ruthlessly prioritise, delegate as much as possible
Many presentations were internally facing and focussed on building team cultures. Once you’re a design leader, your time in the field needs to be limited, as your energy goes to leading for impact and supporting your team. We were also cautioned to be authentic from the outset, because faking it until you make it is exhausting.

Building trust and relationships
It is critical we understand our audiences – our teams, our internal partners, our executives. Speaking their language and delivering as promised is key.

The insights I’ve captured were from presentations and workshops by:

  • Maria Giudice — Founder, Author, Coach, Hot Studio
  • Kristin Skinner — Co-author, Org Design for Design Orgs; Founder, &GSD
  • Jason Mesut — Design Partner, Group Of Humans
  • Melissa Hajj — Director of Product Design, Facebook
  • Julia Whitney — Executive Coach, Whitney and Associates
  • Alberta Soranzo — Director, Transformation Design, Lloyds Banking Group

I’d suggest following them on social media and blogs.

Leading Design 2019:  https://leadingdesign.com/conferences/london-2019

Ethical Technology Crafting: Part 2 Proposed Methods

The following people need thanking for their advice, support and input: Cam Grant, Ellen Broad, Bob Williamson, Guido Governatori, Lachlan McCalman, Mitch Harris, Liz Gilleran, Phil Grimmett.

Special thanks to Ellen for her input on power relationships and Lachlan for his on user communication.

Introduction

In setting out how to establish an ethical AI mindset within the tech industry – and how to approach the production side of technology innovation that uses AI, machine learning, algorithms and the large and/or sensitive data sets they work across – I feel the user experience designer’s role would be intensive early on. We are well placed to do this work, as we are already skilled in the qualitative investigation work of needs elicitation and empathy establishment.

The work would then continue at a lower intensity, ramping up again during the solution build iterations.

The work falls into two areas: at the beginning, during “discovery”, to define context and surface power relationships; and later, when “solutions” are being implemented, to assist communication to the range of people using the systems.

The audience for this guide is User Experience Designers and Product Managers who have been tasked to assist in the development of products and systems using data (sensitive or public) and Machine Learning (algorithms that make predictions, assist with decision making, reveal insights from data, or act autonomously). These products are expected to deliver information to a range of users and provide the basis for contextually supported decisions.

This guide is not intended to “teach” anyone user experience methods but support those working as professional user experience practitioners and product managers. It is therefore assumed the target audience is already familiar with the methodologies outlined throughout this document.

Machine Learning computer scientists, software engineers, data scientists, anthropologists or other highly skilled technical or social science professionals are very welcome to read this guide in order to increase and enhance their understanding of user experience concerns and maybe even refer to it.

I welcome feedback from trials.

Goals

The goal of this guide is to provide a method by which teams can ensure ethical impacts are considered as standard practice when engaged in any digital product, service or system development.

This guide is not intended to replace legal or corporate/institutional ethics frameworks or Australian Government personal information privacy laws, but to work within them as part of a shared approach.

How User Experience Design Fits In

The set of methods proposed in this guide shouldn’t be an overhead; rather, they ensure best practice is applied with an ethical lens. Each project needs its own definition of ethical implications, dependent on the people interacting with and affected by it, the data in use and the context in which both of these sit.

Questions we intend to answer as user experience designers are:

  • How can we ensure an ethical approach is holistically considered in both product strategy and solution design?
  • How can we capture upfront and measure the ethical implications (tradeoffs and compromises)?
  • How can we provide “perceived affordances” for trust in outcomes delivered by the product, using “cultural conventions”, visual feedback or signifiers?

Along with the typical design constraints of balancing competing business priorities, user requirements for adoption and technology pushes for innovation, there is the additional lens of “understanding social expectations and accounting for different types of disadvantage”. We need to deliver outcomes that foster and reward trust among the various user groups interacting with the product.

This means UX Designers and Product Managers need to research and capture an understanding of the power relationships between people where discrimination or manipulation could occur, understand the context where abuses can happen (intentionally or unintentionally) and create mechanisms where the risk of harm is not carried over to the parties with the least amount of power to control it.

The next section of this article proposes practical applications of UX methods.

Application of UX Methods

The techniques proposed are versions of existing methods and practices, aiming to include a specific ethical lens in the design discovery and solution exploration phases.

UX practitioners are tasked with representing a range of people interacting with digital systems in varying contexts. These systems are usually part of an ecosystem of digital solutions, and the UX practitioner’s influence may only extend to the immediate problem being tackled. Just as with ‘traditional’ digital products and services, it is vitally important to include project teams and external stakeholders throughout, as they bring their own specific ethical approaches to computer science and software engineering work at Data61.

UX work would assist teams with product strategy and user empathy where needed, while also informing the design of interactions and user interfaces for these systems. However, the insights gathered are not limited to graphical user interfaces (“human-in-the-loop” interactions). The user research and benchmarking can also inform a machine-to-machine interaction (eg a report or set of constraints articulated in a product strategy) for a software engineer or a specialised data governance expert to implement.

It’s also important to view this UX work from both the data and the algorithm angles. Data is historical, and predictions made with data attempt to accommodate certainty or confidence based on various factors. Unintentional biases occur within data collection, and cultural norms can be unintentionally built into algorithms.

Diagram 1: Data confidence hierarchy of dependency. From “Revealing Uncertainty for Information Visualization”, Meredith Skeels (Biomedical and Health Informatics, University of Washington), Bongshin Lee, Greg Smith and George G. Robertson (Microsoft Research)

This diagram helps to identify where issues can live. If any of the “levels” are questioned by the person relying on the information delivered, the credibility of the insights is diminished – or, in the case of ethics, the outcomes could be skewed.
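
To make the hierarchy actionable in a build, here is a minimal sketch (Python, with illustrative names of my own invention, not an established API) of how a prediction could carry its provenance “levels” with it, so the person relying on the output can interrogate each layer rather than receive a bare number:

```python
from dataclasses import dataclass, field


@dataclass
class ProvenanceLevel:
    """One level in the confidence hierarchy, eg collection, processing, model."""
    name: str
    description: str
    confidence: float  # 0.0-1.0: how much this level is trusted
    known_issues: list[str] = field(default_factory=list)


@dataclass
class ExplainedPrediction:
    """A prediction bundled with the provenance chain that produced it."""
    value: float
    levels: list[ProvenanceLevel]

    def weakest_level(self) -> ProvenanceLevel:
        # The least trusted level bounds the credibility of the whole insight.
        return min(self.levels, key=lambda lvl: lvl.confidence)


prediction = ExplainedPrediction(
    value=0.82,
    levels=[
        ProvenanceLevel("collection", "Survey, self-reported", 0.6,
                        ["under-represents rural respondents"]),
        ProvenanceLevel("processing", "De-duplicated, gaps imputed", 0.8),
        ProvenanceLevel("model", "Gradient boosted trees, v3", 0.9),
    ],
)
print(f"Prediction {prediction.value:.2f}; weakest level: "
      f"{prediction.weakest_level().name}")
```

Surfacing the weakest level mirrors the dependency reading of the diagram: whichever layer is least trusted caps the credibility of everything built on top of it.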

The Humans We Are Designing With and For

User centred and ethnographic research starts with identifying and crafting questions that would become insights and design constraints for various clusters of people sharing similar goals.

Currently we see four broad user groupings, sourced from various Data61 projects, papers, articles and observations. Further ethnographic user research is required to develop the broad descriptions below in detail, and will likely open up other clusters defined by common characteristics and objectives. While they should not be relied on as “personas”, they are listed here to help quickly communicate how different people have different roles and objectives.

Simplified, conceptual diagram illustrating how the different people described in this document might interact with a system

Enabler/sponsor (funder or client)

  • Owns the purpose/intent
  • Communicates the value proposition
  • Has ultimate accountability
  • Would be a trustee of public good
  • Has a higher level in the power relationship

Operator (primary/active): Tech expert

  • Algorithm author
  • Data set provider
  • Data ingestor
  • Output creator
  • Trustee of application of an ethical framework in the build

Operator (primary/active): Engaged professional

  • Data custodian/provider
  • Data interrogator/informed decision maker
  • Trustee of ethical practices

Passive recipient

  • Desires a feeling of control over their own data (as defined within regional legislative constraints)
  • Has a lesser role in the power relationship
  • Is impacted or directed by data and algorithmic decisions
  • Needs access to decision rationale, right of reply and evidence (e.g. data) supporting decision rationale

It is expected that a similar group of people could be any combination of these within the same project, with time or context being the differentiator; or the same project could have different applications for groups of people with quite different goals (eg data collection, analysis or consuming an output). This also implies there could be a power relationship between different groups.

Usual user discovery activities (eg generative “who and why”) should always be undertaken rather than relying on this taxonomy alone.
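
One way a team might honour the passive recipient’s needs above (decision rationale, right of reply, supporting evidence) is to agree on a structured decision record that the system emits alongside every automated decision. The sketch below is hypothetical; the field names are mine rather than any standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """What a passive recipient can request about a decision made about them."""
    decision_id: str
    outcome: str             # plain-language statement of the decision
    rationale: str           # why, in terms the recipient can understand
    data_sources: list[str]  # evidence the decision drew on
    model_version: str       # which algorithm/version produced it
    reply_channel: str       # where to contest the decision (right of reply)
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


record = DecisionRecord(
    decision_id="APP-2019-0042",
    outcome="Application deferred for manual review",
    rationale="Income history is shorter than the 12 months the model requires",
    data_sources=["applicant-supplied payslips", "credit bureau file"],
    model_version="risk-model v2.3",
    reply_channel="reviews@example.org",
)
```

Emitting such a record per decision also leaves the audit trail that later validation activities can test against.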

Methods In Detail

The outcomes from these activities, as outlined below, are intended to help a development team design solutions that serve people using the proposed product or service. (They could also inform customer discovery or marketing campaigns but those are secondary considerations once fit-for-purpose has been validated.)

The application of existing good user experience research and design practices can be employed or adapted to focus on the requirements for both active users and passive recipients of a proposed system:

  • User group and motivation generation
  • Contextual Inquiry questions specific to the topic
  • Problem definition and success
  • Use case/user stories/Jobs To Be Done
  • Risk register/Red Team (think negative, go hard); a sketch of a register entry follows this list
  • Testing for impact (user acceptance/usability)
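
For the risk register/Red Team activity, a lightweight shared format keeps entries comparable and makes the power relationship explicit. This is only a sketch of what one entry might capture; the structure and field names are my own illustration:

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class RiskEntry:
    description: str        # the harm scenario, stated bluntly ("go hard")
    likelihood: Severity
    impact: Severity
    harmed_party: str       # who carries the risk of harm...
    controlling_party: str  # ...versus who has the power to control it
    mitigation: str


entry = RiskEntry(
    description="Model denies service to applicants with thin data histories",
    likelihood=Severity.MEDIUM,
    impact=Severity.HIGH,
    harmed_party="Passive recipients (applicants)",
    controlling_party="Operator: tech expert (model owner)",
    mitigation="Manual review path plus a right-of-reply channel",
)
```

Separating harmed party from controlling party makes it easy to spot entries where the risk of harm has been pushed onto the people with the least power to control it.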

It is important to include all project and development team members in this work to ensure goals are aligned and the journey of user discovery is shared by the team. Good practice user experience discovery, exploration and validation methods support this involvement so no further notes will be added here on how to engage team members or stakeholders.

1. Discovery

As part of the problem definition, user research consultations would also aim to:

  • Sufficiently understand the motivations and expectations of the different user types, not just the project sponsor or technologists.
  • Capture the level of understanding about data sets desired for their decision making.
  • Gauge how aware or sensitive development teams and stakeholders are about the appropriate diversity and completeness of data sets, and methods of collection.
  • Capture the level of concern about reduction of bias in the technology.
  • Gauge how aware or sensitive development teams and stakeholders are about the “quality” of the technology.
  • Establish the tolerance for compromise or risks of harm, and what an acceptable trade-off is (within legal parameters).
  • Understand the positives and negatives of current-state systems, so that any digital intervention can be compared back for improvements or unforeseen harm.

Some questions regarding trust building that we might need to measure:

  • Is it clear to a person using the digital product/service why a decision has been made?
  • Is there the feeling of a standard of due process they recognise or can understand?
  • Can that person participate/engage in that due process?
  • Is there a feeling that the provider of the decision is owning responsibility for the consequences?

Workshop/Interview/Contextual Inquiries

This section provides question templates to help focus on ethical data use topics while avoiding asking the question directly. Use them as typical interview guide questions within the context of the project, acquiring answers through non-leading questions and observation.

Typically you could reframe these questions to not have a digital or data focus and include them alongside other ethnographic investigations.

    • How do we support the [operator’s] position of being a trusted party? eg How do you support trust from your clients when they interact with you?
    • How can we help you display your expertise? eg what is a typical or key activity for your expertise range?
    • How do we help build trust? eg Why would you trust this [entity]? Why would you not?
    • When using this system, how can we ensure you act with respect for public duties/interests? eg What are your organisation’s/agency’s public duties?
    • What is the proposition/problem/opportunity enabled or enhanced by the technology? eg What pain points are in your current workflow/system?
    • Who are the individuals affected by it? eg Who benefits from the decision you make in your role? Who is left behind?

2. Solution Design

The insights collected would be folded into domain expertise for product/service design strategies.

Product strategy assumptions

Some questions used to define the strategy of the product or service, reframed as hypotheses for testing, could be:

    • How might people change their behaviour as a result of your technology? eg decreased antisocial behaviour, increased paranoia, development of alternative online personalities
    • What world views could be changed? eg Govt dept reputation, beliefs about safety
    • How could relationships between groups of people be affected? eg Trust, communication
    • What would happen if the technology failed? eg Complete breakdown, partial breakdown, hacks
    • How can we avoid harm from the planned operations? eg Un/intentional discrimination, unmitigated production and processing of data, iterative use over time removed from the original intent

Product or service strategy

Using the UX research, set direction and benchmarks for validation. I would highly recommend working through the Data Ethics Canvas with the stakeholders and development team. User experience research is critical in capturing the perspectives of affected parties outside the project team. Culturally diverse ethics considerations cannot and should not be made by people who are not part of that particular cultural group.

Context

As directed by user research or domain expert assumptions.

Baseline for validation activities eg usability testing or UAT

As directed by user research, domain expert assumptions and product/service strategy.

Heuristics

The following are adapted from (and in most cases still include) Nielsen’s 10 heuristics for user interaction and interface design, and relate to any UI that a human operates. (Heuristics for machine-to-machine interactions are not included here.)

They would assist in the visual communication of trust “signifiers” identified in the research done prior.

  1. Visibility of system status

    – The system should always keep people informed about what is going on, through appropriate feedback within reasonable time. Provide visibility of the data in use (within privacy-preserving constraints).

  2. Match between system and the real world

    – The system should speak the person’s language, with words, phrases and concepts familiar to them, rather than system-oriented terms.
    – Follow real-world conventions, making information appear in a natural and logical order.

  3. User control and freedom

    – People often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
    – People can request access to methods used by algorithms and data that affects them for explanations and rationale.
    – People can request a copy of their data, in a format or way that is in line with data privacy and access laws.
    – People can withdraw their data, in line with data privacy and access laws.
    – People can edit or update their data, in line with data privacy and access laws.

  4. Consistency and standards

    – Currently there are no global standards for ethical ML. Law, regulation and inclusive/empathetic practices ought to set standards particular to the project. Trade-offs are an important consideration, which would make standardising difficult. Other ‘local’ standards could be:
    – People should not have to wonder whether different words, situations, or actions mean the same thing. Establish a common vocabulary.
    – Provide glossaries and alternative explanations

  5. Error prevention

    – Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present people with a confirmation option before they commit to the action.
    – Request a revision of an outcome.
    – Run a test across a snapshot or subset for human-in-the-loop checks.
    – Describe the range of uncertainty in predictions, the data predictions are being enacted on, and the associated risks if acted upon (see the sketch after this list).

  6. Recognition rather than recall

    – Minimize memory load by making objects, actions, and options visible.
    – People should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
    – Provide a proxy or synthetic alternative for private data sets.

  7. Flexibility and efficiency of use

    – Accelerators — unseen by the novice skill set — may often speed up the interaction for the expert skill set such that the system can cater to both inexperienced and experienced skill sets. Allow people to tailor frequent actions.
    – Provide alerts for any impacts tailored shortcuts may incur, eg skipping a feature matching step may result in mistakes if the schemas across two data sets aren’t identical, but the expert user has set up shortcuts because the schemas usually match.

  8. Aesthetic and minimalist design

    – Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
    – Provide dialogues in context to activity. This could include the system “understanding” the goal, rather than being a passive tool.
    – Use visualisation to lift comprehension and reduce cognitive load.

  9. Help people recognize, diagnose, and recover from errors

    – Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.

  10. Help and documentation

    – Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the task or purpose, and easily scanned.
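
To illustrate heuristic 5’s uncertainty point, here is a minimal sketch of translating a prediction interval into the plain-language caveat a person would see, rather than a bare point estimate. The thresholds and wording are assumptions for illustration, not a standard:

```python
def describe_uncertainty(estimate: float, lower: float, upper: float) -> str:
    """Render a prediction and its interval as a plain-language caveat."""
    spread = upper - lower
    if spread <= 0.1:
        caveat = "high confidence"
    elif spread <= 0.3:
        caveat = "moderate confidence; treat as a guide"
    else:
        caveat = "low confidence; seek supporting evidence before acting"
    return (f"Estimated value {estimate:.2f} "
            f"(likely between {lower:.2f} and {upper:.2f}): {caveat}.")


print(describe_uncertainty(0.82, 0.75, 0.88))
print(describe_uncertainty(0.60, 0.35, 0.85))
```

The exact thresholds would come out of the discovery work on acceptable trade-offs; the point is that the interval travels with the number all the way to the interface.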

Validation – Solution Design

Using standard usability methods, design and run tests against all assumptions made in the preceding steps:

  • Product or service strategy
  • Context
  • Baselines
  • Heuristics

References and Further Reading

Legal

Papers and Reports

Emerging Practice

Tools


Ethical Technology Crafting: Part 1 Purpose and Position

The following people need thanking for their advice, support and input: Cam Grant, Ellen Broad, Bob Williamson, Guido Governatori, Lachlan McCalman, Dr Matt Beard, Mitch Harris, Liz Gilleran, Phil Grimmett.

Special thanks to Ellen for her input on power relationships, Lachlan for his advice on user communication and Matt for his advice on ethical schools of thought.

Introduction

Computer scientists, software engineers and academics are currently carrying the load of responsibility for the ethical implications of AI (Artificial Intelligence) in application. I strongly believe this issue belongs to a wider group – namely development teams and their parent organisations – and it turns out I’m not alone: leading think tanks also suggest diversity is key to reducing the risks associated with automated decision making, and “designers” are being called out specifically to address these potential breaches of trust. I am assuming “designers” means teams of developers, data scientists and product managers as well as actual designers.

Let’s start with a wider concern: how often AI is described as having its own agency. This emerging separation of technology from the people who make it is alarming. The language often used implies a lack of control. This is why it’s important not only to have cross-discipline teams making tech, but also to communicate this process on an ongoing basis with teams, customers, clients and society, so the mental model of humans + AI is adjusted away from this notion of the “other” having its own agency.

Image of the Gorignak from the film Galaxy Quest
Gorignak – it’s a kind of golem… something that acts with the intent of its creators and has no consciousness. In this case it’s out to mash Commander Peter Quincy Taggart (Galaxy Quest).

Ethics

When we discuss technology and ethics, the conversation can flip over to philosophy very easily. This discussion is an important part of establishing the values your organisation and its products or services adhere to.

It can make things a bit easier to have a little ethical philosophy education. I’m by no means a trained ethicist, but as an armchair enthusiast here is my quick reference as a starting point.

There are two classical schools of ethical thought: utilitarian, which focuses on outcomes (“it’s for the greater good”), and deontological, which focuses on duty and the “rightness” of the act itself.

The town council of Sandford weren’t concerned about their ruthless acts; it was the outcome for the greater good that mattered
John McClane was driven by duty, doing the right thing each step of the way without a clear plan and with a high risk of failure, to save the hostages in Nakatomi Tower

Along with these there is an extensive list of post-modern and applied ethics, including “care ethics” (aka feminist ethics), where caring and nurturing are held as virtues. This is a post-modern ethical approach that accommodates what designers are familiar with: people are messy, they reject lack of control over their lives, and context is key.

My colleagues at Data61 are regularly writing and speaking on this topic; see the references at the end. There are also a lot of philosophical writings emerging that attempt to redefine ethics for humanity. While I find these inspiring, I’ll be clear that this article is not attempting to create a new field of ethics, but adapting theory into practice in our work as technology makers.

From what I understand, computer scientists and engineers are currently required to take a utilitarian approach due to the nature of software coding. I am not well placed to explain this, but I feel that designers working through the qualitative investigations of need with a deontological and care ethics lens can then assist engineers in translating that into utilitarian applications that are compatible and appropriate.

For example, if a numerical value has to be placed against a trade-off, what is that amount? Is a 10% risk of harm acceptable if 90% of people have an improved outcome? A client most likely isn’t going to answer that directly, but we can elicit a desirable position on an acceptable trade-off using typical qualitative UX methods during discovery and then communicate that risk during solution design.
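
As a toy illustration only (the weight and probabilities are entirely hypothetical), the position elicited during discovery can be encoded as an explicit threshold that candidate solutions are tested against:

```python
def tradeoff_acceptable(p_harm: float, p_benefit: float,
                        harm_weight: float = 3.0) -> bool:
    """Weighted expected-value check: harm counts harm_weight times as
    much as benefit, reflecting a stakeholder position elicited during
    discovery. The weight here is hypothetical, not a recommendation."""
    return p_benefit > harm_weight * p_harm


# 10% risk of harm vs 90% improved outcomes, with harm weighted 3x:
print(tradeoff_acceptable(p_harm=0.10, p_benefit=0.90))  # True
# The same benefit with a 35% risk of harm fails the elicited threshold:
print(tradeoff_acceptable(p_harm=0.35, p_benefit=0.90))  # False
```

Making the weighting explicit turns a vague “is that acceptable?” conversation into a testable, auditable constraint.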

Why have an ethical design framework for user experience and product design?

“To ensure autonomous and intelligent systems (A/IS) are aligned to benefit humanity A/IS research and design must be underpinned by ethical and legal norms as well as methods. We strongly believe that a value-based design methodology should become the essential focus for the modern A/IS organization. Value-based system design methods put human advancement at the core of A/IS development. Such methods recognize that machines should serve humans, and not the other way around. A/IS developers should employ value-based design methods to create sustainable systems that are thoroughly scrutinized for social costs and advantages that will also increase economic value for organizations. To create A/IS that enhances human well-being and freedom, system design methodologies should also be enriched by putting greater emphasis on internationally recognized human rights, as a primary form of human values.” IEEE The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems – Methodologies to Guide Ethical Research and Design

“New ethical frameworks for AI need to move beyond individual responsibility to hold powerful industrial, governmental and military interests accountable as they design and employ AI… When tech giants build AI products, too often user consent, privacy and transparency are overlooked in favor of frictionless functionality that supports profit-driven business models based on aggregated data profiles… Meanwhile, AI systems are being introduced in policing, education, healthcare, and other environments where the misfiring of an algorithm could ruin a life.” AI NOW 2017 Report.

We ought to aim for a defined ethical practice rather than defining what an ethical product is. This will help us discuss and evaluate engagements that align with our business values and social impact goals. The interpretation of an “ethical framework” at Data61 could be a system that “provides transparency, interpretability, due process and accountability through understanding the issues of power, control and potential harm to individuals, communities and business”.

I believe a discussion about the potential harm risks and thresholds of trust ought to happen each time a new product is initiated, and throughout its production and maintenance. This evaluation would work with this top-line statement of organisation values, as well as the more contextual values gathered during discovery work, to set baselines for testing and an audit trail.

Multidisciplinary Teams

Inclusion of designers and product managers reduces the risk of biases by virtue of their own particular lenses. Along with personal experience, the best evidence I can find for a wider, shared approach to this problem is stated in the AI Now Report 2017:

“The AI industry should hire experts from disciplines beyond computer science and engineering and ensure they have decision making power. As AI moves into diverse social and institutional domains, influencing increasingly high stakes decisions, efforts must be made to integrate social scientists, legal scholars, and others with domain expertise that can guide the creation and integration of AI into long-standing systems with established practices and norms.”

“Ethical codes meant to steer the AI field should be accompanied by strong oversight and accountability mechanisms. More work is needed on how to substantively connect high level ethical principles and guidelines for best practices to everyday development processes, promotion and product release cycles.”

The recently released IEEE A/IS Standards Report also lists the importance of cross-discipline, top-down and bottom-up cultural shifts to bring an ethical mindset to technology organisations.

Application of an ethical practice becomes operationalised as constraints for project delivery. This interpretation would also inform other parts of the business as acceptance criteria for a client or market facing product engagement before reaching project delivery stages.

Each project needs its own definition of ethical implications, dependent on the people interacting with and affected by it, the data in use and the context in which both of these sit. These questions, and the work to discuss and answer them, are owned by all parts of the business, not just engineers and designers.

At Data61 we are fortunate to have an Ethics Group to help us work through harm mitigation.

“Trust is often a proxy for ethics” (Dr Matt Beard), and the main risk of harm and trust breaches sits with data, especially highly valuable and highly sensitive PII (personally identifiable information). The more private the data, the higher its utility, and the higher the risk of trust breaches or harm from insights in the wrong hands (whether deliberate or accidental). There are other data sources, like sensor-collected data (eg air quality), and these would also benefit from the usual questions: what insights are being generated, for whom, and for what purpose? For example: is particulate matter data being used to assist asthma sufferers, or is it being collected to penalise air polluters?

Discussion is necessary with all parts of the business – not just the designers or developers – along with a strong understanding of the legal position around the data, its intended use, how it is collected/sourced/stored and what decisions will ultimately be made from it.

Conclusion

This article explains why ethics is important in technology creation and who is responsible for that work.

I also propose that User Experience Designers are well positioned to contribute to these outcomes by virtue of their specialist skillset in qualitative research, ability to communicate with empathy and skills in synthesising these “soft” insights into viable and testable solutions.

Please read Part 2: Proposed Methods  for more on this.

Further Reading


User Experience Design Guide for Crafting Ethical AI Products and Services

This page leads to a series of related articles about the human-centred design contribution to automated decision systems that have ethical outcomes.

The audience for these articles was initially the Data61 User Experience Designers and Product Managers who have been tasked to provide assistance on development of products and systems using data (sensitive or public) and Machine Learning (algorithms that make predictions, assist with decision making, reveal insights from data, or act autonomously), because these products are expected to deliver information to a range of users and provide the basis for contextually supported decisions.

However, it has always been my hope that a wider UX and Product audience will find it helpful.

Machine Learning computer scientists, software engineers, data scientists, anthropologists or other highly skilled technical or social science professionals are very welcome to read this guide in order to increase and enhance their understanding of user experience concerns and maybe even refer to it.

Crafting Ethical AI Products and Services Part 1: Purpose and Position
This article looks at the reasons why an ethical mindset and practice are key to technology production, and positions the ownership as a multidisciplinary activity.

Crafting Ethical AI Products and Services Part 2: Proposed Methods
This article is a set of proposed methods for user experience designers and product managers working in businesses that are building new technologies specifically with machine learning AI.

Softwiring – The role of Human Centred Design in Ethical Artificial Intelligence
This is a keynote converted to an article; it is more of a summary of the potential for UX to make meaningful contributions and have impact in ethical technology (D61+Live 2018 keynote).

Reading List

This list, which contributed to the articles above, was compiled between 2017 and 2018.

Legal

Papers and Reports

Articles

Emerging Practice

Tools