Ethical Technology Crafting – Part 1: Purpose and Position

Thanks to the following people for their advice, support and input: Cam Grant, Ellen Broad, Bob Williamson, Guido Governatori, Lachlan McCalman, Dr Matt Beard, Mitch Harris, Liz Gilleran, Phil Grimmett.

Special thanks to Ellen for her input on power relationships, Lachlan for his advice on user communication and Matt for his advice on ethical schools of thought.

Introduction

Computer scientists, software engineers and academics currently carry the load of responsibility for the ethical implications of AI (Artificial Intelligence) in application. I strongly believe this issue belongs to a wider group – namely development teams and their parent organisations – and it turns out I’m not alone: leading think tanks also suggest diversity is key to reducing the risks associated with automated decision making, and “designers” are called out specifically to address these potential breaches of trust. I am assuming “designers” means teams of developers, data scientists and product managers as well as actual designers.

Let’s start with a wider concern: how often AI is described as having its own agency. This emerging separation of technology from people is alarming, considering it is people who are making it. The language often used implies a lack of control. This is why it’s important not only to have cross-discipline teams making tech, but also to communicate this process on an ongoing basis with teams, customers, clients and society, so the mental model of humans + AI is adjusted away from this notion of an “other” having its own agency.

Image of the Gorignak from the film Galaxy Quest
Gorignak – it’s a kind of golem: something that acts on the intent of its creators but has no consciousness of its own. In this case it’s out to mash Commander Peter Quincy Taggart (Galaxy Quest).

Ethics

When we discuss technology and ethics, the conversation can flip over to philosophy very easily. This discussion is an important part of establishing the values your organisation and its products or services adhere to.

A little ethical philosophy education can make things easier. I’m by no means a trained ethicist, but as an armchair enthusiast, here is my quick reference as a starting point.

There are two classical schools of ethical thought: utilitarian, which focuses on outcomes (“it’s for the greater good”), and deontological, which focuses on duty and the “rightness” of the act itself.

The town council of Sandford weren’t concerned about their ruthless acts; it was the outcome for the greater good that mattered (Hot Fuzz).
John McClane was driven by duty and doing the right thing at each step, saving the hostages in Nakatomi Tower without a clear plan and at high risk of failure (Die Hard).

Alongside these there is an extensive list of post-modern and applied ethics, including “care ethics” (also known as feminist ethics), where caring and nurturing are held as virtues. This is a post-modern ethical approach that accommodates what designers are familiar with: people are messy, they reject a lack of control over their lives, and context is key.

My colleagues at Data61 are regularly writing and speaking on this topic; see the references at the end. There are also many emerging philosophical writings that attempt to redefine ethics for humanity. While I find these inspiring, I’ll be clear that this article is not attempting to create a new field of ethics, but to adapt theory into practice in our work as technology makers.

From what I understand, computer scientists and engineers are currently required to take a utilitarian approach due to the nature of software coding. I am not well placed to explain this, but I feel that designers working through qualitative investigations of need with a deontological and care-ethics lens can then assist engineers in translating those findings into utilitarian applications that are compatible and appropriate.

For example, if a numerical value has to be placed against a trade-off, what is that amount? Is a 10% risk of harm acceptable if 90% of people have an improved outcome? A client most likely isn’t going to answer that directly, but we can elicit a desirable position on an acceptable trade-off using typical qualitative UX methods during discovery and then communicate that risk during solution design.
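To make this concrete, here is a minimal sketch in Python of how an elicited trade-off position like that could be recorded and checked before release. All names and numbers here (TradeoffBaseline, harm_rate, the 10%/90% split) are hypothetical illustrations for this article, not an established Data61 method.

```python
from dataclasses import dataclass

@dataclass
class TradeoffBaseline:
    """A trade-off position elicited during discovery research (hypothetical)."""
    max_harm_rate: float      # e.g. 0.10 -> at most a 10% risk of harm
    min_benefit_rate: float   # e.g. 0.90 -> at least 90% improved outcome
    source: str               # audit trail: where this position came from

def meets_baseline(harm_rate: float, benefit_rate: float,
                   baseline: TradeoffBaseline) -> bool:
    """Return True only if measured outcomes sit inside the agreed trade-off."""
    return (harm_rate <= baseline.max_harm_rate
            and benefit_rate >= baseline.min_benefit_rate)

# A baseline agreed with the client during discovery.
baseline = TradeoffBaseline(
    max_harm_rate=0.10,
    min_benefit_rate=0.90,
    source="discovery interviews; signed off by product owner",
)

# Rates measured on a held-out evaluation set before release.
print(meets_baseline(harm_rate=0.08, benefit_rate=0.92, baseline=baseline))  # True
print(meets_baseline(harm_rate=0.12, benefit_rate=0.92, baseline=baseline))  # False
```

The point is not the particular numbers but that the agreed position becomes explicit, testable and traceable back to the discovery work that produced it.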

Why have an ethical design framework for user experience and product design?

“To ensure autonomous and intelligent systems (A/IS) are aligned to benefit humanity A/IS research and design must be underpinned by ethical and legal norms as well as methods. We strongly believe that a value-based design methodology should become the essential focus for the modern A/IS organization. Value-based system design methods put human advancement at the core of A/IS development. Such methods recognize that machines should serve humans, and not the other way around. A/IS developers should employ value-based design methods to create sustainable systems that are thoroughly scrutinized for social costs and advantages that will also increase economic value for organizations. To create A/IS that enhances human well-being and freedom, system design methodologies should also be enriched by putting greater emphasis on internationally recognized human rights, as a primary form of human values.” – The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, Methodologies to Guide Ethical Research and Design

“New ethical frameworks for AI need to move beyond individual responsibility to hold powerful industrial, governmental and military interests accountable as they design and employ AI… When tech giants build AI products, too often user consent, privacy and transparency are overlooked in favor of frictionless functionality that supports profit-driven business models based on aggregated data profiles… Meanwhile, AI systems are being introduced in policing, education, healthcare, and other environments where the misfiring of an algorithm could ruin a life.” – AI Now 2017 Report

We ought to aim for a defined ethical practice rather than defining what an ethical product is. This will help us discuss and evaluate engagements that align with our business values and social impact goals. The interpretation of an “ethical framework” at Data61 could be a system that “provides transparency, interpretability, due process and accountability through understanding the issues of power, control and potential harm to individuals, communities and business”.

I believe a discussion about the potential risks of harm and thresholds of trust ought to happen each time a new product is initiated, and throughout its production and maintenance. This evaluation would combine the top-line statement of organisation values with the more contextual values gathered during discovery work, setting baselines for testing and an audit trail.

Multidisciplinary Teams

The inclusion of designers and product managers reduces the risk of bias by virtue of their own particular lenses. Along with personal experience, the best evidence I can find for a wider, shared approach to this problem is stated in the AI Now Report 2017:

“The AI industry should hire experts from disciplines beyond computer science and engineering and ensure they have decision making power. As AI moves into diverse social and institutional domains, influencing increasingly high stakes decisions, efforts must be made to integrate social scientists, legal scholars, and others with domain expertise that can guide the creation and integration of AI into long-standing systems with established practices and norms.”

“Ethical codes meant to steer the AI field should be accompanied by strong oversight and accountability mechanisms. More work is needed on how to substantively connect high level ethical principles and guidelines for best practices to everyday development processes, promotion and product release cycles.”

The recently released IEEE A/IS Standards Report also lists the importance of cross-discipline, top-down and bottom-up cultural shifts to bring an ethical mindset to technology organisations.

Application of an ethical practice becomes operationalised as constraints for project delivery. This interpretation would also inform other parts of the business, acting as acceptance criteria for a client- or market-facing product engagement before it reaches the project delivery stages.

Each project needs its own definitions of ethical implications, dependent on the people interacting with and affected by it, the data in use and the context in which both of these sit. These questions, and the work to discuss and answer them, are owned by all parts of the business, not just engineers and designers.

At Data61 we are fortunate to have an Ethics Group to help us work through harm mitigation.

“Trust is often a proxy for ethics” (Dr Matt Beard), and the main risk of harm and trust breaches sits with data, especially highly valuable and highly sensitive PII (personally identifiable information). The more private the data, the higher its utility and the higher the risk of trust breaches or harm from insights landing in the wrong hands, whether deliberately or accidentally. There are other data sources, such as sensor-collected data (e.g. air quality), and these would also benefit from the usual questions: what insights are being generated, for whom and for what purpose? For example, is particulate matter data being used to assist asthma sufferers, or is it being collected to penalise air polluters?

Discussion is necessary with all parts of the business – not just the designers or developers – along with a strong understanding of the legal position around the data, its intended use, how it is collected, sourced and stored, and what decisions will ultimately be made from it.

Conclusion

This article explains why ethics is important in technology creation and who is responsible for that work.

I also propose that User Experience Designers are well positioned to contribute to these outcomes by virtue of their specialist skill set in qualitative research, their ability to communicate with empathy, and their skill in synthesising these “soft” insights into viable and testable solutions.

Please read Part 2: Proposed Methods for more on this.

Further Reading