Leading Design Conference 2019

In November this year I had the good fortune of attending the Leading Design conference in London, UK. I’d not attended this Clearleft-hosted conference before, so I was excited both about going and about what I might walk away with.

Take a look at the speakers and topics: https://leadingdesign.com/conferences/london-2019

The New Design Frontier Report

InVision commissioned and provided a report, “The New Design Frontier”, which offers a useful breakdown of the current state and trends across:

  • People: Team, Key partners, Executives and employees
  • Practices: User Research, Design Strategy, Experimentation, UI Design
  • Platforms: Design Operations, Design Systems

I think the breakdowns in the report can definitely help a team figure out where it’s at, and how to measure what’s appropriate for an organisation. Download it and take a look: https://www.invisionapp.com/design-better/design-maturity-model/

Conference themes and takeaways

It might help if I share what I was hoping to get from the conference. At the time of booking the tickets I was heading up the UX practice at CSIRO’s Data61. By the time the conference took place I had already moved to an IC role at Salesforce in Experience Architecture. (My reasons for the role change are many and complex, and you’ll see some evidence of that in what I got from the conference.)

Initially I was looking for well-informed insights about how to run a design team in a consulting organisation, especially one working in true innovation – making new technologies. I still think the conference content was relevant for my new role, as I’m setting up design capabilities for clients, leading designers on projects and establishing a design community of practice within Salesforce. As I have some residual anxiety from my time at Data61, I was also seeking some validation that all the hard decisions, initiatives and learning from failures had had some lasting impact for the business, my old team and myself. Lastly, I was looking for guidance on solid metrics for measuring a design team’s success that weren’t NPS, CSAT, or usability and adoption measures at a specific project level.

The themes throughout the conference presentations echoed the report regarding people, practices and platforms. My takeaways are:

There is no design leadership playbook
We are all making it up as we go, depending on the businesses we are in. Most presentations were from design leaders in product organisations and consulting agencies, with no R&D and little enterprise represented. This was eye-opening because you’d expect more mature leadership patterns from product and consulting. It appears design maturity is uneven across organisations and not necessarily tied to team size or the age of the business.

Design as facilitator of change
Either up front or behind the scenes, designers are the agitators of change. Design is a risk-taking activity, but led well it can be a holistic and inclusive one. Leveraging the democratisation of design can help uplift the role of designers through awareness. Probably not news: silos were called out as innovation killers. When considering digital transformation, it’s easy to forget that digital and associated technologies are incidental, and that transformation is an organisational and behavioural change. Transformational leadership may come from a non-digital or non-design background, eg sales, if financials are key to the change. And if it’s driven by an external lead like a consultant, it needs to be very deliberately co-created with internal stakeholders. We have to be very careful of creating an “us and them” dynamic, as this can affect confidence in any high-risk work, and wary of unproductive coalitions or our own cultural silos. If we think about transformation goals without considering the complex adaptive systems we work within, we can end up with unhappy people outside the delivery team – the very people we are aiming to transform a business for.

Design is still an unknown capability
Design suffers from a lack of confidence in selling its value. The key is remembering that design remains fluid: things are moving fast, problems can’t be solved by approaches fixed in time, and businesses need people who can see patterns. Designers have this capability, especially those who can make a call in the absence of information and take it forward iteratively.

Measuring success
Perhaps working on why we can’t articulate the value of design could lead to establishing that value, constrained by the organisation we are working in. Other tips were to start every new engagement with customer data to set business goals and benchmarks. It’s important that design outputs and design management occur in tandem, as operation – meaning delivery – is the key to success. We need to be intentional in understanding what we are measuring and why, as there can be conflict between what we need to measure vs what we can measure. There is some evidence that companies that link non-financial measures to value creation are more successful.

Be patient, impact and changes take longer than we think
My biggest flaw is impatience: I underestimate how long change can take and often get demotivated as a result. I really needed to hear this tip. Related to this is another gem: we simply cannot communicate too much. Lastly on this theme, we were reminded that we can’t win everything and should be prepared to cut losses. Two years into your leadership, expect a crash, because failure is inevitable and recovery is hard.

Intentional team design
Design teams need to be thoughtfully created, with a sense of purpose and longevity from the outset. Organic, unfocussed growth can cause problems for leaders and organisations down the track. We saw many presentations of team structures and skills profiles, and how they change over time as teams grow. A pattern was the establishment of sub-teams and leadership roles earlier rather than later, and splitting capability broadly into design ops and strategy. This aligns with the delegation topic below. Well-structured collaboration was cited as a positive way to create team goals and rituals, including career paths.

This is the area where I’ve had my hardest lessons. Not having any kind of “playbook” means this knowledge is hard to come by without trial and error, and it’s made harder because most teams evolve in response to demand, and their leader is usually a strong performer with some natural relationship skills. I feel the design mentoring network could do with some work now that there is an emerging class of experienced technology design leaders.

Ruthlessly prioritise, delegate as much as possible
Many presentations were internally facing and focussed on building team cultures. Once you’re a design leader, your time in the field needs to be limited, as your energy goes into leading for impact and supporting your team. We were also cautioned to be authentic from the outset, because faking it until you make it is exhausting.

Building trust and relationships
It is critical that we understand our audiences – our teams, our internal partners, our executive. Speaking their language and delivering as promised is key.

The insights I’ve captured were from presentations and workshops by:

  • Maria Giudice — Founder, Author, Coach, Hot Studio
  • Kristin Skinner — Co-author, Org Design for Design Orgs; Founder, &GSD
  • Jason Mesut — Design Partner, Group Of Humans
  • Melissa Hajj — Director of Product Design, Facebook
  • Julia Whitney — Executive Coach, Whitney and Associates
  • Alberta Soranzo — Director, Transformation Design, Lloyds Banking Group

I’d suggest following them on social media and blogs.

Leading Design 2019: https://leadingdesign.com/conferences/london-2019

Softwiring – The role of Human Centred Design in Ethical Artificial Intelligence

I recently revised my earlier articles about the human-centred design contribution to automated decision systems with ethical outcomes, and presented the result as a keynote at our annual tech showcase this year, D61+Live.

Introduction

“To ensure autonomous and intelligent systems (A/IS) are aligned to benefit humanity A/IS research and design must be underpinned by ethical and legal norms as well as methods. We strongly believe that a value-based design methodology should become the essential focus for the modern A/IS organization. Value-based system design methods put human advancement at the core of A/IS development. Such methods recognize that machines should serve humans, and not the other way around. A/IS developers should employ value-based design methods to create sustainable systems that are thoroughly scrutinized for social costs and advantages that will also increase economic value for organizations. To create A/IS that enhances human well-being and freedom, system design methodologies should also be enriched by putting greater emphasis on internationally recognized human rights, as a primary form of human values.”
~ The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, Methodologies to Guide Ethical Research and Design

“New ethical frameworks for AI need to move beyond individual responsibility to hold powerful industrial, governmental and military interests accountable as they design and employ AI… When tech giants build AI products, too often user consent, privacy and transparency are overlooked in favor of frictionless functionality that supports profit-driven business models based on aggregated data profiles… Meanwhile, AI systems are being introduced in policing, education, healthcare, and other environments where the misfiring of an algorithm could ruin a life.”
~ AI Now 2017 Report

In the HCD response to this topic, “ethical” means autonomous systems that are mindfully designed and developed to minimise harm, inequality and/or discrimination.

This response applies when an automated system (AI, ML etc) moves from applied research to commercialisation and wide uptake for purposes beyond knowledge creation and dissemination – that is, anything looking to move into operational use for clients or customers. It is also not a binary decision; it would include both utilitarian and deontological approaches (or wider ones, eg “care ethics”).

There are at least three broad components to solving ethical AI questions. In each there are important considerations governing input (data, privacy constraints, purpose) and output (opportunity, impact, products/services).

Regulatory – Human Rights, International and National law, Compliance eg organisational code of conduct

Technological – Research and Development that seeks to resolve data collection/generation, privacy, encoded ethics, models, algorithms

Human Centred Design – Ecosystems of People, Data and Digital, Experience, Empathy,  Context, Culture, Diversity

This article addresses the last part: HCD, or the “softwiring”.

1.0 Ecosystems

Outside of academic research, digital technologies don’t exist in bubbles or vacuums for their own purpose. People make them with an intent: for someone else (specialised groups with specific tasks, eg forensic investigation; or a broad range of less skilled users seeking to fulfil certain tasks, eg dynamic mapping of a travel plan) to find information that assists with making decisions.

These systems interact with other systems, drawing on data and generating new data through their users. The point of this section isn’t to break down the ecosystems, but to set the point of view that technology should be considered less as a tool and more as a network of systems with complex relationships.

Diagram 1: The ecosystem of roles and responsibilities for AI development

To date, AI has been created by data scientists, computer scientists and software engineers, but as we move towards broader application development we can call on an ecosystem of experts with special roles and responsibilities. AI production becomes a collective activity, unburdening the engineers from carrying the ethical questions alone and introducing diverse knowledge and skills to reduce the incidence of unintended consequences. This can be broken down into the clear areas mentioned in the opening section (Regulatory, Technical, HCD).

Values

Questions regarding legal criteria, or “can we..?”, would be reviewed against the UN Declaration of Human Rights, concerning human dignity, social cohesion and environmental sustainability. Existing Australian data and privacy laws are available to examine potential, unintentional violations or abuses of personal data, as are many other international laws on the same.

This ought to assist in revising corporate values or creating new ones (including for SMEs and startups), which in turn assist the executive and their business decisions, or “should we…?”.

Like most things, the devil is in the detail: actual development work and the embodiment of values into production require further exploration particular to each project, program of work or product. This section only examines “who and why?”, leaving “how and what” to the technical expert contributions.

As mentioned above, AI software is made by teams, not just computer or data scientists or software engineers, and it is interacted with by a range of users, not just one operational kind. Regarding teams: an understanding of central (full-time, dedicated project teams) and orbiting (consulting expertise, eg lawyers or ethics councils) team members can clarify responsibilities and provide wayfinding for critical decision points during the operationalisation or production of ethical AI.

Leadership

Leaders have several challenges, both in understanding the various kinds of AI and how they can be used (eg is it a prediction problem or a causation one?) and in resolving the ethical challenges these new services pose (eg is it low risk, like having to talk to a chatbot, or high risk, like “pre-crime”?).

An important point worth noting: there is evidence that leaders are more likely to use a tacit response to ethical challenges than compliance or regulation. The evidence also shows that organisations that support information-sharing networks between leaders and/or other levels of staff resolve ethical dilemmas more successfully than those whose structures leave leaders isolated (eg due to status threats). Also worth noting: significant organisational changes can trigger ethical dilemmas, as missing or poorly included expertise coupled with historical approaches creates new situations without a familiar framework. The introduction of AI for business outcomes would be a clear example, as it causes massive internal disruption in both finding new markets and acquiring required skills. This further supports an ecosystem and diversity approach to ethics, sharing the load of the decision.

1.2 Data Quality

Diagram 2: “Revealing Uncertainty for Information Visualization”, Skeels, Lee, Smith and Robertson

Data has levels of quality that a user can challenge if the outcomes don’t meet their mental models or expectations, and this can lead to a drop in trust in the results. This model not only helps with understanding the relationships of data to itself, but can also serve to debug where the breakdown might have happened when results are challenged by the user.

For example, if the data is poorly captured then all the following levels will amplify this problem. If the data is good and complete, then the issues might be in the inferences. This is why it’s important to test the system using real content (ie not pretend, old or substituted data) and user feedback.
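As a minimal sketch of how this layered debugging might work in practice – the level names, record shape and thresholds below are hypothetical, loosely following the Skeels et al. hierarchy rather than any established tool – quality checks can run from the capture layer upwards, so a failure is attributed to the lowest broken level before the inferences are blamed:

```python
# Hypothetical sketch: validate data quality level by level, from capture
# upwards, so a breakdown is attributed to the lowest failing layer
# rather than blamed on the model's inferences.
from typing import Callable, List, Tuple

def check_capture(records: List[dict]) -> bool:
    # Measurement level: were values recorded at all?
    return all(r.get("value") is not None for r in records)

def check_completeness(records: List[dict]) -> bool:
    # Completeness level: is enough of the expected data present?
    expected = 1000  # hypothetical expected record count
    return len(records) >= 0.95 * expected

def check_inference(records: List[dict]) -> bool:
    # Inference level: placeholder for model-specific sanity checks,
    # eg predictions falling inside a plausible range.
    return True

def first_failing_level(records: List[dict]) -> str:
    levels: List[Tuple[str, Callable]] = [
        ("capture", check_capture),
        ("completeness", check_completeness),
        ("inference", check_inference),
    ]
    for name, check in levels:
        if not check(records):
            return name  # lower-level failures propagate upwards
    return "ok"
```

If `first_failing_level` returns “capture”, there is little point debating the model: the problem starts at collection, exactly as the paragraph above describes.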

1.3 Data Inclusion

Another point that provides context: a considerable barrier to ethical technology development is uneven internet connectivity, whether through infrastructure or affordability (IEEE Ethically Aligned Design – Economics and Humanitarian Issues; Australian Digital Inclusion Index 2017), resulting in data generation that reflects a location and socio-economic bias. While connectivity in Australia is improving, there are groups who are not connected well enough to use digital services, and this correlates with low education, low income, mobile-only access, disability and region.

“Across the nation, digital inclusion follows some clear economic and social contours. In general, Australians with low levels of income, education, and employment are significantly less digitally included. There is consequently a substantial digital divide between richer and poorer Australians. The gap between people in low income households and high income households has widened since 2014, as has the gap between older and younger Australians, and those employed and those outside the labour force.”

~ Australian Digital Inclusion Index 2018

2.0 Operationalisation

Operationalising ethical AI development requires multidisciplinary approaches. As mentioned above, there are legal and technical constraints; below are details of the human-centred component. Unlike the first two, which are either fixed by law or encoded and therefore “hard wired”, this is about soft skills and could be referred to as “soft wired”.

Soft wiring is applied during the later stages of applied research, when technologies are looking to move into early-stage production, and balances utilitarian and deontological (duty of care) philosophies. There are four parts:

2.1 Empathy

Empathy is the ability to understand the point of view of the “other”. Alone it won’t ensure an ethical AI outcome; it forms part of a suite of approaches (mentioned in the opening section of this document).

Unfortunately, empathy isn’t natural, easy or even possible for some people due to conditioning or biological reasons, stress, or other factors like perception of business needs.

However, the good news is that technology production already has experts in capturing and communicating empathy. Their work is entirely focussed on understanding people, their needs and their motivations within context. They are:

    • User Experience Researchers and Designers
    • Behavioural scientists
    • Psychologists
    • Ethnographic researchers
    • Ethicists

These roles may already exist in house, or could be contracted to assist with project planning as well as audits/reviews. In some cases these skills are also a mindset, and non-practitioners can use them just as effectively.

2.2 Experience Led

Experience design starts with a clearly defined problem, examines all the people affected by that problem, and relies on the empathy work done to capture the differing needs of the various participants in this “ecosystem of effect” – from highly influential stakeholders right through to passive recipients who are deeply affected by the AI but have little or no influence over the decisions made on their behalf.

Experience-led design places “who” and “why” before “how” and “what”. This work aims to sufficiently understand the motivations and expectations of the different user types, not just the project sponsor or technology makers.

These perspectives also provide context for clearly defined use cases – facial recognition surveillance might be acceptable for airport security but is it for citizens travelling in public places?

Diagram 3: Simplified, conceptual diagram illustrating how different people interact with a system

An ethical AI product also needs to be holistically considered in both strategy and solution design.

2.2.1 Strategy

The following questions can be used as a framework to assist in troubleshooting potential problems in advance (a sketch of this checklist in code follows the list):

  • User Research
    • Do you know who will use this system?
    • Do you know why they would use it?
    • Is the AI product “better” than a human or existing systems?
  • Harm reduction
    • How aware or sensitive are development teams and stakeholders about the “ethical quality” of the technology?
    • Is utility v’s privacy appropriate?
    • Who owns the “duty of care”?
    • What happens if an error is made, who is responsible?
  • Trade off
    • Who benefits from this AI service, and who is left behind?
    • Is this gap acceptable?
    • Who will answer for this gap if it needs defending?

2.2.2 Solution

  • Assist comprehension
    • Is the application of perceived affordances helpful?
    • Apply “cultural conventions”, visual feedback or signifiers
    • Apply Nielsen’s heuristics
  • Validation
    • Are you testing with a variety of users?
    • Are you reviewing and applying revisions to the solution in response to testing?
    • Build – measure – learn

2.2.3 Trust

Trust is established by treating all users with dignity. Trust is also easily lost when users don’t understand the decisions made by a system.

  • Clarity
    • Is it clear to a person using the digital product/service why a decision has been made?
    • Are they sufficiently informed at their level of comprehension?
  • Right of reply
    • Is there the feeling of a standard of due process they recognise or can understand?
    • Can that person participate/engage in that due process?
  • Accountability
    • Is there a feeling that the provider of the decision/service owns responsibility for the consequences delivered by the system they have deployed?
    • Is there some form of algorithmic transparency provided (end of black box AI)?
    • What happens if the system breaks down? (power, code, data failure)

2.3 Practice

Processes and checklists don’t catch ethical nuances; only a living practice of exploring, testing and validating can. Establishing a good ethical practice encourages this to be done on every project, in pursuit of the best possible outcomes and auditability.

2.3.1 Constructive Discourse

  • Top down
    • Leaders are empowered to seek input on ethical quandaries from various sources (personal networks and regulatory)
    • Management welcomes advice from their teams
  • Bottom up
    • Development teams are encouraged to self educate and share knowledge
    • Members intelligently challenge the ethical proposition

2.4 Team Diversity

Diversity has been identified in multiple publications (IEEE, AI Now and national AI frameworks, eg France’s) as critical to reducing the biased errors AI can deliver. This applies to both development teams and users. Development teams need not only gender diversity but also cultural, cognitive, social and expertise/skills diversity. The resulting friction is deliberate, and we need to be mindful of “big” voices dominating. There are many conventional team communication techniques already in use to facilitate healthy discussion and input, so they won’t be listed here.

Credits and References

This version is a revision of http://hilarycinis.com/user-experience-design-guide-for-crafting-ethical-ai-products-and-services/, aimed at designers and product managers.

 

Ethical Technology Crafting: Part 2 Proposed Methods

The following people deserve thanks for their advice, support and input: Cam Grant, Ellen Broad, Bob Williamson, Guido Governatori, Lachlan McCalman, Mitch Harris, Liz Gilleran, Phil Grimmett.

Special thanks to Ellen for her input on power relationships and Lachlan for his on user communication.

Introduction

In attempting to set out how to establish an ethical AI mindset within the tech industry, and how to approach the production side of technology innovation that uses AI, machine learning, algorithms and the large and/or sensitive data sets they work across, I feel the role of user experience designers would be intensive early on. We are well placed to do this work, as we are already skilled in the qualitative investigation work of needs elicitation and empathy establishment.

The work would then continue through delivery, ramping up again during the solution build iterations.

The work falls into two areas: at the beginning, during “discovery”, to define context and surface power relationships; and later, when “solutions” are being implemented, to assist communication with the range of people using the systems.

The audience for this guide is User Experience Designers and Product Managers who have been tasked with assisting the development of products and systems that use data (sensitive or public) and Machine Learning (algorithms that make predictions, assist with decision making, reveal insights from data, or act autonomously), because these products are expected to deliver information to a range of users and provide the basis for contextually supported decisions.

This guide is not intended to “teach” anyone user experience methods but support those working as professional user experience practitioners and product managers. It is therefore assumed the target audience is already familiar with the methodologies outlined throughout this document.

Machine Learning computer scientists, software engineers, data scientists, anthropologists or other highly skilled technical or social science professionals are very welcome to read this guide in order to increase and enhance their understanding of user experience concerns and maybe even refer to it.

I welcome feedback from trials.

Goals

The goal of this guide is to provide a method by which teams can ensure ethical impacts are considered as standard practice when working on any digital product, service or system development.

This guide is not intended to replace legal or corporate/institutional ethics frameworks, or Australian Government personal information privacy laws, but to work within them as part of a shared approach.

How User Experience Design Fits In

The set of methods proposed in this guide shouldn’t be an overhead; it ensures best practice is applied, with an ethical lens. Each project needs its own definitions of ethical implications, dependent on the people interacting with and affected by it, the data in use, and the context in which both of these sit.

Questions we intend to answer as user experience designers are:

  • How can we ensure an ethical approach is holistically considered in both product strategy and solution design?
  • How can we capture upfront and measure the ethical implications (tradeoffs and compromises)?
  • How can we provide “perceived affordances” for trust in outcomes delivered by the product, using “cultural conventions”, visual feedback or signifiers?

Along with the typical design constraints of balancing competing business priorities, user requirements for adoption and technology pushes for innovation, there is the additional lens of “understanding social expectations and accounting for different types of disadvantage”. We need to deliver outcomes that foster and reward trust for the various user groups interacting with the product.

This means UX Designers and Product Managers need to research and capture an understanding of the power relationships between people where discrimination or manipulation could occur, understand the context where abuses can happen (intentionally or unintentionally) and create mechanisms where the risk of harm is not carried over to the parties with the least amount of power to control it.

The next section of this article proposes practical applications of UX methods.

Application of UX Methods

The techniques proposed are versions of existing methods and practices, aiming to include a specific ethical lens in the design discovery and solution exploration phases.

UX practitioners are tasked with representing a range of people interacting with digital systems in varying contexts. These systems are usually part of an ecosystem of digital solutions, and the UX practitioner’s influence may only extend to the immediate problem being tackled. Just as with “traditional” digital products and services, it is vitally important to include project teams and external stakeholders throughout, as they bring specific ethical approaches to the computer science and software engineering work at Data61.

UX work would assist teams with product strategy and user empathy where needed, while also informing the design of interactions and user interfaces for these systems. However, the insights gathered are not limited to graphical user interfaces (“human-in-the-loop” interactions). The user research and benchmarking can also feed a machine-to-machine interaction (eg a report or set of constraints articulated in a product strategy) for a software engineer or a specialised data governance expert to implement.

It’s also important to view this UX work from both the data and the algorithm angles. Data is historical, and predictions made with data attempt to accommodate certainty or confidence based on various factors. Unintentional biases occur within data collection, and cultural norms can be unintentionally built into algorithms.

Diagram 1: “Revealing Uncertainty for Information Visualization”, Meredith Skeels (Biomedical and Health Informatics, University of Washington), Bongshin Lee, Greg Smith and George G. Robertson (Microsoft Research)

This diagram helps to identify where issues can live. If any of the “levels” are questioned by the person relying on the information delivered, the credibility of the insights is diminished – or, in the case of ethics, the outcomes could be skewed.

The Humans We Are Designing With and For

User centred and ethnographic research starts with identifying and crafting questions that would become insights and design constraints for various clusters of people sharing similar goals.

Currently we see four broad user groupings, sourced from various Data61 projects, papers, articles and observations. Further ethnographic user research is required to develop the broad descriptions below in detail, and will likely open up other clusters defined by common characteristics and objectives. While they should not be relied on as “personas”, they are listed here to help quickly communicate how different people have different roles and objectives.

Simplified, conceptual diagram illustrating how different people interact with a system

Enabler/sponsor (funder or client)

  • Owns the purpose/intent
  • Communicates the value proposition
  • Has ultimate accountability
  • Would be a trustee of public good
  • Has a higher level in the power relationship

Operator (primary/active): Tech expert

  • Algorithm author
  • Data set provider
  • Data ingestor
  • Output creator
  • Trustee of application of an ethical framework in the build

Operator (primary/active): Engaged professional

  • Data custodian/provider
  • Data interrogator/informed decision maker
  • Trustee of ethical practices

Passive recipient

  • Desires a feeling of control over their own data (as defined within regional legislative constraints)
  • Has a lesser role in the power relationship
  • Is impacted or directed by data and algorithmic decisions
  • Needs access to decision rationale, right of reply and evidence (e.g. data) supporting decision rationale

It is expected that the same group of people could be any combination of these within a single project, with time or context being the differentiator; or that the same project could have different applications for groups of people with quite different goals (eg data collection, analysis, or consuming an output). This also implies there could be a power relationship between the different groups.

Usual user discovery activities (eg generative “who and why”) should always be undertaken rather than relying on this taxonomy alone.

Methods In Detail

The outcomes from these activities, as outlined below, are intended to help a development team design solutions that serve people using the proposed product or service. (They could also inform customer discovery or marketing campaigns but those are secondary considerations once fit-for-purpose has been validated.)

The application of existing good user experience research and design practices can be employed or adapted to focus on the requirements for both active users and passive recipients of a proposed system:

  • User group and motivation generation
  • Contextual Inquiry questions specific to the topic
  • Problem definition and success
  • Use case/user stories/Jobs To Be Done
  • Risk register/Red Team (think negative, go hard)
  • Testing for impact (user acceptance/usability)

It is important to include all project and development team members in this work to ensure goals are aligned and the journey of user discovery is shared by the team. Good practice user experience discovery, exploration and validation methods support this involvement so no further notes will be added here on how to engage team members or stakeholders.

1. Discovery

As part of the problem definition, user research consultations would also aim to:

  • Sufficiently understand the motivations and expectations from the different user types, not just the project sponsor or technologists.
  • Capture the level of understanding about data sets desired for their decision making.
  • How aware or sensitive are development teams and stakeholders about appropriate diversity and completeness of data sets, and methods of collection?
  • Capture the level of concern about reduction of bias in the technology.
  • How aware or sensitive are development teams and stakeholders about the “quality” of the technology?
  • What is the tolerance for compromise or risks of harm? What is an acceptable trade-off (within the legal parameters)?
  • Understand the positives and negatives of current state systems so that any digital intervention can be compared back for improvements or unforeseen harm

Some questions regarding trust building we might need to measure:

  • Is it clear to a person using the digital product/service why a decision has been made?
  • Is there the feeling of a standard of due process they recognise or can understand?
  • Can that person participate/engage in that due process?
  • Is there a feeling that the provider of the decision is owning responsibility of the consequences?

Workshop/Interview/Contextual Inquiries

This section provides question templates to help focus on ethical data use topics while avoiding asking the question directly: use typical interview-guide questions within the context of the project, acquiring insights through non-leading questions and observation.

Typically you could reframe these questions to not have a digital or data focus and include them alongside other ethnographic investigations.

    • How do we support the [operator’s] position of being a trusted party? eg How do you support trust from your clients when they interact with you?
    • How can we help you display your expertise? eg What is a typical or key activity for your expertise range?
    • How do we help build trust? eg Why would you trust this [entity]? Why would you not?
    • When using this system, how can we ensure you act with respect for public duties/interests? eg What are your organisation’s/agency’s public duties?
    • What is the proposition/problem/opportunity enabled or enhanced by the technology? eg What pain points are in your current workflow/system?
    • Who are the individuals affected by it? eg Who benefits from the decisions you make in your role? Who is left behind?

2. Solution Design

The insights collected would be folded into domain expertise for product/service design strategies.

Product strategy assumptions

Some questions used to define the strategy of the product or service, reframed as hypotheses for testing, could be:

    • How might people change their behaviour as a result of your technology? eg Decreased antisocial behaviour, increased paranoia, online alternative personality development
    • What world views could be changed? eg Govt dept reputation, beliefs about safety
    • How could relationships between groups of people be affected? eg Trust, communication
    • What would happen if the technology failed? eg Complete breakdown, partial breakdown, hacks
    • How can we avoid harm from the planned operations? eg Un/intentional discrimination, unmitigated production and processing of data, iterative use over time removed from the original intent

Product or service strategy

Using the UX research, set direction and benchmarks for validation. It is highly recommended to work through the Data Ethics Canvas with the stakeholders and development team. User experience research is critical in capturing the perspectives of affected parties outside the project team. Culturally diverse ethics considerations cannot and should not be made by people who are not part of that particular cultural group.

Context

As directed by user research or domain expert assumptions.

Baseline for validation activities eg usability testing or UAT

As directed by user research, domain expert assumptions and product/service strategy.

Heuristics

The following are adapted from (and in most cases still include) Nielsen’s 10 heuristics for user interaction and interface design, and relate to any UI that a human operates. (Heuristics for machine-to-machine interactions are not included here.)

They would assist in the visual communication of the trust “signifiers” identified in the earlier research.

  1. Visibility of system status

    – The system should always keep people informed about what is going on, through appropriate feedback within reasonable time. Provide visibility of the data in use (within privacy-preserving constraints).

  2. Match between system and the real world

    – The system should speak the language, with words, phrases and concepts familiar to the person using the system, rather than system-oriented terms.
    – Follow real-world conventions, making information appear in a natural and logical order.

  3. User control and freedom

    – People often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
    – People can request access to methods used by algorithms and data that affects them for explanations and rationale.
    – People can request a copy of their data in a format or way that is in line with data privacy and access laws
    – People can withdraw their data, in line with data privacy and access laws
    – People can edit or update their data, in line with data privacy and access laws

  4. Consistency and standards

    – Currently there are no global standards for ethical ML. Law, regulation and inclusive/empathetic practices ought to set standards particular to the project. Trade-offs are an important consideration, which would make standardising difficult. Other “local” standards could be:
    – People should not have to wonder whether different words, situations, or actions mean the same thing. Establish a common vocabulary.
    – Provide glossaries and alternative explanations

  5. Error prevention

    – Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present people with a confirmation option before they commit to the action.
    – Request a revision of an outcome.
    – Run a test across a snapshot or subset for human-in-the-loop checks.
    – Describe the range of uncertainty in predictions, the data the predictions are being enacted on, and the associated risks if acted upon (see the sketch after this list)

  6. Recognition rather than recall

    – Minimize memory load by making objects, actions, and options visible.
    – People should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
    – Provide a proxy or synthetic alternative for private data sets.

  7. Flexibility and efficiency of use

    – Accelerators — unseen by the novice skill set — may often speed up the interaction for the expert skill set such that the system can cater to both inexperienced and experienced skill sets. Allow people to tailor frequent actions.
    – Provide alerts for any impacts tailored shortcuts may incur, eg skipping a feature-matching step may result in mistakes if the schemas across two data sets aren’t identical, but the expert user has set up shortcuts because the schemas usually match.

  8. Aesthetic and minimalist design

    – Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
    – Provide dialogues in context with the activity. This could include the system “understanding” the goal, rather than being a passive tool.
    – Use visualisation to ease cognitive load and lift comprehension

  9. Help people recognize, diagnose, and recover from errors

    – Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.

  10. Help and documentation

    – Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the task or purpose, and easily scanned.
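Picking up the uncertainty point from heuristic 5, here is a minimal, hypothetical sketch – the field names and the 0.8 coverage threshold are illustrative assumptions, not a standard – of surfacing a prediction together with its range and a plain-language caveat, rather than as a bare number:

```python
# Hypothetical sketch: present a prediction with its uncertainty range
# and a plain-language caveat, instead of a bare point estimate.
from dataclasses import dataclass

@dataclass
class Prediction:
    value: float          # point estimate, eg a risk score
    low: float            # lower bound of the confidence interval
    high: float           # upper bound of the confidence interval
    data_coverage: float  # share of required input data that was present

def describe(p: Prediction) -> str:
    msg = (f"Estimated score: {p.value:.2f} "
           f"(likely between {p.low:.2f} and {p.high:.2f}).")
    if p.data_coverage < 0.8:  # illustrative threshold
        msg += (" Caution: this estimate is based on incomplete data "
                "and carries a higher risk if acted upon.")
    return msg

print(describe(Prediction(value=0.62, low=0.45, high=0.79, data_coverage=0.7)))
```

The point is the shape of the message, not the numbers: the person relying on the output sees the range and the data caveat at the moment of decision.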

Validation – Solution Design

Using standard usability methods, design and run tests against all assumptions made in the preceding steps:

  • Product or service strategy
  • Context
  • Baselines
  • Heuristics

References and Further Reading


 

Ethical Technology Crafting: Part 1 Purpose and Position

The following people deserve thanks for their advice, support and input: Cam Grant, Ellen Broad, Bob Williamson, Guido Governatori, Lachlan McCalman, Dr Matt Beard, Mitch Harris, Liz Gilleran, Phil Grimmett.

Special thanks to Ellen for her input on power relationships, Lachlan for his advice on user communication and Matt for his advice on ethical schools of thought.

Introduction

Computer scientists, software engineers and academics are currently carrying the load of responsibility for the ethical implications of applied AI (Artificial Intelligence). I strongly believe this issue belongs to a wider group – namely development teams and their parent organisations – and it turns out I’m not alone: leading think tanks also suggest diversity is key to reducing the risks associated with automated decision making, with “designers” called out specifically to address these potential breaches of trust. I am assuming “designers” means teams of developers, data scientists and product people as well as actual designers.

Let’s start with a wider concern: how often AI is described as having its own agency. This emerging separation of technology from people is alarming, considering it is people who are making it. The language often used implies a lack of control. This is why it’s important not only to have cross-discipline teams making tech, but also to keep communicating this process to teams, customers, clients and society, so the mental model of humans + AI is adjusted away from this notion of the “other” having its own agency.

Gorignak – a kind of golem: something that acts with the intent of its creators and has no consciousness. In this case it’s out to mash Commander Peter Quincy Taggart (Galaxy Quest).

Ethics

When we discuss technology and ethics, the conversation can flip over to philosophy very easily. This discussion is an important part of establishing the values your organisation and its products or services adhere to.

It can make things easier to have a little ethical philosophy education. I’m by no means a trained ethicist, but as an armchair enthusiast here is my quick reference as a starting point.

There are two classical schools of ethical thought: utilitarian, which focuses on outcomes (“it’s for the greater good”), and deontological, which focuses on duty and the “rightness” of the act itself.

The town council of Sandford (Hot Fuzz) weren’t concerned about their ruthless acts; it was the outcome for the greater good that mattered.
John McClane (Die Hard) was driven by duty and doing the right thing each step of the way, saving the hostages in Nakatomi Tower without a clear plan and with a high risk of failure.

Along with these there is an extensive list of post-modern and applied ethics, including “care ethics” (aka feminist ethics), where caring and nurturing are held as virtues. This approach accommodates what designers are familiar with: people are messy, they reject a lack of control over their lives, and context is key.

My colleagues at Data61 are regularly writing and speaking on this topic; see the references at the end. There are also a lot of emerging philosophical writings that attempt to redefine ethics for humanity. While I find these inspiring, I’ll be clear that this article is not attempting to create a new field of ethics, but to adapt theory into practice in our work as technology makers.

From what I understand, computer scientists and engineers are currently required to take a utilitarian approach due to the nature of software coding. I am not well placed to explain this, but I feel that designers working through the qualitative investigations of need with a deontological and care ethics lens can then assist engineers in translating that into utilitarian applications that are compatible and appropriate.

For example, if a numerical value has to be placed against a trade-off, what is that amount? Is a 10% risk of harm acceptable if 90% have an improved outcome? A client most likely isn’t going to answer that directly, but we can elicit a desirable position on an acceptable trade-off using typical qualitative UX methods during discovery, and then communicate that risk during solution design.
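As a minimal, hypothetical sketch of making such a trade-off explicit for discussion (the population size, harm weighting and decision rule are illustrative assumptions, not a recommended policy), the elicited position can be written down as a simple calculation:

```python
# Hypothetical sketch: make an elicited trade-off explicit as numbers
# so it can be discussed, tested and audited, not left implicit.
population = 10_000          # illustrative affected population
p_improved, p_harmed = 0.90, 0.10

harm_weight = 3.0            # elicited: one harm outweighs three improvements
net_score = population * (p_improved - harm_weight * p_harmed)

improved = population * p_improved   # 9,000 people better off
harmed = population * p_harmed       # 1,000 people worse off

print(f"{improved:.0f} improved vs {harmed:.0f} harmed; "
      f"weighted net score: {net_score:.0f}")
# A negative score under the elicited weighting would flag the
# trade-off as unacceptable and send the design back for rework.
```

The arithmetic is trivial; the value is that the harm weighting is a recorded, contestable artefact of the discovery work rather than an unstated assumption.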

Why have an ethical design framework for user experience and product design?

“To ensure autonomous and intelligent systems (A/IS) are aligned to benefit humanity A/IS research and design must be underpinned by ethical and legal norms as well as methods. We strongly believe that a value-based design methodology should become the essential focus for the modern A/IS organization. Value-based system design methods put human advancement at the core of A/IS development. Such methods recognize that machines should serve humans, and not the other way around. A/IS developers should employ value-based design methods to create sustainable systems that are thoroughly scrutinized for social costs and advantages that will also increase economic value for organizations. To create A/IS that enhances human well-being and freedom, system design methodologies should also be enriched by putting greater emphasis on internationally recognized human rights, as a primary form of human values.”
~ The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, Methodologies to Guide Ethical Research and Design

“New ethical frameworks for AI need to move beyond individual responsibility to hold powerful industrial, governmental and military interests accountable as they design and employ AI… When tech giants build AI products, too often user consent, privacy and transparency are overlooked in favor of frictionless functionality that supports profit-driven business models based on aggregated data profiles… Meanwhile, AI systems are being introduced in policing, education, healthcare, and other environments where the misfiring of an algorithm could ruin a life.”
~ AI Now 2017 Report

We ought to aim for a defined ethical practice rather than defining what an ethical product is. This will help us discuss and evaluate engagements that align with our business values and social impact goals. The interpretation of an “ethical framework” at Data61 could be a system that “provides transparency, interpretability, due process and accountability through understanding the issues of power, control and potential harm to individuals, communities and business”.

I believe a discussion about the potential harm risks and thresholds of trust ought to happen each time a new product is initiated and throughout the production and maintenance of it. This evaluation would work with this top line statement of organisation values as well as the more contextual values gathered during the discovery work to set baselines for testing and an audit trail.

Multidisciplinary Teams

Inclusion of designers and product managers reduces the risk of biases by virtue of their own particular lenses. Along with personal experience, the best evidence I can find for a wider, shared approach to this problem is stated in the AI Now Report 2017:

“The AI industry should hire experts from disciplines beyond computer science and engineering and ensure they have decision making power. As AI moves into diverse social and institutional domains, influencing increasingly high stakes decisions, efforts must be made to integrate social scientists, legal scholars, and others with domain expertise that can guide the creation and integration of AI into long-standing systems with established practices and norms.”

“Ethical codes meant to steer the AI field should be accompanied by strong oversight and accountability mechanisms. More work is needed on how to substantively connect high level ethical principles and guidelines for best practices to everyday development processes, promotion and product release cycles.”

The recently released IEEE A/IS standards report also lists the importance of cross-discipline, top-down and bottom-up cultural shifts to bring an ethical mindset to technology organisations.

Application of an ethical practice becomes operationalised as constraints for project delivery. This interpretation would also inform other parts of the business as acceptance criteria for a client or market facing product engagement before reaching project delivery stages.

Each project needs its own definitions of ethical implications, dependent on the people interacting with and affected by it, the data in use, and the context in which both of these sit. These questions, and the work to discuss and answer them, are owned by all parts of the business, not just engineers and designers.

At Data61 we are fortunate to have an Ethics Group to help us work through harm mitigation.

“Trust is often a proxy for ethics” (Dr Matt Beard), and the main risk of harm and trust breaches sits with data, especially highly valuable and highly sensitive PII (personally identifiable information). The more private the data, the higher its utility, and the higher the risk of trust breaches or harm from insights landing in the wrong hands (either deliberately or accidentally). There are other data sources, like sensor-collected data (eg air quality), and these would also benefit from the usual questions of what insights are being generated, for whom and for what purpose. For example: is particulate matter data being used to assist asthma sufferers, or is it being collected to penalise air polluters?

Discussion is necessary with all parts of the business – not just the designers or developers – along with a strong understanding of the legal position around the data, its intended use, how it is collected/sourced/stored, and what decisions will ultimately be made from it.

Conclusion

This article explains why ethics is important in technology creation and who is responsible for that work.

I also propose that User Experience Designers are well positioned to contribute to these outcomes by virtue of their specialist skillset in qualitative research, ability to communicate with empathy and skills in synthesising these “soft” insights into viable and testable solutions.

Please read Part 2: Proposed Methods for more on this.

Further Reading

 

User Experience Design Guide for Crafting Ethical AI Products and Services

This page leads to a series of related articles about the human-centred design contribution to automated decision systems with ethical outcomes.

The audience for these articles was initially the Data61 User Experience Designers and Product Managers who have been tasked to provide assistance on development of products and systems using data (sensitive or public) and Machine Learning (algorithms that make predictions, assist with decision making, reveal insights from data, or act autonomously), because these products are expected to deliver information to a range of users and provide the basis for contextually supported decisions.

However it’s always been hope that a wider UX and Product audience will find it helpful. 

Machine Learning computer scientists, software engineers, data scientists, anthropologists or other highly skilled technical or social science professionals are very welcome to read this guide in order to increase and enhance their understanding of user experience concerns and maybe even refer to it.

Crafting Ethical AI Products and Services Part 1: Purpose and Position
This article looks at the reasons why an ethical mindset and practice are key to technology production, and positions the ownership as a multidisciplinary activity.

Crafting Ethical AI Products and Services Part 2: Proposed Methods
This article is a set of proposed methods for user experience designers and product managers working in businesses that are building new technologies specifically with machine learning AI.

Softwiring – The role of Human Centred Design in Ethical Artificial Intelligence
This is a keynote converted to an article, and is more of a summary of the potential for UX to make meaningful contributions and have impact in ethical technology (D61+Live 2018 keynote).


Which and What Data, When?!

Updated April 15, with thanks to Georgina Ibarra for proofreading and edits, and David Anning for links to the UN and Forbes.

I’ve noticed a common reaction to the word “data” when observing commentators delivering news stories or politicians evangelising the benefits of open data initiatives. While some of us implicitly understand data use in context from our domain expertise and regular exposure to the varying types of data (including how hard it is to get at times) generally speaking, people get freaked out because they assume the worse.

Granted, there are nefarious types out there collecting and selling personal details that they shouldn’t, and this is sort of the point – to educate people about the data in use in a way they can grasp easily. Once we remove this knee-jerk reaction to the word data, people can focus on what they can do with data rather than what someone else might do to them with it.

I was at the KnowledgeNation event at the ATP yesterday, and this hit home when Angus Taylor (Assistant Minister for Cities and Digital Transformation) talked about the “open data” initiative underway. After he finished his speech, the first question from a member of the audience was about citizens’ personal details being released. He answered it expertly, of course, but at first I was quite astonished at the leap the audience member made from “open data” to “personal data”. Afterwards I thought: should it be that astonishing, considering the vast ocean of “data” out there and how little most of us know about it?

So that got me thinking – how can we provide clearer descriptors for data that deliver an expectation of use and immediately set the tone for the ongoing discussion? As a user experience professional I see this as a responsibility and am now embarking on a proposed solution to try it out.

Like Eskimos have with snow, we might need more words for data or be more conscious of the type of data we are referencing when we talk about it (and when we talk about the stories we tell with data).

I think we’re all in agreement that the term “big data” is vague and unhelpful so I’m making some suggestions to introduce a commonly used vernacular for different types of data:

  • Private data – the citizen owns it, gives permissions, expiration times, and it’s protected from any other use
  • Secure data – sharable but with mind blowing encryption
  • Market data – anything used to sell products to you
  • Hybrid data – some kind of private and non-private mix
  • Triangulated data – those seemingly harmless sets that can be used to identify people
  • Near-to-Real Time data – because real-time is rarely actually real-time
  • Real Time data
  • Legacy data – old stuff
  • Curated data – deliberately created data sets serving a single purpose
  • Active: Photos, Videos, Searches (search terms) Communications (email, text, comments, blogs)
  • Passive: Health, Financial, Spending, External environmental, Domestic environmental, Location, Logs

Examples in use could be –

“Google are tracking your Real Time Location data when you use maps”

“The Australian Open Data initiative makes Curated data from the ABS available”

“Private and Secure Financial data will not be shared with any third parties”
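
To make the idea concrete, here’s a minimal, purely illustrative sketch (in Python) of how these descriptors could travel with a dataset as machine-readable labels. The `DataType` and `Dataset` names here are my own hypothetical inventions, not any existing standard or library:

```python
# A hypothetical sketch: encoding the proposed data vernacular as labels
# that travel with a dataset, so the expected use is explicit up front.
from dataclasses import dataclass
from enum import Enum


class DataType(Enum):
    PRIVATE = "Private"                   # citizen-owned, permissioned, protected
    SECURE = "Secure"                     # sharable, strongly encrypted
    MARKET = "Market"                     # used to sell products to you
    HYBRID = "Hybrid"                     # private and non-private mix
    TRIANGULATED = "Triangulated"         # harmless-looking sets that identify people
    NEAR_REAL_TIME = "Near-to-Real Time"  # because real-time rarely is
    REAL_TIME = "Real Time"
    LEGACY = "Legacy"                     # old stuff
    CURATED = "Curated"                   # deliberately created, single purpose


@dataclass
class Dataset:
    name: str
    labels: list[DataType]

    def describe(self) -> str:
        """Compose a plain-language descriptor from the labels."""
        qualifiers = " and ".join(label.value for label in self.labels)
        return f"{qualifiers} data: {self.name}"


# Mirroring the example sentences above:
location = Dataset("Google Maps location trace", [DataType.REAL_TIME])
abs_stats = Dataset("ABS statistical extracts", [DataType.CURATED])
finance = Dataset("customer transactions", [DataType.PRIVATE, DataType.SECURE])

for dataset in (location, abs_stats, finance):
    print(dataset.describe())
# Real Time data: Google Maps location trace
# Curated data: ABS statistical extracts
# Private and Secure data: customer transactions
```

The point of the sketch is only that the qualifier is attached to the data itself, so any discussion (or news story) starts from a shared expectation of use.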

At Data61 we are face to face with this too, so it will be part of our UX work to discover patterns in attitudes and communication.

I’m currently investigating this idea, and I’d love your thoughts! Is there anything published, either academically or otherwise that might have attempted to do this already?

Refs: http://www1.unece.org/stat/platform/display/bigdata/Classification+of+Types+of+Big+Data

Review of NICTA/Data61 Year 4

Well, 2015 was an interesting year to say the least. I won’t go into the funding stuff (it’s easy to read up on); the upshot is that we merged with, or were absorbed by (but definitely co-branded with), CSIRO. NICTA is now Data61.

I worked really hard this year at ensuring user experience is understood and utilised to the best of our capacity, especially with the potential increase in demand from the merger. The business is also undergoing the inevitable restructuring that follows such a change, so there has been a lot of opportunity to take advantage of and get things off on the right footing for 2016.

But the really exciting thing for me is the impact on the User Experience team. We’ve now grown to eight and are in Sydney, Canberra and Melbourne.

There has been a massive increase in our expertise and time spent in early stage (discovery and exploration) work with external clients. This has been satisfying, resulting from a lot of evangelising and demonstration of capability, and probably also from general interest in it from the market. While a lot of the work has been for Federal and State government, banking and the energy industries are also included. These activities are also great strategic partnership starters, and they have assisted in forming some solid relationships with key partners.

My team are a diverse and flexible bunch and I can only admire them as they teach me so much and deliver so consistently.

Some of our projects run for several years, rolling over from year to year, but in total (old and new) we worked on 27 projects:

  • 5 platforms
  • 7 early stage and discovery
  • 8 products for clients
  • 7 products for spinouts/internal startups

On these projects plus others, we:

  • surveyed 173 users
  • interviewed 76 users
  • ran 49 usability tests
  • facilitated 23 “Discovery” workshops with 414 attendees
  • ran 1 “Exploration” workshop with 12 participants

Things I learnt this year:

  • UX design is not the same across every organisation, so don’t apologise for doing things differently if it’s how your business needs to be served.
  • Design leadership is best treated as its own design problem/opportunity. Ultimately we all need to figure this stuff out on our own. The number of people in your wider professional circle that you can rely on for counsel (or even just returning a message) is a lot smaller than you think.
  • In the same way a client would go to a particular design agency for a certain kind of job, each designer in a team has their own special skills suitable for certain projects, client needs and team dynamic.
  • UX needs champions within business development and engineering to gain traction. Keep an eye out for them, chat with them, learn from them.
  • Preparation needs patience … a future state proposal can take a long time to bear fruit… and don’t expect it to be the fruit that was on the label.
  • Definitions of success are best reviewed in terms of outcomes, not entirely the originally proposed steps or operational requirements.


Data61 (exNICTA) Design Methodology

For years I’ve attended conferences, read articles and seen presentations about the unquestionable importance of research-led user experience design. Assumptions and imagination were downgraded as unworthy practises. We hear how research saves time in the long run, provides focus and yields hidden gems; it helps gain buy-in from teams and creates strong foundations for the decisions made, so development can go ahead with fewer challenges. And yes, it does.

But as a sole approach, this is bullshit. In my pretty long career experience, hypothesis- and subject-matter-driven design is as much a contributor to success as research. The trick is not to rely on just one mode of operating; how much you do of each fluctuates depending on the work and the domain.

It’s OK to make stuff up and see if it flies. I suspect everyone does it.

I’ve worked quite successfully using this methodology, and I’ve been studying and experimenting with it specifically for the four years I’ve been at NICTA. I think I have a nice model emerging that explains how and when to combine domain/hypothesis-driven design and insight-led user-centred design as related frameworks, each with its own specific cadence. The model accommodates vision, research experiments, business requirements, customer inclusion (participatory design), agile solution design, and testing and validation.

NICTA project taxonomy and relationship sketch

The current explanation of the methodology is itself the result of a long-term design project: reviewing and analysing the 50+ projects over four years that have had UX and design contributions.

Initially I was attempting to categorise projects into what looked like a taxonomy, to assist with setting design requirements and team expectations, but found too many differences. There were too many crossovers of activities, and the neat progression of design work and deliverable stages that we humans weirdly need as a project matures into a product simply didn’t exist.

What I took as project-level design failures were actually fine. There were simultaneous activities happening. It was messy, but there was a pattern. Our time to market is quite long, so my impatience was getting in the way. It all just needed reframing to make it measurable, understandable and then communicable.

Early sketch of cadence relationships

The way forward was not “science fiction” design or UCD, not a middle ground, nor an evolution from one to the other, but both simultaneously and, most importantly, an understanding of their own cadences and benefits. The way forward was not a focus on outcomes but on the activities that deliver them, and therein the pattern emerged.

We have a dual pathway that helps us map and explain our activities towards designing for future audiences.


Domain Driven / Hypothesis Led Design

This is about exploring a future state technology using hypothetical design and team subject matter (domain) expertise.

This provides the project benefit of tangible testing and validation artefacts early, as well as maintenance of project momentum: teams, sponsors and customers get bored (or scared) when they aren’t seeing ideas manifest via design, so it gives evidence of delivery to customer stakeholders and/or the project sponsor.

Another benefit is that technical innovation opportunities are unencumbered by current problems or conventional technical limitations. If you are designing for a five-year future state, you can assume technology will have improved or become more affordable.

Also part of this is sales-pitch-type work, where a concept is used to engage customers, so there is a clear business engagement benefit.

Typical design activities include:

  • User interface
  • Information architecture
  • Interaction design
  • Co-design with customer and team
  • Deep thinking (quiet time) and assumptions about needs for proposed audiences (yep, pretend to be that person)
  • Sales pitch concept designs
  • Infographic or other conceptual communication artefacts

Learnings:

  • Make no apologies for solutions being half baked or incomplete.
  • Continually communicate context and the design decisions because everyone imagines wrong stuff if you leave gaps.
  • Shut down any design by committee activities, relying instead on small bursts of subject matter expertise and vision leadership from the project sponsor. Two chiefs are a nightmare just as unstructured group brainstorms are.
  • Keep vigilant about eager business development folks selling an early delivery of work that is technically not feasible, has no real data (i.e. only sample data) or relies on unfinished academic research (immature algorithms). This is especially problematic when dealing with government, who expect what they were pitched to the pixel and, because it looks real, think it’s not far off from being built.

Insight Driven Design

This is about solution design using user research.

The project benefits are that the insights inform solution design and assist with maintenance of project focus (reduction of biases and subject matter noise).

If you are working on short term solutions for a customer while journeying to a blue sky future state, then this work assists in delivering on those milestones.

When there is user inclusion, it helps with change management (less resistance to changes within current workflows). It provides evidence of delivery to customer stakeholders and/or the project sponsor, and short-term deliverables can assist in securing income and/or continued funding.

Typical design activities include:

  • Discovery workshops
  • Contextual inquiry interviews
  • Establishing user beta panels
  • Concept testing
  • Surveys
  • Usability testing
  • Metrics analysis

Learnings

  • Analysis paralysis is a killer. User research can be called into question once customers/teams twig to the fact that they don’t know anything about users; they will then expect large amounts of research to back decisions, thereby inflaming the issue with more information but not getting any more useful insights.
  • Unclear objectives for user research produce unclear outcomes.
  • Poor recruitment leads to poor outcomes (e.g. expectations that the designer(s) can just “call a few friends” for interviewing and/or testing).

Cadences

The cadences mentioned earlier refer to the time frames in which hypothesis- and insight-driven design operate.

  • They are in parallel but not entirely in step.
  • They cross-inform as time goes on, typical of divergent/convergent thinking. New opportunities or pivots will create a divergence; results of interviews and testing will initiate a convergence.
  • Hypothetical cadences are short and may or may not map to development sprints.
  • Insight-driven cadences are longer, and may span several hypothetical cadences.
  • Ethnographic/behavioural science research projects are of a longer cadence still, and ideally would feed insights into, and take insights from, the previous two. I’ve not covered this here as it’s not my area of expertise.

The graphic below illustrates this.

D61XD methodology diagram

This final outcome is the result of revisiting four years of projects with the current NICTA UX Design team, using discovery and design thinking activities.

NICTA UX is: Hilary Cinis, Meena Tharmarajah, Cameron Grant, Phil Grimmett, Georgina Ibarra, Mitch Harris and Liz Gilleran.


The art and democratisation of digital experience design

Last week I was invited by BlueChilli to do a short presentation at their developer and designer offsite on the topic of “What makes a standout user experience through design in the digital space?”

In trying to answer this question I found myself really struggling to qualify anything significantly applicable, for a few reasons: as the last thing on a Friday arvo, they’d probably already been shown lists of “should do’s”, models, reading references and exercises.

I was also struck that, in total honesty, I don’t think there is any answer to this question that isn’t highly dependent on a huge range of things. It’s simply too broad to handle in 30 mins plus questions.

So I flipped it away from methods and to the other side – the experience, intuition and innate skills we develop over time and from working with others.

My hope was to provide the space, permission and confidence to rely on each designer’s unique skills, and to show how to handle making mistakes on the way to the ‘standout user experience’.

Basically there is more than science to good experience design –

  • there is an artist’s ability to make a leap using imagination, and also the artist’s confidence in experimentation (which is also scientific, but we don’t always have the frameworks to run a ‘proper’ experiment)
  • there is the team: a team holding a shared and informed user experience frame of mind, working collectively with respect for expertise, is also fundamental to good experiences.

The presentation is below. Skip to slide 6; everything prior is a bit of background about me. It’s not a particularly long presentation, and I spoke a lot in each section about the experiences that have led me to this.

Review of NICTA year three

My review of year three at NICTA is a bit overdue.

Team growth

In 2014 I was alone; now, midway into 2015, we have a team of six user experience designers at NICTA. There are four seniors, one mid-level and myself (Principal). Four of us are in Sydney, one is in Canberra and one is in Melbourne.

Meena and I in the design lab in Sydney

We are strong end-to-end designers capable of running a project from inception through to front end solution and handover. Mostly the work is a mix of hypothesis and insight derived, and we walk a line between pitch and user informed.

We are comfortable with failure and mistakes, and everything is as lean as possible. We don’t bother trying to define what ‘lean UX’ is; we get it and get on with it. In fact, none of us are documentation people; we have always been collaborative and are very comfortable with the shifting frontier we face daily, understanding that we take the teams with us on our exploration rather than directing them. I have really enjoyed hiring the people we now have, and selected them for this attitude along with the complementary skills that round out the team.

Successes in 2015

Design is notoriously hard to measure, even more so when working on the high number of experimental projects we do. In review I see the successes as outlined below.

  • Increasing requests for assistance, with involvement earlier and earlier in engagements
  • Acceptance of pushback on front-end solution design deliverables (aka “can we get a mockup of…”) as an appropriate measure when projects have high levels of uncertainty in research findings, users and value proposition, allowing more time for investigation and validation set-ups.
  • Designing and facilitating “discovery workshops” to evaluate business opportunities within an industry or government services sector.
  • Direct contribution to descriptions of design approaches and outcomes for schedules of work in contracts and term sheets.
  • Increasing involvement in “non-core funded” projects (i.e. billable projects)
  • UX and design being used as a key to accessing high profile projects within key digital government activities (yes this is deliberately cryptic) that are non-core funded.
  • Significant contribution to the creation of two platforms upon which we can swiftly create instances to support projects without constant reinvention of the wheel. These are not a product suite, a style guide or a pattern library; they are true platforms with deeply engaging content and APIs, using open data. They deliver (and showcase) NICTA research, business, engineering and design. One is now live – terria.io; the other is still under wraps.
The UX space at TechFest 2015, Sydney Innovation Park
  • Placement of UX as a prominent capability at the annual technology showcase TechFest 2015, with workshops for startups and kids, and a working wall to discuss digital design methods.
  • UX designers featured on the new NICTA website home hero module.

Along with this have come a large number of compliments and experiences that reinforce our worth within the business. Larry Marshall, CEO of CSIRO, has mentioned user experience on several occasions when presenting the future of NICTA with CSIRO.

Learnings

Along with the team growth and successes, it’s been really great to reflect on what I’ve observed and learned in the last 18 months.

  • We have room to experiment and explore with adapted and new approaches. Frameworks have emerged, but there is never a set process; we constantly review and improve them as we go. For a long time I felt I was the only person at NICTA not experimenting and exploring, but during some time spent reflecting (and not obsessing about the negatives) on my annual leave at the end of 2014, I realised that, actually, it’s always been that way. I am now passionate about promoting and defending that culture.
  • Space and time to think deeply is really important. Which is directly related to the next point…
Running my UX for Startups workshop
  • Saying “no” to requests for help is really hard. Every new project sounds cool from the outset, but I think we can do a better job at choosing what to work on… Given that most of the work we do is similar, if not the same, as startup incubation, my philosophy is that it’s OK to leave folks to their own devices and act as educators/consultants during the valleys between intensive UX/design activity peaks, and/or to reassess whether our involvement is needed as we go.
  • Expectation management is a fluid landscape that requires constant vigilance. Never assume anyone in the room is in the same place as you – customers, stakeholders, team members, business or communications/marketing. Context setting needs to be done each and every time; open team communication is needed at all times. It’s everyone’s responsibility to talk and confirm where they’re at.
  • Constant context switching and learning new domains is exhausting. EXHAUSTING. At NICTA we designers need to deeply understand the users and domains, which are usually highly technical and very specific. People do PhDs on these domains, so we are faced with a mountain of learning at the start of any project, often in parallel with another intense project (see the comment above about saying “no” more…). I’m not sure how to mitigate this… I’m open to suggestions! Dr Adam Fraser’s The Third Space is helping me put a reference together to monitor and head off burnout threats.

I’m unsure what lies ahead with the proposed merger. It’s nerve-wracking, and the chance to devise new strategies to engage with it is both tricky and exciting. The team are directly contributing and I look forward to seeing how it plays out. Tell you next year!