Leading Design Conference 2019

In November this year I had the good fortune of attending the Leading Design Conference, hosted by Clearleft in London, UK. I'd not attended this conference before, so I was really excited about what I might walk away with.

Take a look at the speakers and topics: https://leadingdesign.com/conferences/london-2019

The New Design Frontier Report

InVision had commissioned and provided a report, "The New Design Frontier Report", that has a useful breakdown of the current state and trends around:

  • People: Team, Key partners, Executives and employees
  • Practices: User Research, Design Strategy, Experimentation, UI Design
  • Platforms: Design Operations, Design Systems

I think the breakdowns in the report can definitely help with figuring out where a team is at and also how to measure what’s appropriate for an organisation. Download it and take a look: https://www.invisionapp.com/design-better/design-maturity-model/

Conference themes and takeaways

It might help if I share what I was hoping to get from the conference. At the time of booking the tickets I was heading up the UX practice at CSIRO’s Data61. By the time the conference happened I had already moved to an IC role at Salesforce in Experience Architecture. (My reasons for the role change are many and complex, and you’ll see some evidence of that in what I got from the conference.)

Initially I was looking for some well-informed insights about how to run a design team in a consulting organisation, especially one working in true innovation – making new technologies. I still think the conference content was relevant for my new role, as I’m setting up design capabilities for clients, leading designers on projects and establishing a design community of practice within Salesforce. As I have some residual anxiety from my time at Data61, I was also seeking some validation that all the hard decisions, initiatives and learning from failures had had some lasting impact for the business, my old team and myself. Lastly, I was looking for guidance on solid metrics for measuring a design team’s success that weren’t NPS, CSAT or usability and adoption measures at a specific project level.

The themes throughout the conference presentations echoed the report regarding people, practices and platforms. Anyway, my takeaways are:

There is no design leadership playbook
We are all making it up as we go, dependent on the businesses we are in. Most presentations were from design leaders in product organisations and consulting agencies, with no R&D and little enterprise represented. This was eye opening because you’d expect more maturity in leadership patterns from product and consulting. It appears design maturity is uneven across organisations and not necessarily tied to team size or the age of the business.

Design as facilitator of change
Either up front or behind the scenes, designers are the agitators of change. Design is a risk-taking activity, but if led well it can be a holistic and inclusive activity. Leveraging the democratisation of design can assist in uplifting the role of designers through awareness. Probably not new news, but silos were called out as innovation killers. When considering digital transformation, it’s easy to forget that digital and associated technologies are incidental and that transformation is an organisational and behavioural change. Transformational leadership may come from a non-digital or non-design background, eg Sales, if financials are key to change. And if it’s driven by an external lead like a consultant, it needs to be very deliberately co-created with internal stakeholders. We have to be very careful of creating an “us and them” dynamic, as this can affect confidence in any high-risk work. We need to be wary of unproductive coalitions or our own cultural silos. If we think about transformation goals without considering the complex adaptive systems we work within, this can result in unhappy people outside of the delivery team – the very people we are aiming to transform a business for.

Design is still an unknown capability
Design suffers from a lack of confidence in selling its value. But the key is remembering that design remains fluid – things are moving fast, problems can’t be solved by approaches that are fixed in time, and we need people who can see patterns. Designers have this capability, especially those who can make a call in the absence of information and take it forward iteratively.

Measuring success
Perhaps if we work on answering why we can’t articulate the value of design, this could lead to establishing that value, constrained by the organisation we are working in. Other tips were to start every new engagement with customer data to set business goals and benchmarks. It’s important to make sure design outputs and design management occur in tandem, as operation is the key to success – this means delivery. We need to be intentional in understanding what we are measuring and why, as there can be conflict between what we need to measure vs what we can measure. There is some evidence that companies that link non-financial measures to value creation are more successful.

Be patient, impact and changes take longer than we think
My biggest flaw is impatience: I underestimate how long change can take and often get demotivated as a result. I really needed to hear this tip. Related to this is another gem: we simply cannot communicate too much. Lastly on this theme, we were reminded that we can’t win everything and should be prepared to cut losses. Two years into your leadership, expect a crash, because failure is inevitable and recovery is hard.

Intentional team design
Design teams need to be thoughtfully created with a sense of purpose and longevity from the outset. Organic, unfocussed growth can lead to problems for leaders and organisations down the track. We saw many presentations of team structures and skills profiles and how they change over time as teams grow. A clear pattern was the establishment of sub-teams and leadership roles earlier rather than later, and splitting capability broadly into design ops and strategy. This aligns with the delegation topic below. Well-structured collaboration was cited as a positive way to create team goals and rituals, including career paths.

This is an area where I have had my hardest lessons. Not having any kind of “playbook” means this knowledge is hard to come by without trial and error, but it is also very difficult because most teams evolve in response to demand, and their leader is usually a strong performer who has some natural relationship skills. I feel the design mentoring network could do with some work now that there is an emerging class of experienced technology design leaders.

Ruthlessly prioritise, delegate as much as possible
Many presentations were internal facing and focussed on building team cultures. Once you’re a design leader your time in the field needs to be limited as energy goes to leading for impact and supporting your team. Also, we were cautioned to be authentic from the outset because faking it until you make it is exhausting.

Building trust and relationships
It is critical we understand our audiences – our teams, our internal partners, our executive. Speaking their language and delivering as promised is key.

The insights I’ve captured were from presentations and workshops by:

  • Maria Giudice — Founder, Author, Coach, Hot Studio
  • Kristin Skinner — Co-author, Org Design for Design Orgs; Founder, &GSD
  • Jason Mesut — Design Partner, Group Of Humans
  • Melissa Hajj — Director of Product Design, Facebook
  • Julia Whitney — Executive Coach, Whitney and Associates
  • Alberta Soranzo — Director, Transformation Design, Lloyds Banking Group

I’d suggest following them on social media and blogs.

Leading Design 2019:  https://leadingdesign.com/conferences/london-2019

Softwiring – The role of Human Centred Design in Ethical Artificial Intelligence

I recently revised my earlier articles about the human centred design contribution to automated decision systems that have ethical outcomes. I presented this as a keynote at our annual tech showcase this year, D61+Live.

Introduction

“To ensure autonomous and intelligent systems (A/IS) are aligned to benefit humanity A/IS research and design must be underpinned by ethical and legal norms as well as methods. We strongly believe that a value-based design methodology should become the essential focus for the modern A/IS organization. Value-based system design methods put human advancement at the core of A/IS development. Such methods recognize that machines should serve humans, and not the other way around. A/IS developers should employ value-based design methods to create sustainable systems that are thoroughly scrutinized for social costs and advantages that will also increase economic value for organizations. To create A/IS that enhances human well-being and freedom, system design methodologies should also be enriched by putting greater emphasis on internationally recognized human rights, as a primary form of human values.”
~ The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems – Methodologies to Guide Ethical Research and Design

“New ethical frameworks for AI need to move beyond individual responsibility to hold powerful industrial, governmental and military interests accountable as they design and employ AI… When tech giants build AI products, too often user consent, privacy and transparency are overlooked in favor of frictionless functionality that supports profit-driven business models based on aggregated data profiles… Meanwhile, AI systems are being introduced in policing, education, healthcare, and other environments where the misfiring of an algorithm could ruin a life.”
~ AI Now 2017 Report

In the HCD response to this topic, “ethical” means autonomous systems that are mindfully designed and developed to minimise harm, inequality and/or discrimination.

This response is applied when an automated system (AI, ML etc) moves from applied research to commercialisation and wide uptake, for purposes beyond knowledge creation and dissemination. This means anything looking to move into operational use for clients or customers. It is also not a binary decision; it would include both utilitarian and deontological approaches (or wider, eg “care ethics”).

There are at least three broad components to solving ethical AI questions. In each there are important considerations governing input (data, privacy constraints, purpose) and output (opportunity, impact, products/services).

Regulatory – Human Rights, International and National law, Compliance eg organisational code of conduct

Technological – Research and Development that seeks to resolve data collection/generation, privacy, encoded ethics, models, algorithms

Human Centred Design – Ecosystems of People, Data and Digital, Experience, Empathy,  Context, Culture, Diversity

This article covers the last part: HCD, or the “softwiring”.

1.0 Ecosystems

Outside of academic research, digital technologies don’t exist in bubbles or vacuums for their own purpose. People make them with an intent for someone else – specialised groups with specific tasks, eg forensic investigation, or a broad range of less skilled users seeking to fulfil certain tasks, eg dynamic mapping of a travel plan – to find information to assist with making decisions.

These systems interact with other systems, drawing on and generating data created by their users. The point of this section isn’t to break down the ecosystems but to set the point of view that technology needs to be considered less as a tool and more as a network of systems with complex relationships.

Diagram 1: The ecosystem of roles and responsibilities for AI development

To date, AI has been created by data scientists, computer scientists and software engineers, but as we move towards broader application development we can call on an ecosystem of experts with special roles and responsibilities. AI production becomes a collective activity, unburdening the engineers from the ethical questions and introducing diverse knowledge and skills to reduce the incidence of unintended consequences. This can be broken down into clear areas, as mentioned in the opening section (Regulatory, Technical, HCD).

Values

Questions regarding legal criteria or “can we..?” would be reviewed in regard to the UN Declaration of Human Rights, concerning human dignity, social cohesion and environmental sustainability. Existing Australian data and privacy laws are available to examine potential, unintentional violations or abuses of personal data, as are many other international laws on the same.

This ought to assist in establishing revisions of corporate values or the creation of new ones (including for SMEs and startups) that then assist the executive and their business decisions, or “should we…?”.

Like most things, the devil is in the detail when it comes to actual development work, and the embodiment of values into production requires further exploration particular to each project, program of work or product. This section only examines “who and why…?”, leaving “how and what” to the technical expert contributions detailed elsewhere.

As mentioned above, AI software is made by teams, not just computer or data scientists or software engineers and is interacted with by a range of users, not just one operational kind. Regarding teams: an understanding of central (full time, dedicated project teams) and orbiting (consultation expertise eg lawyers or ethics councils) team members can assist in understanding the responsibilities and provide wayfinding for critical points of decision making during the operationalisation or production of ethical AI.

Leadership

Leaders have several challenges, both in understanding the various kinds of AI and how they can be used (eg is it a prediction problem or a causation one) as well as how to resolve the ethical challenges these new services propose (eg is it low risk, like having to talk to a chatbot, or high risk, like “pre-crime”?).

An important point worth noting is that there is evidence that leaders are more likely to use a tacit response to ethical challenges than compliance or regulation. The evidence also shows that organisations that support information-sharing networks between leaders and/or other levels of staff resolve ethical dilemmas more successfully than those with structures that leave leaders isolated (eg due to status threats). Also worth noting: significant organisational change can trigger ethical dilemmas, as the lack of (or poor inclusion of) appropriate or new expertise, coupled with historical approaches, creates new situations without a familiar framework. The introduction of AI for business outcomes would be a clear example of this, as it would cause a massive internal disruption in both finding new markets and the skills required. This would seem to further support an ecosystem and diversity approach to ethics, to share the load of the decision.

1.2 Data Quality

Diagram 2: “Revealing Uncertainty for Information Visualisation” Skeels, Lee, Smith, Robertson

Data has levels of quality that a user can challenge if the outcomes don’t meet their mental models or expectations, and this can lead to a drop in trust in the results. This model not only helps with understanding the relationships of data to itself, it can also serve to ‘de-bug’ where the breakdown might be happening when the results are challenged by a user.

For example, if the data is poorly captured then all the following levels will compound this problem. If the data is good and complete then the issues might be in the inferences. This is why user feedback using real content (ie not pretend, old or substituted data) is important for testing the system.

1.3 Data Inclusion

Another point that provides context: a considerable barrier to ethical technology development is uneven internet connectivity, either through infrastructure or affordability (IEEE Ethically Aligned Design – Economics and Humanitarian Issues; Australian Digital Inclusion Index 2017), resulting in data point generation that reflects a certain locational/socio-economic bias. While connectivity in Australia is improving, there are groups who are not connected well enough to use digital services, and this correlates with low education, low income, mobile-only access, disability and region.

“Across the nation, digital inclusion follows some clear economic and social contours. In general, Australians with low levels of income, education, and employment are significantly less digitally included. There is consequently a substantial digital divide between richer and poorer Australians. The gap between people in low income households and high income households has widened since 2014, as has the gap between older and younger Australians, and those employed and those outside the labour force.”

~ Australian Digital Inclusion Index 2018

2.0 Operationalisation

Operationalising ethical AI development requires multidisciplinary approaches. As mentioned above, there are legal and technical constraints; below are details for the human centred component. Unlike the first two, which are either fixed by law or encoded and therefore “hard wired”, this is about soft skills and could be referred to as “soft wired”.

Soft wiring is applied during the later stages of applied research, when technologies are looking to move into early stage production, and balances utilitarian and deontological (duty of care) philosophies. There are four parts:

2.1 Empathy

Empathy is the ability to understand the point of view of the “other”. Alone it won’t ensure an ethical AI outcome; it forms part of a suite of approaches (mentioned in the opening section of this document).

Unfortunately, empathy isn’t natural, easy or even possible for some people due to conditioning or biological reasons, stress, or other factors like perception of business needs.

However, the good news is that technology production already has experts in capturing and communicating empathy. Their work is entirely focussed on understanding people, their needs and motivations within context. They are:

    • User Experience Researchers and Designers
    • Behavioural scientists
    • Psychologists
    • Ethnographic researchers
    • Ethicists

These roles may already exist in house or could be contracted to assist with project planning as well as audits/reviews. In some cases, these skills are also a mindset, and non-practitioners can use them just as effectively.

2.2 Experience Led

Experience design starts with a clearly defined problem, examines all the people affected by this problem, and relies on the empathy work done to capture the differing needs of the various participants in this “ecosystem of effect” – from highly influential stakeholders right through to passive recipients who are deeply affected by the AI but who have little or no influence over the decisions made on their behalf.

Experience-led design places “Who” and “Why” before “How” and “What”. This work aims to sufficiently understand the motivations and expectations of the different user types, not just the project sponsor or technology makers.

These perspectives also provide context for clearly defined use cases – facial recognition surveillance might be acceptable for airport security but is it for citizens travelling in public places?

Diagram 3: Simplified, conceptual diagram illustrating how different people interact with a system

An ethical AI product also needs to be holistically considered in both strategy and solution design.

2.2.1 Strategy

The following questions can be used as a framework to assist in troubleshooting potential problems in advance:

  • User Research
    • Do you know who will use this system?
    • Do you know why they would use it?
    • Is the AI product “better” than a human or existing systems?
  • Harm reduction
    • How aware or sensitive are development teams and stakeholders about the “ethical quality” of the technology?
    • Is utility v’s privacy appropriate?
    • Who owns the “duty of care”?
    • What happens if an error is made, who is responsible?
  • Trade off
    • Who benefits from this AI service, and who is left behind?
    • Is this gap acceptable?
    • Who will answer for this gap if it needs defending?

2.2.2 Solution

  • Assist comprehension
    • Is the application of perceived affordances helpful?
    • Apply “cultural conventions”, visual feedback or signifiers
    • Apply Nielsen’s heuristics
  • Validation
    • Are you testing with a variety of users?
    • Are you reviewing and applying revisions to the solution in response to testing?
    • Build – measure – learn

2.2.3 Trust

Trust is established by treating all users with dignity. Trust is also easily lost when users don’t understand the decisions made by a system.

  • Clarity
    • Is it clear to a person using the digital product/service why a decision has been made?
    • Are they sufficiently informed at their level of comprehension?
  • Right of reply
    • Is there a feeling of a standard of due process that they recognise or can understand?
    • Can that person participate/engage in that due process?
  • Accountability
    • Is there a feeling that the provider of the decision/service owns responsibility for the consequences delivered by the system they have deployed?
    • Is there some form of algorithmic transparency provided (end of black box AI)?
    • What happens if the system breaks down? (power, code, data failure)

2.3 Practice

Processes and checklists don’t catch ethical nuances; only a living practice of exploring, testing and validation can. Establishing a good ethical practice encourages this to be done on every project, to pursue the best possible outcomes and auditability.

2.3.1 Constructive Discourse

  • Top down
    • Leaders are empowered to seek input on ethical quandaries from various sources (personal networks and regulatory)
    • Management welcomes advice from their teams
  • Bottom up
    • Development teams are encouraged to self educate and share knowledge
    • Members intelligently challenge the ethical proposition

2.4 Team Diversity

Diversity has been identified in multiple publications (IEEE Ethically Aligned Design, AI Now and other national AI frameworks, eg France’s) as critical to reducing the errors that biased AI can deliver. This applies to both development teams and users. Development teams need not only gender diversity but also cultural, cognitive and social diversity, and diversity of expertise and skills. This friction is deliberate, and we need to be mindful of “big” voices dominating. There are many conventional team communication techniques already in use to facilitate healthy discussion and input, so they won’t be listed here.

Credits and References

This version is a revision of http://hilarycinis.com/user-experience-design-guide-for-crafting-ethical-ai-products-and-services/ – aimed at designers and product managers.

 

User Experience Design Guide for Crafting Ethical AI Products and Services

This page leads to a series of related articles about the human centred design contribution to automated decision systems that have ethical outcomes.

The audience for these articles was initially the Data61 User Experience Designers and Product Managers who have been tasked to provide assistance on development of products and systems using data (sensitive or public) and Machine Learning (algorithms that make predictions, assist with decision making, reveal insights from data, or act autonomously), because these products are expected to deliver information to a range of users and provide the basis for contextually supported decisions.

However it’s always been hope that a wider UX and Product audience will find it helpful. 

Machine Learning computer scientists, software engineers, data scientists, anthropologists or other highly skilled technical or social science professionals are very welcome to read this guide in order to increase and enhance their understanding of user experience concerns and maybe even refer to it.

Crafting Ethical AI Products and Services Part 1: Purpose and Position
This article looks at the reasons why an ethical mindset and practice is key to technology production and positions the ownership as a multidisciplinary activity.

Crafting Ethical AI Products and Services Part 2: Proposed Methods
This article is a set of proposed methods for user experience designers and product managers working in businesses that are building new technologies specifically with machine learning AI.

Softwiring – The role of Human Centred Design in Ethical Artificial Intelligence
This is a keynote converted to an article and is more of a summary of the potential for UX to have meaningful contributions and impact in ethical technology (D61+Live 2018 keynote)

Reading List

This list, which contributed to the articles above, was compiled between 2017 and 2018.

Legal

Papers and Reports

Articles

Emerging Practice

Tools

Which and What Data, When?!

Updated April 15, with thanks to Georgina Ibarra for proofreading and edits and David Anning for links to the UN and Forbes.

I’ve noticed a common reaction to the word “data” when observing commentators delivering news stories or politicians evangelising the benefits of open data initiatives. While some of us implicitly understand data use in context from our domain expertise and regular exposure to the varying types of data (including how hard it is to get at times) generally speaking, people get freaked out because they assume the worse.

Granted, there are nefarious types out there collecting and selling personal details that they shouldn’t and this is sort of the point – to educate people about the data in use in a way they can grasp easily. Once we remove this knee jerk reaction about the word data, people can focus on what they can do with data rather than what someone else might do to them with it.

I was at the KnowledgeNation event at the ATP yesterday and this kind of hit home when Angus Taylor (Assistant Minister for Cities and Digital Transformation) talked about the “open data” initiative underway. After he finished his speech, the first question from a member of the audience was about citizens’ personal details being released. He of course answered it expertly, but at first I was quite astonished at the leap the audience member made from “open data” to “personal data”. But afterwards I thought: well, should it be that astonishing, considering the vast ocean of “data” out there and how little most of us know about it?

So that got me thinking – how can we provide clearer descriptors for data that deliver an expectation of use and immediately set the tone for the ongoing discussion? As a user experience professional I see this as a responsibility and am now embarking on a proposed solution to try it out.

Like Eskimos have with snow, we might need more words for data or be more conscious of the type of data we are referencing when we talk about it (and when we talk about the stories we tell with data).

I think we’re all in agreement that the term “big data” is vague and unhelpful so I’m making some suggestions to introduce a commonly used vernacular for different types of data:

  • Private data – the citizen owns it, gives permissions, expiration times, and it’s protected from any other use
  • Secure data – sharable but with mind blowing encryption
  • Market data – anything used to sell products to you
  • Hybrid data – some kind of private and non-private mix
  • Triangulated data – those seemingly harmless sets that are used to identify people
  • Near-to-Real Time data – because real-time is rarely actually real-time
  • Real Time data
  • Legacy data – old stuff
  • Curated data – deliberately created data sets serving a single purpose
  • Active: Photos, Videos, Searches (search terms) Communications (email, text, comments, blogs)
  • Passive: Health, Financial, Spending, External environmental, Domestic environmental, Location, Logs

Examples in use could be –

“Google are tracking your Real Time Location data when you use maps”

“The Australian Open Data initiatives makes Curated data from the ABS available”

“Private and Secure Financial data will not be shared with any third parties”
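For teams that also manage datasets programmatically, the same vernacular could travel with the data as lightweight metadata. Here’s a minimal sketch in Python, assuming a hypothetical DataKind/Dataset tagging scheme of my own (not an existing library or any Data61 tooling), just to show the labels above expressed explicitly rather than left to prose:

```python
# Hypothetical sketch only – a way to attach the proposed data vernacular to a dataset.
from dataclasses import dataclass
from enum import Enum


class DataKind(Enum):
    """The proposed descriptors, captured as explicit labels."""
    PRIVATE = "Private"            # citizen-owned, permissioned, expiring
    SECURE = "Secure"              # sharable, but strongly encrypted
    MARKET = "Market"              # anything used to sell products to you
    HYBRID = "Hybrid"              # a private and non-private mix
    TRIANGULATED = "Triangulated"  # seemingly harmless sets that can identify people
    NEAR_REAL_TIME = "Near-to-Real Time"
    REAL_TIME = "Real Time"
    LEGACY = "Legacy"              # old stuff
    CURATED = "Curated"            # deliberately created for a single purpose


@dataclass
class Dataset:
    name: str
    kind: DataKind
    note: str = ""


# The ABS example above, phrased as metadata rather than prose.
abs_release = Dataset(
    name="ABS open data release",
    kind=DataKind.CURATED,
    note="Deliberately created data set serving a single purpose",
)
print(f"{abs_release.name}: {abs_release.kind.value} data")
```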

At Data61 we are face to face with this too so it will be part of our UX work to discover patterns in attitudes and communication.

I’m currently investigating this idea, and I’d love your thoughts! Is there anything published, either academically or otherwise that might have attempted to do this already?

Refs: http://www1.unece.org/stat/platform/display/bigdata/Classification+of+Types+of+Big+Data

Review of NICTA/Data61 Year 4

Well, 2015 was an interesting year to say the least. I won’t go into the funding stuff – it’s easy to read up on – the result of which is that we were merged into (or absorbed by, but definitely co-branded with) CSIRO. NICTA is now Data61.

I worked really hard this year at ensuring user experience is understood and utilised to the best of our capacity, especially with the potential increase in demand from the merger. And the business is undergoing the inevitable restructuring that happens so there has been a lot of opportunity to take advantage of to get things off on the right footing for 2016.

But the really exciting thing for me, is the impact on the User Experience team. We’ve now grown to 8 and are in Sydney, Canberra and Melbourne.

There has been a massive increase in our expertise and time in early stage (discovery and exploration) work with external clients. This has been satisfying, resulting from a lot of evangelising and demonstration of capability, and probably also the general interest in it from the market. While a lot of the work has been for Federal and State government, banking and the energy industries are also included. These activities are also great strategic partnership starters and have assisted in forming some solid relationships with key partners.

My team are a diverse and flexible bunch and I can only admire them as they teach me so much and deliver so consistently.

Some of our projects go for several years and roll over each year, but in total, across old and new, we worked on 27 projects:

  • 5 platforms
  • 7 early stage and discovery
  • 8 products for clients
  • 7 products for spinouts/internal startups.

On these projects plus others, we:

  • surveyed 173 users
  • interviewed 76 users
  • ran 49 usability tests
  • facilitated 23 “Discovery” workshops with 414 attendees
  • ran 1 “Exploration” workshop with 12 participants

Things I learnt this year:

  • UX design is not the same across every organisation, so don’t apologise for doing things differently if it’s how your business needs to be served.
  • Design leadership is best treated as its own design problem/opportunity. Ultimately we all need to figure this stuff out on our own. The number of people in your wider professional circle that you can rely on for counsel (or even just returning a message) is a lot smaller than you think.
  • In the same way a client would go to a particular design agency for a certain kind of job, each designer in a team has their own special skills suitable for certain projects, client needs and team dynamic.
  • UX needs champions within business development and engineering to gain traction. Keep an eye out for them, chat with them, learn from them.
  • Preparation needs patience … a future state proposal can take a long time to bear fruit… and don’t expect it to be the fruit that was on the label.
  • Definitions of success are best reviewed in terms of outcomes, not entirely the originally proposed steps or operational requirements.

 

 

Data61 (exNICTA) Design Methodology

For years I’ve attended conferences, read articles and seen presentations about the unquestionable importance of research led user experience design. Assumptions and imagination were downgraded as unworthy practises. We hear how research saves time in the long run, provides focus and yields hidden gems; it helps gain buy-in from teams and creates strong foundations for the decisions made so development can go ahead with less challenges. And yes, it does.

But as a sole approach, this is bullshit. In my pretty long career, experience-, hypothesis- and subject-matter-driven design is as much a contributor to success as research. The trick is not to rely on just one mode of operating. How much you do of each fluctuates depending on the work and the domain.

It’s ok to make up stuff and see if it flies. I suspect everyone does it.

I’ve worked quite successfully using this methodology and I’ve been studying and experimenting with it specifically for the four years I’ve been at NICTA. I think I have a nice model emerging that explains how and when to combine domain/hypothesis driven design and insights led user centred design as related frameworks but with their own specific cadences. The model accommodates vision, research experiments, business requirements, customer inclusion (participatory), agile solution design and testing and validation.

NICTA project taxonomy and relationship sketch

The current explanation of the methodology comes as the result of a long term design project in itself, reviewing and analysing the 50+ projects over 4 years that have had UX and design contributions.

Initially I was attempting to categorise projects into what looked like a taxonomy, to assist with setting design requirements and team expectations, but found too many differences. There were too many crossovers of activities, and this weird need by humans to have a neat progression of design work and deliverable stages as a project matured into a product – a progression which simply didn’t exist.

What I took as project-level design failures were actually fine. There were simultaneous activities happening. It was messy, but there was a pattern. Our time to market is quite long, so my impatience was getting in the way. It all just needed reframing to make it measurable, understandable and then communicable.

Early sketch of cadence relationships

The way forward was not “science fiction” design or UCD, not a middle ground nor an evolution from one to the other, but both simultaneously and, most importantly, understanding their own cadences and benefits. The way forward was not a focus on outcomes but on the activities that deliver them, and therein the pattern emerged.

We have a dual pathway that helps us map and explain our activities towards designing for future audiences.

 

Domain Driven / Hypothesis Led Design

This is about exploring a future state technology using hypothetical design and team subject matter (domain) expertise.

This provides the project benefit of tangible testing and validation artefacts early, as well as maintenance of project momentum, as teams, sponsors and customers get bored (or scared) when they aren’t seeing ideas manifesting via design – i.e. there is evidence of delivery to customer stakeholders and/or the project sponsor.

Another benefit is that technical innovation opportunities are unencumbered by current problems or conventional technical limitations. If you are designing for a 5 year future state, we can assume technology might have improved or become more affordable.

Also part of this is sales-pitch-type work, where a concept is used to engage customers, so there is a clear business engagement benefit.

Typical design activities include:

  • User interface
  • Information architecture
  • Interaction design
  • Co-design with customer and team
  • Deep thinking (quiet time) and assumptions about needs for proposed audiences (yep, pretend to be that person)
  • Sales pitch concept designs
  • Infographic or other conceptual communication artefacts

Learnings:

  • Make no apologies for solutions being half baked or incomplete.
  • Continually communicate context and the design decisions because everyone imagines wrong stuff if you leave gaps.
  • Shut down any design by committee activities, relying instead on small bursts of subject matter expertise and vision leadership from the project sponsor. Two chiefs are a nightmare just as unstructured group brainstorms are.
  • Keep vigilant about eager business development folks selling an early delivery of work that is technically not feasible, has no real data (i.e. only sample data) or unfinished academic research (algorithms are immature). This is especially problematic when dealing with government who expect what they were pitched to the pixel and because it looks real, think it’s not far off from being built.

Insight Driven Design

This is about solution design using user research.

The project benefits are that the insights inform solution design and assist with maintenance of project focus (reduction of biases and subject matter noise).

If you are working on short term solutions for a customer while journeying to a blue sky future state, then this work assists in delivering on those milestones.

When there is user inclusion it helps with change management (less resistance to changes within current workflows). It provides evidence of delivery to customer stakeholders and/or the project sponsor, and short term deliverables can assist in securing income and/or continued funding.

Typical design activities include:

  • Discovery workshops
  • Contextual inquiry interviews
  • Establishing user beta panels
  • Usability and concept testing
  • Surveys
  • Usability testing
  • Metrics analysis

Learnings:

  • Analysis paralysis is a killer. User research can be called into question once customers/teams twig to the fact they don’t know anything about users and will expect large amounts of research to back decisions, thereby inflaming the issue with more information but not getting any more useful insights.
  • Unclear objectives for user research produce unclear outcomes.
  • Poor recruitment leads to poor outcomes (expectations the designer(s) can just “call a few friends” for interviewing and/or testing).

Cadences

The cadences mentioned in the third paragraph refer to the time frames in which Hypothesis and Insight driven design operate.

  • They are in parallel but not entirely in step.
  • They cross inform as time goes on, typical of divergent/convergent thinking. New opportunities or pivots will create a divergence, results of interviews and testing will initiate a convergence.
  • Hypothetical cadences are short and may or may not map to development sprints.
  • Insight driven are longer, and may span several hypothetical cadences.
  • Ethnographic/behavioural science research projects are of a longer cadence still, and ideally would feed in/take insights from both the previous two. I’ve not covered this here as it’s not my area of expertise.

The graphic below illustrates this.

Diagram: D61XD methodology

This final outcome is the result of reviewing 4 years of projects with the current NICTA UX Design team, using discovery and design thinking activities.

NICTA UX is: Hilary Cinis, Meena Tharmarajah, Cameron Grant, Phil Grimmett, Georgina Ibarra, Mitch Harris and Liz Gilleran.

 

 

The art and democratisation of digital experience design

Last week I was invited by BlueChilli to do a short presentation at their developer and designer offsite on the topic of “What makes a standout user experience through design in the digital space?”

In trying to answer this question I found myself really struggling to qualify anything significantly applicable, for a few reasons – being the last thing on a Friday arvo, they’d probably already been shown lists of “should do’s”, models, reading references and exercises.

I was also struck that, in total honesty, I don’t think there is any answer to this question that isn’t highly dependent on a huge range of things. It’s simply too broad to handle in 30 mins plus questions.

So I flipped it away from methods and to the other side – the experience, intuition and innate skills we develop over time and from working with others.

My hope was to provide the space for permission or confidence to rely on each designer’s unique skills, and to handle making mistakes on the way to the ‘standout user experience’.

Basically there is more than science to good experience design –

  • there is an artist’s ability to make a leap using imagination and also the artist’s confidence in experimentation (which is also scientific but we don’t always have the frameworks to run a ‘proper’ experiment)
  • there is the team – and it’s the team holding a shared and informed user experience frame of mind, working collectively with respect for expertise, that is also fundamental to good experiences.

The presentation is below. Skip to slide 6; everything prior is a bit of background about me. It’s not a particularly long presentation, and I spoke a lot in each section about the experiences that have led me to this.

About the creative process within user experience design

I have always struggled with the discord between creative design and user centred design.

I went to design school and learned colour, form, typography, layout, flow and how to use visuals to capture the imagination of the audience. Over the years working in tech it got hammered out of me, because software was built by engineers, and then after a while it was designed by researchers. My problem is that the empirical always trumped creativity, and there is room for both, not one hiding behind the other. Yes, UCD is creative in the problem-solving side of things and this is extremely important, but the creative is devalued unless it’s championed by a visionary. That so many UXers have a creative and visual design background is important to note – a dirty secret that I think needs to be aired. We do, and we are good at what we do, and we can make up well considered stuff in the absence of research, and it’s ok.

Until now I couldn’t quite articulate the creative value of design in technology, usually falling back on feeling left behind, misunderstood or just some hand wavy “some of us have intuitive skills” (intuition being highly refined skills crafted after years of experience).

This was really causing me a serious amount of professional and then also, personal depression. I kept upping my workload, hoping I’d find that missing spark in the next job – that moment when you hear the brief and get really excited about the potential – but of course, with even less time to do anything, it just got worse and worse. Also, working in a scientific research company, it’s really hard to communicate any kind of user research unless it’s published or attached to PhD. My attempts at talking their language fell on hard ground and I found that leveraging creativity got me way more traction.

So I dropped a whole bunch of projects to focus on one large one (as well as manage and grow a design team).

Meanwhile…

A weird series of events occurred. Sitting in my department director’s office, where I have sat many days each week, in the same chair, I spotted for the first time “Design Driven Innovation” by Roberto Verganti, and asked to borrow it. “Yes!” he said, “Tell me what you think, I dunno about it.”

I started reading it, and after just the first chapter it all clicked totally into place. I finally felt permission to be the creative-leaning UX designer I am, using UCD activities as well. I felt validated that I deep think, work immersively and reframe, because there is precedent for it. I only have to adjust my skills slightly, not re-learn extensively, and can now refer to an established document to back up my approaches.

A few days later, the head of the Machine Learning research group who is very encouraging of UX and who graciously shares his ideas with me sent me an email suggesting I read a book, which he had found an electronic copy of and attached for me. Same book.

Late that week, I was involved in an experimental workshop, hosted at our lab, which challenged (successfully, I’ll add) the traditional way government would develop a particular digital solution. After the first day I went home and decided to step away from the entire day’s work and think about the “meaning” of the work we were doing. How humans, as community and messy creatures, might handle the issue in a non-technical way. How geographical information and community updates are linked, and how to get away from bureaucratic procedure and the feeling of surveillance by “big brother” governmental mindsets. (Unfortunately I can’t share the details in full.)

I pitched the idea to the organisers, and the next day we created a splinter group to examine and create a pitch for the new idea. The reframing and alignment of human meaning to an incredibly boring and laborious task was immediately taken up with excitement when I presented it to the senior executives in the room, and created quite a buzz around the potential. The two preceding solutions pitched, which we had all worked on, were extremely well considered and quite achievable, yet they were met with challenging questions and a bit less enthusiasm.

I’ve used this approach many times, not knowing there was a name for it in most of my work and the times it’s failed is when I am unfamiliar with the domain, when I’ve relied too much on asking user’s what they want/need and when I am unclear of the meaningfulness and hoped someone could provide this for me (either as their vision or from research). When I redesigned iView, in 2010, I used this approached. It tested well and had incredible uptake. (Since then it has been redesigned).

Approaching digital solutions with the mindset of an artist is really freeing. It is why, in my hiring and building out of the UX team at NICTA, I look for people who have non-performance-type creative and artistic pursuits outside of work. Ego can get in the way for performance artists, while solo creative pursuits are more suited to deep thinking and exploration.

In deep research-driven tech, I find the best starting point is examining and structuring a proposed workflow that makes sense using the tech and data, then observing the actual operators and beneficiaries of the current tech workflow practices and tool chains. From there we can “imagineer” potential solutions to then test against. It’s really hard to interview users about what they want and/or need in emerging, deliberately disruptive tech. They respond with conventional mindsets and speak in conventional solutions. I think this can “dumb down” the final results, which as we all know suffer enough compromises as it is. Using a design-driven approach frees up the limitations and steps back to behavioural observations.

This now leads back to software no longer being a tool but an ecosystem. Read more here: Software Isn’t A Tool

 

Software isn’t a tool

many kinds of hammer
We know tools by their affordances.

There is an interesting yet uneasy agreement, a deal that was struck between humans and technology, where at some point technology promised ease and comfort. And yes, the tools we started making provided this. We know tools – they are obvious by their affordances. You can pick up a tool and pretty much use it straight away. There are of course experts who wield tools like magicians (like a sushi chef, fine artist or chainsaw sculptor), but we understand that it takes a lot of practice, experience and mistakes to get that good.

With the evolution (and proliferation) of technology to the digital, this notion of tools and the contract of making life easier has become a bit unstuck. From the simple ones, like phones that dial themselves, to software that doesn’t take the right data format and environmental sensors that require coding experience to enable, it’s all gotten a bit hard and complicated.

I propose the notion of a tool is no longer appropriate for software (including applications and websites) and therefore, the idea that it is “easy” to use digital technology is no longer valid.

Software is an ecosystem and a transient micro-community that connects humans to other humans, either directly or via artefacts created by humans (usually data and images).

Stepping out into an unfamiliar street, in a new city

When we think about these relationships and how software is a facilitating conduit then perhaps other metaphors are more useful, like a city or a market, or a machinery shed or a plane cockpit.

You can’t step into any of these environments and immediately interact with them unless you have a frame of reference and time to explore or training and experience. Like a city or a market, we reach for similar patterns in our memories and use these as initial templates for navigation, adjusting as we go and understand the differences in the new environment. In the case of a machinery shed or cockpit, expertise is expected. If we take the metaphor further, we can say some of these systems house sub-systems therefore adding complexity.

But when we watch users interacting with digital technology they are usually quick to anger or frustration when things don’t work because they expect it to. And we keep reinforcing this.

Is this evidence of an unspoken agreement between a human and machine? Why are we so mean to these selfless creations?! How did this happen? Answer me Steve Jobs!

So yes, there is some beautiful work in digital tech that eases the pain and delivers on the promise of an elegant, frictionless user experience. Until you get locked into a walled garden and start getting cranky again because each time you start iTunes it asks you to download a new version that has removed that feature you relied on all so often.

Communities, if we think about them, have unwritten contracts. They are healthy when there is:

  • mutual respect: eg we provide a means to post your photos, and I won’t own your image
  • a notion of benevolent hierarchy: eg information architecture, reduction of clutter to ease decision making
  • respect for personal space: eg fork this code and run wild
  • assistance when needed: eg responsive help desk
  • reliability: eg reasonable performance
A new market – feels familiar but where to start!

When this contract is out of balance, people feel trapped, angry, unsupported and resentful. Then they leave or rebel, depending on age and wisdom.

The UX designer is tasked with assisting humans working with digital technology, and I wonder if we, as the creators of digital technology, should stop thinking about software as tools and reframe the work as creating communities and ecosystems where technology is the glue, rather than the goal.

If, when we refer to the systems we are building, we speak more about the connections not as abstractions but find appropriate metaphors to flesh out this weird magic box that fixes, finds or connects us.

If we speak about the community we are creating, not in a social media way but a genuine arrangement that benefits the contributors and consumers of the software.

If we think of the software or system as a conduit to allow people to move freely through, to explore without punishment, with gentle leadership or wayfinding so they can fulfill their tasks like they do in the physical world.

If you’re a designer, then I’m not telling you anything you don’t already know but hopefully this metaphor will help you to convince the others who don’t quite get it.

Who wants to go to a market where the apples advertised are missing, or the light is poor, the ground is uneven, all the deliveries are late, or the people can’t hear because there is too much noise? Where, when you turn around, you can’t find the way out, or you get hassled to buy stuff you don’t want? Your purse gets pinched or you’re followed around by someone and you know it’s not your imagination?

The physical world makes no promises to be easy, so maybe software shouldn’t either. We can keep striving to bring good manners and respect into these systems, as it’s so easy to just overlook them. But this does require a shift from the concept of a tool to something else that accommodates more variables.

User Experience Storytelling – Grab their imaginations!

Problem: Effectively communicating to researchers and software engineers how NICTA productisation work fits with real people in their real situations

Solution: Highly visual, short and entertaining comics


I attended Webstock earlier this year, intending to find inspiration for more creative approaches to my work.

Now, I have a creative background; UX is something I learned on the job over the years, and I worked hard to conform to the data-driven (and somewhat dry) approaches used to communicate the work we do.

I always felt I failed to deliver the impact UX documentation is supposed to. We all know no-one reads it.

After a great deal of consideration, I figured it was time to reignite my creative skills – stop being ashamed of my visual design background and start using those particular skills in my own way to solve this design problem.

What I came home from Webstock with was a great rush of excitement and a galvanised idea that gave me some direction on how to capture the imaginations of my colleagues. Stop creating heavy boring UX documents and create comics that told a story instead!

It’s not a groundbreaking idea, storytelling, but the results at NICTA have been spectacular.

Because I am surrounded by researchers and scientists, I had been trying extensively to court their interest and educate using user research and scientific language (eg “hypothesis”, “experiments”, “validation”) and still do, and it works to a point.

So the idea of creating comics was a great left turn – they all got it right away; it created buzz and excitement about the industry-focussed work we are doing, beyond the want for a pretty presentation layer.

So I spent a few weeks translating some of the more developed projects into highly visual stories. It breaks down pretty neatly like this:

  • Primary use case = story line
  • Context = story line
  • Personas = characters
  • Environment = panel illustrations
  • Pain points = drama or the villain
  • Solution = the hero or hero super power
  • Collaborative methods = team and production credits

…and then chucked in a bunch of stupid stuff that I found personally amusing (eg aliens, egg timers…).

Then I posted them up on the walls in the kitchen at NICTA…..

Comics on display in the main kitchen entrance

The CEO sought me out specifically to tell me how much he loved them, then requested one for his pet project (he got the above-mentioned alien character in his).

Feel free to download them and check them out; these are all real projects that I have worked on, providing a full range of UX and UI work.

These are hard work – no software will write a story for you – but as a UXer, that part shouldn’t be too difficult. I looked at a few programs to shortcut the illustration work (I can draw but I don’t have the time) and decided on Comic Life 3.

It takes me about 10 hours to produce each one (on the train commute each day); they require image sourcing and go through many layouts to find the right flow. Some flowed really well, while for others I needed to write a script and even scrap earlier completed versions.

I found that, being a comic book fan, it was quite easy to use a traditional comic book style with a “villain” (usually a situation, not a persona) and a “hero” (main persona) using a “superpower” (the software) who saves the day. Also, I am highly visual, so the layouts weren’t so much hard as that I had too many ideas in my head and ended up not using a lot of stuff.

To help with the internal communications issue, I created an overarching idea of a “NICTA Jam” (participatory or collaborative design) to hold together the series I was creating, which explained that all this is only achievable when good people work together understanding and including the audience.

The last point, which I anticipated having to be clear about (and said “no” to a couple of requests), is that these are NOT external product marketing brochures. They need to be approached as internal communications designed to illustrate the work. This was hard, as the interest in them is high and it’s easy to see the application to a market. The difference is subtle but it’s important.

To be honest, a couple do work in a marketing communication sense, but when the work is directed specifically at an external audience with a marketing voice, the original purpose is lost because internal teams feel they are being sold an idea, not included in it.

Wanna learn more? It was Erika Hall’s workshop at Webstock that really made the penny drop for me; her blog and books are good reading. And there is a workshop at UX Australia by Dave Malouf this year on storytelling – I highly recommend attending if you want to sharpen these skills.