Softwiring – The role of Human Centred Design in Ethical Artificial Intelligence

I recently revised my earlier articles about the human centred design contribution to automated decision systems that have ethical outcomes. I presented this as a keynote at our annual tech showcase this year, D61+Live.

Introduction

“To ensure autonomous and intelligent systems (A/IS) are aligned to benefit humanity A/IS research and design must be underpinned by ethical and legal norms as well as methods. We strongly believe that a value-based design methodology should become the essential focus for the modern A/IS organization. Value-based system design methods put human advancement at the core of A/IS development. Such methods recognize that machines should serve humans, and not the other way around. A/IS developers should employ value-based design methods to create sustainable systems that are thoroughly scrutinized for social costs and advantages that will also increase economic value for organizations. To create A/IS that enhances human well-being and freedom, system design methodologies should also be enriched by putting greater emphasis on internationally recognized human rights, as a primary form of human values.”
~ The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems – Methodologies to Guide Ethical Research and Design

“New ethical frameworks for AI need to move beyond individual responsibility to hold powerful industrial, governmental and military interests accountable as they design and employ AI… When tech giants build AI products, too often user consent, privacy and transparency are overlooked in favor of frictionless functionality that supports profit-driven business models based on aggregated data profiles… Meanwhile, AI systems are being introduced in policing, education, healthcare, and other environments where the misfiring of an algorithm could ruin a life.”
~ AI Now 2017 Report

In the HCD response to this topic, “ethical” means autonomous systems that are mindfully designed and developed to minimise harm, inequality and/or discrimination.

This response applies when an automated system (AI, ML etc) moves from applied research to commercialisation and wide uptake for purposes beyond knowledge creation and dissemination – that is, anything looking to move into operational use for clients or customers. It is also not a binary decision; it would include both utilitarian and deontological approaches (or wider, eg “care ethics”).

There are at least three broad components to solving ethical AI questions. In each there are important considerations governing input (data, privacy constraints, purpose) and output (opportunity, impact, products/services).

Regulatory – Human Rights, International and National law, Compliance eg organisational code of conduct

Technological – Research and Development that seeks to resolve data collection/generation, privacy, encoded ethics, models, algorithms

Human Centred Design – Ecosystems of People, Data and Digital, Experience, Empathy, Context, Culture, Diversity

This article covers the last part: HCD, or the “softwiring”.

1.0 Ecosystems

Outside of academic research, digital technologies don’t exist in bubbles or vacuums for their own purpose. People make them with intent, for someone else – specialised groups with specific tasks (eg forensic investigation), or a broad range of less skilled users seeking to fulfil certain tasks (eg dynamic mapping of a travel plan) – to find information that assists with making decisions.

These systems interact with other systems, drawing on and generating data through their users. The point of this section isn’t to break down the ecosystems but to set the point of view that technology needs to be considered less as a tool and more as a network of systems with complex relationships.

Diagram 1: The ecosystem of roles and responsibilities for AI development

To date, AI has been created by data scientists, computer scientists and software engineers, but as we move towards broader application development we can call on the expertise of an ecosystem of experts with specific roles and responsibilities. AI production becomes a collective activity, unburdening the engineers of sole responsibility for the ethical questions and introducing diverse knowledge and skills to reduce the incidence of unintended consequences. This can be broken down into the clear areas mentioned in the opening section (Regulatory, Technical, HCD).

Values

Questions regarding legal criteria, or “can we…?”, would be reviewed against the UN Declaration of Human Rights, concerning human dignity, social cohesion and environmental sustainability. Existing Australian data and privacy laws, along with many comparable international laws, are available to examine potential, unintentional violations or abuses of personal data.

This ought to assist in revising corporate values or creating new ones (including for SMEs and startups), which then guide the executive and their business decisions, or “should we…?”.

Like most things, the devil is in the detail when it comes to actual development work, and the embodiment of values into production requires further exploration particular to each project, program of work or product. This section only examines the “who and why…?”, leaving the “how and what” to the technical expert contributions.

As mentioned above, AI software is made by teams, not just computer or data scientists or software engineers, and is interacted with by a range of users, not just one operational kind. Regarding teams: an understanding of central (full-time, dedicated project teams) and orbiting (consulting expertise, eg lawyers or ethics councils) team members can clarify responsibilities and provide wayfinding for critical points of decision making during the operationalisation or production of ethical AI.

Leadership

Leaders face several challenges, both in understanding the various kinds of AI and how they can be used (eg is it a prediction problem or a causation one?) and in resolving the ethical challenges these new services propose (eg is it low risk, like having to talk to a chatbot, or high risk, like “pre-crime”?).

An important point worth noting is that there is evidence leaders are more likely to respond to ethical challenges tacitly than through compliance or regulation. The evidence also shows that organisations supporting information-sharing networks between leaders and/or other levels of staff resolve ethical dilemmas more successfully than those with structures that leave leaders isolated (eg due to status threats). Also worth noting: significant organisational change can trigger ethical dilemmas, as poor or absent inclusion of appropriate new expertise, coupled with historical approaches, creates new situations without a familiar framework. The introduction of AI for business outcomes would be a clear example, as it causes a massive internal disruption in both finding new markets and acquiring required skills. This further supports an ecosystem and diversity approach to ethics, sharing the load of the decision.

1.2 Data Quality

Diagram 2: “Revealing Uncertainty for Information Visualisation” Skeels, Lee, Smith, Robertson

Data has levels of quality that a user can challenge if the outcomes don’t meet their mental models or expectations, and this can lead to a drop in trust in the results. This model not only helps with understanding the relationships within the data but can also serve to ‘debug’ where the breakdown might have happened when the results are challenged by the user.

For example, if the data is poorly captured then all the following levels will amplify this problem. If the data is good and complete then the issues might be in the inferences. This is why testing the system with user feedback on real content (ie not pretend, old or substituted data) is important.
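To make this concrete, here is a minimal sketch (purely illustrative, in Python) of walking the quality levels in order to locate where a challenged result might break down. The level names loosely follow the Skeels et al. classification; the check functions are hypothetical placeholders a team would replace with their own project-specific tests.

    # Quality levels, ordered from capture to use; a failure at a lower
    # level propagates upward and amplifies in every level above it.
    QUALITY_LEVELS = [
        ("measurement", "Was the data captured accurately and precisely?"),
        ("completeness", "Does the data cover what it claims to cover?"),
        ("inference", "Do the inferences hold when run on real content?"),
        ("credibility", "Do users trust the source and how disagreement is handled?"),
    ]

    def locate_breakdown(checks: dict) -> str:
        """Run each level's check in order; the first failure is the
        place to start debugging a user-challenged result."""
        for level, question in QUALITY_LEVELS:
            check = checks.get(level)
            if check is not None and not check():
                return f"Likely breakdown at '{level}': {question}"
        return "No breakdown detected at any level."

    # Hypothetical usage: wire in real tests against real (not substituted) data.
    print(locate_breakdown({
        "measurement": lambda: True,    # eg capture audit passed
        "completeness": lambda: False,  # eg a cohort is missing from the sample
    }))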

1.3 Data Inclusion

Another point that provides context: a considerable barrier to ethical technology development is uneven internet connectivity, whether through infrastructure or affordability (IEEE Ethically Aligned Design – Economics and Humanitarian Issues; Australian Digital Inclusion Index 2017), resulting in data generation that reflects a particular locational/socio-economic bias. While connectivity in Australia is improving, there are groups who are not connected well enough to use digital services, and this correlates with low education, low income, mobile-only access, disability and region.

“Across the nation, digital inclusion follows some clear economic and social contours. In general, Australians with low levels of income, education, and employment are significantly less digitally included. There is consequently a substantial digital divide between richer and poorer Australians. The gap between people in low income households and high income households has widened since 2014, as has the gap between older and younger Australians, and those employed and those outside the labour force.”

~ Australian Digital Inclusion Index 2018

2.0 Operationalisation

Operationalising ethical AI development requires multidisciplinary approaches. As mentioned above, there are legal and technical constraints; below are details of the human centred component. Unlike the first two, which are either fixed by law or encoded and therefore “hard wired”, this is about soft skills and could be referred to as “soft wired”.

Soft wiring is applied during the later stages of applied research, when technologies are looking to move into early-stage production, and balances utilitarian and deontological (duty of care) philosophies. There are four parts:

2.1 Empathy

Empathy is the ability to understand the point of view of the “other”. Alone it won’t ensure an ethical AI outcome; it forms part of a suite of approaches (mentioned in the opening section of this document).

Unfortunately, empathy isn’t natural, easy or even possible for some people due to conditioning or biological reasons, stress, or other factors like perception of business needs.

However, the good news is that technology production already has experts in capturing and communicating empathy. Their work is entirely focused on understanding people, their needs and motivations within context. They are:

    • User Experience Researchers and Designers
    • Behavioural scientists
    • Psychologists
    • Ethnographic researchers
    • Ethicists

These roles may already exist in house or could be contracted to assist with project planning as well as audits/reviews. In some cases, these skills are also a mindset, and non-practitioners can use them just as effectively.

2.2 Experience Led

Experience design starts with a clearly defined problem, examines all the people affected by this problem, and relies on the empathy work done to capture the differing needs of various participants in this “ecosystem of effect” – from highly influential stakeholders right through to passive recipients who are deeply affected by the AI but who have little or no influence over the decisions made on their behalf.

Experience-led design places “Who” and “Why” before “How” and “What”. This work aims to sufficiently understand the motivations and expectations of the different user types, not just the project sponsor or technology makers.

These perspectives also provide context for clearly defined use cases – facial recognition surveillance might be acceptable for airport security but is it for citizens travelling in public places?

Diagram 3: Simplified, conceptual diagram illustrating how different people interact with a system

An ethical AI product also needs to be holistically considered in both strategy and solution design.

2.2.1 Strategy

The following questions can be used as a framework to assist in troubleshooting potential problems in advance (a sketch of encoding them as a reviewable checklist follows the list):

  • User Research
    • Do you know who will use this system?
    • Do you know why they would use it?
    • Is the AI product “better” than a human or existing systems?
  • Harm reduction
    • How aware or sensitive are development teams and stakeholders about the “ethical quality” of the technology?
    • Is the utility vs privacy trade-off appropriate?
    • Who owns the “duty of care”?
    • What happens if an error is made, who is responsible?
  • Trade off
    • Who benefits from this AI service, and who is left behind?
    • Is this gap acceptable?
    • Who will answer for this gap if it needs defending?
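As flagged above, here is a minimal sketch (my illustration, not a prescribed tool) of these strategy questions encoded as a checklist a team can run at each review. The question text comes from the list; the field names and the review flow are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class ChecklistItem:
        theme: str
        question: str
        answer: str = ""  # recorded by the team during review
        owner: str = ""   # who answers for this item, for auditability

    STRATEGY_CHECKLIST = [
        ChecklistItem("User Research", "Do you know who will use this system?"),
        ChecklistItem("User Research", "Do you know why they would use it?"),
        ChecklistItem("Harm reduction", "Who owns the duty of care?"),
        ChecklistItem("Harm reduction", "If an error is made, who is responsible?"),
        ChecklistItem("Trade off", "Who benefits from this AI service, and who is left behind?"),
        ChecklistItem("Trade off", "Who will answer for this gap if it needs defending?"),
    ]

    def unresolved(items):
        """Items without both an answer and an owner recorded are flagged for the next review."""
        return [i for i in items if not (i.answer and i.owner)]

    for item in unresolved(STRATEGY_CHECKLIST):
        print(f"[{item.theme}] unresolved: {item.question}")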

2.2.2 Solution

  • Assist comprehension
    • Is the application of perceived affordances helpful?
    • Apply “cultural conventions”, visual feedback or signifiers
    • Apply Nielsen’s heuristics
  • Validation
    • Are you testing with a variety of users?
    • Are you reviewing and applying revisions to the solution in response to testing?
    • Build – measure – learn

2.2.3 Trust

Trust is established by treating all users with dignity. Trust is also easily lost when users don’t understand the decisions made by a system. (A sketch of pairing each decision with a plain-language explanation follows the list below.)

  • Clarity
    • Is it clear to a person using the digital product/service why a decision has been made?
    • Are they sufficiently informed at their level of comprehension?
  • Right of reply
    • Is there the feeling of a standard of due process they recognise or can understand?
    • Can that person participate/engage in that due process?
  • Accountability
    • Is there a feeling that the provider of the decision/service owns responsibility for the consequences delivered by the system they have deployed?
    • Is there some form of algorithmic transparency provided (end of black box AI)?
    • What happens if the system breaks down? (power, code, data failure)
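As flagged above, a minimal sketch in Python of pairing every automated decision with a plain-language reason, an avenue of reply and a named accountable owner, so a user has something to understand and somewhere to respond. The loan scenario, names and threshold are all hypothetical stand-ins, not a real decision model.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class ExplainedDecision:
        outcome: str            # what the system decided
        reason: str             # clarity: a plain-language why, at the user's level
        appeal_contact: str     # right of reply: where due process starts
        accountable_owner: str  # accountability: who owns the consequences
        made_at: str

    def decide_loan(income: float, debt: float) -> ExplainedDecision:
        approved = income > 2 * debt  # hypothetical stand-in for the real model
        return ExplainedDecision(
            outcome="approved" if approved else "declined",
            reason=("Your income is more than twice your current debt."
                    if approved else
                    "Your current debt is high relative to your income."),
            appeal_contact="reviews@example.org",
            accountable_owner="Lending Services, Decisions Team",
            made_at=datetime.now(timezone.utc).isoformat(),
        )

    print(decide_loan(income=50_000, debt=30_000))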

2.3 Practice

Processes and checklists don’t catch ethical nuances; only a living practice of exploring, testing and validating can. Establishing a good ethical practice encourages this to be done on every project, pursuing the best possible outcomes and auditability.

2.3.1 Constructive Discourse

  • Top down
    • Leaders are empowered to seek input on ethical quandaries from various sources (personal networks and regulators)
    • Management welcomes advice from their teams
  • Bottom up
    • Development teams are encouraged to self educate and share knowledge
    • Members intelligently challenge the ethical proposition

2.4 Team Diversity

Diversity has been identified in multiple publications (IEEE, AI Now and other national AI frameworks, eg France’s) as critical to reducing the errors of bias AI can deliver. This applies to both development teams and users. Development teams need diversity not only of gender but of culture, cognition, social background, expertise and skills. This friction is deliberate, and we need to be mindful of “big” voices dominating. There are many conventional team communication techniques already in use to facilitate healthy discussion and input, so they won’t be listed here.

Credits and References

This version is a revision of http://hilarycinis.com/user-experience-design-guide-for-crafting-ethical-ai-products-and-services/ – aimed at designers and product managers.

 

User Experience Design Guide for Crafting Ethical AI Products and Services

This page leads to a series of related articles about the human centred design contribution to automated decision systems that have ethical outcomes.

The audience for these articles was initially the Data61 User Experience Designers and Product Managers who have been tasked to provide assistance on development of products and systems using data (sensitive or public) and Machine Learning (algorithms that make predictions, assist with decision making, reveal insights from data, or act autonomously), because these products are expected to deliver information to a range of users and provide the basis for contextually supported decisions.

However, it’s always been hoped that a wider UX and Product audience will find it helpful.

Machine Learning computer scientists, software engineers, data scientists, anthropologists or other highly skilled technical or social science professionals are very welcome to read this guide in order to increase and enhance their understanding of user experience concerns and maybe even refer to it.

Crafting Ethical AI Products and Services Part 1: Purpose and Position
This article looks at the reasons why an ethical mindset and practice is key to technology production, and positions the ownership as a multidisciplinary activity.

Crafting Ethical AI Products and Services Part 2: Proposed Methods
This article is a set of proposed methods for user experience designers and product managers working in businesses that are building new technologies specifically with machine learning AI.

Softwiring – The role of Human Centred Design in Ethical Artificial Intelligence
This is a keynote converted to an article, and is more of a summary of the potential for UX to have meaningful contributions and impact in ethical technology (D61+Live 2018 keynote).

Reading List

This list, which contributed to the articles above, was compiled between 2017 and 2018.


Which and What Data, When?!

Updated April 15, with thanks to Georgina Ibarra for proofreading and edits, and David Anning for links to the UN and Forbes.

I’ve noticed a common reaction to the word “data” when observing commentators delivering news stories or politicians evangelising the benefits of open data initiatives. While some of us implicitly understand data use in context, from our domain expertise and regular exposure to the varying types of data (including how hard it is to get at times), generally speaking people get freaked out because they assume the worst.

Granted, there are nefarious types out there collecting and selling personal details that they shouldn’t, and this is sort of the point – to educate people about the data in use in a way they can easily grasp. Once we remove this knee-jerk reaction about the word data, people can focus on what they can do with data rather than what someone else might do to them with it.

I was at the KnowledgeNation event at the ATP yesterday, and this kind of hit home when Angus Taylor (Assistant Minister for Cities and Digital Transformation) talked about the “open data” initiative underway. After he finished his speech, the first question from a member of the audience was about citizens’ personal details being released. He of course answered it expertly, but at first I was quite astonished at the leap the audience member made from “open data” to “personal data”. But afterwards I thought: well, should it be that astonishing, considering the vast ocean of “data” out there and how little most of us know about it?

So that got me thinking – how can we provide clearer descriptors for data that deliver an expectation of use and immediately set the tone for the ongoing discussion? As a user experience professional I see this as a responsibility and am now embarking on a proposed solution to try it out.

Like Eskimos have with snow, we might need more words for data or be more conscious of the type of data we are referencing when we talk about it (and when we talk about the stories we tell with data).

I think we’re all in agreement that the term “big data” is vague and unhelpful so I’m making some suggestions to introduce a commonly used vernacular for different types of data:

  • Private data – the citizen owns it, gives permissions, expiration times, and it’s protected from any other use
  • Secure data – sharable but with mind blowing encryption
  • Market data – anything used to sell products to you
  • Hybrid data – some kind of private and non-private mix
  • Triangulated data – those seemingly harmless sets that are used to identify people
  • Near-to-Real Time data – because real-time is rarely actually real-time
  • Real Time data
  • Legacy data – old stuff
  • Curated data – deliberately created data sets serving a single purpose
  • Active: Photos, Videos, Searches (search terms), Communications (email, text, comments, blogs)
  • Passive: Health, Financial, Spending, External environmental, Domestic environmental, Location, Logs

Examples in use could be –

“Google are tracking your Real Time Location data when you use maps”

“The Australian Open Data initiatives makes Curated data from the ABS available”

“Private and Secure Financial data will not be shared with any third parties”
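To try the idea on for size, here is a minimal sketch (Python, purely illustrative) of the vocabulary as machine-readable labels, so a dataset could carry an expectation of use alongside its contents. The category names come from the list above; the Dataset class and the examples are hypothetical.

    from dataclasses import dataclass
    from enum import Enum

    class DataKind(Enum):
        PRIVATE = "citizen-owned, permissioned, expiring, protected from other use"
        SECURE = "sharable, but strongly encrypted"
        MARKET = "anything used to sell products to you"
        HYBRID = "some private and non-private mix"
        TRIANGULATED = "seemingly harmless sets usable to identify people"
        NEAR_REAL_TIME = "close to live, but rarely actually live"
        REAL_TIME = "live"
        LEGACY = "old stuff"
        CURATED = "deliberately created to serve a single purpose"

    @dataclass
    class Dataset:
        name: str
        kind: DataKind

        def describe(self) -> str:
            label = self.kind.name.replace("_", " ").title()
            return f"{self.name}: {label} data ({self.kind.value})"

    # Usage, echoing the example sentences above:
    print(Dataset("Google Maps location trace", DataKind.REAL_TIME).describe())
    print(Dataset("ABS open data release", DataKind.CURATED).describe())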

At Data61 we are face to face with this too so it will be part of our UX work to discover patterns in attitudes and communication.

I’m currently investigating this idea, and I’d love your thoughts! Is there anything published, either academically or otherwise that might have attempted to do this already?

Refs: http://www1.unece.org/stat/platform/display/bigdata/Classification+of+Types+of+Big+Data

Data61 (ex-NICTA) Design Methodology

For years I’ve attended conferences, read articles and seen presentations about the unquestionable importance of research-led user experience design. Assumptions and imagination were downgraded as unworthy practices. We hear how research saves time in the long run, provides focus and yields hidden gems; it helps gain buy-in from teams and creates strong foundations for the decisions made, so development can go ahead with fewer challenges. And yes, it does.

But as a sole approach, this is bullshit. In my pretty long career experience, hypothesis- and subject-matter-driven design is as much a contributor to success as research. The trick is not to rely on just one mode of operating. How much you do of each fluctuates depending on the work and the domain.

It’s ok to make up stuff and see if it flies. I suspect everyone does it.

I’ve worked quite successfully using this methodology and I’ve been studying and experimenting with it specifically for the four years I’ve been at NICTA. I think I have a nice model emerging that explains how and when to combine domain/hypothesis driven design and insights led user centred design as related frameworks but with their own specific cadences. The model accommodates vision, research experiments, business requirements, customer inclusion (participatory), agile solution design and testing and validation.

NICTA project taxonomy and relationship sketch

The current explanation of the methodology comes as the result of a long-term design project in itself: reviewing and analysing the 50+ projects over 4 years that have had UX and design contributions.

Initially I was attempting to categorise projects into what looked like a taxonomy, to assist with setting design requirements and team expectations, but found too many differences. There were too many crossovers of activities, and this weird need by humans for a neat progression of design work and deliverable stages as a project matures into a product – which simply didn’t exist.

What I took as project-level design failures were actually fine. There were simultaneous activities happening. It was messy, but there was a pattern. Our time to market is quite long, so my impatience was getting in the way. It all just needed reframing to make it measurable, understandable and then communicable.

Early sketch of cadence relationships

The way forward was not “science fiction” design or UCD, not a middle ground, nor an evolution from one to the other, but both simultaneously – and, most importantly, understanding their own cadences and benefits. The way forward was not a focus on outcomes but on the activities that deliver them, and therein the pattern emerged.

We have a dual pathway that helps us map and explain our activities towards designing for future audiences.

 

Domain Driven / Hypothesis Led Design

This is about exploring a future state technology using hypothetical design and team subject matter (domain) expertise.

This provides the project benefit of tangible testing and validation artefacts early, as well as maintenance of project momentum – teams, sponsors and customers get bored (or scared) when they aren’t seeing ideas manifesting via design – ie there is evidence of delivery to customer stakeholders and/or the project sponsor.

Another benefit is that technical innovation opportunities are unencumbered by current problems or conventional technical limitations. If you are designing for a 5-year future state, you can assume technology might have improved or become more affordable.

Also part of this is sales-pitch-type work, where a concept is used to engage customers, so there is a clear business engagement benefit.

Typical design activities include:

  • User interface
  • Information architecture
  • Interaction design
  • Co-design with customer and team
  • Deep thinking (quiet time), assumptions about needs for proposed audiences (yep, pretend to be that person)
  • Sales pitch concept designs
  • Infographic or other conceptual communication artefacts

Learnings:

  • Make no apologies for solutions being half baked or incomplete.
  • Continually communicate context and the design decisions because everyone imagines wrong stuff if you leave gaps.
  • Shut down any design by committee activities, relying instead on small bursts of subject matter expertise and vision leadership from the project sponsor. Two chiefs are a nightmare just as unstructured group brainstorms are.
  • Keep vigilant about eager business development folks selling an early delivery of work that is technically not feasible, has no real data (i.e. only sample data) or relies on unfinished academic research (immature algorithms). This is especially problematic when dealing with government, who expect what they were pitched to the pixel and, because it looks real, think it’s not far off from being built.

Insight Driven Design

This is about solution design using user research.

The project benefits are that the insights inform solution design and assist with maintaining project focus (reduction of biases and subject-matter noise).

If you are working on short term solutions for a customer while journeying to a blue sky future state, then this work assists in delivering on those milestones.

When there is user inclusion it helps with change management (less resistance to changes within current workflows). It provides evidence of delivery to customer stakeholders and/or the project sponsor, and short-term deliverables can assist in securing income and/or continued funding.

Typical design activities include:

  • Discovery workshops
  • Contextual inquiry interviews
  • Establishing user beta panels
  • Usability and concept testing
  • Surveys
  • Usability testing
  • Metrics analysis

Learnings

  • Analysis paralysis is a killer. User research can be called into question once customers/teams twig to the fact that they don’t know anything about users; they will then expect large amounts of research to back decisions, thereby inflaming the issue with more information but no more useful insights.
  • Unclear objectives for user research produce unclear outcomes
  • Poor recruitment leads to poor outcomes (expectations the designer(s) can just “call a few friends” for interviewing and/or testing).

Cadences

The cadences mentioned in the third paragraph refer to the time frames in which hypothesis- and insight-driven design operate.

  • They are in parallel but not entirely in step.
  • They cross inform as time goes on, typical of divergent/convergent thinking. New opportunities or pivots will create a divergence, results of interviews and testing will initiate a convergence.
  • Hypothesis-driven cadences are short and may or may not map to development sprints.
  • Insight-driven cadences are longer, and may span several hypothesis-driven cadences.
  • Ethnographic/behavioural science research projects are of a longer cadence still, and ideally would feed in/take insights from both the previous two. I’ve not covered this here as it’s not my area of expertise.

The graphic below illustrates this.

Diagram: D61 XD methodology

This final outcome is the result of revisions of 4 years of projects with the current NICTA UX Design team using discovery and design thinking activities.

NICTA UX is: Hilary Cinis, Meena Tharmarajah, Cameron Grant, Phil Grimmett, Georgina Ibarra, Mitch Harris and Liz Gilleran.

 

 

The art and democratisation of digital experience design

Last week I was invited by BlueChilli to do a short presentation at their developer and designer offsite on the topic of “What makes a standout user experience through design in the digital space?”

In trying to answer this question, I found myself really struggling to qualify anything significantly applicable, for a few reasons – last thing on a Friday arvo, they’d probably already been shown lists of “should do’s”, models, reading references and exercises.

I was also struck that, in total honesty, I don’t think there is any answer to this question that isn’t highly dependent on a huge range of things. It’s simply too broad to handle in 30 mins plus questions.

So I flipped it away from methods and to the other side – the experience, intuition and innate skills we develop over time and from working with others.

My hope was to provide the space, permission or confidence to rely on each designer’s unique skills, and to show how to handle making mistakes on the way to the ‘standout user experience’.

Basically there is more than science to good experience design –

  • there is the artist’s ability to make a leap using imagination, and the artist’s confidence in experimentation (which is also scientific, but we don’t always have the frameworks to run a ‘proper’ experiment)
  • there is the team – a team holding a shared and informed user experience frame of mind, working collectively with respect for expertise, is also fundamental to good experiences.

The presentation is below. Skip to slide 6, everything prior is a bit of background about me. It’s not a particularly long presentation and I spoke a lot in each section about my experiences that have led me to this.

Review of NICTA year three

My review of year three at NICTA is a bit overdue.

Team growth

In 2014 I was alone; now, mid-2015, we have a team of six user experience designers at NICTA. There are four seniors, one mid-level and myself (Principal). Four of us are in Sydney, one in Canberra and one in Melbourne.

Meena and I in the design lab in Sydney

We are strong end-to-end designers capable of running a project from inception through to front end solution and handover. Mostly the work is a mix of hypothesis and insight derived, and we walk a line between pitch and user informed.

We are comfortable with failure and mistakes, and everything is as lean as possible. We don’t bother trying to define what ‘lean UX’ is; we get it and get on with it. In fact, none of us have been documentation people, we have always been collaborative, and we are very comfortable with the shifting frontier we face daily, understanding that we take the teams with us on our exploration rather than direct them. I have really enjoyed hiring the people we now have, and selected them for this attitude along with their complementary skills to round out the team.

Successes in 2015

Design is notoriously hard to measure, even more so when working on the high number of experimental projects we do. In review I see the successes as outlined below.

  • Increasing requests for assistance, and involvement earlier and earlier in engagements
  • Acceptance of pushback on front-end solution design deliverables (aka “can we get a mockup of…”) when projects have high levels of uncertainty in research findings, users and value proposition, which allows more time for investigation and validation set-ups.
  • Designing and facilitating “discovery workshops” to evaluate business opportunities within an industry or government services sector.
  • Direct contribution to descriptions of design approaches and outcomes for schedules of work in contracts and term sheets.
  • Increasing involvement in “non-core funded” projects (ie billable projects)
  • UX and design being used as a key to accessing high profile projects within key digital government activities (yes this is deliberately cryptic) that are non-core funded.
  • Significant contribution to the creation of two platforms upon which we can swiftly create instances to support projects without constant reinvention of the wheel. This is not a product suite, a style guide or a pattern library. They are true platforms with deeply engaging content, APIs and using open data. They deliver (and showcase) NICTA research, business, engineering and design. One is now live – terria.io, the other is still under wraps.
The UX space at TechFest 2015, Sydney Innovation Park
  • Placement of UX as a prominent capability at the annual technology showcase Techfest 2015, with workshops for startups, kids and a working wall to discuss digital design methods.
  • UX designers featured on the new NICTA website home hero module.

Along with this come the large number of compliments and experiences that reinforce our worth within the business. Larry Marshall, CEO of CSIRO, has mentioned user experience on several occasions when presenting the future of NICTA with CSIRO.

Learnings

Along with the team growth and successes, it’s been really great to reflect on what I’ve observed and learned in the last 18 months.

  • We have room to experiment and explore with adapted and new approaches. Frameworks have emerged but there is never a set process; we constantly review and improve them as we go. For a long time I felt I was the only person at NICTA not experimenting and exploring, but during some time spent reflecting (and not obsessing about the negatives during my annual leave at the end of 2014) I realised that, actually, it’s always been that way. I am now passionate about promoting and defending that culture.
  • Space and time to think deeply is really important. Which is directly related to the next point…
Running my UX for Startups workshop
  • Saying “no” to requests for help is really hard. Every new project sounds cool from the outset, but I think we can do a better job at choosing what to work on. Given that most of the work we do is similar, if not the same, as startup incubation, my philosophy is that it’s ok to leave folks to their own devices and act as educators/consultants during the valleys between intensive UX/design activity peaks, and/or to reassess whether our involvement is needed as we go.
  • Expectation management is a fluid landscape that requires constant vigilance. Never assume anyone in the room is in the same place as you – customers, stakeholders, team members, business or communications/marketing. Context setting needs to be done each and every time; open team communication is needed at all times. It’s everyone’s responsibility to talk and confirm where they’re at.
  • Constant context switching and learning new domains is exhausting. EXHAUSTING. At NICTA we designers need to deeply understand the users and domains, and these are usually highly technical and very specific. People do PhDs on these domains, so we are faced with a mountain of learning at the start of any project, often in parallel with another intense project (see the comment above about saying “no” more…). I’m not sure how to mitigate this… I’m open to suggestions! Dr Adam Fraser’s The Third Space is helping me put together a reference to monitor and head off burnout threats.

I’m unsure what lies ahead with the proposed merger. It’s nerve-wracking, and the chance to devise new strategies to engage with it is both tricky and exciting. The team are directly contributing and I look forward to seeing how it plays out. Tell you next year!

About the creative process within user experience design

I have always struggled with the discord between creative design and user centred design.

I went to design school and learned colour, form, typography, layout, flow and how to use visuals to capture the imagination of the audience. Over the years working in tech it got hammered out of me, because software was built by engineers, then after a while it was designed by researchers. My problem is that the empirical always trumped creativity, and there is room for both, not one hiding behind the other. Yes, UCD is creative in the problem-solving side of things, and this is extremely important, but the creative is devalued unless it’s championed by a visionary. That so many UXers have a creative and visual design background is important to note – a dirty secret that I think needs to be aired. We do, and we are good at what we do, and we can make up well-considered stuff in the absence of research, and it’s ok.

Until now I couldn’t quite articulate the creative value of design in technology, usually falling back on feeling left behind, misunderstood or just some hand wavy “some of us have intuitive skills” (intuition being highly refined skills crafted after years of experience).

This was really causing me a serious amount of professional, and then also personal, depression. I kept upping my workload, hoping I’d find that missing spark in the next job – that moment when you hear the brief and get really excited about the potential – but of course, with even less time to do anything, it just got worse and worse. Also, working in a scientific research company, it’s really hard to communicate any kind of user research unless it’s published or attached to a PhD. My attempts at talking their language fell on hard ground, and I found that leveraging creativity got me way more traction.

So I dropped a whole bunch of projects to focus on one large one (as well as manage and grow a design team).

Meanwhile…

A weird series of events occurred. Sitting in my department director’s office, where I have sat many days each week, in the same chair, I spotted for the first time “Design Driven Innovation” by Roberto Verganti, and asked to borrow it. “Yes!” he said, “Tell me what you think, I dunno about it.”

I started reading it, and after just the first chapter it all clicked totally into place. I finally felt permission to be the creative-leaning UX designer I am, using UCD activities as well. I felt validated that I deep-think, work immersively and reframe, because there is precedent for it. I only have to adjust my skills slightly, not re-learn extensively, and can now refer to an established document to back up my approaches.

A few days later, the head of the Machine Learning research group, who is very encouraging of UX and who graciously shares his ideas with me, sent me an email suggesting I read a book, which he had found an electronic copy of and attached for me. Same book.

Late that week, I was involved in an experimental workshop, hosted at our lab, which challenged (successfully, I’ll add) the traditional way Government would develop a particular digital solution. After the first day I went home and decided to step away from the entire day’s work and think about the “meaning” of the work we were doing. How humans, as community and messy creatures, might handle the issue in a non-technical way. How geographical information and community updates are linked, and how to get away from bureaucratic procedure and the feeling of surveillance by “big brother” governmental mindsets. (Unfortunately I can’t share the details in full.)

I pitched the idea to the organisers, and the next day we created a splinter group to examine and create a pitch for the new idea. The reframing and alignment of human meaning to an incredibly boring and laborious task was immediately taken up with excitement when I presented it to the senior executives in the room, and it created quite a buzz around the potential. The two preceding solutions pitched, which we had all worked on, were extremely well considered and quite achievable, yet they were met with challenging questions and a bit less enthusiasm.

I’ve used this approach many times, not knowing there was a name for it, in most of my work. The times it’s failed are when I am unfamiliar with the domain, when I’ve relied too much on asking users what they want/need, and when I am unclear of the meaningfulness and hoped someone could provide this for me (either as their vision or from research). When I redesigned iView in 2010, I used this approach. It tested well and had incredible uptake. (Since then it has been redesigned.)

Approaching digital solutions with the mindset of an artist is really freeing. It is why, in my hiring and building out the UX team at NICTA, I look for people who have non-performance-type creative and artistic pursuits outside of work. Ego can get in the way with performance artists, while solo creative pursuits are more suited to deep thinking and exploration.

In deep research-driven tech, I find the best starting point is examining and structuring a proposed workflow that makes sense using the tech and data, then observing the actual operators and beneficiaries of the current tech workflow practices and toolchains. From there we can “imagineer” potential solutions to then test against. It’s really hard to interview users about what they want and/or need in emerging, deliberately disruptive tech: they respond with conventional mindsets and speak in conventional solutions. I think this can “dumb down” the final results, which as we all know suffer enough compromises as it is. Using a design-driven approach frees us from those limitations and steps back to behavioural observations.

This now leads back to software no longer being a tool but an ecosystem. Read more here: Software Isn’t A Tool

 

Comment to blog by Dan Turner

Boxes and Arrows won’t let me post (I get stuck in a duplicate-post error message loop). Here’s the article: http://boxesandarrows.com/we-dont-research-we-build. Really good to read, and I would like to contribute to the conversation, so my response is below 🙂

I’ve written a few blogs on this topic also, so won’t reiterate those same ideas here:

http://hilarycinis.wordpress.com/2013/10/01/ux-advice-for-start-ups-especially-in-emerging-technology/

http://hilarycinis.wordpress.com/2013/10/10/171/

http://hilarycinis.wordpress.com/2013/10/30/ux-activities-in-australian-startups-tech23/

I have many strong feelings on this topic, and the startups themselves are only a part of the machine.

VCs aren’t asking for UX evidence (in Australia, anyway) and so are assuming it’s included in the business and marketing plans.

The startups are often very confused about how to talk to customers and don’t understand that they will have a range of needs from multiple user types. Blank and Ries are great reads, but I also feel they have repackaged user experience work, where it could sound like we are nagging about stuff the startups feel they are already doing. I often go to great lengths to unpack the segments in the BCM where UX fits, and how extensive these activities are, in order to get a clear picture.

I have quite strong feelings about marketing strategies having too large an influence in this conversation. Marketing comes later, when the business knows its product or service and how it fits in with people’s lives. It’s totally arse-about.

I also suggest that business school educators start to look at user/customer experience seriously as part of the curriculum. I find it very difficult to get traction in conversations with business mentors about how early UX activities can assist in selecting a direction with more confidence, rather than setting up a business around a feature or a product and hoping for the best. The jargon used obscures the pain that startups can experience – pivoting and the culture of failure are nice terms for very difficult periods of time.

I see many similarities between startup VC activities and the entertainment industry funding machine.

We know that UX isn’t a magic wand to ensure success, but when added to domain expertise and customer/user feedback it can add structure and assist with decision making when there are too many unknowns.

Maybe incubators need a UX person on staff full time to assist across the teams. I work with Incubate doing this in Sydney – not full time, but I run a workshop and follow up with each round they do – and I have found it really educational, and I get some good feedback. I guess the proof is in the success of each business.