Changemakers: Cennydd Bowles on the Ethical Responsibility of Designers

Cennydd Bowles is a visionary leader in ethical product, UX, and futures design. The author of two books (Future Ethics and Undercover User Experience Design) and former design manager at Twitter UK, he now spends his time speaking and leading training workshops on the ethics of emerging technology at organizations like Microsoft, Stanford, Google, IBM, and Capital One. Cennydd spoke with Andrew from All Tech is Human about the responsibility that designers have in the creation of ethical technologies. 


Andrew: What’s your earliest memory of digital technology?

Cennydd: This is the question that reveals everyone’s age, right? For me, the ZX Spectrum, which my parents kindly bought me one Christmas, when I was maybe 5 or 6. Many of my early years were spent copying idiosyncratic BASIC from magazines and textbooks: two hours of detailed transcription resulting in an arcane syntax error.

How and why did you first become passionate about the need to think critically and thoughtfully about technology? 

During my final year at Twitter. I joined in 2012, when the industry and company alike were on an upswing. The story was that technology had emerged as a triumph of the Arab Spring, a means for citizens to uproot the hierarchies and autocracies of the past. Techno-utopianism was the default perspective throughout the industry, even more so than today.

Of course, things didn’t turn out so rosily. Social media did indeed uproot some hierarchies, but the ensuing entropic chaos is no better: the powerful and ill-intentioned have found ways to exploit it in dangerous ways.

I was also saddened by the company’s shortcomings on abuse, particularly during Gamergate. Gamergate was a trolling campaign in 2014–15 aimed at women in the gaming industry, conducted largely on Twitter, that essentially forged an abuse blueprint for what became the alt-right. It struck me that we – and the industry at large – were failing in our responsibilities toward vulnerable users.

I left Twitter in 2015, and then pursued a career of sorts in the field of ethical technology.

Is it important for the general population to attain a certain level of “AI literacy” in order to make sense of a world increasingly driven by algorithms? If so, how can people go about this? 

No. It’s up to us to make technologies that people can interrogate and understand.

The common counter is that certain machine learning systems, particularly deep learning, are mathematically impenetrable: it’s no good exposing the decision architecture, since all we end up with is a series of neuron weightings, which no human could claim to make sense of. If that’s truly the case (and there are many people working on explainable AI / XAI who disagree), then in my view we shouldn’t use these systems for decisions that impact human freedom or flourishing, such as sentencing or hiring choices.

It seems that designers are in many cases at the forefront of the tech ethics movement. Why do you think this is the case?

All design involves futures, and all futures involve ethics. When you design, you’re making a claim about how we should interact with a future object, and by extension, how we should interact with each other. There’s an inherent, inescapable normative angle here: design can’t be separated from its ethical implications.

Also, at the risk of generalisation, I find designers typically have broader motives than profit. The designers I like are those who want to make the world less random and more humane. For these folks, companies are necessary vehicles to deliver on that promise, and profit is a way to sustain that work; smart companies can make good money from hiring these designers to anticipate the future correctly. But I admire the honest arrogance of aiming for something higher, and it makes more fertile ground for the ethical discussion to germinate.

When a technology produces unintended ethical consequences, does it often boil down to a design problem? What part of the responsibility should be borne by the designers of the technology vs. other stakeholders?

It’s often a design problem, yes. But technologies have an annoying habit of distributing or diluting moral responsibility. The products we build today involve so many stacks, layers, and protocols – hardware architectures, operating systems, platforms, libraries – that it’s sometimes hard to say exactly who’s responsible for what outcome. It leads to an environment of finger-pointing and shirked responsibility.

The way out of this is to take responsibility regardless. In my mind, whichever company is closest to the user is the one ultimately delivering the service, so this team is accountable for the ethics of its products. Your app is only as ethical as the partners who handle your customers’ data, or the authorities who peek through the backdoors you built for them.

Tough gig, but given how well-compensated and powerful our industry is, it damn well should be a tough gig. Time to justify our status.



You can keep up with Cennydd on Twitter and learn more about his work at cennydd.com. Be sure to pick up a copy of Future Ethics as well. 
