Changemakers: Reid Blackman on Ethical Risk and Why We Need Philosophers in Tech
Reid Blackman is an academic philosopher turned ethics consultant and the founder of Virtue, an ethics consultancy. He has taught philosophy at Northwestern University, UT Austin, UNC Chapel Hill, and Colgate University, and today serves as a Senior Advisor and Founding Member of EY’s AI Advisory Board. Andrew from All Tech is Human spoke with Reid about how companies can mitigate ethical risk, the need for academically trained ethicists in tech, and more.
Andrew: You began your career in academia as a philosophy professor, and have shifted over time to working as a consultant and advisor to corporations, governments, and NGOs. What prompted you to make this transition from academia into the private sector?
Reid: A variety of factors. First, I saw there were problems I could solve. Businesses were starting to get beaten up on social media and in the news for ethical misconduct: #DeleteFacebook, #DeleteUber, #BoycottStarbucks. I thought, “They really need to get their ethical houses in order; their reputations are on the line. But they don’t know how to think about ethics and put it into action; I can help them with that.” The other thing I saw was engineers ringing alarm bells about the ethical implications of artificial intelligence. I knew I could help them as well. So in short, after seeing present and future problems where failing to solve them would cost businesses real money through reputational risk (both among consumers and among employees who no longer want to associate with companies that prioritize profit over people), I decided there was a market for ethics consulting, or as I often call it, ethical risk consulting.
Second, I’m in love with philosophy, but I’m not in love with academia. As an academic philosopher I spent an inordinate amount of time writing pieces for professional journals, where acceptance rates are around 5-10%. What that means is that you spend 10% of your time getting the core idea of the paper right and the remaining 90% playing defense against the blind reviewers of your submitted manuscript. That’s just no fun.
Lastly, I was teaching at Colgate University, which is beautiful and in the middle of nowhere. My wife’s career is in Manhattan, we’re both city people, and with a second kid on the way we couldn’t do the back and forth anymore. So, I thought, why not start an ethics consultancy in NYC?!
You launched Virtue about two years ago to help companies mitigate ethical risk. Why did you choose to focus on ethical risk in particular? Is this the best lens through which tech companies should view ethics?
It’s the best in the sense that it’s the one most likely to lead to a sale and thus impact. You can’t sell “you’ll sleep better at night” in a consistent way. Businesses have bottom lines to promote and protect, and if you can’t make a business case for a product or service, you’ll have a hell of a time selling it. Ethics in particular is difficult because corporations have historically regarded it as fluffy or naive. But if you can package it in wrapping with which they're familiar and have a tradition of spending money on - brand or reputational risk - then ethics is suddenly relevant to them.
Can you give us some examples of ethical challenges that you’ve helped organizations work through?
Some of those challenges have been less about product and more about culture. For instance, about a year ago I started working with a startup of 17 people, only 3 of whom were women. There had been some sexually explicit comments, the culture was highly competitive and bro-y, and the CEO was worried there could be a lawsuit if this continued. It was particularly pressing to him because he knew they were about to experience hockey-stick growth; one year later they were at nearly 70 people. So he needed help creating an ethically sound culture. I worked with them to define their ethical values as concretely as possible and then to operationalize those values; we turned them into process and practice. We’ve been very happy with the results. In fact, the Wall Street Journal profiled the work we did together.
For other companies it has been about how to develop and/or deploy their technology. I worked with Biohax, for example, which is a company in Sweden that does micro-chipping in people’s hands. Picture something about the size of a grain of rice implanted in the fleshy part of your hand between your thumb and index finger that you can use like a swipe card to, say, open doors, pay for the metro, or even store medical information that hospitals can access in case of emergency. To a lot of people that’s really scary tech. But Biohax wants to earn the trust of potential clients and government regulators (notice the bottom line risk!).
I worked with them to create a deployment program that minimizes the ethical risks associated with these implants. For instance, the company will not work with an employer who requires or even pressures its employees to get chipped; once implanted, the chip is the property of the employee, not the employer, even if the employer paid for it; and potential clients must take part in an “inboarding” process to ensure their consent is as informed as we can make it. Employees wear the chip under a sticker placed on their hand for five days, which simulates the experience of having an implant.
Lastly, I work with Ernst & Young to advise them on how they can help their clients vet their AI products for ethical safety.
You’ve said that it’s important for companies to recognize that serious ethical analysis is best handled by trained ethicists. Others advocate for the democratization of ethical thinking, arguing that engineers, designers, and others should be equipped to conduct ethical analysis themselves. What do you think is the right balance here?
I think we need trained ethicists. While we can create guidelines for engineers around, for instance, where they can source their data, what data they are not allowed to access, what data they can or cannot combine with other data, how to think about potential unintended consequences, how to think about algorithmic bias, and so on, there will always be gray areas - some light gray, some dark gray. Properly trained ethicists - and here I mean people with at least an M.A. in philosophy, if not a Ph.D. - can help companies work their way through those gray areas to come to a responsible conclusion. It’s worth drawing a comparison here to medical ethics, where there are people who are trained bioethicists.
What are some areas of ethical risk that you think are currently under-appreciated in the tech industry, or some important ethical questions that you think aren’t being asked enough?
I suppose my biggest big-picture complaint is about how people think about ethics. It’s very whack-a-mole. People see algorithmic bias and say, “Bias is no good - stop that!” And then they realize they can’t explain outputs and say, “Stop unexplainability too!” This is not a systematic approach to identifying and mitigating ethical risks; it’s whatever is hot right now. But we need a systematic approach, because that’s the only way we’ll get (more or less) exhaustive risk mitigation. This, incidentally, is what philosophers are very good at; we’re experts at seeing holes in systems and devising either patches for those holes or better systems. For instance, I would say that algorithmic bias falls under the larger umbrella of fairness or justice. But if you focus only on bias and don’t think more broadly about justice in general, you’ll likely fail to realize that your product will aggravate existing wealth inequalities.
You can connect with Reid on LinkedIn and Twitter and can learn more about his work at virtueconsultants.com.