Changemakers: Dan Wu on Technology & Social Inequality
Dan Wu is a Privacy Counsel & Legal Engineer at Immuta, an automated data governance platform for analytics. He’s advised Fortune 500 companies, governments, and startups on ethical & agile data strategies and has advocated for data ethics and inclusive urban innovation in TechCrunch, Harvard Business Review, and Bloomberg. Andrew from All Tech is Human sat down with Dan to discuss tech innovation, social inequality, and AI ethics.
Andrew: How did you first get into the tech space?
Dan: If I can get into tech, you can. I came into it from a very unlikely path: a PhD in Sociology and Social Policy at the Harvard Kennedy School.
Yet throughout grad school, I was most excited by holistic approaches merging law (I also pursued a JD at Harvard Law because I’m a glutton for educational punishment, ha), data/technology, and community organizing. I began exploring how these cross-sectoral approaches might yield systemic solutions to wicked social problems like housing affordability.
In service of that goal, at Harvard’s Innovation Lab, I began to experiment with “housing tech” startups. One became a finalist for the President’s Innovation Challenge and the other won a national competition sponsored by Toyota. Both of my products focused on helping renters pool their resources and gain economies of scale to bring down rents.
Ultimately, due to my interest in playing a larger role in product and promoting “safer” innovation, I jumped into a role at Immuta. As privacy counsel and legal engineer, I help research, advise, and educate our customer and product teams on data and AI regulations. By combining law and tech, I help companies nimbly navigate regulatory environments and innovate safely — bridging two worlds that rarely speak to each other.
Outside of my day-to-day work, I help founders, VCs, and policy advocates with housing innovation. I detail hundreds of examples and some of my analysis in my recent article in TechCrunch.
My big, hairy, audacious goal is to help build a world where everyone can meet their basic needs, like housing and early-childhood education, regardless of their circumstances. That way, more people can spend time with loved ones, solve critical problems, and enhance our world’s collective ability to make smarter decisions.
Tell us a bit more about your work at Immuta.
Immuta lets companies share data safely and quickly to accelerate ethical analytics and protect trust. Without writing any code, companies can use our platform to restrict who can access data, apply privacy-protecting algorithms, enforce purpose restrictions, and audit how data is used.
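To make ideas like purpose restrictions, masking, and auditing concrete, here is a minimal sketch of how a purpose-based access check might work in principle. This is a hypothetical illustration in Python, not Immuta’s actual API; the Policy object, fetch_rows function, and masking rule are all invented for the example.

```python
# A minimal, hypothetical sketch of purpose-based access control with
# column masking and auditing. Invented for illustration; this is NOT
# Immuta's actual API or data model.
from dataclasses import dataclass


@dataclass
class Policy:
    allowed_purposes: set[str]  # purposes the data may be used for
    masked_columns: set[str]    # columns to redact before sharing


# Example policy: this dataset may only be queried for fraud detection,
# and direct identifiers are masked on the way out.
POLICY = Policy(
    allowed_purposes={"fraud_detection"},
    masked_columns={"name", "ssn"},
)


def audit_log(purpose: str) -> None:
    """Record every access attempt (stdout here; a real system would persist it)."""
    print(f"AUDIT: access requested for purpose={purpose!r}")


def fetch_rows(purpose: str, rows: list[dict]) -> list[dict]:
    """Return rows only if the stated purpose is allowed, masking
    sensitive columns; otherwise raise, with the attempt still audited."""
    audit_log(purpose)
    if purpose not in POLICY.allowed_purposes:
        raise PermissionError(f"purpose {purpose!r} is not permitted")
    return [
        {col: ("***" if col in POLICY.masked_columns else val)
         for col, val in row.items()}
        for row in rows
    ]


if __name__ == "__main__":
    data = [{"name": "Ada", "ssn": "123-45-6789", "amount": 250.0}]
    print(fetch_rows("fraud_detection", data))  # masked rows returned
    # fetch_rows("marketing", data)             # would raise PermissionError
```

The design point this sketch gestures at is that the policy lives in a single declarative object, so a governance team can change what is permitted without touching the analytics code that requests the data.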
Given this topical focus, I spend my time thinking about how we can build tools that democratize and operationalize data ethics and governance, so companies are actually walking the walk. I also think a lot about making sure these tools are used to reduce, not accelerate, pre-existing social inequalities and power imbalances, and about creating values-driven data cultures that integrate ethical innovation into corporate missions.
Let’s talk about social inequality and tech. Are the technological transformations we’re experiencing today fundamentally different from those of the past in how they’ll affect the distribution of wealth or opportunity in society? If so, how?
As MIT’s Erik Brynjolfsson argues, technological advances have favored highly skilled, educated workers, which increases inequality. What’s different now is likely the speed and intensity at which inequality is growing.
A few factors come to mind in understanding this problem, the first being inequitable impact. Lower-skill, routine jobs are far more likely to be displaced by automation. At the same time, wealth growth has been rapid and has benefited a shrinking set of people; Brynjolfsson describes this as the “winner-take-all” economy. While more emerging economies around the world are entering the middle class, the top 1% still owns half of the world’s wealth, one of the highest rates of wealth inequality in recent memory.
The second factor is the inequitable distribution of opportunities. High-skilled entrepreneurs and workers are more likely to come from families whose parents were educated, earned more money, and/or lived in better zip codes. Unfortunately, disadvantages start even before one is born. Historical injustices, such as slavery, redlining, and other inequitable practices, mean race, gender, and class are correlated with key predictors of advantage.
Finally, we have to think about inequitable power relations. Political scientists from Stanford to Princeton argue that those with the most opportunities tend to “hoard” or “rig” opportunities in their favor. For instance, the wealthy have more influence over policy through campaign contributions, lobbying, and other expensive tools, while the voices of the poor and middle class are much less likely to be heard. Patent, copyright, antitrust, corporate governance, and macroeconomic laws tend to “redistribute upward,” bolstering the market power of wealthy or elite actors.
In the end, emerging technologies like artificial intelligence intensify all three of these dynamics. Consider, for instance, “surveillance capitalism”: the use of technologies to observe, predict, and modify behavior in the interests of those with the resources to wield them. Furthermore, algorithms that manage workers, basic services, and other opportunity-defining institutions, such as criminal sentencing, have already discriminated against vulnerable communities, reinforcing historical power imbalances. Finally, as a variety of critics have asked, who gets to decide which problems are solved and how we solve them? For more on these themes, read AI Now’s powerful report.
As a result, technology is extremely unlikely to close disparities by itself and will, more likely than not, widen inequality. But paired with new political coalitions, laws, and institutions, innovation, including the technological kind, can certainly hasten equity. When the marginalized, the middle class, and other allies co-create new ideas, these interventions build collective power and bolster innovation and autonomy. Some examples include Brazil’s participatory budgeting program, community land trusts, and cooperatives.
What are some of the most important principles for technologists to keep in mind when it comes to AI ethics?
Before I begin, it’s worth considering why these principles, and, more importantly, enforceable rules and accountability, are critical. The reason is summed up by the saying “the road to hell is paved with good intentions.” Case in point: denying treatment to African Americans suffering from syphilis because of the government’s desire to innovate treatments for that very disease. Even the best interventions can harm stakeholder trust if they are not done ethically.
There are three types of principles to consider here. The first is what’s stated in key emerging data regulations like the EU’s General Data Protection Regulation and frameworks like data protection by design. We’ve started building consensus around the importance of things like purpose limitation, minimisation, transparency, confidentiality, and accountability. Check out the UK’s data protection authority for more information about this.
The second is what’s vaguely called “ethics,” which is a discussion of what should be done outside of stated rules to protect and benefit stakeholders. This includes not just protecting stakeholders (nonmaleficence), but working with them to ameliorate harm (justice) and empowering them (autonomy and beneficence), regardless of what the law says.
Strongly connected to ethics are principles about how ethics and social-impact programs enhance strategic competitiveness and trust. For these to succeed, the C-suite must start by taking ethics and compliance seriously. Even the Boston Consulting Group points to trust and privacy as the foundation for “winning” the next decade of wild technological disruption. Purpose-led missions, like Patagonia’s, distinguish some of the most innovative companies. Furthermore, trust builds stakeholder loyalty, which in turn yields better data, insights, products, and employee productivity.
The third is a critical response to dominant data and ethics practices, such as feminist and postcolonial data studies. The Feminist Data Manifest-no pushes technologists to investigate power relations, processes, and history. Practices include the critical interrogation and righting of “neoliberal logics,” hierarchy (versus co-constituted relationships), and historical and current injustices against vulnerable communities, often tied to “race, gender, sexuality, class, disability, nationality, and other forms of embodied difference.”
Technologists should never stop learning. While I give a landscape of resources and questions above, new harms and opportunities emerge every day.
Much like doctors, who have to keep up to date on the latest medical research, technologists have a responsibility to keep learning, diversify their relationships, and challenge their assumptions. Relatedly, check out Pivot for Humanity, a movement to professionalize technologists and give tech ethics more teeth.
You can connect with Dan on LinkedIn and can keep up with his work by signing up for his newsletter on cities, ethics, and innovation.