CHANGEMAKERS: Will Griffin on Law, Ethics, and Creating an Enterprise-Scale Ethics Framework

Will Griffin is the Chief Ethics Officer of Hypergiant Industries, where he has developed and implemented a uniquely effective company-wide AI ethics framework. His past entrepreneurial work has earned him the prestigious IAB/Brandweek Silver Medal for Innovation and the culturally significant NAACP Image Award. His career has taken him from investment banking to consulting to media to tech, and he holds a JD from Harvard Law School. Will spoke with Andrew from All Tech is Human about the relationship between law and ethics and what he’s learned about implementing ethics frameworks at scale.

Andrew: You hold a JD from Harvard Law School and have said that an understanding of law is crucial in your work as Chief Ethics Officer. What is the relationship between law and ethics in the context of technology?

William: Since there are not many laws written on AI, I was really discussing the value of organizations understanding the legal reasoning process as they develop and deploy AI use cases. This is where I believe empowered General Counsels, Corporate Compliance, and Public Policy professionals could perform an important function for their companies and society-at-large. The core question is this: if policy-makers understood and knew the risk of a specific AI use case in the same way technology developers and designers did, what laws would they write to protect society? Companies should then implement a private governance regime that conforms to this standard.

The aforementioned legal reasoning process would look like this. First, identify the issue: How is the technology developed and deployed in this AI use case? Then comes the rule: What existing and potential legal issues does the AI use case create? Next, gather the facts: What are the risks created for stakeholders — including society — in the deployment of this AI use case? Then conduct your analysis: What rule/law should exist to manage these risks? Finally, draw a conclusion: how do we design the AI use case to survive this rule/law if it were implemented?

In reality, law plays a very limited role in emerging technology workflows because technological innovation is far ahead of policy-makers’ ability to understand the underlying tech and its associated risks. Sadly, most companies exploit this lack of regulation to maximize profits and minimize management of these risks. AI today is where derivatives and other innovative financial instruments like mortgage-backed securities were in the 1990s. At the beginning of my career I worked in the Structured Finance Group at Goldman Sachs. Our group was composed of brilliant financial engineers who were creating an entirely new class of securities that the policymakers in Congress and regulators at the SEC never truly understood.

The first mortgage-backed securities were issued in 1968 by Ginnie Mae, Fannie Mae issued its first mortgage-backed security in 1981, and the market was issuing trillions of dollars’ worth of securities a year by the 1990s. Congress did not pass any meaningful mortgage-backed security regulation until the Dodd-Frank Act in 2010. As a result, trillions of dollars of economic value were created and destroyed in a largely unregulated environment for more than forty years! Since the financial engineers of Wall Street understood the function and risks of mortgage-backed securities better than the regulators did, an information asymmetry was created, and the general public and policy-makers relied on Wall Street to manage the risks associated with these financial instruments. The industry-wide lack of ethics and poor management of risk in the market resulted in the 2008 Financial Crisis and led to the regulation of these securities via Dodd-Frank.

My goal is to help Hypergiant, our clients, and the emerging tech industry internalize our obligations to society and avoid the disastrous ethical lapses that created the financial crisis.

Your ethical framework at Hypergiant is heavily influenced by Immanuel Kant’s ethical philosophy. Why did you gravitate towards Kantian ethics as opposed to other ethical systems?

Our co-founder and CEO, Ben Lamm, has a broad vision of “…delivering on the future we were promised.” In a nutshell, that means developing technology solutions that attempt to solve some of humanity’s biggest problems (e.g., climate change, global pandemics). We wholeheartedly believe AI has a central role to play in these solutions and are excited to be a part of it. Moreover, in an industry where the ethical and legal rules of the road are not yet written, we cannot look externally and wait for others to give guidance on how to conduct ourselves. We have to be leaders in forging the road ahead. This causes us to look internally, focus on our values and company character, and define our duties to society-at-large.

Kant’s deontological framework is a closed system that focuses on the individual agent and does not rely on external factors to determine right and wrong. This approach helps us keep ethics at the heart of the company and informs everything we do. We call this approach Top of Mind Ethics (TOME). Its elements are contained in a simple heuristic that allows everyone involved in designing and developing solutions to grapple with the hard questions for each AI use case.

1.    Goodwill. Is the intent of this AI use case positive?

2.    Categorical Imperative. If everyone in our company, every company in industry, and every industry in the world deployed AI in this way, what would the world look like?

3.    Law of Humanity. Is this use case designed to benefit people, or are people being used as a means to an end?

This ethical reasoning process teaches our developers and designers how to think (reasoning) as opposed to what to think (compliance). We find this process makes our teams more creative and our solutions more robust, able to withstand ethical and legal scrutiny as our solutions become public and as laws are developed. Our development team has created a very useful ethical decision-making workflow tool that guides our team through the process and keeps a record of how decisions were made on each use case.
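A workflow tool like the one described could take many forms. Below is a minimal, hypothetical sketch of how the three-question TOME heuristic might be captured as a review checklist with an audit trail; all names and fields here are illustrative assumptions, not Hypergiant’s actual tool.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch only: these question keys and the review structure
# are assumptions, not the actual Hypergiant workflow tool.
TOME_QUESTIONS = {
    "goodwill": "Is the intent of this AI use case positive?",
    "categorical_imperative": (
        "If every company and industry deployed AI this way, "
        "what would the world look like?"
    ),
    "law_of_humanity": (
        "Does this use case benefit people, or are people "
        "being used as a means to an end?"
    ),
}


@dataclass
class UseCaseReview:
    """One ethics review for a single AI use case, with a decision log."""
    use_case: str
    answers: dict = field(default_factory=dict)  # key -> (passed, rationale)
    log: list = field(default_factory=list)      # timestamped audit trail

    def record(self, question: str, passed: bool, rationale: str) -> None:
        if question not in TOME_QUESTIONS:
            raise ValueError(f"unknown question: {question}")
        self.answers[question] = (passed, rationale)
        self.log.append(
            (datetime.now(timezone.utc).isoformat(), question, passed, rationale)
        )

    def approved(self) -> bool:
        # A use case proceeds only if all three questions are answered
        # and every answer passes.
        return len(self.answers) == len(TOME_QUESTIONS) and all(
            passed for passed, _ in self.answers.values()
        )


review = UseCaseReview("predictive railcar maintenance")
review.record("goodwill", True, "safer railcars for workers and riders")
review.record("categorical_imperative", True, "humans and robots work together")
review.record("law_of_humanity", True, "supports the labor force")
```

The point of a structure like this is less the pass/fail result than the log: it forces each question to be answered explicitly and preserves the rationale for later ethical or legal scrutiny.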

One of Hypergiant’s ethical principles is that technology should not use people as a means to an end. It’s common for businesses to view people as means, i.e. customers as means to profit, users as means to ad revenue. What does it look like, practically, for a business to flip this on its head? 

We could be “good,” sell all of our worldly possessions, and join a monastery, but we live in a material world and humanity depends on economic exchange to survive. We are in a business, so economic value has to be created. How we design the solutions determines our adherence to the Law of Humanity. For instance, we have a recent use case designing a predictive maintenance system for the largest provider of railcars in the world. Our team developed several different options for the client. The profit-maximization approach would, in theory, recommend replacing all maintenance people with robots, thereby reducing labor costs. This approach fails every step of our ethical model (Goodwill, Categorical Imperative, and Law of Humanity). Since ethics foreclosed the 100% robot option, it forced us to be more creative and to think more broadly about how economic value is created in the railcar industry.

How so? The public sector is one of the largest purchasers of railcars (for subways, automated people movers at airports, interstate rail networks, etc.). They have a vested interest in keeping their citizens employed (to prevent social unrest) and actually award purchase contracts based on the number of jobs created in their jurisdictions. A fully automated maintenance force would actually cause our client to lose contracts (and economic value) in the public sector. As a result, we designed a solution where robots and humans work together, resulting in safer railcars for workers and riders (the public at large), as well as supporting a strong labor force (and social stability) in the jurisdictions where we operate. Ethical reasoning allowed us to broaden our vision of economic value creation while conforming to the Law of Humanity.

What were some of the biggest challenges you faced when implementing the Top of Mind Ethics system, and how did you overcome them?

Very few tech professionals are trained in ethics, so there is a misconception that it is a drag on innovation because it adds a layer of external issues to be considered. Once they integrate TOME’s approach into their workflow, they realize it is really another tool that can make them more creative, generate a broader range of solutions, meet the needs of a diverse set of stakeholders, and future-proof their solutions against stifling regulation or outright bans (like facial recognition in some places). Finally, tech developers are human beings; they have moral values and egos, and they want to take pride in their work. Once they realize that TOME helps accomplish all of these goals, they are all in. The key is mandating it into workflows; its value will become apparent from the very first use case.

Based on your experience, what advice would you give someone looking to implement a tech ethics framework at his or her own company?   

Get the CEO (and other C-Suite decision-makers) on board and equip them to be the chief champions of ethics in the company. This will require you to educate the CEO, link the ethics framework to your company’s stated values, and illustrate how ethics is directly linked to economic value creation. I am fortunate that our CEO, Ben Lamm, is an enthusiastic champion for me and for ethics at our company. This buy-in ensures that everyone understands that the ethics framework must be implemented into workflows and baked into the products and services we create. If you cannot get C-Suite buy-in, then quit that company and go work somewhere else with better vision and stronger leadership. A company that does not implement ethical principles in AI is destined to fail because the solutions you develop will not be robust, and an ethical lapse and disaster is inevitable.


You can connect with Will on LinkedIn and Twitter.

Andrew Sears is an advisor at All Tech is Human and the founder of technovirtuism.
