Inspiring the next generation of responsible technologists and changemakers
Find the curated resources, podcast, and action items related to this livestream HERE.
Written by Nina Joshi
As emerging technology continues to play a profound role in our society, how can we inspire the next generation of responsible technologists and changemakers? Rumman Chowdhury (Responsible AI Lead at Accenture) and Yoav Schlesinger (Principal, Ethical AI Practice at Salesforce) discuss their non-linear paths to the Responsible Tech space and opportunities for the next generation to engage with the community.
Key Takeaways:
Diverse backgrounds make the Responsible Tech community stronger. The Responsible Tech industry can seem heavily gated, or even unattainable, for non-technologists, but Rumman and Yoav speak to the need for multi-disciplinary talents in this vast and diverse field. Yoav describes how the Responsible Tech community is a multi-faceted industry that touches on so many other disciplines (environment, urban planning, media, etc.) that it is easy to find a niche. For those interested in Responsible Tech, Rumman suggests that you find ways that Responsible Tech complements your unique background and skill set.
“Multi-disciplinary backgrounds are the signs of a good and thriving industry” - Rumman Chowdhury
The Responsible Tech community needs social problem solvers in addition to technical problem solvers. When speaking about the different areas of influence within this industry, Yoav outlines a few key areas of opportunity for individuals to get involved. He speaks to the need for informed policymakers who understand the impact of technology and for advocacy leaders in the civil or non-profit sectors. Yoav also describes the need for individuals who can problem-solve around essential sectors (housing, sustainability, energy, work, etc.), as these are directly influenced by technology. One of the crucial goals of Responsible Tech is to recognize the negative effects of technology and remediate these issues. Yoav explains that because technology is interwoven into so many aspects of our lives, there is an infinite number of areas that will need guidance.
Mentorship is a great way to break into the Responsible Tech space. Both Rumman and Yoav attest to the power of mentorship to form connections within the community. When reaching out to a mentor, Rumman suggests the following steps: (1) proactively reach out to a potential mentor via email or direct messaging on social media, (2) in the initial message, include some background information about yourself and two or three topics that you would like this individual to talk to you about, and (3) email them again: Rumman urges individuals to follow up with their mentor (and promises it won't be considered annoying!). Yoav also suggests attending virtual events for opportunities to network and find a potential mentor.
For individuals in the Responsible Tech community, the word “ethical” has different meanings. Yoav explains how the term “ethical” can be distracting because organizations are not training their Responsible Tech employees in ethics theory and moral philosophy; instead, employees are trained to spot risks and prevent their negative consequences. When describing the difference between personal values and the values of an organization, Rumman and Yoav explain that the role of an individual in the ethics space is not to impose their own moral frameworks on the organization. Rumman also explains the necessity of having strong leaders to support you when dealing with ethical nuances on the job.
If you are interested in learning more about Responsible Tech, check out our free Responsible Tech Guide at ResponsibleTechGuide.com. The guide features industry leaders, influencers, and changemakers on their forays into tackling issues including misinformation, algorithmic bias, facial recognition, hate speech, harassment, and many others. It also aims to serve as a strong overview for students and other professionals to open doors into the responsible tech movement, regardless of experience.
Questions from the Community:
What is your academic and career background?
Can you describe what you do in your position?
Do you offer positions for graduates focusing on Digital and AI Ethics?
How can I have a voice in the ethical tech space if I'm not technical but am passionate about tech that amplifies the best rather than the worst of our humanity?
How do you and your team slot into your company's operations?
I've seen different and interesting models come up in responsible and critical technologist roles like these; at times these teams function as in-house consultants, and at other times as implementers that own specific products/lines of effort. Does your team also work on internal culture change? If so, how?
Most skill development requires practice, which implies trial and LOTS of error. What kinds of "practice" do you advocate for would-be technologists and would-be changemakers? Where are the equivalents of practice fields, test kitchens, or sandboxes where those people can practice (and err) without actually harming real people, communities, or ecosystems?
What is the role of whistleblowers in promoting ethical practices by the AI industry?
Several prominent tech whistleblowers -- Tyler Shultz & Erika Cheung (Theranos WBs) and Dr. Jack Poulson (Google WB) -- have founded nonprofits to encourage good corporate governance & ethics in tech companies -- Ethics In Entrepreneurship (Shultz & Cheung) and Tech Inquiry (Dr. Poulson). How can we leverage such resources as EIE & Tech Inquiry? What spurred the creation of such nonprofits?
How can market forces and incentives be optimized to encourage responsible tech (and data use) as more than just a buzzword, making it a business imperative? What can writers on product teams do to help ensure responsible design?
How can we identify and scale good practices from different organizations/sectors, especially to the benefit of smaller or medium-sized enterprises?
Do you see a role for international collaboration? What would that look like?
How can we ensure that human thriving is not threatened by deployed technology solutions? In particular, how do we prevent individuals from suffering inappropriate or unintended adverse effects of algorithmic recommendations or decisions?
Ethics of conversational voice AI - are they friends, colleagues, or supervisors? Do they have an obligation to follow social norms? What is your opinion on the growth of Conversational Voice AI during COVID? Do innate Voice AI attributes (data value + intent) change ethical approaches?
Which of the many AI ethics frameworks do you consider to be the most promising? Or are all of these generally similar and supportive of each other?
In the design, development, and implementation of various technologies, what are some meaningful mechanisms for engaging with communities who are often adversely impacted by these technologies that ensure more responsible tech development? Perhaps beyond a set of guidelines or a user-centered design frame?
Many trained technologists immigrate to work in North America without any humanist training or awareness of the ethical, moral, and philosophical implications of their work, without the habit of questioning their own biases, or without enough training to test whether the technologies they build might unintentionally cause harm. How do you propose to lessen this risk? What can corporations do to check it?