Changemakers: Pearlé Nwaezeigwé on Forming Good Tech Habits
Pearlé Nwaezeigwé is an award-winning Nigerian attorney specializing in technology, user rights, and artificial intelligence. Her projects have included “Generation AI,” a collaboration with UNICEF and the World Economic Forum, and Beyond Le Code, a podcast about the effects of emerging technologies on marginalized groups and on society as a whole. Andrew from All Tech is Human sat down with Pearlé to discuss good personal data habits and diversity & inclusion in tech.
Andrew: How did you first get into the tech space?
Pearlé: During my time in grad school at UC Berkeley Law, I participated in a research project with the Human Rights Center called Generation AI. This project was a collaboration with UNICEF and the World Economic Forum to analyze the impact new technologies have on child rights. My case study was “Smart Toys,” where I uncovered the privacy issues these toys pose. One example was CloudPets, where a hacker leaked the personally identifiable information of more than 800,000 parents and children.
How does your upbringing inform how you understand your relationship with technology today?
Back in college, I studied human rights, and I have always been passionate about traditional issues like migration, gender inequality, and poverty reduction. I was a huge fan of Model UN, which gave me a global perspective and taught me to think beyond my country. Applying that perspective to new technologies was an easy transition. Researching and speaking out about the protection of digital rights, such as free expression and the right to privacy, narrowed my focus in my pursuit of becoming a human rights expert.
What are some good personal data habits that people should implement in their day-to-day lives?
With the passage of laws like the GDPR and the CCPA, data ownership is being heavily promoted. I wrote an article on the CCPA called “Personal Data is Deeply Personal,” which highlights practical tips for users to protect their data online. There are easy things, like making sure the sites you browse use “https” rather than “http,” since HTTPS connections are encrypted and more secure. Or, when a pop-up prompts you about data collection or cookies, take the time to read it and opt out.
Another good practice is to always use strong passwords. Consider using a password manager that can generate passwords longer than 15 characters. Multi-factor authentication, where a code is sent to your phone to verify your identity after you sign in, is a good way to keep your accounts secure. Also, don’t underestimate the importance of updating your operating system and apps: new updates provide security patches that protect your phone or laptop from the latest vulnerabilities.
Do tech companies like Facebook and Twitter have a responsibility to police misinformation on their platforms? If so, how should they approach this? Is it a problem of design, of incentives, or something else?
Yes, I believe these social media companies have a level of responsibility to their users to prevent the spread of misinformation. The Section 230 “safe harbor” argument is rather stale, and the provision should be revoked. I am currently researching the “duty of care” principle, which could be imposed on social media companies. Prominent scholars have suggested that social media platforms be regarded as public spaces where users congregate to connect and gain information. Many laws place a level of responsibility on the operators of such spaces to ensure the safety of their customers, and those same laws could be applied to social media platforms.
What can people do to ensure that they’re sharing true content online, and not spreading misinformation?
This is something I’ve discussed before on my podcast, “Beyond Le Code.” There are a number of practical ways to spot fake news: rely on credible sources, read beyond the headline to make sure you’re not just consuming clickbait, and fact-check a story if it’s more than six months old. And remember to check the author; some authors are just internet trolls impersonating a credible source.
What principles do you use to guide your technology consumption & use in your own life?
Putting a clock on my screen time. I wanted to be on social media and news platforms to stay aware of pressing issues, but I realized these platforms were making me less productive.
I decided to be more intentional. I wake up at six a.m. each day and make sure my Wi-Fi is disconnected to prevent notification distractions. I spend an hour meditating and praying, setting my goals for the day.
I’m currently reading Digital Minimalism by Cal Newport, which provides insights on why we are hooked on our gadgets and how to stay off them except when necessary. Now when I’m on the bus or train, I don’t reach for my phone to keep myself occupied. Instead, I carry a book or magazine to unplug. I also create Lists on my Twitter profile so that I don’t have to scroll through my timeline perpetually. Instead, I go through specific tweets to get the news and updates I need to stay informed.
How can people recognize when they might be addicted to a particular technology or product?
I check my screen-time app to see how many hours I spend on a platform. I used to clock over two hours on Twitter daily! I realized that this was my addiction, and I’ve since set a limit of forty-five minutes to an hour on the app.
Why do tech companies continue to struggle with diversity & inclusion?
Tech companies continue to struggle with diversity because they believe that qualified people of color do not exist. Fun fact: we actually do.
Further, many tech companies promote D&I for the sake of promotion. In my experience interviewing with big tech, they like to show that they have people of color in the pipeline, but they don’t end up hiring them.
Thanks to the media’s portrayal of people of color, many managers bring pre-existing biases and preconceived notions into the workplace. I have heard personal stories of candidates changing their names to seem “white” just to get past the resume screen. Many times, I find myself ticking “don’t want to identify my race” because I fear being disqualified. Further, I do not want to be considered only to fill a quota of “diversity hires.”
Are you optimistic that things can get better?
My journey in the tech policy space has taught me that technology is neither good nor bad; its use becomes a problem when it infringes on human rights or discriminates against under-represented groups. I feel pretty confident in this era of awakening. Users are becoming more aware of how their data is being shared. Governments are stepping up to introduce regulations in what once seemed to be an unregulated space. Personally, it has been a continued learning experience, reaching out to like-minded professionals and working in groups to effect change not only for this generation, but for future generations.
You can keep up with Pearlé’s work by connecting with her on Twitter and LinkedIn and subscribing to her podcast, Beyond Le Code.