Improving Social Media: Misinformation & Free Expression
The health of our information ecosystem is tied to the health of our democracy. How can we reduce misinformation and disinformation on social media platforms while ensuring that those platforms still promote the free exchange of ideas?
Watch our conversation with two leading experts in the field. We were joined by Dr. Jasmine McNealy (Associate Professor of Telecommunication at the University of Florida, Harvard Berkman Klein Center affiliate, media & law expert) and Dr. Claire Wardle (co-founder and director of First Draft, leading expert on user generated content, verification and misinformation).
This conversation is part of our monthly livestream series with All Tech Is Human and TheBridge. Recent guests have included Safiya Noble, Yael Eisenstat, Dipayan Ghosh, Sarah T. Roberts, Mutale Nkonde, Charlton McIlwain, Kate Klonick, Timnit Gebru, Meredith Broussard and many more. Check out our previous conversations on our playlist. This conversation will be moderated by David Ryan Polgar. The livestream will be recorded. In addition, a follow-up podcast and curated resources will be supplied by the Radical AI Podcast.
Here are some of the questions we received from our community:
Should social media draw the line at "is this dangerous for someone's safety"? (e.g., "the moon is hollow" = disinformation with little safety risk; "Covid is a hoax" = disinformation with high safety risk). That would imply various degrees of disinformation: how do you handle this in a content moderation system that is already opaque and struggling (e.g., cultural specificities, biased algorithms)?
What unexpected metrics/indicators have you found for seeking out bad content? How can we be more proactive as content moderators rather than relying on user reporting?
How do you look at this question through an international lens? Particularly when governments have vague misinformation/disinformation laws on the books and demand tech platforms remove the content.
Is it better to play watchdog on content or create new, honest content? What actions should be prioritized?
What role do whistleblowers like former Facebook employee Sophie Zhang play in the fight against misinformation?
Any suggestions for social media and teens? Any startups or companies you recommend?
What are options for moderating immutable content? Are there blockchain folks looking at designs, oracles, or processes for raising awareness?
How do we support communities who are less digitally savvy in becoming aware of and able to manage the threat of disinformation online?
How can community organizations and agencies, such as libraries, literacy councils, food pantries, etc., help to fight misinformation?
How could public health institutes engage on social media?
Can we rely on the self-regulation of the social media platforms? What is the role of the public sector/State in the construction of social media regulation?
What are your thoughts on improving social media from a gendered perspective? i.e. to reduce or respond swiftly to gender-related abuse online?
Do you have a view on the proposed Filter Bubble Transparency Act or Protecting Americans from Dangerous Algorithms Act?
What role could auditing firms have (if any) in holding social media companies accountable for misinformation?
So much of the 'free expression' debate has focused on political censorship, but what do we do when certain people and parties contribute far more heavily to misinformation than others?
1) Do you think that public figures, who can reach many more people than the average social media user and who by virtue of their office or celebrity are also imbued with more authority, should be held to a different standard than the rest of us when it comes to what they post as "true" on content platforms? Can we hold them accountable as a function of the responsibility of their position to discourage rumor-mongering for political gain, and how might we do so?
2) Although social media has democratized content sharing in many ways, ironically it has also contributed to imperiling liberal democracy by splintering reality and fueling unfounded conspiracy theories... It seems impossible and inadvisable for social platforms to be "arbiters of truth", but how can they hide behind a stated commitment to being an impartial/neutral conduit when their algorithms actively promote the most controversial content?
3) Do you think big lawsuits like the ones brought by Dominion Voting Systems will or could have an effect on mis- and disinformation spread by "news" networks and made viral on social media platforms?
To reduce the flow of misinformation while maintaining free expression, and without depending on the debatable judgment of humans and automation, could we amend Section 230 so that a service provider may show a user information (including ads) only from publishers that the user has explicitly selected, and otherwise be treated as a publisher itself? The service provider would also be required to show each user the publishers they have selected and allow the user to deselect any of them.
Which platform, in your opinion, has been the most responsive to misinformation challenges, and if you're open to sharing, which has been the least?
What role can government play in mitigating the harms posed by misinformation on social media?
What do the speakers think about the moderation applied to Trump's Twitter account in the days around the presidential transition?
To what extent do you think the practice of fact-checking helps address misinformed beliefs in communities impacted by misinformation? There's a growing body of evidence that correcting facts doesn't do much to change people's views, so what could we as journalists and scholars do that might work better?
Any work on multilingual misinformation?
When people increasingly depend on social platforms for "news(feeds)" they will inevitably swim in false info, deceitful ads, and other misinformation (see: 2016, 2020 elections). How do we resolve this tension and lead more people to the sources of legitimate news content (and away from gamed feeds in these platforms)?
Can you discuss tools and options for democracies outside of just media literacy?
Are there any real or theoretical case studies of providing citizens with tokens or virtual currency that they use both to access information and to forward it? The goal: more thoughtful use of sources and distribution. It would have to be done in a study or school environment to identify benefits that would appeal to the general population. It could reward "civil exchanges" without specifying the content of those exchanges.
I'm wondering if the speakers can expand on the gender dynamics of misinformation they have encountered in their recent work. If they do address it during the panel, I'll have questions about those specifics.
When you have a group social space, how do you 'coach' people on how to use it appropriately? How do you identify and classify a spammer, and is there a way to coach them into becoming a contributor?
What role could auditors play in the handling of misinformation by social media platforms?