
Panelists at the AI LA symposium discuss the ethical implications of AI. (photo: AI LA/Facebook)
Last week, AI LA held a symposium called “AI, Ethics, and Fairness” at the Phase Two co-working space in Culver City, California. The panel included a computer scientist, a technology ethicist, a legal expert in AI and an AI product designer.
In his introductory remarks, panelist Peter Eckersley, research director at the Partnership on AI, mentioned a key ethical problem he is working on:
“The safety of AI systems and how you design to avoid unintended consequences and misspecified objectives.” For example, “How do we ensure that the optimization of a social media newsfeed doesn’t mess up your political system?”
Civics Lesson: The Fairness Doctrine
The Fairness Doctrine was introduced in 1949. It required holders of broadcast licenses to present controversial issues of public importance, and to do so in a manner that was honest, equitable, and balanced. However, stations were given a lot of latitude in how they presented these contrasting views: it could be done through news segments, public affairs shows, or editorials. In 1969, the Supreme Court upheld the FCC’s general right to enforce the Fairness Doctrine, but the court did not rule that the FCC was obliged to do so. In 1985, FCC Chairman Mark Fowler (a communications attorney who had served on Ronald Reagan’s campaign staff) argued that the Fairness Doctrine hurt the public interest and violated the First Amendment right to free speech. As a result of those arguments, the Fairness Doctrine was abolished in 1987.
He was referring to the AI that controls Facebook’s newsfeed. Facebook, Google and other tech platforms all use a type of AI that predicts what you will like in order to customize your newsfeed, search results and online ads.
Because these Silicon Valley-bred AIs are designed to monetize your attention as efficiently as possible, they reward whatever content provokes the strongest reaction. Authoritarians, fascists and violent extremists have figured out how to weaponize Facebook’s and Google’s AI to divide Americans, stoking paranoia and hatred that make us fear and distrust our fellow citizens.
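To make that incentive concrete, here is a minimal, hypothetical sketch of engagement-driven ranking in Python. It is not any platform’s actual code; the post fields and score weights are invented for illustration. The point is simply that when the only objective is predicted engagement, nothing in the objective penalizes divisive content, which is the kind of misspecified objective Eckersley warned about.

```python
# Hypothetical sketch of engagement-driven ranking (not any platform's real code).
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's estimate of click probability
    predicted_shares: float   # model's estimate of share probability

def engagement_score(post: Post) -> float:
    # A misspecified objective: nothing here penalizes divisive or false content,
    # and outrage tends to correlate with clicks and shares.
    return 0.6 * post.predicted_clicks + 0.4 * post.predicted_shares

def rank_feed(posts: list) -> list:
    # Sort purely by predicted engagement, highest first.
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("Local bake sale raises $500", 0.05, 0.01),
        Post("THEY are coming for YOUR country", 0.40, 0.35),
    ]
    for p in rank_feed(feed):
        print(f"{engagement_score(p):.2f}  {p.text}")
```

Run on this toy feed, the inflammatory post lands on top, because the scoring function only knows about engagement, not about what the content does to the people who see it.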
It used to be that journalists, their editors, and publishers served as curators of the accurate information necessary for a healthy and functioning democracy. In that system, a professional newspaper or TV news program that consistently lied would go out of business due to a lack of credibility.
But during the past few decades, the Russians, Fox News, Rush Limbaugh, a host of other media companies owned by Rupert Murdoch, and other activist millionaires and billionaires have invented a new type of opinion journalism that is pure propaganda, meant to promote an extreme form of right-wing conservatism.
This right-wing network of sites and media personalities has been able to weave together a mix of truths, half-truths and outright lies that leaves audiences unsure about what exactly is true or untrue. And because the stories confirm pre-existing biases about culture and politics, these propaganda shows get huge ratings and traffic.
Opinion journalism also exists on the Left, but it is mostly fact-based, and so far it has not promoted the emergence of leaders who regularly tell lies or threaten violence against citizens with views different from their own.
The Russians have successfully used these tactics for decades under names like “active measures,” “non-linear war” and “asymmetric polarization.” These are all methods of psychological warfare, and since the 2016 presidential campaign, right-wing propagandists have perfected the use of social media and SEO (search engine optimization) to micro-target digital content that energizes violent extremists and paramilitary militia groups.
After the symposium, I had the opportunity to chat with Brian Green, Director of Technology Ethics at Santa Clara University’s Markkula Center for Applied Ethics. I asked him about reports that OpenAI, the research lab co-founded by Elon Musk, had decided one of its AI projects was too dangerous to release in full. The AI in question, called GPT-2, could automatically write plausible-sounding fake news stories after being trained on 8 million web pages of content.
The assumption is that the Russians and other bad actors will get access to, or build, their own fake-news generator in the next one to two years. Green said that Eckersley, who is also a Distinguished Technology Fellow at the Electronic Frontier Foundation (EFF), is part of a group thinking about norm-building around how to share information about potentially dangerous algorithms. One of the hopes for many working in the field is that an AI antidote to GPT-2 and other potentially malicious forms of AI will soon be discovered.
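For a sense of how easy it is to generate text with a model like this once its weights are public, here is a short Python sketch using the open-source Hugging Face transformers library and the small GPT-2 checkpoint that OpenAI did release (the "gpt2" name below refers to that public version; the prompt is made up). The model simply continues whatever text it is given.

```python
# Sketch: text continuation with the publicly released small GPT-2 checkpoint.
# Requires: pip install transformers torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # the small, public checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Scientists announced today that"          # any headline stub will do
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a plausible-sounding continuation, token by token.
output = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The output is fluent but has no relationship to the facts, which is exactly why wholesale release of the larger models worried researchers.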

Jenny Liang takes the AI LA audience through a workshop on Moral Foundations Theory. (photo: AI LA/Twitter)
During the panel, Eckersley said he is skeptical of traditional, logic-based AI approaches to the problem. Based on what he is seeing with deep learning for vision, he expects text-analysis AI to behave similarly, with fractal-like complexity; building a true AI that can learn ethics and detect fake news, he said, will be more like “herding fractal cats than actual cats.”
Immediately after the panel, and before my conversation with Brian Green, there was an interactive workshop led by Jenny Liang, a design researcher and strategist at Otis College of Art and Design. She helped the symposium audience learn about our cultural and ethical biases by taking us through an exercise based on a psychological model called Moral Foundations Theory.
Moral Foundations Theory (MFT) holds that humans all see the world through different lenses, shaped by our parents and our cultural upbringing. Here are the main moral foundations:
1) Care/harm: This foundation is related to our long evolution as mammals with attachment systems and an ability to feel (and dislike) the pain of others.
2) Fairness/cheating: This foundation is related to the evolutionary process of reciprocal altruism.
3) Loyalty/betrayal: This foundation is related to our long history as tribal creatures able to form shifting coalitions. It underlies virtues of patriotism and self-sacrifice for the group.
4) Authority/subversion: This foundation was shaped by our long primate history of hierarchical social interactions.
5) Sanctity/degradation: This foundation was shaped by the psychology of disgust and contamination.
6) Liberty/oppression: This foundation is about the feelings of reactance and resentment people feel toward those who dominate them and restrict their liberty.
Liang explained that her dream AI would help people easily see the different moral foundations and worldviews held by an author on a social network or in an online forum.
For example, liberals tend to rely primarily on Care/Harm and Fairness/Cheating, while conservatives tend to weight Authority/Subversion and Sanctity/Degradation more heavily. Liang’s hope is that an AI that could learn MFT would help bring Americans together by showing us the different biases and points of view behind the comments and opinions we share online.
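As a very rough illustration of what such an AI might do at its simplest, here is a naive keyword-counting sketch in Python. The word lists are invented for this example; they are not Liang’s system and not a validated Moral Foundations lexicon, and a real classifier would be trained on labeled text rather than hand-picked keywords.

```python
# Naive sketch of moral-foundation tagging with made-up keyword lists.
FOUNDATION_KEYWORDS = {
    "care/harm":            {"hurt", "suffer", "protect", "compassion"},
    "fairness/cheating":    {"fair", "equal", "justice", "cheat"},
    "loyalty/betrayal":     {"loyal", "traitor", "patriot", "betray"},
    "authority/subversion": {"obey", "tradition", "respect", "rebel"},
    "sanctity/degradation": {"pure", "sacred", "disgust", "defile"},
    "liberty/oppression":   {"freedom", "liberty", "tyranny", "oppress"},
}

def foundation_profile(text: str) -> dict:
    """Count how many keywords from each foundation appear in the text."""
    words = set(text.lower().split())
    return {name: len(words & keywords)
            for name, keywords in FOUNDATION_KEYWORDS.items()}

if __name__ == "__main__":
    comment = "We must protect the weak and demand equal justice for all"
    for foundation, hits in foundation_profile(comment).items():
        if hits:
            print(f"{foundation}: {hits}")
```

Even this toy version shows the shape of the idea: an annotation layer that tells readers which moral concerns a post is appealing to, rather than just how engaging it is.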
In the absence of rules like the Fairness Doctrine, which governed and limited how extreme political views could be aired on television or radio, market incentives have allowed extremist propaganda to flourish, leading to the election of fascist demagogues like Donald Trump around the globe.
Hopefully, the efforts of Green, Eckersley, Liang and others will help us rein in the bad AI that is breaking our democracy and instead develop AI that defends democracy while bringing Americans together around our shared morals and values.