Trust the Scientists (All of Them): AI Impacts everyone and everyone should impact AI policy

The booming world of artificial intelligence (AI) is brimming with possibilities, but its ethical and societal implications raise complex questions of transparency and accountability. As we navigate this uncharted territory, who should steer the ship? Traditionally, voices from the hard sciences have dominated the conversation – computer scientists, engineers, and the like. But is this enough? My answer is a resounding "No": we desperately need a multi-disciplinary, cross-functional approach to AI discussions, regulations, and policy. This means welcoming not just the Einsteins and Teslas, but also policymakers, linguists, economists, and lawyers – the social scientists and humanities scholars who bring invaluable perspectives to the table.

Why is this broader chorus so crucial? Here's why:

  • AI's impact transcends the technical: Sure, the nuts and bolts of AI are essential, but its real-world implications touch every facet of society – from employment, education and healthcare to privacy, justice and equity. These nuances require insights from disciplines like sociology, economics, and law.
  • AI ethics needs diverse voices: When it comes to the ethical dilemmas posed by AI, a one-dimensional, tech-centric approach falls short. We need the moral compasses and critical thinking skills honed by philosophers, anthropologists, and ethicists to guide us through the ethical minefield.
  • Building trust requires understanding people: The success of AI hinges on public trust. This trust can only be built by incorporating the perspectives of those who understand human behavior, emotions, and societal anxieties – psychologists, communication specialists, and even artists.

Imagine a world devoid of these diverse voices, where AI development happens in a silo. We might end up with powerful algorithms that are technically brilliant but socially disastrous, like biased decision-making systems, Orwellian surveillance tools, or even autonomous weapons with questionable ethical frameworks.

We don’t have to imagine it; we have already witnessed its disastrous results. Recall the countless unfortunate incidents of facial recognition algorithms misidentifying innocent people and leading to their arrest. What if a lawyer, a sociologist, a parole officer – someone, anyone – had been present to prevent this? Letting AI engineers build software and unleash it on a population with no guardrails is dangerous and irresponsible, and marginalized populations disproportionately pay the price. Take, for instance, ProPublica’s 2016 study of software used to predict recidivism in America, which was found to be biased against Black people. Some judges relied on computer programs that used demographic information like race to score the likelihood of an individual committing a future crime. While these risk assessments “were crafted with the best of intentions”, as former Attorney General Eric Holder noted, “they have proven to be biased and may inadvertently undermine efforts to ensure individualized and equal justice”.

Instead of a future where biased algorithms can make such consequential decisions based on flawed data, let's envision a vibrant symphony of knowledge, where engineers and philosophers harmonize, data scientists and anthropologists collaborate, and ethicists and artists create a chorus of wisdom that guides us towards a responsible and equitable AI future.

Here are some concrete ways to make this vision a reality:

  • Establish multi-disciplinary AI task forces and advisory boards.
  • Fund AI research projects that bridge the gap between the hard and soft sciences.
  • Create educational programs that equip future AI leaders with a broader understanding of the social and ethical implications of their work.
  • Encourage open, frequent and continuous dialogue and collaboration between scientists, policymakers, and the public.

There has been some progress in this area. For example, the American Association for the Advancement of Science (AAAS) launched an AI Rapid Response Cohort last fall to place AI practitioners as policy advisors in Congressional offices. While this prestigious fellowship usually requires fellows to hold a PhD or a terminal degree in engineering, this time it was also opened to people with bachelor’s and master’s degrees. I am the only fellow who has neither a doctorate nor a terminal engineering degree. I was selected thanks to my experience as a public servant, a former diplomat, and a tech worker who understands the power of AI as a game-changing technology. This should be the norm, not the exception, and I applaud AAAS for its foresight and for recognizing the value that an experienced and skilled public servant can contribute towards responsible AI.

By embracing the full spectrum of human knowledge and understanding, we can ensure that AI becomes a force for good, not a Pandora’s Box of unintended consequences. So, let's trust not just some of the scientists, but all of them. Together, we can create an AI future that benefits all of humanity.

Image: Cash Macanaya on Unsplash