Over the next decade, artificial intelligence (AI), AI-powered automation, and other emerging technologies will play an increasingly large role in the global economy.
The McKinsey Global Institute (MGI) estimates “between 400 million and 800 million individuals could be displaced by automation and need to find new jobs by 2030 around the world, based on our midpoint and earliest (that is, the most rapid) automation adoption scenarios.”[1]
AI will also have an outsized impact on numerous other parts of American life. For example, AI is already being used to help inform criminal sentencing decisions, and it’s starting to revolutionize the way schools educate students.[2],[3]
Although artificial intelligence has the potential to dramatically improve Americans’ quality of life, it could also be used as a tool by activists, academics, and big corporations to radically transform society, making AI one of the biggest threats to freedom in the world today.
Introducing Bias
Artificial intelligence systems can analyze data, discover important patterns, and answer complex questions on a scale that far surpasses humans’ abilities. But AI can also produce heavily biased results, depending on its design and the data being utilized.
Some left-wing activists, technology companies, and politicians want to embed AI with social justice metrics and goals—including environmental, social, and governance (ESG) scores—to alter the United States’ economy, institutions, and culture. They can accomplish this by designing algorithms that favor, disfavor, or exclude certain information, or by altering data before it is used to train an AI system. In many cases, activists claim such a strategy is necessary to ensure that existing systemic biases, especially those related to race, do not make their way into emerging AI models.[4]
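To make the data-alteration lever concrete, the sketch below is a minimal, hypothetical Python illustration: records are reweighted before training so a model learns from a distribution the designer chooses rather than from the raw data. The group labels, field names, and weighting rule are illustrative assumptions, not drawn from any real system.

```python
from collections import Counter

# Toy records: two groups of applicants, with raw approval outcomes.
raw_data = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
]

def reweight(records):
    """Weight each record so every (group, outcome) cell contributes
    equally to training, regardless of its raw frequency."""
    counts = Counter((r["group"], r["approved"]) for r in records)
    return [1.0 / counts[(r["group"], r["approved"])] for r in records]

weights = reweight(raw_data)
# -> [1.0, 1.0, 0.5, 0.5]: group B's two denials are down-weighted so
# together they count as one. A model fit on (raw_data, weights) then
# reflects the designer's chosen distribution, not the raw approval rates.
```

The same effect can be achieved even more quietly by dropping or duplicating records before training, which leaves no trace in the training code itself.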
The growing movement to fashion AI so that it produces results favored by elites and left-wing activists poses substantial threats to liberty. If AI models are rigged to benefit some groups over others, a significant reduction in freedom will surely follow, at least for those on the losing end. For example, some criminals could receive harsher or more lenient punishments than others because of algorithmic biases. Similarly, some loan applicants could be denied access to capital to help ensure that a larger social-engineering goal is accomplished.
In these cases, and many others, the users and potential victims of AI technology might never understand why an AI system made the recommendation that it did, creating immense confusion, frustration, and skepticism of AI.
Manipulating AI
Despite the dangers associated with embedding AI systems with ESG and other social justice metrics, many politicians, including those in the Biden administration, are advocating for substantial AI manipulation.
In October 2022, the White House released a “Blueprint for an AI Bill of Rights.” The AI Bill of Rights includes a troubling section titled “Algorithmic Discrimination Protections,” which reads in part, “Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race” and other factors.[5] In other words, if AI isn’t producing equitable outcomes, then it’s guilty of “discrimination.”
Among other things, the White House wants AI designers to use “proactive equity assessments as part of the system design,” a move that would deliberately bias AI to achieve social or economic goals favored by President Biden.[6]
Some of the most powerful institutions on Wall Street appear to agree with the White House’s approach. Institutional Shareholder Services (ISS)—one of only two massive proxy advisory firms widely used by the nation’s biggest institutional investors (Glass Lewis is the other)—noted in a June 2023 report, “A primary way to improve AI model fairness is the specification of fairness-aware algorithms. This means that in addition to other objectives, such as predicting high job performance, user engagement, or other successful outcomes, the model also factors in fairness metrics such as gender balance. These constraints encourage predictions that are equitable across certain protected attributes, thereby mitigating discrimination.”[7]
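The ISS report does not include code, but a “fairness-aware” objective of the kind it describes typically adds a group-parity penalty to an ordinary training loss. The following is a hedged sketch of that structure; the function name, the group labels, and the weight lam are illustrative assumptions, not ISS’s methodology.

```python
def fairness_aware_loss(preds, labels, groups, lam=0.5):
    """Toy objective combining predictive accuracy with a group-parity
    penalty, in the spirit of the constraints ISS describes."""
    # Ordinary squared-error term rewarding accurate predictions.
    mse = sum((p - y) ** 2 for p, y in zip(preds, labels)) / len(labels)

    # Parity term: gap in mean predicted score between the two groups.
    def mean_pred(group):
        vals = [p for p, g in zip(preds, groups) if g == group]
        return sum(vals) / len(vals)

    parity_gap = abs(mean_pred("men") - mean_pred("women"))

    # lam trades accuracy against balance: the larger it is, the harder
    # the optimizer is pushed toward equal predicted rates across the
    # groups, even where the underlying data would predict otherwise.
    return mse + lam * parity_gap

# Identical score distributions across groups incur no parity penalty.
loss = fairness_aware_loss(
    preds=[0.9, 0.2, 0.9, 0.2],
    labels=[1, 0, 1, 0],
    groups=["men", "men", "women", "women"],
)
```

Raising lam amounts to choosing the equity constraint over predictive accuracy, which is precisely the trade-off at issue in this Tip Sheet.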
Existing data seems to show that these attitudes have already made their way into many popular AI platforms. For example, in August 2023 the academic journal Public Choice published a robust study of potential political bias in ChatGPT, a widely used AI chatbot. The researchers found, “Our battery of tests indicates a strong and systematic political bias of ChatGPT, which is clearly inclined to the left side of the political spectrum.”[8]
Policy Recommendations
Policymakers seeking to ensure that AI systems used in their states produce truly unbiased results—rather than results that are embedded with an ESG-aligned, social justice agenda—have a wide range of options.
First, state lawmakers could pass bills that ban state and local agencies from using AI systems that have been altered to promote an ideological agenda, or that rely on manipulated datasets. If AI models are going to be used by governments—and there are numerous cases where they certainly could be—then those governments ought to be required to guarantee that the AI systems that they rely upon are built and operated objectively. Mathematics and objective data, not political or social ideology, should be the foundation of all government AI systems.
Second, state policymakers could enact new fair access rules or revise existing processes to ensure that financial institutions, including banks and insurance companies, are not embedding AI systems with ESG scoring metrics to unjustly favor some customers over others.
It’s also important to note that many existing fair access banking and insurance rules would likely address numerous concerns outlined in this Tip Sheet, regardless of whether they mention AI explicitly. Banks and insurance companies generally cannot escape regulatory burdens by turning to AI systems. They are obligated to ensure their use of AI is in line with federal and state laws. Thus, if a state requires fair access, financial institutions’ AI systems must be in compliance with fair access guidelines.
Third, lawmakers could ban state and local criminal justice systems from using AI to help with sentencing decisions. Although humans are also capable of introducing bias into criminal sentencing, the dangers associated with utilizing AI for this important purpose are far too great to ignore.
Fourth, AI could be banned from or severely limited in government-run schools. Until AI designers pledge that they will not use AI to advance left-wing ideological goals and remove whatever biases already exist in their systems, it makes little sense to include AI in classrooms. A detailed review of all education-related AI systems used in a given state is warranted and likely a good first step in this area.
[1] James Manyika et al., “Jobs lost, jobs gained: What the future of work will mean for jobs, skills, and wages,” McKinsey Global Institute, mckinsey.com, November 28, 2017, https://www.mckinsey.com/featured-insights/future-of-work/jobs-lost-jobs-gained-what-the-future-of-work-will-mean-for-jobs-skills-and-wages
[2] See Karen Hao, “AI is sending people to jail—and getting it wrong,” MIT Technology Review, January 21, 2019, https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai
[3] Claire Chen, “AI Will Transform Teaching and Learning. Let’s Get it Right,” hai.stanford.edu, Human-Centered Artificial Intelligence, Stanford University, March 9, 2023, https://hai.stanford.edu/news/ai-will-transform-teaching-and-learning-lets-get-it-right
[4] See ReNika Moore, “We Must Get Racism Out of Automated Decision-Making,” aclu.org, American Civil Liberties Union, November 18, 2021, https://www.aclu.org/news/racial-justice/we-must-get-racism-out-of-automated-decision-making
[5] “Algorithmic Discrimination Protections,” Blueprint for an AI Bill of Rights, WhiteHouse.gov, U.S. Office of Science and Technology Policy, last accessed August 29, 2023, https://www.whitehouse.gov/ostp/ai-bill-of-rights/algorithmic-discrimination-protections-2
[6] “Algorithmic Discrimination Protections,” Blueprint for an AI Bill of Rights, WhiteHouse.gov, U.S. Office of Science and Technology Policy.
[7] Joe Arns et al., “The Intersection of AI and ESG: Discrimination,” ISS Insights, June 27, 2023, https://insights.issgovernance.com/posts/the-intersection-of-ai-and-esg-discrimination/
[8] Fabio Motoki, Valdemar Pinho Neto, and Victor Rodrigues, “More human than human: measuring ChatGPT political bias,” Public Choice, August 17, 2023, https://link.springer.com/article/10.1007/s11127-023-01097-2
Justin Haskins is a New York Times bestselling author and political commentator, the president and founder of The Henry Dearborn Liberty Network, and the director of the Socialism Research Center at The Heartland Institute, a national free-market think tank. (His work here does not necessarily reflect the views of The Heartland Institute.) Follow him on Twitter @JustinTHaskins.