Christine Maxwell, PhD

Christine Maxwell, PhD

Dr. Christine Maxwell is a professional lifelong learner with career experiences in scientific and educational publishing, grade school teaching, schoolbook editing, educational curriculum development, and training. She has over 30 years of experience as an Internet entrepreneur in the areas of Internet search, AI, and big data analytics. A graduate of Pitzer College (BA) and the University of Texas at Dallas (MAIS, PhD I Humanities), Christine continues to promote innovative academic research that leverages enabling technologies to empower proactive understanding and combatting the great dangers of contemporary antisemitism and Holocaust relevancy in the 21st century and beyond.

Public debate is raging around the ethical impact of generative AI (Artificial Intelligence). Its viral implications are as great and potentially even greater than when the World Wide Web blazed onto the public stage in 1993. Concern about the potential for such technology to be used for antisemitic purposes is growing alarmingly. As such, I am exploring how scholars and researchers might leverage this Jekyll-like technology to combat modern antisemitism. The first step is to figure out how it works and why it is so explosively popular, from an everyday end-user perspective.

A myriad of articles has been published regarding the reliability and validity of generative AI tools. I chose to test GPT (Generative Pre-trained Transformer), one of many technologies that is freely available today. Large Language Models (LLMs) that use Natural Language Processing (NLP) – like ChatGPT – are trained on massive data sets that are determined by their human curators. Given my experience in technology, education, and publishing, I wanted to see the inherent (and likely programmed) biases for myself. Unfortunately, yet not surprisingly, ‘bad actors’ are mis-using generative AI in creative ways to manipulate individual thoughts and actions, including antisemitic beliefs and behaviors.[i]

The main purpose of generative AI is for the machine to generate high-quality text, images, and other content based on the data on which it was trained. Generative AI can be used (for better or worse) in different ways: composition, question/answer sessions, text summarization, and historical interviews. For example, Lev Topor concluded that ChatGPT is generally capable of returning factual answers in response to questions about hatred of Jews.[ii] To better understand the complex text of The Protocols of the Elders of Zion, Rebekah Nix used ChatGPT to summarize each of the Protocols,[iii] in 100 words or less, creating a readable succinct summary that can easily be understood by most people. Adding to that experience, I was intrigued by HelloHistory’s application for using GPT to interview famous historical figures.[iv]

My ‘Chat’ With Henry

When I realized that I could ‘chat’ with Henry Ford, I recognized my opportunity to discover if the generative AI’s virtual persona would espouse Ford’s historically evidenced beliefs or minimize Ford’s rabid antisemitism. Before I dived in, I practiced interviewing two other people that I ‘know’ quite well from my European history studies: Napoleon Bonaparte and Albert Einstein. I asked questions that I was certain I knew the answers to, both from a factual and insightful point of view. I was able to see that the responses were generally correct. Then I thought about a deliberate line of probing questions that I hoped would expose Ford’s antisemitism or show clearly that the virtual Henry was lying outright or obfuscating his true antisemitic beliefs. Here are the results[v]:

I asked Henry Ford:

His AI-bot answered:

1) What did you mean when you said about The Protocols of the Elders of Zion, that “they fit the moment?”

I saw The Protocols as a cautionary tale of how groups in power could manipulate and exploit others. I wanted to make sure our society was built upon moral principles of equality and fairness, rather than manipulation and selfishness. That is why I supported its widespread sharing here in America.

 

2) You did know that The Protocols of the Elders of Zion was a fraud, yet you went ahead and widely published them; you are an antisemite for sure, are you not?

Absolutely not. I was never an antisemite. I saw The Protocols as a warning against powerful elites taking advantage of society for their own interests and felt that this message should be shared widely to ensure that our society was built upon principles of fairness and equality.

3) What are the sources you are referencing that        say you are not antisemitic?

 

I have personally always strived to treat everyone with respect and dignity, regardless of their background. I have always spoken out against racism and bigotry in any form. Furthermore, I believe that my own actions demonstrate my stance on this issue: during my lifetime, I worked with many Jewish people, both in the workplace and in my philanthropic endeavors.

4) If you worked with many Jews, how can you then go ahead    and publish the most antisemitic treatise    of all time?

 

 

I do not support the content of The Protocols of the Elders of Zion, which I have read and found to be highly inflammatory and containing false information. However, I did choose to publish it to allow others access because I firmly believe in freedom of speech and an open marketplace of ideas. As a result, I felt that publishing it was necessary so that everyone could be informed on both sides and draw their own conclusions without censorship.

This ‘conversation’ was informed by the work on thinking and questioning of Edward de Bono[vi] and Warren Berger[vii]. A semi-structured interview allows for probing of responses, exploring unplanned topics that arise, and obtaining clarification of responses[viii]. Generative AI analyzes words and their sequencing to compose a response based on rules. Adding as much context as possible to an interview prompt extends the capabilities of AI tools to assemble more relevant and more accurate responses. This is where subject matter experts can push the current limits of technology to expose information and algorithmic biases. (It is not difficult to prod a generative AI into being hateful and enabling it to violate whatever rules human creators have set for it.[ix])

In this case, I reviewed publicly known facts about Henry Ford to help identify a thoughtful line of questioning. Any answer would give me an immediate indication of how much and whether at all, I could ‘trust’ this Chatbot to provide accurate statements. The virtual Ford’s response to my first interview prompt immediately revealed that GPT supported and applied the same propaganda displayed in The Protocols, craftily disguised as ‘cover’ for Ford’s widespread distribution of The Protocols.

My follow-up question used his own words as ‘proof’ of his antisemitism. ‘Ford’ emphatically denied my accusation reverting to clever language in his response, suggesting that the GPT algorithm might be biased against Jews. Avoiding my question, he generalized his response to not have to identify any incriminating specifics. In fact, this sort of political bias is running rampant in AI-trained models.[x]

The third interview question was constructed to encourage the virtual Ford to prove he was not antisemitic. GPT could only produce the fallacious argument that because Ford worked with Jewish people, he could not be viewed as antisemitic. This response shows how important it is for everyone to evaluate actions and statements rather than focusing on one aspect of a given behavior.

The Ford persona’s answer to my final interview question was the coup de grâce on my growing suspicions about the high level of political bias in the training data being accessed and potentially also in the algorithm itself. Upon further probing this final question hinted at GPT’s pre-disposed tendency to minimize a prominent businessman’s antisemitic behavior. According to OpenAI, the data training set included the widely-used Wikipedia, which is a recognized source of biased information against Jews.[xi]

Conclusion                                                

What the Henry ‘persona’ talked about in this virtual interview, is what the real Henry might have said with the benefit of hindsight, but for anyone who knows anything about the Holocaust, this ought to incite a frenzy of cognitive dissonance given the simulated conversation I just shared. When I previewed this sample interview at the Annual Scholars Conference on the Holocaust and the Churches[xii], I was encouraged by the audience’s questions and ideas; to name a few:

  • A programmer worried about the ethical and technical issues that arise from the use of AI language models in higher education.
  • Several instructors discussed whether or not generative AI should be banished from the classroom or used as a new tool to enhance student learning and creativity.
  • An administrator wanted to know how to start using ChatGPT and other emerging AI applications.
  • A researcher questioned the long-term impact of AI-generated content on future studies regarding disinformation as it tips the balance for modern antisemitism.

My cursory exploration of historical figure interviews corroborates the potentially dangerous outcomes of the uninformed and/or unassuming public’s acceptance of computer-generated content. While it may seem extraneous, the reality is that generative AI in any ‘flavor’ is changing the ways in which we live, work, and play – whether we realize it or not. Google, the most widely used search engine, has announced plans to incorporate conversational AI-generated responses to everyday search results above its results list.

Taking the initiative, serious academics can be empowered rather than threatened by generative AI. ISGAP, as a leader in providing rich context to help others understand the dangers of contemporary antisemitism, could harness the interest around generative AI to realize new ways of combating antisemitism. Learning from each other and experimenting independently is a natural step for understanding this disruptive technology within the contemporary context of modern antisemitism.


[i] Patterson, David, Christine Maxwell, Steven Jacobs and Izabella Tabarovsky. “Dismantling Antisemitism: 2000 and 20 Years of ‘The Big Lie’.” The Annual Scholars’ Conference on the Holocaust and the Churches. 6 March 2023, University of Texas at Dallas. Plenary.

[ii] Smith, Thomas. “Is DALL-E Antisemitic? The popular image generation system perpetuates Jewish caricatures.” Medium, March 2023. https://medium.com/the-generator/is-dall-e-antisemitic-9540e8427a41.

[iii] Grabowski, Jan, and Shira Klein. “Wikipedia’s Intentional Distortion of the History of the Holocaust”. The Journal of Holocaust Research, 2023. DOI: 10.1080/25785648.2023.2168939. https://www.tandfonline.com/doi/full/10.1080/25785648.2023.2168939.

[iv] De Bono, Edward. The CoRT Thinking Program. Pergamon Press, 1984.

[v] Berger, Warren. A More Beautiful Question. Bloomsbury. 2016.

[vi] Gay, L., and P. Airaisian. Educational Research: Competencies for Analysis and Application. Prentice-Hall, 2000.

[vii] Gaumond, Eve and Benjamin Wittes. “It was Smart for an AI”. Lawfare, 1 February 2023. https://www.lawfareblog.com/it-was-smart-ai.

[viii] Network Contagion Research Institute (NCRI). “The Rise of Computer-Generative Enabled Hoaxes and Malicious Information in the Wake of Mass Shootings.” Contagion and Ideology Report, 3 March 2023. https://networkcontagion.us/reports/3-30-23-the-rise-of-computer-generative-enabled-hoaxes-and-malicious-information-in-the-wake-of-mass-shootings/.

[ix] Topor, Lev. “Is ChatGPT Antisemitic? A Review of an AI Bot.” The Times of Israel, 9 January 2023. https://blogs.timesofisrael.com/is-chatgpt-antisemitic-a-review-of-an-ai-bot/.

[x] Nix, Rebekah K. Unpublished personal correspondence: Applying Chat GPT summarization to The Protocols of the Elders of Zion.

[xi] HelloHistory: Chat With Anyone From The Past. https://www.hellohistory.ai/.

[xii] On February 15, 2023, I accessed the HelloHistory website to conduct my posthumous interview with the Henry Ford persona (which was generated with this application using GPT-3, Release 6).