As a History Professor, This Is How I Use AI in Class 

An AI-generated image based on the prompt “Russell Crowe fighting a computer,” made with the Imagine.art AI image generator (image by Sarah E. Bond)

At the beginning of the movie Gladiator (2000), Roman troops prepare for battle against their Germanic foes. In a snowy forest not far from the Danube River, the soldiers await a response to a final diplomatic gesture extended to these “barbarians.” When the Germanic soldiers reply with a headless messenger returned to the Romans on horseback, they get their answer: Battle is imminent. The camera pans to a Roman military officer turning to his general. He states flatly, “People should know when they are conquered.” The general, the film’s protagonist Maximus (Russell Crowe), bristles at the comment and responds pithily, “Would you, Quintus? Would I?”

When and whether we should accept defeat in the face of adversity — or stand and fight — is the question that guides the entire film. I reflect on this scene a lot, namely because “my Roman Empire” is, well, the actual Roman Empire. I know the film, its dialogue, and its minute details well because I teach Gladiator nearly every year. In my Roman Empire course, I ask students to review the film by critiquing the historical errors and ahistorical choices made by the filmmakers, costume designers, and screenwriters. And yet, recently, I have thought back on Maximus’s oft-quoted lines for other reasons. Listening to the largely defeatist responses of many of my colleagues in academia to the use of Artificial Intelligence (AI) in classrooms across the country, I found myself thinking that there is no reason to believe we are conquered.


The use of generative AI tools to produce images, text, sound, and even video has grown exponentially over the last year. In response, art and humanities teachers have increasingly moved to ban their use in assignments. And yet, from books to alcohol to sex, sociologists largely agree that bans are socially ineffective. As University at Buffalo philosopher Ryan Muldoon wrote, studies show that media and technology prohibitions don’t stop behaviors. Instead, they encourage black markets and clandestine use.

I didn’t need a study to tell me students would secretly use AI. I grasped this new reality after prohibiting ChatGPT in my courses in the spring of 2023. Even so, I received numerous papers that AI detectors flagged as likely written by artificial intelligence. As a scholar and writer, I found this alarming. I am a historian at the University of Iowa, which is consistently ranked among the top schools in the country for writing. Our focus on writing is why increased student use of ChatGPT began to worry administrators and professors alike.

Over the summer, however, I began to ponder how I might work with AI instead of fighting it. Maybe it was time for some diplomacy. I began to integrate ChatGPT into my courses this semester as a way of underscoring AI’s deficiencies and stressing the critical connections of which humans are uniquely capable. 

Instead of assigning my rather routine Gladiator review, I asked students to query ChatGPT about the film’s historical inaccuracies. I also reached out to the person who knows the film’s historical deficiencies inside and out: Harvard scholar Kathleen Coleman, a professor of Classics who served as the historical advisor for Gladiator before it was released. As an expert in Latin and a leading scholar of gladiatorial combat, she provided copious notes and corrections on three different scripts. In the end, however, the ahistorical elements of the movie proved to be too much. Coleman asked the filmmakers not to specify “historical advisor” in her credit line. 

And yet, when I spoke with Coleman, it wasn’t the memories of her work on Gladiator that captured my attention. It was the fact that she had already begun thinking about the incorporation of AI into the classroom long before other college instructors had.

Coleman teaches a general education course for Harvard undergraduates called “Loss.” The question that guides the class is all too relevant in a world constantly at war, and which lost nearly 7 million people in the COVID-19 pandemic: “How are we to cope with the inevitability that some of what we most love in life we will lose?”

In one assignment, Coleman asked students to compose letters of condolence. She contributed one herself, to a friend whose adult daughter had died of cancer, and asked the students to critique it, which they did with gusto. Afterwards, she told them that it had been written by AI. “ChatGPT did an absolutely appalling job and the students just ripped the thing to shreds,” Coleman told me. The lack of emotion and humanity, she remarked, “made them realize that AI is so generalized, so cliche-ridden, and has absolutely no sensibility. It went on for more than a page, in hundreds of words of nothing but blabber.” 

As it turns out, humans are pretty good at sussing out which “thoughts and prayers” come from a genuine — and human — place. Earlier this year, Vanderbilt University was forced to apologize after emailing faculty, students, and staff a condolence missive, written by ChatGPT, addressing the shooting at Michigan State University; many recipients quickly picked up on the letter’s absence of any authentic emotion.

How, then, can we teach the responsible use of AI? Well, we can start by showing — and not telling — our students about its myriad issues and algorithmic biases. As information scientist and English literature expert Ted Underwood has remarked, it is time to work with rather than against AI. In the process, we may come to recognize humanity’s own strengths and weaknesses. Underwood suggests that experiencing the pitfalls of AI firsthand is more instructive than merely warning students about them. We are also preparing them for a future that, without doubt, will require being AI-savvy.

Redesigning assignments to force students to go back and critique, fact-check, and evaluate AI tools may also give them pause the next time they use them. Take ChatGPT’s assertion that the Colosseum had not yet been built during the time period of Gladiator. In reality, the Flavian Amphitheater — best known as the Colosseum — was dedicated in 80 CE, a full century before the death of Marcus Aurelius in 180. Students in my course quickly figured out that AI currently struggles with dates and with discerning the BC/AD and BCE/CE dating systems.

Chronological confusion aside, AI has a host of other issues, though many of its faults improve or become a bit less perceptible with each new version. We already know a lot about generative AI’s present limitations — with “present” being the operative word. Most troubling, AI chatbots continue to struggle with citing sources properly and providing attribution. In the face of uncertainty, AI tends to “hallucinate,” making up facts and people with the confidence of a mediocre White man. It fills gaps in knowledge with rubbish and false sources, leaving the reader to sort out fact from fiction.

ChatGPT’s answer to a query about the historical inaccuracies in the movie Gladiator (screenshot by Sarah E. Bond/Hyperallergic)

What ChatGPT couldn’t explain to students was why certain errors were made in Gladiator. With research beyond AI, they found that anachronistic blunders like the horse stirrups (which would not appear in Europe until centuries after the film’s second-century setting) were necessary because the stuntmen needed them as part of their safety contract. The bad Latin that made it into the movie, however, likely resulted from the filmmakers’ failure to incorporate Coleman’s copious notes on the script drafts. More glaring errors, such as Gladiator’s allegation that Marcus Aurelius wished to bring back the Roman Republic during his lifetime, demonstrate how Rome’s past is wielded as a mechanism for commenting on contemporary society. The students soon realized that in modern media, we often receive the Rome that producers want, rather than the one that really existed.

Rome as a metonym for America is hardly an innovative parallel to draw, but its consistent use gestures at the ways in which we invoke Rome’s perceived faults to critique our own society. Monica Cyrino is one of the leading scholars of films focused on antiquity and a historical consultant for TV and movies herself. In 2004, she published an article examining the ways in which Gladiator cast a light on contemporary American society. In her analysis, she quotes Peter Bondanella, formerly a professor of Italian and film studies, who believed that the myth of Rome was drawn upon by movies from Ben-Hur to Spartacus because, like all myths, it is malleable. To him, this myth provides a means for people to understand humanity, the world, and themselves. He saw Rome as a common heritage to be recast again and again.

What made Gladiator a success, as Cyrino points out, was that it remixed the traditional message. The movie had nuanced characters that appealed to Americans because it spoke to their disaffection with American politics. Maximus’s alienation as a gladiator marginalized by Rome under Commodus’s imperial rule served as an analogy for many Americans in the year 2000. The country was coming off the Clinton administration, which had been rife with corruption and scandal even as the nation thrived economically. The film was released at the beginning of the summer election season, which saw George W. Bush going up against Clinton’s vice president, Al Gore. By November, the country would be plunged into uncertainty amid hanging chads and further disenchantment with American political culture.

Director Ridley Scott’s sequel to Gladiator is set to be released in November 2024. This follow-up to the now-classic film is likely to be another reflection of a divided America — albeit over two decades later. Its attendant historical blunders will, of course, be listed by your favorite smug historian on social media or mocked in a pretentious review in the New Yorker. What we can hope for is that, like successive versions of ChatGPT, the film will learn from its predecessor’s flaws and listen to historical advisors this time around.

Movies that get history wrong can be frustrating for historians, and at the moment, AI really isn’t doing much better. And yet film and AI both have the power to greatly alter how people understand the past in dangerous and misleading ways. Rather than blaming the academics who often work tirelessly behind the scenes on big-budget films, only to have their notes rejected, we might ask ourselves about the choices that brought these gaffes about — and encourage filmmakers to be more historically accurate. Similarly, historians are already working hand in hand with computer scientists to improve AI’s ability to draw conclusions about the past.

All the way back in the first century CE, the Roman philosopher Seneca the Younger noted, “To err is human, but to persist in error is diabolical.” From film sets to our use of AI, historical errors will undoubtedly persist. But humans are still needed to critically analyze them, make connections, and correct the record. As Coleman remarked, AI is already successful at mimicry, but it lacks the ability to be truly creative. Perhaps more importantly, the ability to communicate empathy, loss, and complex emotions remains uniquely human. AI may be able to state that a group of people were conquered by Rome, but it’s up to historians to explain why they continued to resist.


