
Caitlyn Harrington and Samantha McCarthy, two sophomore roommates at UMass Dartmouth, were scrolling through Instagram and chatting in their dorm room in January. A foot of snow had fallen on the bucolic campus, and with roads and sidewalks still to be cleared, the administration decided to cancel the first day of classes.

Harrington asked her roommate if she saw that one post, from the official student life Instagram account, and didn’t it look… weird?

“I don’t know how to describe it, but you look at it and you just know,” Harrington said. The buildings in the post were not actual buildings on campus. The logo was not the university’s logo. Perhaps most strikingly, the university’s mascot appeared in profile — which was weird to Harrington, who works part-time in the marketing department and knew there were rules against altering the official mascot.

The Instagram post wasn’t real. Or it was real, but no human made its content.

An image posted to the umassd_students Instagram account raised suspicions among students.

Generative artificial intelligence — which most people are just calling AI — apparently made this post’s content, and seemed responsible for several social media posts from the student life account. This was happening despite the university’s own marketing guidelines: “Departments should prioritize authentic photography and approved design assets rather than AI-generated visuals… All visual materials should follow the official UMass Dartmouth brand standards.”

Harrington said that her bosses in the marketing department were aware of — and frustrated about — AI appearing on the student life account, which is one of the roughly 250 UMass Dartmouth-affiliated social media accounts, including for academic departments, student organizations, and athletic teams. A representative from the marketing department did not respond to The Light’s outreach.

Meanwhile, Harrington and McCarthy in recent years have received multiple syllabi that said using AI could “result in an automatic zero grade and potentially failure of the entire course.”

Harrington felt personally offended by the use of AI to fabricate social media art. She could have made that post at her on-campus job. Why was someone generating art instead of giving her the chance?

Their experience is part of the rapidly changing landscape inside schools and universities. More than half of American teenagers now say they use AI for help with their schoolwork, according to a recent Pew Research Center poll — a rate that has doubled since a 2024 survey. Many students also believe their peers are using AI to cheat. In response, universities are racing to figure out policies to protect their academic mission. 

UMass Dartmouth officials, for example, said they convened an “AI Task Force” to address the new technology. But a representative from that task force declined an invitation to talk about its work. “It would be premature to publish anything about the AI Task Force’s work, as we haven’t yet concluded it,” said Amy Shapiro, the chair of the task force and dean of UMass Dartmouth’s honors college.

To hold their university accountable, the two sophomore roommates, who are staff contributors to the campus newspaper, The Torch, penned an opinion cartoon and an opinion column, respectively. They sought to highlight the AI creeping over guardrails and into the university’s daily life.

Last month, Harrington’s cartoon lampooning the “slop” got over 20,000 views. When The Torch’s Instagram page ran it, the cartoon garnered nearly 4,000 likes.


“It got a lot of reach within the school, and people were happy I said something,” Harrington said.

McCarthy’s column opined, “With 573 students enrolled in the College of Visual and Performing Arts (as of May 2024) … a machine [making art] dismisses the hardworking students we have across this campus.” 

Since the opinion pieces were published, Harrington and McCarthy have watched the university remove almost half a dozen apparently AI-generated posts. One, which advertised an event on female leadership, included a caption encouraging young women to “Gain Visibility.” Yet the post appeared to feature AI-generated women who do not exist. McCarthy’s original column pointed out the AI’s subtle hallucinations, including that one woman was wearing two watches.

After The Torch ran the opinion pieces, the university replaced this post with one featuring real people: the alumnae and students of UMass Dartmouth.

AI in the classroom

Even as universities like UMass Dartmouth encounter the uncharted risks of incorporating new technology, they are working to harness it.

The computer science department at UMass Dartmouth within the last five years has launched an Artificial Intelligence concentration and a minor. The concentration is one of four specializations for computer science majors, alongside cybersecurity, game design, and software engineering. Students who are not majoring in computer science can declare AI as their minor. 

“The selling points are naturally the job market,” said Haiping Xu, professor and chair of the computer and information science department. “The field is getting more and more popular.” Starting salaries for AI engineers in Massachusetts can exceed $100,000, according to industry estimates.

Caitlyn Harrington (left) and Samantha McCarthy, sophomore roommates at UMass Dartmouth, said they’re concerned about “AI slop” on campus. Credit: Colin Hogan / The New Bedford Light

Xu believes that a strong foundation in coding and computer science will become even more important as AI systems become more prevalent. “Eventually coding becomes more and more important because of software security,” he said. “AI depends on predictions based on the neural network, so it’s not always reliable. It requires a human being.”

Experts are split on whether human programmers will remain as important as Xu predicts. Regardless, every classroom is adapting to the new reality — including Xu’s.

Xu himself prints paper-based exams for his senior computer science students. “I always give exams in that way. It’s a traditional way, and it reduces things like cheating,” he said.

Students must take programming courses as a prerequisite before joining the AI concentration, Xu said, and they mostly are not allowed to use AI to complete their assignments. They must also defend the code that they write. “If a student can explain their thinking and justify why they wrote the code, then that’s a sign that they code by themselves,” Xu said.

Students have accepted this approach, according to Xu. “They understand that they cannot be the slave of the AI.”

Communication with students about the risks of AI is, at present, UMass Dartmouth’s main classroom policy on the technology. In 2024, the faculty senate passed a two-page “Guidelines” document on the use of generative AI. The document maintained that attempting to pass off AI-generated content as one’s own work is plagiarism, but beyond that left much to the discretion of individual faculty members.

“The guidelines are that it’s up to faculty to define what acceptable use is,” summarized Shannon Jenkins, an associate provost. “The administration is not going to dictate to faculty the use of tools in their classroom… What we are looking at is ways to support faculty in those determinations.” 

Jenkins said that the university administration is “very conscious and aware of academic freedom,” and wants to provide professors with the tools to incorporate AI, if they choose to do so. On the broader question of what acceptable uses of AI are, Jenkins said that the recently assembled AI task force “is going to come up with our approach writ large.”

At peer universities, campus-wide AI policies have already been written. UMass Amherst, for example, convened its own AI Task Force in 2023. A core tenet of Amherst’s initial task force was that humans (whether students, faculty, or staff) retain accountability for whatever AI produces.

Michelle Trim, a computer science and information professor, co-chaired that effort and will again serve on the task force this year as the university looks to update its policies. While Trim didn’t comment on UMass Dartmouth’s AI social media usage, she said the conversation “is illustrative of the challenges we’re grappling with.”

Xu, the UMass Dartmouth computer scientist, said he had not seen the AI social media posts. But he added, “Personally, I think we have to be careful using AI.”

AI in the humanities

Chris Eisenhart joined the UMass Dartmouth English Department in 2004, when the dot-com craze was winding down and a laptop computer retailed for more than $2,500 in inflation-adjusted dollars. He has now taught rhetoric, writing, and communication since before many of his present students were born.

In 2023, Eisenhart went on sabbatical just as new software from OpenAI, the chatbot ChatGPT (then powered by GPT-3.5), was making headlines worldwide. He was stopped in his tracks.

In the middle of rethinking his curriculum for a course on style, Eisenhart remembers wondering, “Do I even need to teach this class anymore? What’s the real capability here?” 

He connected with researchers at George Mason University, including Douglas Eyman, who were studying the potential of chatbots in English courses. Eisenhart pitched an idea: what if he put ChatGPT through his coursework on style? How would it respond? he wondered. And were there any stylistic lessons to be learned from observing how a computer handled a semester’s worth of classic writing prompts?

“I evaluated the way you might evaluate student work, but also was thinking about its pedagogical relevance” — in effect, treating the robot’s work as a student’s regular output, but also considering if any of it was interesting enough to study. 

The result: “Very little changed, actually,” Eisenhart said.

The chatbot was useful at identifying redundancies, like an editor crossing out unnecessary words. But it also struggled, and not in particularly illustrative ways. “The things ChatGPT tends to struggle with are the things that students tend to struggle with,” Eisenhart said. 

Teaching students to write in the active voice, for example, is a common lesson, and ChatGPT was not a good tool for those lessons. In a simple example, the sentence, “Medicine was given to the patient,” is something that Eisenhart said ChatGPT might struggle to rewrite. “You have to come up with ‘doctor’ if you’re going to write that in an active voice. GPT has mixed results, especially with more complex sentences,” he said. 

“So unfortunately the lessons from the study were pretty minimal as far as what we could incorporate into the style curriculum.”

Students’ feelings about AI shift

As Eisenhart’s experiment was taking place, McCarthy, the UMass Dartmouth sophomore who wrote the opinion column, remembers her feelings about AI shifting.

“When ChatGPT first came out… It was a novelty,” McCarthy said. “Then it became more of a concern.”

Today she’s an English and philosophy major, and she shares the concerns of many students that their peers may be tempted to cheat. Furthermore, “it’s hurtful” to see people “quickly type in a prompt” when she and her friends may spend hours on their assignments. 

Samantha McCarthy (left) and Caitlyn Harrington in the small basement offices of The Torch, UMass Dartmouth’s student newspaper. Credit: Colin Hogan / The New Bedford Light

That was especially true when university accounts were posting AI-generated art. “We see these accounts posting things and it feels hypocritical,” McCarthy said. She sympathized with her roommate, the author of the “slop” cartoon, who spent late nights in the studio arts building working on her drawings and paintings.

As with programming and every other field, AI’s effect on the job market for those studying the humanities is not yet clear. Among the cases made for why AI won’t take your job is simply that people want work made by other people.

For McCarthy, that has become evident with her work at The Torch, the student newspaper. “It’s such a great community… It’s become like a family,” she said. Next year, McCarthy is slated to become the paper’s new editor-in-chief.

Covering the rise of AI on campus has generated a wave of positive feedback at the paper, she said, and assured her that people care about the “robbery of what makes our brains so special.” 

Email Colin Hogan at chogan@newbedfordlight.org

