“So can AI do our jobs? Some aspects of it, yes. But the more substantive side — it can’t, if we are bent on producing quality that only humans can deliver”
FOR all the journalism subjects I teach, my syllabi contain a provision called “Articulation of commitment to intellectual honesty and journalistic integrity.” It takes as a given that certain things are expected of journalism students, and more importantly of decent human beings, when it comes to maintaining integrity in producing journalistic works.
“You also acknowledge the potential that artificial intelligence can bring into people’s way of life, but commit that you will never use it in place of what is expected of you as a journalist,” part of the provision goes.
The plan, on the first day of class, is to make students recite a paraphrase of the sentence above. Uttering and enunciating the words as an articulated commitment would be a constant reminder to students to be mindful and cautious about their use of AI. On a higher plane, I hope it would drive them to conduct themselves ethically at all times despite the temptations of ease and convenience.
I doubt whether any student in UP would deliberately enter the journalism program with the intention of relying purely on technology to do the work. They are here, after all, taking BA Journalism precisely because they have it in them to ask the right questions, to investigate, to create.
These early days, the tendency is to see AI as a bad thing that somehow mars the integrity of the finished work — at least, for people from my generation, who are not digital natives and who tend to view new things with suspicion.
But like many new things that eventually grow on us, AI is certainly staying. We better snap out of our denial and just deal with it.
In its August 2023 meeting, the University of the Philippines’ Board of Regents approved the Principles for Responsible and Trustworthy Artificial Intelligence, which acknowledges the excellent opportunities provided by AI but also the significant risks it creates.
According to the document, there are five basic general principles: common good, empowerment, cultural sensitivity, privacy, and accountability. There are others specific to research and development: meaningful human control, transparency, fairness, safety, and environmental friendliness. Meanwhile, the principles specific to education are the primacy of learning goals, human capital development, capacity building, education management and delivery, and collaboration. Still, specific guidelines for journalism and journalism education have yet to be fleshed out.
This is where we are precisely now: Already here, but not quite there.
***
That’s journalism education. What about actual practice? In a piece called “At Least Two Newspapers Syndicated AI Garbage” published in The Atlantic, Damon Beres and Charlie Warzel talk about an article that had been syndicated by a company called King Features in prominent newspapers such as the Chicago Sun-Times and The Philadelphia Inquirer.
“Heat Index” is supposed to be a summer guide, providing “303 Must-Dos, Must-Tastes, and Must-Tries” for the season. The compilation of harmless treats, however, is not so harmless after all. “The summer-reading guide matched real authors with books they hadn’t written… a hint that the story may have been composed by a chatbot. This turned out to be true,” the authors write. In some cases, people were matched with the wrong job titles, if the people mentioned even existed at all.
Some parts of the article were written by a freelancer named Marco Buscaglia, who admitted to using ChatGPT for the article; he asked AI to help with book recommendations.
“Slop has come for the regional newspapers,” write Beres and Warzel. “AI-generated content is frequently referred to as slop because it is spammy and flavorless.”
The authors say they could see how Buscaglia’s mistake “could become part of a pattern for journalists swimming against a current of synthetic slop, constantly produced content, and unrealistic demands from publishers.”
In this sense, “sloppy” takes on a new significance.
***
Given the still-murky issues that have yet to be threshed out and the boundaries that have yet to be set, we can at the moment strike a delicate compromise. We can acknowledge the tasks that can be made easier with the use of these modern tools. With less time spent on transcribing interviews, for instance, or scouring the web for information on an unfamiliar subject, we can now devote more of our time and resources to separating legitimate sources from bogus ones, verifying claims, asking uncomfortable, non-obvious questions, and planning and writing our stories so that they become as insightful and compelling as they need to be.
So can AI do our jobs? Some aspects of it, yes. But the more substantive side — it can’t, if we are bent on producing quality that only humans can deliver. Then again, let us not be smug. We need to constantly look into how we work, address our failings and weaknesses, and ensure that our work reflects how human we are at our core.
adellechua@gmail.com