LONDON—AI is both a threat and an opportunity for journalism, with more than half of those surveyed for a new report saying they had concerns about its ethical implications for their work.
While 85 percent of respondents had experimented with generative AI such as ChatGPT or Google Bard for tasks including writing summaries and generating headlines, 60 percent said they also had reservations.
The study, carried out by the London School of Economics' JournalismAI initiative, surveyed more than 100 news organisations in 46 countries about their use of AI and associated technologies between April and July.
“More than 60 per cent of respondents noted their concern about the ethical implications of AI on journalistic values including accuracy, fairness and transparency and other aspects of journalism,” the researchers said in a statement.
“Journalism around the world is going through another period of exciting and scary technological change,” added report co-author and project director Charlie Beckett.
He said the study showed that the new generative AI tools were both a “potential threat to the integrity of information and the news media” but also an “incredible opportunity to make journalism more efficient, effective and trustworthy”.
Journalists recognised the time-saving benefits of AI for tasks such as interview transcription.
But they also noted the need for AI-generated content to be checked by a human "to mitigate potential harms like bias and inaccuracy", the authors said.
Challenges surrounding AI integration were "more pronounced for newsrooms in the global south", they added.
“AI technologies developed have been predominantly available in English, but not in many Asian languages. We have to catch up doubly to create AI systems, and AI systems that work with our local languages,” the report quoted one respondent in the Philippines as saying.
Co-author Mira Yaseen said the economic and social benefits of AI were concentrated in the global north, while its harms disproportionately affected the global south.
She said this was "exacerbating global inequality", adding that a "power-conscious framing" of global AI development and adoption was needed.