
#IFJLondon: defending journalists and journalism in the age of AI

Misinformation, journalistic integrity and the need for regulation all featured in the panel discussion.

Larry Goldbetter, National Writers Union president and chair of the International Federation of Journalists’ AI Experts Group, chaired the morning’s session on AI and its impact on journalists and journalism.  

First to speak was Dr Tomasz Hollanek, co-creator of Cambridge University’s AI & Responsible Journalism Toolkit, who told IFJ representatives that his research focused in part on how journalists can report responsibly on AI. While work considering ethical AI narratives is underway at the university’s Leverhulme Centre, he said attention must be paid to the need for improved AI literacy to help people better understand how the technologies work.  

Hollanek explained that although tools for journalists exist, these are “often created by academics and not tailored to journalists’ needs.” The AI & Responsible Journalism Toolkit, launched in October last year, helps rectify this by debunking myths about AI and producing useful resources, including reporting guidelines and a database of experts. 

In her contribution, Zuliana Lainez, president of the National Association of Journalists of Peru, said there had been a rise in the use of AI technologies, but that it had become increasingly difficult to distinguish content produced by generative AI from that produced by journalists, owing to the failure of outlets to lead with transparency and label content. Lainez, also the IFJ’s vice president, said media owners were too often focused on generating content without adequate consideration of its quality. 

Attendees heard a publisher’s perspective from Matt Rogerson, acting chief communications and live officer at Guardian Media Group. Discussing the challenges faced, he said many tech companies defended their use of content published online as “fair use”, and that without the necessary regulatory frameworks, publishers face a battle with developers as they seek to end the scraping of content and ongoing rights infringements. Robust in his position, Rogerson argued that with many models trained on journalism, companies should be required to pay for the content they use. On action by the publisher, he spoke of the work of an AI Working Group with stakeholders including editorial and licensing colleagues. Confirming that generative AI is not used in the creation of journalism at The Guardian, Rogerson noted AI could be a feature within journalism but should not act as a replacement.  

“Journalism is about truth and about finding out facts and then publishing those facts in stories,” he said. 

Charlotte Tobitt, Press Gazette UK editor, considered the risks AI poses to independent journalism organisations, highlighting a possible increase in redundancies as companies alter their business models. Audience members heard that a recent survey found only 13 per cent of news executives felt “well prepared” to take advantage of AI’s potential. Tobitt shared that where AI had been deployed, examples included transcription and chat-based search tools. She reminded the audience that some publishers had signed licensing deals while others had pursued litigation over use of their content.  

Dr Adam Cox said AI was now “part of the journalistic toolbox” and reflected on findings by himself and colleagues following a study with 15 trainee journalists using ChatGPT soon after its launch. 

He said: 

“We identified three major areas of risk from human-AI collaboration. One was whether journalists will be sceptical enough when using AI tools; a second is whether journalists could lose a sense of agency and let their journalistic muscles atrophy; a third stems from the lack of transparency, both in how the systems work and, at times, in where the content comes from.”  

Cox told representatives that he hoped funding would soon be granted for further research into AI and journalism in the Global South.  

Danger or exaggeration?  

Questions from the floor allowed for discussion on what action could be taken by unions to protect journalists’ jobs and safeguard journalism from AI technologies. Michelle Stanistreet, NUJ general secretary, said: 

“It is important we do not let emerging technology change the way in which we see and define journalism.”

Commenting on the shift in attitudes among publishers, she noted the importance of how quality journalism is defined. 

Goldbetter referenced a six-month strike by screenwriters in the United States, which included calls for safeguards in the use of AI, and stressed the important role unions play in bargaining on behalf of members and in defending journalists’ rights.  

The IFJ delegation, including representatives from Latin America, Africa and the United States, agreed to review a London IFJ AI declaration introduced by Tim Dawson, IFJ deputy general secretary, as part of ongoing discussions on the issue.  

