Artificial Intelligence and journalism: Constructive or malign?

It has some unlikely advocates – from Vladimir Putin to the BBC. Across the media, headlines extol its virtues or warn of its dangers: is it a misinformation superspreader or a complex tool central to the future of trusted journalism? Like Google before it, or Twitter and TikTok, Artificial Intelligence (AI) systems – from ChatGPT to Reporters and Data and Robots (RADAR) – are making headlines.

The Engineered Arts Ameca humanoid robot with artificial intelligence is demonstrated during the Consumer Electronics Show in January 2022 in Las Vegas, Nevada. Credit: Patrick T. Fallon / AFP

And yet the use of automated news writing and distribution is not new. The Associated Press has been automatically generating stories from economic data for 15 years, and everything from weather forecasts and football match reports to data-driven local stories, election results and the Los Angeles Times' QuakeBot – which writes up earthquake reports within minutes of the event – has been consumed by an unknowing public for more than a decade.

So why now do we talk of AI shaking up journalism? And do we have something to fear? The instant popularity of OpenAI's chatbot ChatGPT – used by over 100 million people in just three months – is causing frenzied speculation about its impact on journalism and the integrity of the information we consume. With newsrooms cutting jobs, there is a growing temptation for reporters under deadline pressure to rely on AI systems.

ChatGPT can write as well as many journalists, producing convincing copy. The problem is not with the technology but with who controls it and for what purpose.

Already, greedy corporate news organizations have sacked journalists and replaced them with automated systems. One of the largest UK media groups, Reach, shed 200 jobs just as its CEO announced a new working group looking to expand the use of AI. BuzzFeed saw its share price rise on the back of getting rid of humans and replacing them with AI systems.

Estimates vary wildly, but expectations are that up to 32% of jobs in Belgium's information and communication sector will go, as will 17% across Germany's creative sector.

And herein lies the problem. Yes, AI can automate repetitive tasks, help create first drafts and streamline workflows, and it has enormous potential. But it must be properly regulated, be transparent about its sources and, above all, have journalists and the public interest at the core of its development, implementation and daily use.

Former BBC Director of News Richard Sambrook says AI can be used “constructively or malignly”. Some fear it is a precursor to a golden age of fake news. The data sets on which AI apps are trained can be infected to the core with false facts. CNET had to issue corrections for 41 of the 77 stories it published using AI tools. Fact-checking group NewsGuard called ChatGPT “the next great misinformation superspreader”.

It also has the potential to exacerbate stereotypes and discrimination – the image-generation software DALL-E 2 has attracted widespread criticism for the racial and gender bias embedded in its algorithms.

But does that mean we should simply turn our backs on the technology? What makes business sense and what is good for journalism are not always easy bedfellows. That is why it is essential that journalists and their professional unions are at the heart of the conversation about the future of newsrooms. Journalists should work closely with developers, keeping a keen check on the potential consequences of algorithmic decision-making for society. And AI-produced copy should be rigorously fact-checked.

It is inevitable that the systems used in modern newsrooms will change. We should not fight against technological development but shape how it is used. Central to that is a just transition: workplace policies that require corporate accountability on job displacement, retraining programs and opportunities to move into new roles.

As Richard Sambrook says, “as it develops further we need guidelines around its use – if not, in due course, regulation”.

Among those guidelines and ethical principles must be a requirement for transparency, so that it is possible to discover how, and why, a system made a decision or acted the way it did. A code of ethics for the development, application and use of AI should ensure such systems serve people and the public interest, not corporate or political priorities, and should uphold the principles of human rights, freedom, privacy, and cultural and gender diversity. Systems must be designed and controlled so that negative or harmful human biases – be they of gender, race, sexual orientation or age – are identified and not propagated. And there must be a human-in-command approach, ensuring workers have the right to access, manage and control data.

Given the explosion of automated news, this is now an urgent discussion. How that debate unfolds will define the true contribution of AI to journalism – constructive or malign?

This article has been written by journalist and IFJ external consultant Jeremy Dear and originally appeared on Brussels Morning.