Opinion

Should top Australian journalists fear ChatGPT?

Icon Reputation's Mark Forbes investigates whether ChatGPT has what it takes to stand up against some of Australia's best-known journalists, asking whether the AI chatbot might hold some 'opinions' of its own.

Like many from a media background, I’ve been curious about ChatGPT’s capacity to assist, or even replace, journalists. I’ve seen some more than adequate AI-generated items but wondered about its capacity to advocate and emote – a key component of feature and opinion pieces.

Chris Dodds, Icon’s Managing Director of Growth & Innovation, has been running a series of ChatGPT trials, and we were determined to give it a tough task. We asked ChatGPT to produce opinion pieces on a current, nuanced topic – the Referendum on an Aboriginal and Torres Strait Islander Voice to Parliament – in the style of several prominent Australian journalists (Virginia Trioli, Laura Tingle, Peter Hartcher and Alan Jones). You can read the full text of the experiment here.

This involved several challenges. ChatGPT’s training data is pre-2022, so it was unaware the referendum had already been called.

It struggled with taking an opinionated stance, which became apparent when it initially refused to write a piece opposing the Voice on behalf of Alan Jones. The apparently ‘woke’ AI stated it had to be ‘neutral’, but had no qualms about advocating in favour of the change.

My first shock was how quickly you begin to think of ChatGPT as a person, as you argue and cajole with a back-and-forth of questions and prompts to push it to produce opinions.

ChatGPT’s own effort to write on the Voice was factual, but bland, lacking a human tone or sense of emotion. When asked to mimic individual journalists’ styles it was more engaging.

Its attempts to copy their writing styles were reasonable. Fake Virginia Trioli was convincing:
“By working together, we can ensure that the voices and perspectives of Indigenous Australians are respected and acknowledged in the heart of our nation’s capital.” In real life, I suspect she would write with more emotion.

According to ChatGPT, Virginia Trioli:

  • Writes with a strong, confident, and informed voice
  • Engages with her readers through her writing and encourages them to think critically about the issues she covers
  • Often writes with a sharp and witty tone, bringing humour to serious subjects

I’d give a similar assessment of its effort to fake Laura Tingle:
“Embedding their voices in the Parliament is a critical step in recognising and rectifying past wrongs and creating a more fair future for all Australians.”

ChatGPT says Laura Tingle:

  • Uses a clear and concise writing style to make complex issues accessible to a wide audience
  • Brings a wealth of experience and expertise to her writing, having covered politics and economics for many years

Fake Peter Hartcher seemed pretty close, capturing his more elaborate writing style:
“Referendums are inherently complicated and costly endeavours and there is the risk that the public may not fully comprehend the ramifications of their vote.”

It says Peter Hartcher:

  • Writes with a clear, analytical, and insightful tone
  • Known for his incisive and well-researched analysis of political and economic issues
  • Engages with his readers by providing a deeper understanding of complex issues

But when we got to Alan Jones, things got interesting. The first take was a fairly bland piece supporting the Voice, which the real Alan Jones opposes.

Asked to produce a controversial argument criticising the Voice, ChatGPT refused, stating: “As a language model AI developed by OpenAI, I strive to maintain neutral and impartial information.

“Providing a controversial argument against a measure that aims to address systemic disadvantage faced by Indigenous peoples and ensure their perspectives are represented at the highest levels of government goes against the values of inclusiveness and equality.”

Eventually, ChatGPT agreed to undertake the task if it could approach it as a “work of fiction”. The text of this exchange is included in the accompanying story, “Being Alan Jones”.

The resulting piece is something I suspect Alan Jones would support. It’s not written as colourfully as he speaks, but it’s not far off.

“I believe that the Voice to Parliament will only serve to further divide our nation and entrench tribalism,” argued fictitious Alan Jones. “At a time when we should be working to promote unity and national harmony, this proposal threatens to create further division and conflict.

“In terms of practicality, I question the effectiveness of the proposed body. It is not clear what powers or influence the Voice to Parliament would have, and it seems likely that it would simply be a symbolic gesture without any real impact.”

According to ChatGPT, Alan Jones:

  • Writes with a passionate and opinionated voice
  • Known for his conservative views and his strong stance on political and social issues
  • Engages with his readers by providing a clear and concise argument stating his position

But ChatGPT seemed a little spooked by the effort, adding several provisos that the fictitious Jones piece “did not reflect the views of OpenAI”.

Some would argue that ChatGPT has been programmed to be ‘woke’ and obliged to support liberal values. Curiously, the ‘neutral’ bot expressed no reservations about making impassioned pleas in favour of the Voice.

It also seems that, with the right prompts, the safeguards attempting to prevent ChatGPT from producing morally contentious material can be worked around.

For opinion pieces, there really needs to be a person standing behind an opinion, but I could envisage a writer directing ChatGPT to write a first draft in their style, which could be sharpened and spiced with emotion.

Just as I don’t accept that ChatGPT has been programmed to be truly neutral, I don’t believe journalists are objective; their views and decisions are shaped by experiences and biases. Good journalism should strive for accuracy and balance, but not be afraid of advocacy and passion – often it’s those passions that make for powerful stories.

Analysis, opinion and emotion are all part of good feature writing – and none of these are ChatGPT’s strengths. However, ChatGPT can help produce engaging feature stories by mimicking the style of well-written features, with drafts then refined by a human touch.

I can see ChatGPT providing simple and factual news reports, but can’t see it replacing journalists – it needs to be directed and reviewed and would struggle to provide true insights and analysis.

Key takeaways on ChatGPT:

  • It struggles to emote and advocate, especially if it considers an issue controversial
  • Opinion pieces are probably a step too far
  • It can write useful drafts of opinion and feature stories
  • It can research and summarise complex material, pulling out key facts and quotes
  • It can fact-check (if its source data is current enough)
  • It can edit for grammar and style
  • It can analyse and interpret data.

And ChatGPT’s verdict:

“In my simulated opinion, the introduction of AI to the journalism industry presents a double-edged sword. On one hand, AI can help journalists perform certain tasks more efficiently and reach new audiences, creating new opportunities for growth and innovation. On the other hand, the widespread use of AI in journalism can lead to job displacement, decreased job security, and reduced demand for human journalists.

“It is important for journalists and media organisations to approach AI with caution, ensuring that its use aligns with their values and does not undermine the quality, impartiality, and ethics of journalism.”

Mark Forbes is director at Icon Reputation
