Highlights
- AI is transforming journalism with synthetic reality, automated writing, and newsroom tools.
- Benefits include speed, scalability, cost-effectiveness, and accessibility.
- Risks involve misinformation, loss of human nuance, ethical dilemmas, and regulatory challenges.
In the past few years, the journalistic landscape has been disrupted by the emergence of technologies capable of generating, presenting, and even packaging journalism without, or with minimal, human reporters. The presence of synthetic anchors on television screens and of news articles written by algorithms indicates that automation in the newsroom is no longer a topic of speculation: it is here, bringing both opportunity and peril.
What is “synthetic news”?
“Synthetic news” refers, broadly speaking, to any news content where one or more of the key elements of news (reporting, writing, presenting) are generated or mediated by AI, rather than produced entirely by human journalists. The key forms include:
- Synthetic anchors: digital avatars that look, behave, and speak in ways that simulate a human presenter but are entirely AI-generated. They read news scripts aloud, exhibiting micro-expressions, gestures, and posture that approximate human anchors.
- AI-written articles: the use of large language models (LLMs) to write all or significant parts of articles, from data-driven summaries to full stories and op-eds.
- Newsroom automation: beyond writing, AI assists with content selection, fact-checking, translation, audio/video production, summarization, and even the choice of headlines or images.
Examples in action
The Hangzhou News operation in China’s Zhejiang province has rolled out six AI virtual anchors that mimic human behaviour and expressions; reportedly, they deliver the news reliably with no on-air operational errors. The startup Hour One offers a service called Hour One News that produces a news video from a text script. The workflow is simple: pick a virtual human, select a studio, input the article text, and the system renders a video of the avatar delivering the script with animated camera angles.
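The pick-avatar, pick-studio, submit-script workflow described above can be sketched in a few lines. This is a hypothetical illustration only: none of the names (`NewsVideoRequest`, `render_news_video`, the IDs) come from Hour One's actual API; they simply model the steps the text describes.

```python
from dataclasses import dataclass

@dataclass
class NewsVideoRequest:
    avatar_id: str   # which virtual human presents the story
    studio_id: str   # which virtual set / background to use
    script: str      # the article text the avatar will read aloud

def render_news_video(req: NewsVideoRequest) -> str:
    """Stand-in for a text-to-video rendering service.

    A real service would synthesize speech, animate the avatar's lip
    movements and gestures, and cut between camera angles. Here we just
    validate the input and return a placeholder output path.
    """
    if not req.script.strip():
        raise ValueError("script must not be empty")
    return f"renders/{req.avatar_id}-{req.studio_id}.mp4"

video = render_news_video(
    NewsVideoRequest(
        avatar_id="anchor-01",
        studio_id="news-desk",
        script="Markets closed higher today after a strong earnings week.",
    )
)
```

The point of the sketch is how little human input the workflow requires: a script and two configuration choices are enough to produce a broadcast-style video.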
In Italy, the newspaper Il Foglio has published a supplement of articles entirely created by AI, clearly labelled as such. The venture is partly an experiment in exploring what human journalism can offer audiences that AI cannot, such as creativity, insight, a human voice, and critical judgement.

Why are newsrooms experimenting with synthetic tools?
Cost-effectiveness and speed: AI can produce draft articles and generate routine reports (e.g., business earnings, sports scores, weather), freeing journalists to focus on analysis, investigations, and features. Virtual anchors do not require rest, can work 24/7, and can fill gaps (holidays, retirements, staff shortages).
Scalability: a single script can be automatically converted into many languages, giving a report broader global reach than a single human broadcasting in one language could achieve. Synthetic anchors can operate across locales without local anchors being present, and AI-written content can be quickly adapted across platforms (print, web, mobile).
Consistency and accessibility: AI tools can enforce spelling and formatting consistency, and synthetic anchors may allow media organizations to serve remote or smaller markets that cannot support full broadcast news teams. AI tools can also make news more accessible through voice output and multiple languages.
Challenges and Risks
Authenticity and trust: synthetic anchors look real, and viewers may believe a human is speaking. If an anchor presents inaccurate or misleading information, the damage could be significant. There is also the risk of deepfakes and misuse (propaganda, manipulation).

Lack of nuance, insight, and ethics: AI can restate facts and repackage data, but insight, investigative journalism, and ethical judgement are harder and may not be attainable. AI may misinterpret, omit context, or reproduce biases embedded in its training data.

Errors and hallucinations: even advanced LLMs hallucinate or get facts wrong. If unchecked, AI-written content could perpetuate falsehoods, making fact-checking even more critical; yet automated fact-checkers are themselves imperfect.

Job displacement: journalists, copyeditors, and anchors may fear job losses. While many argue AI will augment rather than replace them, the economic and social implications could be significant.
Regulatory and legal issues: who is liable for errors? What happens if synthetic anchors use cloned likenesses or voices without consent? What happens if an AI article falsely accuses someone? There are also copyright issues regarding training data and content sourcing.
What does the research say?
A comparative assessment of open versus closed generative AI models concludes that open models generally allow more transparency, auditability, and flexibility, and are therefore better suited to news contexts. Closed models may deliver more “product-ready” results, but at the cost of less oversight. Field experience (e.g., Il Foglio) shows that readers pick up on differences in voice, stance, and sometimes credibility, even when content is labelled “AI-generated.”
Transparency seems to matter.

What is next: trends and the future
Hybrid newsrooms: rather than solely human or solely AI, newsrooms will combine both, with human editors and AI reporting tools working together. AI may draft a story, suggest a headline, or generate a first draft or full article; a human editor then reviews, revises, adds depth, and performs ethical checks. More realistic synthetic humans will follow, with facial expressions, voice modulation, and linguistic framing synchronized with lip movements and body language. With generative AI, they may even adjust in real time, matching gestures to the emotional tone and reacting more naturally to the audience.

AI in live reporting is one of the more intriguing scenarios: drones, IoT (Internet of Things) sensors, and cameras stream data, and AI pulls from that data to produce summaries in real time. An AI anchor might read breaking news first while humans verify the information.
Expect regulations and standards: codes of ethics for AI journalism, requirements for how content is labelled (“this was written/assembled by AI”), and standards of accountability and accuracy.
Personalization and localization: customizing news content for individual users, potentially with personalized synthetic anchors speaking in the local language or dialect, and with style adapted to audience preferences.
Conclusion
Synthetic reality in news is already changing how news is produced, delivered, and consumed. It has tremendous potential for scale, accessibility, speed, and cost. But there are real risks: misinformation, loss of human insight, and ethical traps. Striking the right balance between automation and human judgement, maintaining transparency, and earning audience trust will be essential. As readers, viewers, companies, and regulators, we are in a time of significant transition: how news is designed, who produces it, and how we trust it are all being reinvented.