Global media and communication experts are forecasting a democratised information landscape for 2050, shaped by trusted AI narratives. A study published in the journal AI & Society, based on four weeks of surveys conducted before the arrival of ChatGPT, revealed a tension between optimism and concerns about misuse and misinformation. Researchers Katalin Feher, Lilla Vicsek, and Mark Deuze argued that AI is both a tool and an independent actor, advocating for AI-to-AI solutions to combat technological abuse.
This ‘glasses model of AI trust’ would balance hope and uncertainty, while the study underscored the need for responsible AI policies and research.
Negative forces like fear, denial, and anger are predominant in the AI-in-media debate. The study showed a different side of the issue: participants reported that they had significant faith in future generations to maintain a balance between the potential and risks of AI.
Assessing trust
Trust in artificial intelligence forms the linchpin of the future media and information-communication landscape. As an emergent field, Information Communication & Media (ICM) must grapple with integrating society, culture, and technology while contending with the trustworthiness of the AI tools it is developing. This trust issue is critical, given the rapid spread of AI-driven phenomena such as conversational media, deepfakes, and bot journalism, which pose new challenges to the accuracy and reliability of information dissemination.
Ask 10 journalists about using artificial intelligence tools like ChatGPT in their work, and roughly nine will answer with reference to the dangers and risks.
The glasses model of AI trust is central to understanding this study’s findings. It symbolises the delicate balancing act between the optimistic beliefs in AI’s potential to democratise information and the growing concerns over its capacity to misinform and manipulate. The model encapsulates the duality of experts’ perspectives, acknowledging the transformative promise of AI while remaining vigilant to its pitfalls.
Experts foresee a future where AI simplifies vast data into crisp, adaptable information tailored to media, person, and place. This future envisions a seamless collaboration between humans and machines, magnifying the reach and impact of human narratives through AI’s capabilities. However, the model also flags the risk of reliance on potentially biased and unreliable data, which could exacerbate misinformation and trust issues.
A recurring theme throughout the responses was the aspiration for universal access to information. By 2050, experts predict that AI-driven media will be democratically available, providing personalised and unbiased narratives. This vision is supported by the belief that AI can control and mitigate its adverse effects, ensuring that information remains trustworthy and accessible to all.
In striking contrast, AI is acknowledged as a ‘black box’ technology – sophisticated yet opaque, its inner workings obscured from human understanding. This duality presents a substantial challenge to trust in AI as the technology redefines new media and computer-mediated communication.
Challenges of AI-driven media transformation
The transformation AI brings to ICM systems is multifaceted. While experts anticipate cost-effective and productive operations, concerns about the potential for fake media, systemic bias, and misuse persist. These apprehensions highlight the necessity for a nuanced approach to AI deployment that considers socio-cultural values and the impact on trust.
Experts argue that AI-driven ICM has the potential to benefit users by enhancing their experience of their surroundings. Yet, the trade-off is the risk of an information overload, unfiltered by human judgment, which could lead to machine-dominated communication and news production.

The study’s participants placed significant faith in future generations to maintain a balance between the potential and risks of AI. They believed that key social and human values will persist, with new generations continuing to build trust in emerging AI systems. This trust is seen as crucial to preserving democratic values amid the technological revolution.
This future-oriented optimism is, however, not without its caveats. Some experts express concerns about professional near-sightedness, suggesting that overemphasising current trends may lead to an overly rosy outlook. There is an implicit warning here: without a critical and responsible approach to AI development, we risk underestimating its long-term impacts.
Despite these challenges, there is a consensus among respondents that AI has a pivotal role in the fight against misinformation, as demonstrated during the Covid-19 pandemic across social media platforms. This belief underpins the notion that AI, if regulated properly, can serve as a powerful ally in maintaining the integrity of information in the future.
An evolving landscape
Looking towards 2050, the survey participants envision a world where AI breaks down language barriers, fostering a more integrated global Internet. They see AI not just as technology but as a partner in progress, capable of addressing our time’s infodemic and post-truth challenges. This progress is contingent on developing technologies that can effectively verify information sources, mitigating deepfakes and synthetic media risks.
Nevertheless, experts are mindful of the potential for AI to be utilised for anti-democratic purposes, such as surveillance and intimidation. The call for cross-cultural AI ethics is clear, stressing the need for an ethical foundation to guide the responsible use of AI technology and ensure societal well-being in the distant future.
This article was written by Laio, an AI-supported editor used by Innovation Origins, and further edited for clarity and house style by a human.
News Wires