The media industry's adoption of artificial intelligence (AI) has presented both challenges and opportunities. According to an article by Actual News Magazine, media outlets are considered to have a "primitive" approach to AI, with experts suggesting that they should focus on addressing the needs of their audience [491eb90c].
Julien Toyer, editor-in-chief of Thomson Reuters, believes the media is not approaching AI in the right way and should prioritize understanding and meeting its audience's needs. Matthew Kaminski, editor-in-chief of an unknown publication, notes that major media outlets are struggling to capture readers' attention and reclaim their time, and that AI can provide powerful solutions; he nevertheless considers current AI efforts to be very "primitive" [491eb90c].
The article also notes the media's reluctance to adapt to new platforms such as TikTok, a hesitancy that parallels its caution toward AI. The advent of the Internet has already led to the closure of numerous newspapers and to job losses for journalists. Muhammad Lila, founder of Goodable, warns that AI-driven hyper-personalization can lock readers into echo chambers, potentially amplifying the spread of misinformation [491eb90c].
Despite these risks, speakers at the Collision conference see positive aspects of AI for the media industry: it is viewed as a tool that can break down barriers and allow people to tell their own stories, as an enabler of original content, and as a means of improving journalism [491eb90c].
A survey of 300 researchers conducted by Elsevier reveals that corporate researchers are positive about adopting artificial intelligence (AI) but have concerns about misinformation and inaccuracy. The survey found that 96% of participants think AI will accelerate knowledge discovery, and 71% say the impact of AI on their field will be transformative or significant. However, participants identified specific concerns that must be addressed for AI adoption: 96% believe AI will be used for misinformation, 84% fear it will cause critical errors, and 86% worry it will weaken critical thinking [a183dbf5].
Transparency, ethics, and the ability to validate AI outputs are seen as crucial for trust in AI tools. The survey also highlights the risk of "shadow AI," where AI systems are used without explicit approval or oversight, which poses a significant risk to heavily regulated industries. Elsevier emphasizes the importance of domain-specific expertise and data to mitigate potential risks and ensure precision and accuracy in research [a183dbf5].
Following the assassination attempt on Donald Trump, AI-generated fake images and conspiracy theories have proliferated on social media platforms. The incident occurred on [date] and has sparked discussions about the role of AI in spreading misinformation. Social media platforms are struggling to identify and remove these fake images, highlighting the challenges of content moderation at scale. The conspiracy theories surrounding the attempt are also a concern, with some claiming it was a staged event or a false flag operation; these theories are gaining traction among certain groups, further fueling misinformation. The incident has raised questions about AI's impact on the dissemination of news and the need for improved content moderation strategies [1ec4b51b].
In a recent analysis, RAND researchers highlight AI-driven manipulation of social media and public opinion as a significant threat to democracies, particularly in the context of the upcoming 2024 U.S. elections. Li Bicheng, a Chinese academic, proposed a plan in 2019 to use AI to create fake social media accounts that could go unnoticed if executed effectively. The analysis notes that while China's previous online disinformation efforts have been simplistic, advances in AI could make them far more effective. Nathan Beauchamp-Mustafaga warns that AI could make interference in the 2024 elections more potent, echoing U.S. threat assessments indicating that China's disinformation tactics are growing more sophisticated. The article urges social media platforms to improve their identification of fake accounts and the public to maintain skepticism toward social media content, emphasizing the urgency of addressing AI manipulation before the elections [01a18791].