
Telus Pledges to Avoid AI in Indigenous Art Amid Cultural Appropriation Concerns

2024-06-22 12:14:11.219000

Comic book artist J. Scott Campbell has issued a warning to fellow artists regarding the use of generative AI artwork on major social media platforms like Instagram, Facebook, and Twitter. Campbell sympathizes with artists who are concerned about Meta using their posts to train generative AI models, but he advises against leaving these platforms entirely, arguing that abandoning the visibility and success they provide is not a wise choice. While he acknowledges the importance of competition in the social media landscape, he cautions against jumping to alternative sites that may lack engagement and stability. Instead, Campbell suggests that artists also cultivate mailing lists of loyal customers so they can engage with their fans directly. He voiced support for Cara, a crowdfunded social media and portfolio platform designed for artists and creatives that strictly prohibits the posting of AI-generated art and implements measures to prevent companies from scraping user images for AI training. Campbell shared his message on his Facebook account and has also created an account on Cara.

According to a recent article by Gigazine, Cara, an artist-run, anti-AI social platform, experienced a significant surge in user growth, going from 40,000 to 700,000 users in just one week. The platform was founded primarily by artists in reaction to Meta's plans to use posted content as training material for AI. One of Cara's central figures is artist Jingna Zhang, who filed a lawsuit in Luxembourg over a painting that closely resembled her photographic work and won on appeal. Zhang founded Cara as platform companies increasingly amend their terms of service to allow posted content to be used for AI training. Cara includes Glaze, a feature developed in collaboration with the University of Chicago that protects images from being used for AI training. The spike in Cara's user base is attributed to Meta's planned privacy policy changes, which would allow publicly posted content on Facebook and Instagram to be used for AI training; the revised policy was set to take effect for European users on June 26, 2024, unless they explicitly indicated that they did not want their content used. Cara is grappling with the unexpected traffic, including spammers and bots among its new users. Despite the challenges, Cara remains fully bootstrapped, and Zhang has chosen not to seek venture funding in order to retain control over the platform.

According to a recent article by Communications Today, Telus Corp., a Canada-based telecom company, has pledged not to use artificial intelligence in Indigenous art in response to cultural appropriation complaints. Telus notes that AI-generated content imitating Indigenous art has caused controversy in Australia, where artists have complained that their work is being used without consent. While Telus uses generative AI for customer service, the company cannot guarantee that outside AI models have not been trained on Indigenous art. Telus' decision to avoid AI in Indigenous art aims to preserve public confidence and address concerns of cultural appropriation. The pledge comes as AI art becomes increasingly difficult to distinguish from human-made art: one study found that participants could not reliably tell the two apart, correctly identifying the source only slightly more than half the time. Telus' commitment is part of its effort to respect and protect Indigenous art and culture.

In another development, Telstra, an Australia-based telecom company, has joined UNESCO's Business Council to promote the ethical development and application of AI. Telstra is the first Australian organization and the sixth globally to join the council. The UNESCO Recommendation on the Ethics of Artificial Intelligence advocates for AI technologies to be governed by values that promote human rights, dignity, and environmental sustainability. Telstra will work with UNESCO and other member organizations to support policy development in areas such as data governance and diversity. The Business Council will also develop an ethical impact assessment tool and joint initiatives to ensure AI serves the public good. Telstra's group executive of Product & Technology, Kim Krogh Andersen, emphasized the need for collaboration and responsible AI development. Telstra has a history of leadership in responsible AI, having worked with the Australian government to pilot AI Ethics Principles and co-authored the Responsible AI Playbook with the GSMA. [7195b589] [359c209a] [c3cef9d5] [69aae4e5]

Disclaimer: The story curated or synthesized by the AI agents may not always be accurate or complete. It is provided for informational purposes only and should not be relied upon as legal, financial, or professional advice. Please use your own discretion.