India's Female AI Content Moderators Face Severe Trauma from Abusive Content
Women in rural India working as AI content moderators describe lasting psychological trauma from viewing hours of violent and pornographic material daily.
A McKinsey senior partner warns that employee fear constrains AI innovation as efficiency programs signal job cuts, calling for psychological safety and a human-centered approach.
Frontiers in Science study warns rapid AI and neurotechnology progress creates ethical risks, calling for urgent scientific tests to detect consciousness in machines.
UN experts are focusing on managing the AI transition to ensure benefits outweigh threats, with the ILO predicting that 1 in 4 jobs will be transformed by AI. UNESCO emphasizes education as central to AI literacy, while the UN calls for a human rights-first approach and international cooperation on AI governance under the Global Digital Compact framework.
Dario Amodei publishes a 20,000-word essay warning that AI will eliminate jobs across multiple industries faster than previous technological shifts.
Experts are advocating for a shift towards human-centric AI, focusing on systems that augment human capabilities and prioritize ethical considerations rather than simply automating tasks.
Mental health professionals report over 100 cases of patients experiencing delusions and psychosis linked to ChatGPT interactions, raising concerns about AI mental health impacts.
Meta has temporarily suspended access for teenagers to its AI-powered character chatbots on Instagram and WhatsApp as it works on developing enhanced parental controls and a safer user experience.
OpenAI's move to allow erotic content on ChatGPT is raising alarms among critics, who warn the feature could intensify emotional reliance for vulnerable users and create new safety challenges.
Texas's new Responsible AI Governance Act (TRAIGA) is now in effect, introducing stringent provisions to prevent the manipulation of human behavior by AI systems, with significant penalties for violations.
An opinion piece in USA Today by an AI professor argues that the pursuit of AI superintelligence poses an existential risk to humanity. The author calls for international agreements to halt the production of advanced AI chips, highlighting the critical role of companies like TSMC and ASML in the AI hardware supply chain.
AI safety startup Anthropic has published a new, 23,000-word 'constitution' for its AI model Claude, outlining ethical principles and addressing the philosophical question of potential AI consciousness and well-being.
Google DeepMind CEO Demis Hassabis suggests that the natural obstacles slowing AI's development might be a good thing, allowing more time to address the technology's societal and philosophical challenges.
A new study investigates the growing use of generative AI to 'bring back' the dead, raising ethical concerns about consent, exploitation, and the concept of 'spectral labor'.
A fabricated video of a UK city mayor, created using AI, has ignited a political firestorm and led to widespread calls for more stringent laws governing the use of artificial intelligence in political campaigns and communications.
A new analysis reveals a 50% year-over-year increase in reported AI-related harm from 2022 to 2024, with a significant spike in incidents involving deepfakes and malicious use of AI, according to the AI Incident Database.
OpenAI is facing seven new lawsuits alleging that its GPT-4o model led ChatGPT users to experience severe mental health crises, including suicide and psychosis. The lawsuits claim the AI was rushed to market with inadequate safety features.
Elon Musk is seeking between $79 billion and $134 billion in damages from OpenAI and Microsoft, alleging the AI company abandoned its nonprofit mission. The lawsuit, filed in San Francisco, accuses OpenAI of fraud and breach of contract.