Category: AI Guardrails
-
DDoS Attack Responsible for Ongoing ChatGPT Outage
OpenAI confirmed a DDoS attack flooding its systems. Cybersecurity experts have traced the attack to the Russia-linked group Anonymous Sudan.
-
Google and Anthropic’s Expanded Partnership Around Safety
Google and Anthropic just announced an expanded partnership to advance responsible AI systems. The move is significant because two leading AI companies are aligning their tools and values to shape the future of artificial intelligence.
-
Biden Signs an AI Executive Order to Guide Ethical AI Usage
President Biden signed an executive order today that creates industry guardrails to address bias, deepfakes, and harmful AI outputs.
-
The Thorny Issue of Restricting China’s Access to U.S. AI Chips
The Biden administration is working to restrict China’s access to U.S. artificial intelligence chips in a balancing act to protect U.S. intellectual property.
-
The Ethical Frontier of AI Marketing
As marketers rush to capitalize on AI, responsible implementation must remain the priority. Here are some tips for using generative AI ethically.
-
Protect Working Musicians Act Gives Artists Bargaining Power with AI Platforms
The Protect Working Musicians Act helps working artists gain bargaining power with streaming and AI platforms.
-
Deepfake AI Voice Beat a Tech Company’s Defenses
An AI-generated deepfake voice helped hackers trick an employee into handing over multi-factor authentication codes.
-
How FraudGPT Weaponizes AI and What You Can Do About It
Premium Content: FraudGPT is a dark web tool that lets hackers weaponize AI for attacks. Here’s how it works and how to protect yourself.
-
TikTok Enforces Rules for AI-Generated Imagery and Content
TikTok just became the latest app to enforce new rules around disclosing AI-generated posts. Users now get a notification advising them to label any artificial imagery and scenes or risk content removal.
-
AI Avatars, Copyright Concerns, and Bias in Generative AI
Jennifer Jones-Mitchell sat down with author and AI ethicist John C. Havens to discuss AI avatars, copyright, and bias in generative AI.