Research & Writing
Ideas from the safety frontier.
Technical research, threat analysis, and field notes from our work building foundational AI safety tooling.
analysis
46 Minutes: How a Poisoned Python Package Reached 47,000 AI Environments
A threat group called TeamPCP injected credential-stealing malware into LiteLLM versions 1.82.7 and 1.82.8 on PyPI. Nearly 47,000 downloads happened in 46 minutes. Here is what the attack did, how it started with a compromised security scanner, and what enterprises running AI agents need to check now.
research
When the Assembly Line Becomes the Attack Surface: Supply Chain Threats in the Age of AI Agents
Software supply chain attacks can steal your credentials in minutes. Now AI agents are running the same attacks autonomously. What the hackerbot-claw campaign against Microsoft, DataDog, and Aqua Security reveals about the enterprise AI security gap.
analysis
When Your AI Ignores Your Security Policies: What the Copilot DLP Failures Reveal
Microsoft Copilot bypassed DLP policies twice in eight months, and no security tool caught either failure. Here's what it means for enterprise AI governance.
research
Hidden in Plain Language: How Calendar Invites Became Data Extraction Tools Through Prompt Injection
A calendar event with crafted instructions could silently extract your private meeting data when you ask Gemini about your schedule. This reveals fundamental gaps in how AI systems handle untrusted inputs.
research
How SuperAlign Helps Enterprises Counter AI-Powered Threats
Traditional tools cannot defend against AI-orchestrated attacks. Learn how SuperAlign helps enterprises address the critical security gaps that GTG-1002 exposed.