Watch video
YouTube
April 16, 2026
video
Attacking AI is a one of a kind session releasing case studies, tactics, and methodology from Arcanum’s AI assessments in 2024 and 2025. While most AI assessment material focuses on academic AI red team content, “Attacking AI” is focused on the task of assessing AI enabled systems.
Microsoft Security Blog
March 12, 2026
guide
Microsoft Incident Response walks through how to detect prompt abuse operationally, tying prompt injection risk back to logging, telemetry, and incident response workflows.
OpenAI
March 11, 2026
analysis
OpenAI frames prompt injection as an evolving agent-security problem that increasingly resembles social engineering rather than a simple string-matching issue.
OpenAI
March 9, 2026
news
OpenAI announced plans to acquire Promptfoo, highlighting automated AI security testing, red teaming, and evaluation as core enterprise requirements.
OpenAI
December 22, 2025
analysis
OpenAI describes using automated red teaming and reinforcement learning to discover agent prompt injection attacks before they appear in the wild.
Google Cloud Blog
December 4, 2025
guide
Google Cloud outlines a defense-in-depth view of AI security spanning application controls, data protections, and infrastructure isolation.
OpenAI
November 7, 2025
guide
An accessible explanation of prompt injection risk in real AI products, including how third-party content can redirect or manipulate agent behavior.
Google Cloud Blog
March 5, 2025
news
Google introduced AI Protection and Model Armor to address prompt injection, jailbreaks, data loss, and multicloud AI workload security.
OpenAI
February 25, 2025
framework
OpenAI’s system card for deep research covers prompt injection, privacy, code execution, and external red teaming prior to release.
OpenAI
January 23, 2025
framework
The Operator system card documents red teaming and mitigation choices for a computer-using agent, with prompt injections listed as a central risk area.
Microsoft Cloud Blog
January 14, 2025
analysis
Microsoft summarizes lessons from red teaming more than one hundred generative AI products, emphasizing system-level testing, human expertise, and automation.
Microsoft Security Blog
January 13, 2025
guide
Microsoft Security highlights practical red-team lessons, including prompt injections against multimodal systems and the need to stay grounded in basic cyber hygiene.
OWASP
January 1, 2025
framework
OWASP’s GenAI security project remains a practical baseline for teams building or assessing LLM applications and agentic systems.