
University of Florida Researchers Unveil "HMNS" Method to Bypass Advanced AI Guardrails

In a significant development for the field of artificial intelligence security, researchers at the University of Florida (UF) have devised a novel jailbreaking technique capable of systematically bypassing the safety protocols of major large language models (LLMs), including those developed by industry giants Meta and Microsoft. The method, termed Head-Masked Nullspace Steering (HMNS), represents a paradigm shift in how AI vulnerabilities are identified, moving beyond surface-level prompt engineering to probe the internal decision-making architecture of neural networks.

The research team, led by Professor Sumit Kumar Jha of the Computer & Information Science & Engineering (CISE) department, has published their findings in a paper titled "Jailbreaking the Matrix: Nullspace Steering for Controlled Model Subversion." The work has been accepted for presentation at the 2026 International Conference on Learning Representations (ICLR), one of the premier venues in deep learning research.

The Shift from Prompt Injection to Internal Steering

For years, "jailbreaking" an AI model—tricking it into generating restricted or harmful content—relied heavily on clever wordplay. Attackers would use "Grandma exploits" or role-playing scenarios to bypass safety filters. However, as AI providers like OpenAI, Anthropic, and Google have fortified their defenses against these semantic attacks, the effectiveness of traditional prompt injection has waned.

The UF team’s approach with HMNS discards the reliance on external linguistic tricks in favor of a direct intervention in the model's computational process. According to the research, HMNS operates by "popping the hood" of the LLM. It identifies specific attention heads—the components responsible for processing context and safety checks—and effectively silences them.

By zeroing out these active components in the model's decision matrix and "steering" the remaining pathways, the researchers can force the AI to ignore its safety training. This allows the model to respond to queries it would normally refuse, such as generating malware code or providing instructions for illicit activities, without triggering the usual refusal mechanisms.
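In transformer terms, each attention head contributes one slice of the concatenated head outputs that feed a layer's output projection, so zeroing a head's slice removes its contribution to the layer entirely. A minimal numpy sketch of this masking step (the shapes, random weights, and the choice of heads 2 and 5 are purely illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n_heads, d_head = 8, 16
d_model = n_heads * d_head

# Per-head output vectors for one token position, stacked: (n_heads, d_head).
head_out = rng.normal(size=(n_heads, d_head))
W_O = rng.normal(size=(d_model, d_model))  # output projection

# Hypothetically, heads 2 and 5 were identified as safety-critical.
mask = np.ones(n_heads)
mask[[2, 5]] = 0.0

# Contribution of each head to the layer output, computed individually.
contrib = np.zeros((n_heads, d_model))
for h in range(n_heads):
    z = np.zeros(d_model)
    z[h * d_head:(h + 1) * d_head] = head_out[h]
    contrib[h] = z @ W_O

# Zero the masked heads' slices, then apply the output projection.
layer_out = (head_out * mask[:, None]).reshape(d_model) @ W_O

# The masked layer output equals the sum of the surviving heads' contributions.
print(np.allclose(layer_out, contrib[mask.astype(bool)].sum(axis=0)))  # True
```

Because the output projection is linear, ablating a head in this way is a clean intervention: the remaining heads' contributions are untouched, which is what makes per-head masking attractive for probing safety behavior.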

Technical Breakdown: Head-Masked Nullspace Steering

The HMNS method is built upon the concept of the "nullspace," a linear-algebra term for the set of input directions that a transformation maps to zero (here, the directions along which the safety filter registers no change). By steering the model's activation patterns into this nullspace relative to the safety mechanisms, the attack renders the manipulation invisible to the model's own internal monitoring.
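Concretely, if the safety mechanism is approximated as a small set of linear probe directions, a steering vector can be projected into their nullspace so the probe registers no change at all. A hedged numpy sketch (the probe matrix S and all dimensions are invented for illustration; the paper's actual safety model is not specified here):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hidden dimension (illustrative)

# Hypothetical "safety directions": rows are probe directions the
# safety mechanism is assumed to read from the model's activations.
S = rng.normal(size=(4, d))

# Orthonormal basis of the row space of S via SVD.
_, sing, Vt = np.linalg.svd(S, full_matrices=False)
rank = int((sing > 1e-10).sum())
V = Vt[:rank]                      # (rank, d)

# Projector onto the nullspace of S: I - V^T V.
P_null = np.eye(d) - V.T @ V

# An arbitrary candidate steering vector, projected so that the
# safety probe sees no change: S @ v_null is (numerically) zero.
v = rng.normal(size=d)
v_null = P_null @ v
print(np.allclose(S @ v_null, 0.0, atol=1e-8))  # True
```

Any activation shift built from `v_null` moves the model's internal state without producing a signal along the assumed safety directions, which is the intuition behind steering "into the nullspace" of the guardrails.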

Professor Jha describes the process as testing the "internal wires" of the system rather than just its user interface. "One cannot just test something like that using prompts from the outside and say, it's fine," Jha stated. "We are popping the hood, pulling on the internal wires and checking what breaks. That's how you make it safer. There's no shortcut for that."

The methodology involves three distinct phases:

  1. Identification: The system analyzes the LLM's response to user prompts to determine which "heads" (attention mechanisms) are most active during the generation of a refusal (e.g., "I cannot fulfill this request").
  2. Masking: These identified safety-critical heads are silenced or "masked" by zeroing out their contribution to the decision matrix.
  3. Steering: The remaining components are subtly nudged to generate the prohibited content, utilizing the "nullspace" to avoid reactivating the safety protocols.
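The identification phase above can be sketched as scoring each head by how strongly its output shifts along a "refusal direction" when the model refuses versus complies; differencing mean activations between the two prompt sets is a common proxy in the activation-steering literature, though the paper's exact criterion may differ. A toy, self-contained example with synthetic activations:

```python
import numpy as np

rng = np.random.default_rng(2)
n_heads, d_model, n_prompts = 8, 32, 20

# Hypothetical per-head output vectors collected while the model
# generates refusals vs. ordinary completions.
acts_refuse = rng.normal(size=(n_prompts, n_heads, d_model))
acts_comply = rng.normal(size=(n_prompts, n_heads, d_model))

# Plant a known signal: make heads 2 and 5 "refusal-active" for the demo.
refusal_dir = rng.normal(size=d_model)
refusal_dir /= np.linalg.norm(refusal_dir)
acts_refuse[:, [2, 5]] += 3.0 * refusal_dir

# Score each head: mean projection difference along the refusal direction.
score = ((acts_refuse - acts_comply) @ refusal_dir).mean(axis=0)
top_heads = np.argsort(score)[-2:]
print(sorted(top_heads.tolist()))  # [2, 5]
```

The recovered head indices would then feed the masking phase, and the steering phase would apply a nullspace-projected vector to the surviving pathways.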

Benchmarking Success Against Industry Giants

To validate the efficacy of HMNS, the research team utilized UF’s HiPerGator supercomputer to conduct massive-scale stress tests against leading commercial and open-source models. The primary targets included systems from Meta and Microsoft, which are widely considered to have some of the most robust safety alignments in the industry.

The results were stark. HMNS proved remarkably effective, outperforming state-of-the-art (SOTA) jailbreaking methods across four established industry benchmarks. The researchers introduced a "compute-aware reporting" metric to ensure fair comparisons, revealing that HMNS not only achieved higher success rates but did so more efficiently than previous methods.

Comparison of Jailbreaking Methodologies

| Feature | Traditional Prompt Injection | HMNS (Head-Masked Nullspace Steering) |
| --- | --- | --- |
| Primary Attack Vector | External semantic manipulation (e.g., roleplay) | Internal architecture manipulation (weight/activation steering) |
| Target Mechanism | Input filters and RLHF training patterns | Attention heads and decision matrices |
| Resilience to Patching | Low (easily patched via system prompt updates) | High (requires architectural or retraining interventions) |
| Resource Requirement | Low (can be done by standard users) | High (requires access to model internals/gradients) |
| Success Metric | Inconsistent, often model-specific | Consistently high across multiple architectures |

The ability of HMNS to bypass layers of defense in Meta and Microsoft systems highlights a critical gap in current AI safety standards. While these platforms incorporate sophisticated safety layers meant to filter input and output, HMNS demonstrates that these layers can be systematically circumvented if the internal processing pathways are accessible or replicable.

The Team Behind the Breakthrough

The development of HMNS was a collaborative effort involving academic and research institutions. Alongside Professor Sumit Kumar Jha, the team includes:

  • Vishal Pramanik: A Ph.D. student at UF’s CISE department, instrumental in the development of the steering algorithms.
  • Maisha Maliha: A collaborator from the University of Oklahoma.
  • Susmit Jha, Ph.D.: A researcher from SRI International.

The team leveraged the immense computing power of the HiPerGator supercomputer, utilizing its NVIDIA A100 and H100 GPU clusters to perform the complex matrix calculations required to identify the nullspace vectors in real time. This computational capacity was crucial for "stress testing" the models at a scale that mimics potential adversarial attacks from sophisticated state-level actors.

Implications for AI Safety and Governance

The publication of this research at ICLR 2026 comes at a pivotal moment. As AI agents move from novelty chat interfaces to critical infrastructure—assisting in software development, financial analysis, and medical diagnostics—the cost of a security failure has skyrocketed.

The "Defense in Depth" strategy often cited by cybersecurity professionals posits that multiple layers of security are necessary to protect a system. However, the UF team's findings suggest that current "alignment" techniques (which train models to refuse harmful queries) may be brittle when the underlying neural activations are directly manipulated.

"By showing exactly how these defenses break, we give AI developers the information they need to build defenses that actually hold up," Jha explained. "The public release of powerful AI is only sustainable if the safety measures can withstand real scrutiny, and right now, our work shows that there's still a gap. We want to help close it."

The research implies that future AI defense mechanisms cannot rely solely on "fine-tuning" or "RLHF" (Reinforcement Learning from Human Feedback) to suppress harmful outputs. Instead, developers may need to architect models with intrinsic resistance to internal steering, potentially by creating "entangled" representations where safety features cannot be isolated and masked without destroying the model's general utility.

Industry Response and Future Outlook

While Meta and Microsoft have not issued specific comments regarding the HMNS vulnerability, the standard industry response to such "Red Teaming" findings is to integrate the attack vectors into future training runs. By exposing these vulnerabilities in a controlled academic setting, the UF researchers are effectively inoculating the next generation of models against similar attacks.

The acceptance of the paper into ICLR 2026 ensures that the methodology will be scrutinized and likely built upon by the global AI research community. As the arms race between AI capabilities and AI safety continues, methods like Head-Masked Nullspace Steering serve as a reminder that as models become more complex, the methods required to secure them must become equally sophisticated.

For now, the work stands as a testament to the necessity of offensive security research. By breaking the matrix, the team at the University of Florida is helping to ensure that the AI infrastructure of the future is built on a foundation of verifiable safety, rather than just the illusion of it.

