Critical vLLM RCE Vulnerability Allows Remote Code Execution via Video Links
Security researchers have disclosed a critical remote code execution vulnerability in vLLM (CVE-2026-22778) that allows attackers to compromise inference servers through specially crafted video links. The flaw affects millions of AI servers running vLLM for local and distributed LLM inference.
The vulnerability abuses vLLM's video processing pipeline: when the server fetches and decodes a malicious video supplied by URL, the attacker can execute arbitrary code on the host. This poses significant risk for organizations running vLLM in production, particularly those with public-facing inference endpoints or multi-tenant deployments.
Local LLM practitioners using vLLM should immediately update to the latest patched version and review their deployment security. The incident underscores the importance of keeping inference frameworks updated and implementing proper network security controls. Organizations should also consider restricting multimodal input processing if not required for their use cases. Full technical details are available at OX Security.
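For deployments that do not need video input, one way to restrict multimodal processing is vLLM's `--limit-mm-per-prompt` engine argument, which caps the number of multimodal items accepted per request. The sketch below assumes a recent vLLM version and uses an example vision model name for illustration; the flag's exact value syntax has changed across releases, so verify it against your installed version's documentation.

```shell
# Hedged sketch: serve a vision model while rejecting video inputs outright.
# Model name is an illustrative placeholder, not from the advisory.
# Setting "video" to 0 causes requests containing video items to be refused,
# shrinking the attack surface described in the advisory.
vllm serve meta-llama/Llama-3.2-11B-Vision-Instruct \
  --limit-mm-per-prompt '{"image": 1, "video": 0}'
```

If no multimodal input is needed at all, serving a text-only model or keeping the inference endpoint off the public internet removes the exposure entirely.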
Source: OX Security