CVE-2026-22778
Critical
CVSS v3 Base Score
9.8
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
EPSS Score
0.1%
Exploitation probability in 30 days
Top 76% most likely to be exploited
Attack Characteristics
Attack Vector
Network
Attack Complexity
Low
Privileges Required
None
User Interaction
None
Confidentiality
High
Integrity
High
Availability
High
Published: February 2, 2026
Last Modified: February 2, 2026
Vendor: Red Hat
Fix Available: ✓ Yes
Vulnerability Report
Generated by CyberWatcher
Description
vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.8.3 up to, but not including, 0.14.1, when an invalid image is sent to vLLM's multimodal endpoint, PIL raises an error and vLLM returns the raw error message to the client, leaking a heap address. This leak reduces the ASLR search space from roughly 4 billion guesses to about 8. The vulnerability can be chained with a heap overflow in the JPEG2000 decoder in OpenCV/FFmpeg to achieve remote code execution. This vulnerability is fixed in version 0.14.1.
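The leak pattern described above can be illustrated with a minimal sketch. This is not vLLM's actual code; the handler names and the simulated decoder are hypothetical. It shows how echoing a raw exception message that embeds an object's default repr (which includes its heap address) discloses that address to the client, and how a generic error message avoids it.

```python
class ImageDecoder:
    """Stand-in for a decoder object; its default repr looks like
    '<ImageDecoder object at 0x7f3a...>' and so contains a heap address."""
    pass


def risky_handler(payload: bytes) -> dict:
    # CWE-209 pattern: the raw exception text, including the object's
    # repr (and thus a heap address), is returned to the client.
    dec = ImageDecoder()
    try:
        raise ValueError(f"unsupported image passed to {dec!r}")
    except ValueError as exc:
        return {"error": str(exc)}  # leaks "... at 0x7f3a..." to the client


def safe_handler(payload: bytes) -> dict:
    # Mitigation: log the detailed error server-side, return only a
    # generic message to the client.
    dec = ImageDecoder()
    try:
        raise ValueError(f"unsupported image passed to {dec!r}")
    except ValueError:
        # (server-side logging of the full exception would go here)
        return {"error": "invalid image"}
```

Sanitizing the response this way is the standard remediation for CWE-209: the client learns that the request failed, but not any internal state such as pointer values.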
CWE
CWE-209 (Generation of Error Message Containing Sensitive Information)
Affected Products
Red Hat AI Inference Server
Red Hat Enterprise Linux AI (RHEL AI) 3
Red Hat AI Inference Server 3.2
Red Hat OpenShift AI 2.25
Red Hat OpenShift AI 3.3