CVE-2026-22773
CVSS v3 Base Score
6.5 (Medium)
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
EPSS Score
0.0% (estimated probability of exploitation within 30 days)
In the top 96% of vulnerabilities most likely to be exploited
Attack Characteristics
Attack Vector
Network
Attack Complexity
Low
Privileges Required
Low
User Interaction
None
Confidentiality
None
Integrity
None
Availability
High
Published: January 10, 2026
Last Modified: January 10, 2026
Vendor: Red Hat
Fix Available: ✓ Yes
Vulnerability Report
Generated by CyberWatcher
Description
vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.6.4 up to, but not including, 0.12.0, users can crash a vLLM engine serving multimodal models that use the Idefics3 vision model implementation by sending a specially crafted 1x1 pixel image. The image triggers a tensor dimension mismatch that surfaces as an unhandled runtime error and terminates the server. This issue has been patched in version 0.12.0.
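The direct remediation is upgrading vLLM to 0.12.0 or later. For deployments that cannot upgrade immediately, one possible interim defence is to reject degenerate images at a gateway in front of the server before they reach the engine. The sketch below is an assumption-based mitigation, not the upstream patch: the MIN_DIM threshold and the validate_image_payload helper are hypothetical names introduced here, and the check simply uses Pillow to refuse 1x1 (or otherwise tiny) images.

```python
from io import BytesIO

from PIL import Image  # Pillow; assumed available in the serving gateway

# Hypothetical threshold: the reported crash is triggered by a 1x1 pixel image,
# so reject anything smaller than 2x2 before it reaches the vLLM engine.
MIN_DIM = 2

def validate_image_payload(data: bytes) -> Image.Image:
    """Reject degenerate images before forwarding a multimodal request."""
    try:
        probe = Image.open(BytesIO(data))
        probe.verify()  # basic structural integrity check
    except Exception as exc:
        raise ValueError(f"rejected: not a decodable image ({exc})") from exc

    # verify() leaves the object unusable, so reopen to read the dimensions.
    img = Image.open(BytesIO(data))
    width, height = img.size
    if width < MIN_DIM or height < MIN_DIM:
        raise ValueError(f"rejected: image too small ({width}x{height})")
    return img
```

Operators can also confirm that a running deployment is on a patched build; for example, `python -c "import vllm; print(vllm.__version__)"` should report 0.12.0 or later.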
CWE
CWE-770 (Allocation of Resources Without Limits or Throttling)
Affected Products
Red Hat AI Inference Server
Red Hat Enterprise Linux AI (RHEL AI) 3
Red Hat OpenShift AI (RHOAI)
Red Hat AI Inference Server 3.2