CVE-2026-34756

Severity: High
CVSS v3.1 Base Score: 6.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
Attack Characteristics
Attack Vector: Network
Attack Complexity: Low
Privileges Required: Low
User Interaction: None
Confidentiality Impact: None
Integrity Impact: None
Availability Impact: High
Vulnerability Report
Generated by CyberWatcher
Description
A flaw was found in vLLM, an inference and serving engine for large language models (LLMs). An unauthenticated attacker can exploit this vulnerability by sending a specially crafted HTTP request with an excessively large 'n' parameter (the number of completions to generate per prompt) to the vLLM OpenAI-compatible API server. Processing such a request consumes excessive memory and blocks the server's event loop, resulting in a Denial of Service (DoS) as the server crashes.
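The general mitigation for this class of flaw (CWE-1284) is to validate the quantity field before any allocation happens. The sketch below is illustrative only, not vLLM code: the names `MAX_N` and `validate_sampling_params`, and the cap value of 128, are assumptions chosen for the example.

```python
# Illustrative sketch of bounding a request's 'n' parameter up front.
# MAX_N and validate_sampling_params are hypothetical names, not vLLM APIs.
MAX_N = 128  # assumed server-side cap on completions per request


def validate_sampling_params(payload: dict) -> dict:
    """Reject a request whose 'n' would trigger excessive allocation."""
    n = payload.get("n", 1)  # OpenAI-style requests default 'n' to 1
    if not isinstance(n, int) or n < 1:
        raise ValueError("'n' must be a positive integer")
    if n > MAX_N:
        raise ValueError(f"'n' exceeds server limit of {MAX_N}")
    return payload


# A crafted request like the one described is rejected before any work starts:
malicious = {"model": "demo", "prompt": "hi", "n": 10**9}
try:
    validate_sampling_params(malicious)
except ValueError as err:
    print(err)  # 'n' exceeds server limit of 128
```

Checking the bound synchronously at request-parsing time means an oversized value never reaches the memory-allocation path or ties up the event loop.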
CWE
CWE-1284: Improper Validation of Specified Quantity in Input

Affected Products
Red Hat AI Inference Server
Red Hat Enterprise Linux AI (RHEL AI) 3
Red Hat OpenShift AI (RHOAI)