CVE-2026-25960
Severity: High
CVSS v3.1 Base Score: 7.1
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:L
Vulnerability Report
Generated by CyberWatcher
Description
A flaw was found in vLLM, an inference and serving engine for large language models (LLMs). A remote attacker can exploit a Server-Side Request Forgery (SSRF) protection bypass in the `load_from_url_async` method. The flaw arises because the URL validation step and the actual HTTP request handling use different parsing libraries; inconsistencies between the two parsers allow a crafted URL to pass validation while the request is sent to a different host. An attacker can thereby bypass the existing SSRF protections, potentially disclosing sensitive information from internal network resources.
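The following sketch illustrates the general parser-differential pattern described above; it is not vLLM's actual code, and the function and allowlist names are hypothetical. A validator that terminates the URL authority at a backslash (WHATWG-style) can disagree with Python's `urllib.parse`, which treats the backslash as an ordinary character and takes the host from after the last `@`:

```python
import re
from urllib.parse import urlsplit

# Hypothetical allowlist of hosts the server may fetch from.
ALLOWED_HOSTS = {"models.example.com", "allowed.example"}

def naive_is_allowed(url: str) -> bool:
    """Hypothetical validator: '\\' ends the authority, like '/'."""
    m = re.match(r"^https?://([^/\\?#]+)", url)
    if not m:
        return False
    # Strip any userinfo before '@', then compare against the allowlist.
    host = m.group(1).rsplit("@", 1)[-1].lower()
    return host in ALLOWED_HOSTS

# Crafted URL: the validator stops at the backslash and sees
# 'allowed.example', but urllib.parse keeps scanning the authority and
# treats everything after the last '@' as the real host.
url = "http://allowed.example\\@169.254.169.254/latest/meta-data/"

print(naive_is_allowed(url))   # → True  (validation passes)
print(urlsplit(url).hostname)  # → 169.254.169.254  (request target)
```

The usual mitigation for this class of bug is to parse the URL once and reuse that single parsed result for both the allowlist check and the outgoing request, rather than parsing twice with different libraries.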
CWE
CWE-474: Use of Function with Inconsistent Implementations
Affected Products
Red Hat AI Inference Server
Red Hat Enterprise Linux AI (RHEL AI) 3
Red Hat OpenShift AI (RHOAI)