[Bug]: run on cpu: ModuleNotFoundError: No module named 'vllm.benchmarks' #15812


Closed
1 task done
BlueSkyyyyyy opened this issue Mar 31, 2025 · 7 comments · Fixed by #17159
Labels
bug Something isn't working

Comments

@BlueSkyyyyyy

Your current environment

Environment installed following the official guide.

run:
vllm serve facebook/opt-125m

error info:

INFO 03-31 18:44:44 [__init__.py:239] Automatically detected platform cpu.
Traceback (most recent call last):
  File "/opt/conda/envs/vllm/bin/vllm", line 33, in <module>
    sys.exit(load_entry_point('vllm==0.8.3.dev136+geffc5d24.cpu', 'console_scripts', 'vllm')())
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/vllm/bin/vllm", line 25, in importlib_load_entry_point
    return next(matches).load()
           ^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/vllm/lib/python3.12/importlib/metadata/__init__.py", line 205, in load
    module = import_module(match.group('module'))
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/vllm/lib/python3.12/importlib/__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 999, in exec_module
  File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
  File "/opt/conda/envs/vllm/lib/python3.12/site-packages/vllm-0.8.3.dev136+geffc5d24.cpu-py3.12-linux-x86_64.egg/vllm/entrypoints/cli/main.py", line 7, in <module>
    import vllm.entrypoints.cli.benchmark.main
  File "/opt/conda/envs/vllm/lib/python3.12/site-packages/vllm-0.8.3.dev136+geffc5d24.cpu-py3.12-linux-x86_64.egg/vllm/entrypoints/cli/benchmark/main.py", line 4, in <module>
    import vllm.entrypoints.cli.benchmark.serve
  File "/opt/conda/envs/vllm/lib/python3.12/site-packages/vllm-0.8.3.dev136+geffc5d24.cpu-py3.12-linux-x86_64.egg/vllm/entrypoints/cli/benchmark/serve.py", line 4, in <module>
    from vllm.benchmarks.serve import add_cli_args, main
ModuleNotFoundError: No module named 'vllm.benchmarks'

🐛 Describe the bug

export VLLM_LOGGING_LEVEL=DEBUG && vllm serve facebook/opt-125m

(The traceback is identical to the one shown above, ending in: ModuleNotFoundError: No module named 'vllm.benchmarks')
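As a quick diagnostic, `importlib.util.find_spec` can confirm whether a subpackage is actually present in the installed distribution before launching the server. A minimal sketch, demonstrated with a stdlib subpackage (`email.mime`) so it runs even where vllm is not installed; checking `"vllm.benchmarks"` works the same way:

```python
import importlib.util

def subpackage_installed(name: str) -> bool:
    """Return True if the dotted module path resolves to an importable spec."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # A missing parent package surfaces as ModuleNotFoundError here.
        return False

# Demonstrated with stdlib names; for the issue above, you would check
# subpackage_installed("vllm.benchmarks").
print(subpackage_installed("email.mime"))         # existing subpackage -> True
print(subpackage_installed("email.no_such_sub"))  # missing subpackage -> False
```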

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
@BlueSkyyyyyy BlueSkyyyyyy added the bug Something isn't working label Mar 31, 2025
@bigPYJ1151
Contributor

I can't reproduce the issue on the latest main branch; I installed vLLM by building from source.

Did you run the command from inside the vLLM source tree? Try running it outside the source directory. If the error persists, please post your install procedure.

@BlueSkyyyyyy
Author

BlueSkyyyyyy commented Apr 1, 2025


I found that the install destination has no benchmarks folder, so I copied "benchmarks" into the vLLM install directory:
cp -r vllm_source/vllm/benchmarks /opt/conda/envs/vllm/lib/python3.12/site-packages/vllm-0.8.3.dev136+geffc5d24.cpu-py3.12-linux-x86_64.egg/vllm/
This works, but I have no idea why the benchmarks files were missing.
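To avoid hard-coding the long egg path in a workaround like the one above, the installed package directory can be located programmatically. A sketch (demonstrated with the stdlib `email` package so it runs anywhere; the vllm usage in the comment is illustrative and assumes vllm is installed):

```python
import importlib.util
from pathlib import Path

def package_dir(name: str) -> Path:
    """Return the on-disk directory of an installed package."""
    spec = importlib.util.find_spec(name)
    if spec is None or not spec.submodule_search_locations:
        raise ModuleNotFoundError(f"{name} is not an installed package")
    return Path(list(spec.submodule_search_locations)[0])

# Demonstrated with a stdlib package:
print(package_dir("email"))
# The copy workaround then becomes (illustrative, not run here):
#   shutil.copytree("vllm_source/vllm/benchmarks",
#                   package_dir("vllm") / "benchmarks")
```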

@hasaki321

Hello, I have the same problem. I followed the official guide to install the CPU version of vLLM (vllm-0.8.3.dev225+g5e125e74) and hit the same error, but copying the benchmarks folder directly into site-packages doesn't work for me. The current version is missing the serve subfolder:

(screenshot: directory listing of the copied benchmarks folder, showing the missing serve submodule)

@yang-ybb

yang-ybb commented Apr 9, 2025


@bigPYJ1151 I met the same issue.
First, I built a wheel in the vLLM 0.8.3 source folder with python3 setup.py bdist_wheel, which produced vllm-0.8.3+cu126-cp310-cp310-linux_x86_64.whl.
Second, I installed it with pip3 install vllm-0.8.3+cu126-cp310-cp310-linux_x86_64.whl.
Finally, launching vllm fails with [No module named 'vllm.benchmarks']; after copying the vllm/benchmarks folder from the source tree into /home/tiger/.local/lib/python3.10/site-packages, vllm launches successfully.

@yang-ybb

yang-ybb commented Apr 9, 2025


@hasaki321 you may have copied the wrong folder: copy vllm_source/vllm/benchmarks, not vllm_source/benchmarks.

@gn64

gn64 commented Apr 9, 2025

I had the same problem with 0.8.3, but when I used the latest version of the main branch, the problem went away.

@tiran
Contributor

tiran commented Apr 25, 2025

The problem is still present in the vLLM 0.8.4 release when compiling from raw sources under certain conditions.

The problem does not occur when installing from the sdist on PyPI or from a git checkout; in fact, the vLLM project got lucky. vLLM uses setuptools-scm as a build dependency. setuptools-scm hooks into setuptools and modifies its behavior; among other things, it changes how setuptools finds packages and subpackages, but only when a VCS directory like .git is present. If no VCS is present, setuptools-scm does not modify setuptools' package finder.

The setuptools default package finder with namespaces = false does not include subpackages that are missing an __init__.py file. The vllm/benchmarks and vllm/vllm_flash_attn directories have no __init__.py and are therefore treated as namespace packages. But vLLM is excluding namespace packages...

The exclude in vllm/pyproject.toml (lines 47 to 50 at commit 6498189):

[tool.setuptools.packages.find]
where = ["."]
exclude = ["benchmarks", "csrc", "docs", "examples", "tests*"]
namespaces = false

may also contribute to the problem. The exclude should be replaced by an include.
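The discovery behavior described above can be reproduced in a few lines with setuptools itself. A sketch using a throwaway directory tree (the vllm names here are just stand-ins for the real layout):

```python
import os
import tempfile
from setuptools import find_packages, find_namespace_packages

# A package directory without __init__.py is invisible to find_packages()
# when namespace discovery is off, which matches the missing-module symptom.
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "vllm", "benchmarks"))
    open(os.path.join(root, "vllm", "__init__.py"), "w").close()
    # vllm/benchmarks deliberately has no __init__.py, like the real tree.

    regular = find_packages(where=root)
    namespaced = find_namespace_packages(where=root)

print(regular)     # ['vllm'] -- benchmarks is skipped
print(namespaced)  # includes 'vllm.benchmarks'
```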

Reproducer

The reproducer assumes a virtual environment with CPU Torch and all build dependencies installed:

$ curl -OLf https://github.com/vllm-project/vllm/archive/refs/tags/v0.8.4.tar.gz
$ tar xf v0.8.4.tar.gz
$ SETUPTOOLS_SCM_PRETEND_VERSION=0.8.4 \
    VLLM_TARGET_DEVICE=cpu \
    VLLM_CPU_DISABLE_AVX512=true \
    pip wheel -vv --no-build-isolation --no-deps vllm-0.8.4/
$ unzip -l vllm-0.8.4+cpu-cp311-cp311-linux_x86_64.whl | grep vllm/benchmark | wc -l
0
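The final unzip | grep check can also be done portably with the stdlib zipfile module. This sketch builds a toy in-memory archive standing in for the wheel (with vllm/benchmarks absent, mirroring the bug) so it runs standalone; a real check would open the built .whl file instead:

```python
import io
import zipfile

def count_members(zip_bytes: bytes, prefix: str) -> int:
    """Count archive members whose path starts with the given prefix."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return sum(1 for name in zf.namelist() if name.startswith(prefix))

# Toy archive standing in for the built wheel: vllm/benchmarks is missing.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("vllm/__init__.py", "")
    zf.writestr("vllm/entrypoints/cli/main.py", "")

print(count_members(buf.getvalue(), "vllm/benchmarks"))  # 0, like the reproducer
```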


6 participants