OpenChat Installation

Update 2

OK, here's the actual problem I found and was able to reproduce.

Following the original GitHub instructions, I ran into dependency issues:

conda create -y --name openchat python=3.11
conda activate openchat

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

pip3 install ochat

After much testing, I was able to install without conflicts by running these commands:

conda create -y --name openchat-1 python=3.11.5
conda activate openchat-1
pip install xformers==0.0.22 # this installs torch 2.0.1
pip install ochat
pip install torchaudio==2.0.2
pip install torchvision==0.15.2

So it was really the torch, torchaudio, and torchvision versions that led to the dependency conflicts.
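
To double-check that this pinned combination is what actually ended up in the env (and that nothing got silently upgraded along the way), here's a quick sanity-check sketch in Python:

import torch
import torchaudio
import torchvision
import xformers

# these should match the versions pinned above
print(torch.__version__)        # expecting 2.0.1 (pulled in by xformers 0.0.22)
print(torchaudio.__version__)   # expecting 2.0.2
print(torchvision.__version__)  # expecting 0.15.2
print(xformers.__version__)     # expecting 0.0.22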

Update

Actually, I tried to reproduce this problem, but xformers==0.0.22 wasn't the issue. Here's all the stuff I typed in the terminal 😢 I'll update again with the actual solution.

  205  conda create -y --name openchat python=3.11.5
  206  conda activate openchat
  207  pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
  208  pip3 install ochat
  212  pip uninstall torch torchaudio torchvision
  213  pip install torch==2.1.0 torchaudio==2.1.0 torchvision==0.16.0
  218  pip install --upgrade xformers
  219  pip uninstall torch torchaudio torchvision
  220  pip install --upgrade xformers
  221  pip install torch==2.0.1
  222  pip install torch==2.1.0
  223  pip uninstall xformers
  224  pip install xformers==0.0.22
  229  pip check
  230  python -m ochat.serving.openai_api_server --model openchat/openchat_3.5

Original

Today I followed the instructions to install OpenChat 3.5. I tried to install it via Anaconda and ran into some dependency issues, plus a warning that the installed Python version was 3.11.6 while xformers was built for 3.11.5. Here's what I did to get a working installation:

First, set the Python version to exactly 3.11.5:

conda create -y --name openchat python=3.11.5

Then, as usual per the GitHub instructions:

conda activate openchat
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip3 install ochat

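As a quick aside: to confirm the activated env really is on 3.11.5 (the exact version the xformers warning asked for), a small check like this sketch works:

import sys

# the warning said xformers was built for 3.11.5, so check the full patch version
assert sys.version_info[:3] == (3, 11, 5), sys.version
print(sys.version)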

And now I get these dependency errors:

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
xformers 0.0.22.post7 requires torch==2.1.0, but you have torch 2.0.1 which is incompatible.
vllm 0.2.1.post1 requires xformers==0.0.22, but you have xformers 0.0.22.post7 which is incompatible.

And if I installed torch 2.1.0:

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
vllm 0.2.1.post1 requires torch==2.0.1, but you have torch 2.1.0 which is incompatible.
vllm 0.2.1.post1 requires xformers==0.0.22, but you have xformers 0.0.22.post7 which is incompatible.
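
To see exactly which versions vllm pins (and why 0.0.22.post7 isn't good enough for it), you can dump its declared requirements with the standard library; a small sketch:

from importlib import metadata

# print the requirements vllm declares for torch and xformers
for req in metadata.requires("vllm") or []:
    if "torch" in req or "xformers" in req:
        print(req)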

So all I had to do was pin xformers==0.0.22, since 0.0.22.post7 doesn't count:

pip uninstall xformers
pip install xformers==0.0.22

And now it runs with no errors. Huge success!

$ pip check
No broken requirements found.
$ python -m ochat.serving.openai_api_server --model openchat/openchat_3.5
FlashAttention not found. Install it if you need to train models.
FlashAttention not found. Install it if you need to train models.
2023-11-09 03:58:46,624 INFO worker.py:1673 -- Started a local Ray instance.
(pid=45563) FlashAttention not found. Install it if you need to train models.
(pid=45563) FlashAttention not found. Install it if you need to train models.
INFO 11-09 03:58:49 llm_engine.py:72] Initializing an LLM engine with config: model='openchat/openchat_3.5', tokenizer='openchat/openchat_3.5', tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=8192, download_dir=None, load_format=auto, tensor_parallel_size=1, quantization=None, seed=0)
(AsyncTokenizer pid=45563) Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
INFO 11-09 04:00:10 llm_engine.py:207] # GPU blocks: 2726, # CPU blocks: 2048
INFO:     Started server process [45364]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://localhost:18888 (Press CTRL+C to quit)
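
With the server up on localhost:18888, a quick way to smoke-test it is to hit the OpenAI-compatible chat completions endpoint. Here's a minimal sketch using the requests library (pip install requests if needed), assuming the model name "openchat_3.5" from the README:

import requests

# send one chat message to the local OpenChat server started above
response = requests.post(
    "http://localhost:18888/v1/chat/completions",
    json={
        "model": "openchat_3.5",
        "messages": [{"role": "user", "content": "Say hello!"}],
    },
)
print(response.json())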

I've raised the issue with OpenChat on GitHub, so maybe it'll be fixed soon hehe!
