Description
🐛 Describe the bug
`from hf_olmo import *` throws:
ImportError: cannot import name 'cache' from 'functools'
This is because `functools.cache` was only added in Python 3.9 and is not available in earlier versions.
Changing `from functools import cache` to `from functools import lru_cache as cache` in `<env>/lib/python3.8/site-packages/olmo/util.py` does not fix it either, as a second error follows:
line 84, in <module>
class BufferCache(dict, MutableMapping[str, torch.Tensor]):
TypeError: 'ABCMeta' object is not subscriptable
What is the feasibility of using OLMo with Python 3.8?
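For context, both errors have known Python 3.8 workarounds. Below is a minimal sketch (not a patch to the olmo package itself) of the two compatibility changes the tracebacks point at: falling back to `lru_cache(maxsize=None)` when `functools.cache` is missing, and using `typing.MutableMapping` as the generic base class, since `collections.abc.MutableMapping` only became subscriptable in Python 3.9 (PEP 585). The `BufferCache` shown here uses `int` values instead of `torch.Tensor` purely to keep the example self-contained.

```python
from functools import lru_cache
from typing import MutableMapping  # subscriptable on 3.8, unlike collections.abc

# functools.cache was added in Python 3.9; lru_cache(maxsize=None) is
# the documented pre-3.9 equivalent.
try:
    from functools import cache
except ImportError:
    cache = lru_cache(maxsize=None)

@cache
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# On Python 3.8, `collections.abc.MutableMapping[str, ...]` raises
# TypeError ('ABCMeta' object is not subscriptable); the typing alias
# accepts subscripting on all supported versions.
class BufferCache(dict, MutableMapping[str, int]):
    pass

print(fib(10))
```

This suggests the library could support 3.8 with small changes, though whether the maintainers want to carry those shims is a separate question.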
Versions
Python 3.8.10
ai2-olmo==0.2.4
antlr4-python3-runtime==4.9.3
boto3==1.34.39
botocore==1.34.39
cached-path==1.5.1
cachetools==5.3.2
certifi==2024.2.2
charset-normalizer==3.3.2
filelock==3.12.4
fsspec==2024.2.0
google-api-core==2.17.0
google-auth==2.27.0
google-cloud-core==2.4.1
google-cloud-storage==2.14.0
google-crc32c==1.5.0
google-resumable-media==2.7.0
googleapis-common-protos==1.62.0
huggingface-hub==0.19.4
idna==3.6
Jinja2==3.1.3
jmespath==1.0.1
markdown-it-py==3.0.0
MarkupSafe==2.1.5
mdurl==0.1.2
mpmath==1.3.0
networkx==3.1
numpy==1.24.4
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.18.1
nvidia-nvjitlink-cu12==12.3.101
nvidia-nvtx-cu12==12.1.105
omegaconf==2.3.0
packaging==23.2
pillow==10.2.0
protobuf==4.25.2
pyasn1==0.5.1
pyasn1-modules==0.3.0
pygments==2.17.2
python-dateutil==2.8.2
PyYAML==6.0.1
regex==2023.12.25
requests==2.31.0
rich==13.7.0
rsa==4.9
s3transfer==0.10.0
safetensors==0.4.2
six==1.16.0
sympy==1.12
tokenizers==0.15.1
torch==2.1.2
torchaudio==2.2.0
torchvision==0.17.0
tqdm==4.66.1
transformers==4.37.2
triton==2.1.0
typing-extensions==4.9.0
urllib3==2.2.0