flash-attn install troubleshooting
This page covers the common path: you tried to install flash-attn and hit an error. Start with the compatibility checklist, then install a matching wheel.
First steps
1) Confirm your environment (see the sketch after this list):

```bash
python --version
python -c "import torch; print(torch.__version__)"
nvidia-smi
```

2) Use the compatibility checklist: flash-attn compatibility
3) Install from a matching wheel: wheel finder → copy the pip/uv command.
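To gather everything the checklist and wheel finder ask for in one pass, a minimal sketch along these lines prints the interpreter, the torch build, the CUDA runtime torch was compiled against, and the C++ ABI flag (all standard torch attributes; the heredoc form is just a convenience):

```bash
# One-shot environment report: Python, torch, torch's CUDA build, C++ ABI, GPU.
python - <<'EOF'
import sys
import torch

print("python:", sys.version.split()[0])
print("torch:", torch.__version__)
print("torch built with CUDA:", torch.version.cuda)   # None on CPU-only builds
print("cxx11 ABI:", torch.compiled_with_cxx11_abi())  # must match the wheel's ABI tag
print("GPU visible:", torch.cuda.is_available())
EOF
```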
No wheels found
If the finder shows no results, it’s usually a version mismatch. Try:
- Change Python version (often 3.10/3.11 is easiest)
- Match your installed PyTorch version exactly
- Try a CUDA version supported by your torch build
If you must, use the from-source install guide.
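If you do change versions, it is usually easiest in a fresh environment. A sketch using uv, with hypothetical version pins (substitute the Python/torch/CUDA combination the wheel finder actually lists):

```bash
# Fresh environment pinned to a wheel-friendly Python; versions are placeholders.
uv venv --python 3.11 .venv
source .venv/bin/activate
# Install the torch build the wheel expects (cu121 index shown as an example).
uv pip install "torch==2.3.1" --index-url https://download.pytorch.org/whl/cu121
# Then install flash-attn; --no-build-isolation lets the build see installed torch.
uv pip install flash-attn --no-build-isolation
```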
ImportError / undefined symbol
This is commonly caused by an ABI mismatch between the wheel and your installed torch/CUDA runtime. Fix it by reinstalling a matching wheel:
- Re-check compatibility
- Pick a different wheel (same flash-attn version, different CUDA/PyTorch combo)
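A sketch of the recovery cycle: remove the mismatched build, reinstall one that matches your torch/CUDA combo, then confirm the import succeeds (note the package name uses a hyphen, the module an underscore):

```bash
# Remove the mismatched build, then reinstall a matching one.
pip uninstall -y flash-attn
pip install flash-attn --no-build-isolation   # or the exact wheel URL from the finder
# If this prints without an ImportError, the ABI now matches.
python -c "import flash_attn; print('flash-attn imports cleanly')"
```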
FAQ
No wheels found — what do I do?
This usually means your Python/PyTorch/CUDA/platform combination doesn’t match available builds. Try adjusting versions or installing from source as a fallback.
ImportError / undefined symbol after install
This often indicates an ABI mismatch between your installed PyTorch/CUDA runtime and the wheel you installed. Re-check compatibility and pick a matching wheel.
Build from source fails
Source builds require a compatible CUDA toolkit and compiler toolchain. Use the official instructions and confirm your environment versions are supported.
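For reference, the knobs that most often make a source build succeed, per the upstream flash-attn README: a CUDA toolkit whose nvcc matches a supported version, ninja for parallel compilation, and MAX_JOBS capped if RAM is limited. A sketch:

```bash
# Verify the toolchain the build will use.
nvcc --version        # CUDA toolkit; must be a version flash-attn supports
g++ --version         # host C++ compiler
# ninja makes the extension build much faster; MAX_JOBS caps memory use.
pip install ninja
MAX_JOBS=4 pip install flash-attn --no-build-isolation
```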