Issues: thu-ml/SageAttention
ComfyUI is trying to use Triton when I select "fp16_cuda", and I get an SM89 error when I select fp8_cuda
#187 · opened Jun 10, 2025 by Demuzx
Distortion in generated video with SageAttention for Wan2.1
#186 · opened Jun 8, 2025 by varadrane1707
Releasing SageAttention3 code
#185 · enhancement · opened Jun 7, 2025 by showgood163
Reason for using INT8 rather than FP8 in SageAttention 3 backward?
#184 · opened Jun 6, 2025 by TheTinyTeddy
Where is the SageAttention2++ code?
#182 · enhancement · opened May 28, 2025 by aikitoria
RuntimeError: Cannot find CUDA_HOME. CUDA must be available to build the package.
#179 · opened May 25, 2025 by wujpia
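The error text in #179 indicates the setup script locates the CUDA toolkit through the CUDA_HOME environment variable. A minimal workaround sketch, assuming a toolkit installed at /usr/local/cuda (the path, and driving pip from Python, are assumptions, not a documented fix):

```python
# Hypothetical workaround sketch for #179: the build looks up the CUDA
# toolkit via the CUDA_HOME environment variable, so point it at an
# existing installation before building. /usr/local/cuda is an assumption;
# adjust it to wherever your toolkit actually lives.
import os
import subprocess

os.environ["CUDA_HOME"] = "/usr/local/cuda"  # adjust to your toolkit path
subprocess.run(["python", "-m", "pip", "install", "."], check=True)
```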
Wondering why sageattn-v1 uses different block sizes (128/64) when quantizing q and k
#167 · opened May 8, 2025 by ZJLi2013
How to simply verify the successful installation of sageattention?
#165 · opened May 6, 2025 by aswordok
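For a question like #165, a minimal sanity check is to import the package and run its attention kernel on small random tensors. A sketch, assuming the sageattn(q, k, v, ...) entry point shown in the repository README; the shapes and flags below are illustrative only:

```python
# Minimal installation check: run sageattn on random fp16 tensors.
# Assumes the sageattn(q, k, v, tensor_layout=..., is_causal=...) API
# from the repository README; shapes are illustrative, not required.
import torch
from sageattention import sageattn

# (batch, heads, sequence length, head dim), fp16 on the GPU
q = torch.randn(1, 8, 1024, 64, dtype=torch.float16, device="cuda")
k = torch.randn(1, 8, 1024, 64, dtype=torch.float16, device="cuda")
v = torch.randn(1, 8, 1024, 64, dtype=torch.float16, device="cuda")

out = sageattn(q, k, v, tensor_layout="HND", is_causal=False)
print("sageattention OK:", out.shape, out.dtype)
```

If the call returns a tensor of the same shape as q without raising, the CUDA kernels were built and loaded correctly.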
BUG: RTX 50XX: NaN returned by _fused.mean_scale_fuse_quant_cuda and _fused.scale_fuse_quant_cuda
#164 · opened Apr 30, 2025 by deepbeepmeep
The accuracy loss of the CUDA version is much greater than that of the Triton version for Llama-3.2
#154 · opened Apr 7, 2025 by WanliZhong
KSampler: [WinError 2] The system cannot find the file specified
#152 · opened Apr 2, 2025 by REG-0422