WHLs for cuda 11.7, 11.8, and 12.0 for future Releases #62
Comments
Hi @Qubitium , thank you for your attention. Indeed, bitblas is not officially released yet. We are currently working on performance-related optimizations, and there are still many items on our roadmap, such as CI/CD integration and support for vLLM. We are committed to completing these tasks and releasing more WHL packages in our official release. We expect to complete these tasks in approximately two weeks.
We're waiting for this action. Thank you for your efforts :)
@tngh5004 , thanks for your attention, we will arrange this item as soon as possible.
Thank you very much, I'm currently busy finalizing my thesis, but I'll experiment with it as soon as possible to see if it works. |
Currently the bitblas WHL support is too limited: only CUDA >= 12.1 is covered. I understand that building so many WHL/Python/Torch combos is a headache, but I think it may be worth it.
Please include support for all CUDA versions supported by Torch >= 2.0.0, which means adding 11.7, 11.8, and 12.0 to the WHL builds.
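For context, per-CUDA wheels are conventionally distinguished by a tag derived from the CUDA version (e.g. CUDA 11.8 becomes "cu118", as in PyTorch wheel names). A minimal sketch of that mapping, assuming bitblas wheels would follow the same convention (the helper name here is hypothetical, not an official bitblas API):

```python
# Hypothetical helper illustrating the per-CUDA wheel-tag convention
# used by PyTorch wheel names (e.g. CUDA 11.8 -> "cu118").
# This is an assumption about how bitblas wheels might be tagged,
# not part of any official bitblas release process.
def cuda_wheel_tag(cuda_version: str) -> str:
    """Map a CUDA version string such as "11.8" to a wheel tag like "cu118"."""
    major, minor = cuda_version.split(".")[:2]
    return f"cu{major}{minor}"

# The CUDA versions requested in this issue, plus the one already supported:
for version in ("11.7", "11.8", "12.0", "12.1"):
    print(version, "->", cuda_wheel_tag(version))
```

Each entry in that loop corresponds to one additional wheel matrix dimension on top of the Python/Torch combinations, which is the build-cost trade-off mentioned above.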
Reasons:
Tasks