
Conversation

@yao-matrix (Contributor)

No description provided.

@yao-matrix yao-matrix marked this pull request as draft May 13, 2025 23:11
# NOTE: xpu doesn't have a `default_stream` API; use `current_stream` instead
self.s0 = (
    torch.xpu.current_stream() if self.accelerator_type == "xpu" else torch.cuda.default_stream()
)
@yao-matrix (Contributor Author)

@gujinghui, please help review this logic. Since xpu doesn't have `default_stream`, I am using `current_stream` as a workaround.


LGTM. As long as the workload makes no specific assumptions about the CUDA default stream, the current stream is a functionally equivalent replacement.

@yao-matrix yao-matrix marked this pull request as ready for review May 14, 2025 06:50
@yao-matrix (Contributor Author)

@kashif, please help review, thanks.

@kashif kashif self-assigned this May 14, 2025
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@yao-matrix (Contributor Author)

It seems the CI failures are not caused by my PR, @kashif.

@kashif kashif merged commit 64aa064 into huggingface:main May 19, 2025
10 checks passed
shirinyamani pushed a commit that referenced this pull request May 19, 2025
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
@yao-matrix yao-matrix deleted the activation-off-xpu branch May 19, 2025 22:42