
[XPU] Add multi-head self/cross attention fused ops. #10037

Merged: 1 commit merged into PaddlePaddle:develop on Mar 2, 2023

Conversation

@stevenshen36 (Contributor) commented on Feb 27, 2023

PR devices

XPU

PR types

New features

PR changes

Kernels/Pass/API (without compute).

Description

Add multi-head self/cross attention fused ops for the XPU backend (pass pattern matching only; the compute logic is not included).
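For context, an operator added this way only declares the fused op's shape checking, shape inference, and attribute attachment so the fusion pass can target it; the XPU compute kernel lands separately. Below is a minimal, hypothetical sketch of what such an operator shell might look like in Paddle Lite. The class name, the empty method bodies, and the omitted parameter wiring are illustrative assumptions, not the actual code in this PR.

```cpp
// Illustrative sketch only; the real op in this PR lives in
// lite/operators/__xpu__multihead_cross_attn_op.cc and fills in a
// parameter struct declared in lite/operators/op_params.h.
#include <string>

#include "lite/core/op_lite.h"
#include "lite/core/op_registry.h"

namespace paddle {
namespace lite {
namespace operators {

class XPUMultiheadCrossAttnOp : public OpLite {
 public:
  explicit XPUMultiheadCrossAttnOp(const std::string& op_type)
      : OpLite(op_type) {}

  // Validate that required inputs/outputs are present (sketch: no-op).
  bool CheckShape() const override { return true; }

  // Infer output shapes from input shapes (sketch: no-op).
  bool InferShapeImpl() const override { return true; }

  // Bind tensors and attributes from the op description into the
  // operator's parameter struct (sketch: no-op).
  bool AttachImpl(const cpp::OpDesc& op_desc, lite::Scope* scope) override {
    return true;
  }

  std::string DebugString() const override {
    return "__xpu__multihead_cross_attn";
  }
};

}  // namespace operators
}  // namespace lite
}  // namespace paddle

// Register the op type so graph passes can match and create it.
REGISTER_LITE_OP(__xpu__multihead_cross_attn,
                 paddle::lite::operators::XPUMultiheadCrossAttnOp);
```

Since the PR adds only the pass pattern match, the fusion pass rewrites the matched subgraph into this single fused op; the kernel that actually executes it on XPU is expected to be contributed in a follow-up.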

@stevenshen36 force-pushed the xpu_mhsa_mhca branch 2 times, most recently from a651e45 to fdb25c4 on February 28, 2023 at 11:42
Review comments (since resolved) were left on lite/operators/__xpu__multihead_cross_attn_op.cc and lite/operators/op_params.h.
@zhupengyang (Collaborator) left a review comment:

LGTM

@zhupengyang merged commit 5e23e64 into PaddlePaddle:develop on Mar 2, 2023