
# Eliminating Gradient Conflict in Reference-based Line-Art Colorization

The official code of the ECCV 2022 paper: *Eliminating Gradient Conflict in Reference-based Line-Art Colorization*

Zekun Li, Zhengyang Geng, Zhao Kang, Wenyu Chen, Yibo Yang

We propose a new attention module called Stop-Gradient Attention (SGA). Our main idea is to detach the gradient of the attention map during backpropagation, so that no gradient flows through it.
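The effect of the stop-gradient can be seen in a toy PyTorch example, independent of the attention module itself: anything computed inside `torch.no_grad()` acts as a constant in the backward pass.

```python
import torch

x = torch.ones(3, requires_grad=True)
with torch.no_grad():
    a = x * 2          # a is detached: treated as a constant during backprop
z = (a * x).sum()      # gradient flows only through the second factor, x
z.backward()
print(x.grad)          # tensor([2., 2., 2.]) -- no contribution from a
```

Without the `no_grad` context, the gradient would be `4x` (i.e. `[4., 4., 4.]`); with it, `a` is frozen at its value and only the direct dependence on `x` is differentiated.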

The module is illustrated as follows:

The core code is shown below:

```python
import torch
import torch.nn.functional as F

# input:
#   X: feature maps -> tensor(b, wh, c)
#   Y: feature maps -> tensor(b, wh, c)
# output:
#   Z: feature maps -> tensor(b, wh, c)
# other objects:
#   Wq, Wv: embedding layers -> nn.Linear(c, c)
#   A: attention map -> tensor(b, wh, wh)

with torch.no_grad():                  # stop-gradient: A carries no gradient
    A = X.bmm(Y.permute(0, 2, 1))      # pairwise feature similarity
    A = F.softmax(A, dim=-1)
    A = F.normalize(A, p=1, dim=-2)    # double normalization over the other axis
X = F.leaky_relu(Wq(X))
Y = F.leaky_relu(Wv(Y))
Z = torch.bmm(A, Y) + X                # attend to Y, then residual connection
```
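The snippet above can be wrapped in a self-contained module as a minimal sketch (the class name `StopGradientAttention` and the layer choices here are our assumptions for illustration, not the authors' exact implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StopGradientAttention(nn.Module):
    """Attention whose attention map is detached from the backward pass."""

    def __init__(self, c):
        super().__init__()
        self.Wq = nn.Linear(c, c)
        self.Wv = nn.Linear(c, c)

    def forward(self, X, Y):
        # X, Y: (b, wh, c)
        with torch.no_grad():                 # no gradient flows through A
            A = X.bmm(Y.permute(0, 2, 1))     # (b, wh, wh)
            A = F.softmax(A, dim=-1)
            A = F.normalize(A, p=1, dim=-2)
        Xq = F.leaky_relu(self.Wq(X))
        Yv = F.leaky_relu(self.Wv(Y))
        return torch.bmm(A, Yv) + Xq          # (b, wh, c)

# usage on random features
sga = StopGradientAttention(c=64)
X = torch.randn(2, 16, 64)
Y = torch.randn(2, 16, 64)
Z = sga(X, Y)
print(Z.shape)  # torch.Size([2, 16, 64])
```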

```bibtex
@inproceedings{li2022eliminating,
  title={Eliminating Gradient Conflict in Reference-based Line-Art Colorization},
  author={Li, Zekun and Geng, Zhengyang and Kang, Zhao and Chen, Wenyu and Yang, Yibo},
  booktitle={European Conference on Computer Vision},
  pages={579--596},
  year={2022},
  organization={Springer}
}
```
