
Concern about in-place operation within code #45

Open
anbyaa opened this issue Aug 31, 2024 · 0 comments

Comments

anbyaa commented Aug 31, 2024

Thank you for your pioneering work on the FDS approach for addressing imbalanced regression problems. I have been applying FDS to my own network, but I encountered an issue with PyTorch, specifically the error: "RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation."

After some debugging, I suspect that the calibrate_mean_var method (defined at Line 97 in ./agedb-dir/utils.py) might be causing this issue. The error seems to be linked to an in-place operation that disrupts the gradient computation for the features variable used in the smooth method (Line 124 in ./agedb-dir/FDS.py).

Here is the relevant code for calibrate_mean_var:

def calibrate_mean_var(matrix, m1, v1, m2, v2, clip_min=0.1, clip_max=10):
    if torch.sum(v1) < 1e-10:
        return matrix
    if (v1 == 0.).any():
        valid = (v1 != 0.)
        factor = torch.clamp(v2[valid] / v1[valid], clip_min, clip_max)
        # This slice assignment writes into `matrix` in place:
        matrix[:, valid] = (matrix[:, valid] - m1[valid]) * torch.sqrt(factor) + m2[valid]
        return matrix

    factor = torch.clamp(v2 / v1, clip_min, clip_max)
    return (matrix - m1) * torch.sqrt(factor) + m2

The issue appears when v1 contains zeros, which triggers the branch that builds the valid mask to avoid division by zero. The line "matrix[:, valid] = (matrix[:, valid] - m1[valid]) * torch.sqrt(factor) + m2[valid]" performs slice assignment, which PyTorch treats as an in-place operation and which therefore invalidates tensors saved for the backward pass.
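
To illustrate the failure mode outside of this repository, here is a minimal, self-contained example (the shapes and the sigmoid op are arbitrary; only the slice assignment matters) that raises the same error:

    import torch

    x = torch.randn(4, 3, requires_grad=True)
    y = torch.sigmoid(x)            # sigmoid saves its output for the backward pass
    mask = torch.tensor([True, False, True])
    y[:, mask] = y[:, mask] * 2.0   # slice assignment modifies y in place
    y.sum().backward()              # RuntimeError: one of the variables needed for
                                    # gradient computation has been modified by an
                                    # inplace operation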

A potential solution would be to compute the calibrated matrix out of place and return the new tensor instead of writing into matrix, just as the final return statement in the original code already does; see the sketch below.
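
For concreteness, here is one way the function could be written without the slice assignment, using torch.where (just a sketch, not tested against this repository; the 1e-12 clamp on v1 is an extra guard I added so the full-tensor division stays well defined for the columns that torch.where discards):

    def calibrate_mean_var(matrix, m1, v1, m2, v2, clip_min=0.1, clip_max=10):
        if torch.sum(v1) < 1e-10:
            return matrix
        valid = (v1 != 0.)
        # Compute the scaling factor over all columns; invalid ones are dropped below.
        factor = torch.clamp(v2 / v1.clamp(min=1e-12), clip_min, clip_max)
        calibrated = (matrix - m1) * torch.sqrt(factor) + m2
        # torch.where builds a new tensor instead of writing into `matrix`,
        # so no tensor saved for backward is modified in place.
        return torch.where(valid, calibrated, matrix)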

Could you please share your thoughts on this approach or suggest an alternative solution?

Thank you for your attention to this matter. I look forward to your response.
