
Moved mask computation into adapter initialization #25

Open · wants to merge 3 commits into base: develop

Conversation

@PhilipHildebrand (Contributor)

This PR moves the mask computation into the initialization of the adapter. It also resolves the issue about the dirty fix.
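The idea can be sketched as follows. This is a minimal, hypothetical illustration: the names (`Adapter`, `read_data`, the DOF arguments) and the mask logic are placeholders, not the actual fenicsx-adapter API.

```python
# Hypothetical sketch: compute the coupling mask once in __init__ instead of
# on every data access (the former "dirty fix" recomputed it repeatedly).
class Adapter:
    def __init__(self, all_dofs, coupling_dofs):
        # Build the boolean mask a single time during initialization.
        coupling = set(coupling_dofs)
        self._mask = [dof in coupling for dof in all_dofs]

    def read_data(self, values):
        # Reuse the precomputed mask on every coupling step.
        return [v for v, keep in zip(values, self._mask) if keep]

adapter = Adapter(all_dofs=[0, 1, 2, 3], coupling_dofs=[1, 3])
print(adapter.read_data([10.0, 11.0, 12.0, 13.0]))  # → [11.0, 13.0]
```

The point is simply that the mask depends only on the mesh, so it can be computed once up front rather than inside every read/write call.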

@PhilipHildebrand (Contributor, Author)

I've added a figure that shows the time improvement for the partitioned heat equation tutorial (for ny = 9, 18, 36, 72 and 144). For ny = 72 and ny = 144, the version with the dirty fix in it still wasn't finished after several hours, so I've omitted those cases.
timeEval-1.pdf

@IshaanDesai (Member)

You can add the figure by directly dropping it into the comment while you are writing it. That would make the figure visible inside the comment. I am surprised that for ny = 72 or 144 we already have saturation. Around a hundred nodes along the coupling interface is not a lot. Can you investigate which part of the adapter is taking so long? Is it the mask computation again?

@IshaanDesai (Member)

Also, please make sure that all tests pass. Currently one tutorial seems to fail and there is also an error from the formatter.

@PhilipHildebrand (Contributor, Author)

> Can you investigate which part of the adapter is taking so long? Is it the mask computation again?

Actually, the adapter is slower all around, not just the mask computation. For reference, for ny = 72, roughly 51% of the runtime is spent in the mask computation; for ny = 144 it is roughly 63%.
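For anyone wanting to reproduce this kind of breakdown, a runtime fraction like "51% in the mask computation" can be obtained with the standard-library profiler. This is only a sketch with stand-in functions, not the adapter's real code:

```python
# Sketch: attribute total runtime to individual functions with cProfile.
# compute_mask and solver_step are stand-ins for real adapter/solver code.
import cProfile
import pstats
import time

def compute_mask():
    time.sleep(0.05)  # placeholder for the expensive mask computation

def solver_step():
    time.sleep(0.02)  # placeholder for the rest of a coupling step

profiler = cProfile.Profile()
profiler.enable()
for _ in range(3):
    compute_mask()
    solver_step()
profiler.disable()

# Print cumulative times; compare compute_mask against the overall total.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```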

@IshaanDesai (Member)

> Actually, the adapter is slower all around, not just the mask computation. For reference, for ny = 72, roughly 51% of the runtime is spent in the mask computation; for ny = 144 it is roughly 63%.

How much slower is the adapter compared to the FEniCS adapter? This would be a good thing to find out. Also, is FEniCSx slower than FEniCS, or is our adapter slower?

@PhilipHildebrand (Contributor, Author)

> How much slower is the adapter compared to the FEniCS adapter? This would be a good thing to find out. Also, is FEniCSx slower than FEniCS, or is our adapter slower?

I tested the FEniCS tutorial using the preCICE VM and it performs far better for high ny; at ny = 144 it doesn't exceed 12 s. However, I don't know whether it's because of our adapter or because FEniCSx is slower than FEniCS in general.
(Figures attached: Fenics, FenicsVsFenicsx)

@IshaanDesai (Member)

> However, I don't know whether it's because of our adapter or because FEniCSx is slower than FEniCS in general.

It should be straightforward to find this out. Using the standard Python functionality for measuring time, you can get how much time is being spent in adapter functions and how much time the solver is taking. It would be a good insight to find out whether the solvers are the issue or whether it is something on our side.
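A minimal way to split wall-clock time between adapter and solver with `time.perf_counter`. The function names below (`adapter_call`, `solver_call`) are placeholders for the real calls, not part of any adapter API:

```python
# Sketch: accumulate per-label wall-clock time across a coupling loop.
import time

timings = {}

def timed(label, func, *args, **kwargs):
    # Run func, adding its elapsed wall-clock time to timings[label].
    start = time.perf_counter()
    result = func(*args, **kwargs)
    timings[label] = timings.get(label, 0.0) + time.perf_counter() - start
    return result

def adapter_call():
    time.sleep(0.01)  # placeholder for e.g. an adapter read/write call

def solver_call():
    time.sleep(0.03)  # placeholder for the FEniCSx solve

for _ in range(5):
    timed("adapter", adapter_call)
    timed("solver", solver_call)

total = sum(timings.values())
for label, elapsed in timings.items():
    print(f"{label}: {elapsed:.3f} s ({100 * elapsed / total:.0f}%)")
```

Wrapping the real adapter and solver calls this way would show directly whether the time goes into the adapter or into FEniCSx itself.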
