
ARPACK fails with identity matrix #378

Closed
juanjosegarciaripoll opened this issue Oct 27, 2022 · 7 comments

@juanjosegarciaripoll
Contributor

Expected behavior

Given the identity matrix, ARPACK should be able to return the right eigenvalues.

When the matrix-vector multiplication routine in TESTS/icb_arpack_c.c is replaced with

void dMatVec(const double* x, double* y) {
  int i;
  /* A is the 1000x1000 identity: y = x (the original diagonal operator is kept as a comment). */
  for (i = 0; i < 1000; ++i) y[i] = x[i]; /* ((double)(i + 1)) * x[i]; */
};

the test should run and return the right eigenvectors.

Actual behavior

The program fails on the first iteration without a meaningful message. In my own code that uses arpack, an error related to "non convergence" is returned.

Where/how to reproduce the problem

  • arpack-ng: release or commit (SHA)
  • OS: debian
  • compiler: gcc

Steps to reproduce the problem

  • Change TESTS/icb_arpack_c.c as described above
  • Compile and run tests
  • The test associated with icb_arpack_c fails.

Error message

$ ./TESTS/icb_arpack_c
1.000000 - 992.000000 - 991.000000

Notes, remarks

This is related to issue #142.
I did not have this problem with earlier versions of ARPACK, so it seems to be a regression.

@fghoussen
Collaborator

Isn't that an ill-posed case? Any random vector can be an eigenvector: as there is no unique solution, it is not surprising to me that arpack hits numerical problems.

@juanjosegarciaripoll
Contributor Author

It is actually not an ill-posed case: because any vector is an eigenvector, Arnoldi should converge in one step, as the residual is zero after the first iteration.

Unfortunately I don't speak Fortran and cannot follow the algorithm, but a trivial Python implementation of implicitly restarted Arnoldi, as described in the original paper, actually handles this case.

And Arpack itself worked with it fine in earlier versions (pulled from this repo some years ago, or in Matlab, or in other software using it, such as Numpy).
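
A minimal sketch of the one-step argument (plain C with only the standard library; this is not the Python implementation mentioned above, just an illustration): for A = I, a single Lanczos/Arnoldi step gives alpha = v'Av = 1 and a residual A*v - alpha*v that vanishes, so the iteration should stop after one matrix-vector product.

#include <math.h>
#include <stdio.h>

#define N 1000

int main(void) {
  double v[N], w[N], alpha = 0.0, rnorm = 0.0;
  int i;
  for (i = 0; i < N; ++i) v[i] = 1.0 / sqrt((double)N); /* any unit vector */
  for (i = 0; i < N; ++i) w[i] = v[i];                  /* w = A*v with A = I */
  for (i = 0; i < N; ++i) alpha += v[i] * w[i];         /* alpha = v'*A*v = 1 */
  for (i = 0; i < N; ++i) {
    double r = w[i] - alpha * v[i];                     /* residual of the Ritz pair */
    rnorm += r * r;
  }
  /* prints alpha approximately 1 and ||r|| approximately 0 (up to rounding) */
  printf("alpha = %f, ||r|| = %e\n", alpha, sqrt(rnorm));
  return 0;
}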

@fghoussen
Collaborator

Did you try to initialize your v0 vector (initial guess) with rand? See JuliaLinearAlgebra/Arpack.jl#138 (comment).

Can you check out the tree from before #80 and #143 were merged: do you reproduce the problem when initializing v0 with / without rand?

Arpack itself worked with it fine in earlier versions

Which version / commit?
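
For reference, a hedged sketch of the "with / without rand" test (the helper name set_random_v0 is hypothetical, not part of arpack-ng): in the dsaupd interface, info = 0 on entry asks ARPACK to generate its own starting vector, while a nonzero info on entry means resid already holds the caller's starting residual vector.

#include <stdio.h>
#include <stdlib.h>

/* Fill resid with uniform random values in [-1, 1] and flag it as the
   user-supplied starting vector for dsaupd. */
static void set_random_v0(double* resid, int n, int* info) {
  int i;
  for (i = 0; i < n; ++i) resid[i] = 2.0 * rand() / (double)RAND_MAX - 1.0;
  *info = 1; /* nonzero on entry: resid contains the starting vector */
}

int main(void) {
  enum { N = 1000 };
  double resid[N];
  int info = 0; /* 0 on entry: let ARPACK choose a random starting vector */
  set_random_v0(resid, N, &info);
  printf("info = %d, resid[0] = %f\n", info, resid[0]);
  return 0;
}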

@fghoussen
Collaborator

$ ./TESTS/icb_arpack_c
1.000000 - 992.000000 - 991.000000

You need to change both dMatVec and the expected eigenvalue that the test checks for. This

>> git diff
diff --git a/TESTS/icb_arpack_c.c b/TESTS/icb_arpack_c.c
index bdf78ce..4a34840 100644
--- a/TESTS/icb_arpack_c.c
+++ b/TESTS/icb_arpack_c.c
@@ -23,7 +23,7 @@
 
 void dMatVec(const double* x, double* y) {
   int i;
-  for (i = 0; i < 1000; ++i) y[i] = ((double)(i + 1)) * x[i];
+  for (i = 0; i < 1000; ++i) y[i] = x[i];
 };
 
 int ds() {
@@ -80,7 +80,7 @@ int ds() {
   int i; // C99 compliant.
   for (i = 0; i < nev; ++i) {
     double val = d[i];
-    double ref = (N-(nev-1)+i);
+    double ref = 1.;
     double eps = fabs(val - ref);
     printf("%f - %f - %f\n", val, ref, eps);
 

works

>> ./TESTS/icb_arpack_c
1.000000 - 1.000000 - 0.000000
1.000000 - 1.000000 - 0.000000
1.000000 - 1.000000 - 0.000000
1.000000 - 1.000000 - 0.000000
1.000000 - 1.000000 - 0.000000
1.000000 - 1.000000 - 0.000000
1.000000 - 1.000000 - 0.000000
1.000000 - 1.000000 - 0.000000
1.000000 - 1.000000 - 0.000000
...

Not sure in what sense it "works" anyway: any random vector is an eigenvector with lambda = 1. I am not sure I understand what the expected result should be: there is no unique solution.

@fghoussen
Collaborator

Closing this as the eigenvalues are OK.

@juanjosegarciaripoll
Contributor Author

Apologies, but this seems to be a symptom of something more fundamental, even if I botched the test code above. The test I produced was a somewhat artificial way to reproduce something that I am observing in my own work with arpack: if the initial estimate is close to an eigenvector, the library does not detect this and has trouble converging.

In my work I am solving PDEs on progressively finer grids. This means that I solve on a grid with 2^(N+1) points starting from the solution with 2^N points, which is a very good approximation. In some circumstances it is so good that Arpack simply fails with an error. In other circumstances the algorithm does not break, but the time it takes to converge grows exponentially with the problem size.

When I mentioned that I did not have these errors before, I was referring to Arpack-2 as delivered by Debian (https://packages.debian.org/buster/libarpack2). Unfortunately, even for that version the exponential growth in convergence time is still there.

In comparison, when using Primme to solve the same eigenvalue problem, the convergence time does not grow but rather decreases with problem size, as interpolation provides a good enough starting point that gets better with N.
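
A hedged sketch of the warm start described above (the helper name prolong_to_fine_grid is hypothetical): prolong the eigenvector converged on the 2^N-point grid onto the 2^(N+1)-point grid by linear interpolation, then hand it to dsaupd through resid with a nonzero info so that ARPACK starts from it instead of a random vector.

#include <stdio.h>

/* Copy coarse values to the even fine points and average neighbours for the
   odd ones: a simple linear prolongation onto a grid with twice the points. */
static void prolong_to_fine_grid(const double* coarse, int nc, double* fine) {
  int i;
  for (i = 0; i < nc; ++i) {
    fine[2 * i] = coarse[i];
    fine[2 * i + 1] =
        (i + 1 < nc) ? 0.5 * (coarse[i] + coarse[i + 1]) : coarse[i];
  }
}

int main(void) {
  double coarse[4] = {0.0, 1.0, 0.0, -1.0};
  double fine[8];
  int i;
  prolong_to_fine_grid(coarse, 4, fine);
  for (i = 0; i < 8; ++i) printf("%f ", fine[i]); /* interpolated starting vector */
  printf("\n");
  return 0;
}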

@fghoussen
Collaborator

a grid with 2^(N+1) points starting from the solution with 2^N points, which is a very good approximation. In some circumstances it is so good that Arpack simply fails with an error.

Is your error tolerance small enough to allow searching for a better solution when going from the 2^N grid to the 2^(N+1) grid?
Given your problem, do you use the most appropriate mode?

c dnaupd is usually called iteratively to solve one of the
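
For reference, a hedged illustration of the knobs asked about above; the values are illustrative (not taken from the test), and the 0-based C indices mirror the 1-based IPARAM entries described in the dsaupd/dnaupd documentation.

#include <stdio.h>

int main(void) {
  double tol = 1.0e-10; /* stopping tolerance; tol <= 0 makes ARPACK use machine epsilon */
  int iparam[11] = {0};
  iparam[0] = 1;     /* IPARAM(1): use exact shifts */
  iparam[2] = 10000; /* IPARAM(3): maximum number of Arnoldi update iterations */
  iparam[6] = 1;     /* IPARAM(7): mode 1 = standard problem A*x = lambda*x;
                        mode 3 would be shift-and-invert */
  printf("tol = %g, maxiter = %d, mode = %d\n", tol, iparam[2], iparam[6]);
  return 0;
}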
