
Jacobi elliptic functions and complete elliptic integral of the first kind #79

Status: Open. Wants to merge 18 commits into base: master.

Conversation

@ettersi commented Mar 5, 2018

Implements the Jacobi elliptic functions and the complete elliptic integral of the first kind.

Both sets of functions work for real and complex arguments as well as all FloatXX and BigFloat types. The test sets involve comparison with Python's mpmath library (http://mpmath.org/) as well as checks that the results are within a few eps() of the exact result.

@simonbyrne (Member)

Thanks! Is this based on the mpmath code? If so, that's fine as it's a compatible licence, but we need to acknowledge the original copyright.

@@ -71,7 +71,12 @@ end
export sinint,
       cosint

export ellipj,
       jss,jsc,jsd,jsn,jcs,jcc,jcd,jcn,jds,jdc,jdd,jdn,jns,jnc,jnd,jnn,

Member

What are all these exports?

Member

Ah, I see, they're generated by macros. The names are a bit cryptic: are these standard? Maybe prefix them (elliptic_jss) or stick them in a module (Elliptic.jss)?

Author

There are 16 Jacobi elliptic functions, namely all pairs of the letters scdn. Since cd is already taken, I decided to put a j (for Jacobi) in front of the function to avoid name collision. Elliptic.jl solved this issue by putting these functions into a module Jacobi, but I'd say jsn(u,m) is more convenient than Jacobi.sn(u,m). Might be worth discussing, though.
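(For readers unfamiliar with the notation, here is a minimal illustrative sketch of how the 16 functions relate; it is not the PR's macro-generated code, and it assumes ellipj returns (sn, cn, dn) as in Matlab/SciPy.)

# Illustrative only: in Glaisher's notation every pq function is a ratio of the
# three basic functions sn, cn, dn, e.g. sc = sn/cn, nd = 1/dn, and pp ≡ 1.
function jsc_sketch(u, m)
    sn, cn, dn = ellipj(u, m)   # assumed to return (sn, cn, dn)
    return sn / cn              # likewise jds = dn/sn, jnc = 1/cn, jss = 1, ...
end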

Member

ah ok, how about jacobisn? This matches nicely with besselk/besselj, etc.

Author

jacobisn possibly makes it hard to distinguish the functions, and it wouldn't really save much typing compared to Jacobi.sn. It might be worth getting input from someone who uses these functions regularly, though.

Member

Looking at other packages:

  • Mathematica: JacobiSN
  • Matlab & SciPy provide them all at once via an ellipj function.
  • GSL provides them all via gsl_sf_elljac_e

Author

We might also want to rename the K and iK functions. It's just too much of a hassle not to be able to use K for anything else. For reference:

  • Mathematica: EllipticK
  • Matlab: ellipticK
  • Scipy: ellipk
  • GSL: gsl_sf_ellint_Kcomp

I'd vote for ellipK, as the K is usually uppercase in mathematical notation. Definitely not ellipticK unless we rename ellipj.

Also, I am a bit unsure about providing iK and how it should be named. If you look at the code, it definitely makes sense to provide iK because K(m) is really just iK(1-m) and so K(1-m) = iK(1-(1-m)) which is just stupid. But AFAIK no other software package provides this, which means I might be doing something wrong. And the commonly used mathematical notation for K(1-m) seems to be K', but obviously we can't use that so we need to think of something else.
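(For reference, a minimal sketch of the AGM route that makes the K(m) = iK(1-m) relation concrete; the agm helper and the _sketch names here are assumptions for illustration, not part of the PR.)

# Sketch only: K(m) = π / (2*agm(1, sqrt(1-m))), so the complementary form that
# takes 1-m directly is the natural building block.
function agm(a::T, b::T) where {T<:AbstractFloat}
    while abs(a - b) > eps(T) * abs(a)
        a, b = (a + b) / 2, sqrt(a * b)
    end
    return a
end
iK_sketch(mc) = π / (2 * agm(one(float(mc)), sqrt(float(mc))))   # argument is 1-m
K_sketch(m) = iK_sketch(1 - m)                                   # K(m) = iK(1-m)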

Member

I'd still prefer to rename the functions to longer names.

@ettersi (Author) commented Mar 5, 2018

I skimmed through the mpmath code (and a few others), but at most I took ideas, no code.

src/elliptic.jl (outdated, resolved)
src/elliptic.jl (outdated)
#----------------
# Pick algorithm

Base.@pure puresqrt(x) = sqrt(x)

Member

Isn't sqrt already pure?

@simonbyrne (Member), Mar 6, 2018

Actually, it isn't for BigFloat, but that is intentional, since we can change precision via setprecision. Using @pure here is incorrect, and will cause problems when computing at different precisions.
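(A small sketch of the precision dependence, not from the PR:)

# sqrt(eps(BigFloat)) depends on the current precision, so it cannot safely be
# constant-folded the way @pure would allow.
a = setprecision(() -> sqrt(eps(BigFloat)), BigFloat, 256)
b = setprecision(() -> sqrt(eps(BigFloat)), BigFloat, 64)
a == b   # false: the "same" expression yields different values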

Author

It also isn't for the FloatXX types:

julia> foo(T) = Val{sqrt(eps(T))}()
foo (generic function with 1 method)

julia> @code_warntype foo(Float64)
Variables:
  #self# <optimized out>
  T <optimized out>

Body:
  begin 
      return ((Core.apply_type)(Main.Val, (Base.Math.sqrt_llvm)(2.220446049250313e-16)::Float64)::Type{Val{_}} where _)()::Val{_} where _
  end::Val{_} where _

julia> foo(T) = Val{puresqrt(eps(T))}()
foo (generic function with 1 method)

julia> @code_warntype foo(Float64)
Variables:
  #self# <optimized out>
  T <optimized out>

Body:
  begin 
      return $(QuoteNode(Val{1.4901161193847656e-8}()))
  end::Val{1.4901161193847656e-8}

But I agree, I should have done

puresqrt(x) = sqrt(x)
Base.@pure puresqrt(x::Union{Float16,Float32,Float64}) = sqrt(x)

With this, I get

julia> foo(T) = Val{puresqrt(eps(T))}()
foo (generic function with 1 method)

julia> @code_warntype foo(Float64)
Variables:
  #self# <optimized out>
  T <optimized out>

Body:
  begin 
      return $(QuoteNode(Val{1.4901161193847656e-8}()))
  end::Val{1.4901161193847656e-8}

julia> @code_warntype foo(BigFloat)
Variables:
  #self# <optimized out>
  T::Type{BigFloat}

Body:
  begin 
      SSAValue(0) = $(Expr(:invoke, MethodInstance for eps(::Type{BigFloat}), :(Main.eps), :(T)))
      return ((Core.apply_type)(Main.Val, $(Expr(:invoke, MethodInstance for sqrt(::BigFloat), :(Main.sqrt), SSAValue(0))))::Type{Val{_}} where _)()::Val{_} where _
  end::Val{_} where _

Just out of curiosity, is there any fundamental obstacle to doing this in Base, and possibly for all numerical functions?

Author

Actually, maybe it would be better to just do

Base.@pure puresqrt(x) = Float64(sqrt(x))

Then puresqrt is pure, and since the nsteps computations are done in Float64 anyway, I don't lose anything in terms of generality.

@simonbyrne (Member)

Overall, this looks pretty good to me. It would be nice to have a few more comments, particularly as to how the generated functions work and why they're needed.

I'm sure @stevengj will have some thoughts as well.

@simonbyrne (Member)

Actually, I am still slightly hesitant about all the generated functions. It would be nice if we could reduce their use a bit.

  • I think the ellipj_* functions could be written as simple iterators; does that impact performance?
  • shrinkm is a bit more difficult, but currently it is unstable for arbitrary-precision types like BigFloat (well, it should be, see my comment on puresqrt). Perhaps you could have it return tuples for fixed-precision types (Float64, Float32, etc.) and arrays for BigFloat?

@ettersi (Author) commented Mar 6, 2018

The main reason for the generated functions is to avoid heap allocation. The structure of the algorithms is to first compute k_i for i = 1:nsteps and then use them in the order reverse(1:nsteps). Without generated functions, I would have to either

  • allocate an array on the heap, or
  • recompute them from the last k_i.

I don't like the heap allocation because its performance impact can be a bit unpredictable, and recomputing them is probably slower and could affect numerical accuracy. Because of the "up and down" structure of the algorithm (see the sketch below), I am not sure what would be gained if I implemented the ellipj_shrinkm and ellipj_growm functions as iterators.
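(For illustration, a minimal array-based sketch of this up-and-down structure. descstep_sketch uses the standard descending-Landen update and is an assumption here; it need not match the PR's descstep exactly. The heap-allocated Vector is what the generated functions avoid for fixed-precision types.)

descstep_sketch(m) = (1 - sqrt(1 - m)) / (1 + sqrt(1 - m))   # one descending Landen step

function landen_parameters_sketch(m, nsteps)
    ks = Vector{typeof(float(m))}(undef, nsteps)   # the allocation discussed above
    for i in 1:nsteps
        ks[i] = descstep_sketch(m)
        m = ks[i]^2
    end
    return ks   # consumed afterwards as `for k in reverse(ks) ... end`
end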

Regarding the instability of BigFloats: as far as I understand, the only difference is that eps(BigFloat) is no longer pure, hence the number of Landen transformation steps has to be computed at runtime and the calls in ellipj_dispatch have to be resolved at runtime rather than compile time.

That seems like a small price to pay, though, especially since the alternative would be to do heap allocation of the k_i, which would probably have a similar performance penalty. Making the shrinkm function switch between returning tuples or arrays would either lead to code duplication or at least complicate the rather simple code we have at the moment. Both alternatives seem undesirable for maintenance.

If you are worried about the number of methods that will be compiled because of this generated-functions business, I don't think that will be much of a concern. Most users probably don't bother changing the precision of BigFloats, and even if they do, the Landen transformation converges quadratically, so even nsteps(1e-300, Complex) is just 8.

There is a slight hiccup in that currently we compute the number of Landen steps twice for BigFloats falling into the last two branches of ellipj_dispatch. I'll fix that.

@simonbyrne (Member)

What I meant was

function ellipj_growm(sn,cn,dn, k)
    # [1, Sec 16.12] / https://dlmf.nist.gov/22.7.i
    for kk in reverse(k)
        sn,cn,dn = (1+kk)*sn/(1+kk*sn^2), cn*dn/(1+kk*sn^2), (1-kk*sn^2)/(1+kk*sn^2)
                     # ^ Use [1, 16.9.1]. Idea taken from [2]
    end
    return sn,cn,dn
end

Basically, generated functions should be considered a weapon of last resort: they are powerful, but do add considerable complexity, both for the compiler (i.e. they make precompilation more difficult) and for humans reading the code.

As for BigFloats: these are already heap allocated, so array vs tuple for them is a bit of a wash (though, again, benchmarking would be useful).

@ettersi (Author) commented Mar 6, 2018

Right, I agree I overused @generated for these two functions. Sorry about that, fixed it.

I also agree that there probably isn't much point in avoiding heap allocations for BigFloats, but the types we want to optimise for are Float64 and Float32 and there it probably matters. And I'd still say there is not much point in having two versions of the same code because of this.

src/elliptic.jl (outdated, resolved)

@ettersi (Author) commented Mar 20, 2018

The only remaining issues with this PR seem to be about the names of the functions. Currently, we have

  • ellipj following Matlab and SciPy
  • jpq for p,q in (s,c,d,n). This is my own invention.
  • ellipK compared to ellipticK (Matlab) and ellipk (SciPy).

Some more feedback regarding this would be helpful.

There was also some discussion regarding my use of generated functions and the @pure annotation. The current usage seems both correct and reasonable to me, but please let me know if there are any doubts remaining.

@MasonProtter (Contributor)

Is there a timeline for this to merge?

@MasonProtter (Contributor)

I couldn't find an appropriate place in the code to comment on this, so I'll do it here if that's okay. Currently the algorithm doesn't know to use the analytic continuation for real-valued arguments > 1:

julia> ellipK(2.0)
ERROR: DomainError:
Stacktrace:
 [1] ellipiK_agm(::Float64) at /Users/mason/.julia/v0.6/SpecialFunctions/src/elliptic.jl:207
 [2] ellipK(::Float64) at /Users/mason/.julia/v0.6/SpecialFunctions/src/elliptic.jl:214

whereas giving a complex argument will cause it to use the correct analytic continuation:

julia> ellipK(2.0+0im)
1.3110287771460596 - 1.3110287771460596im

Is this intended behaviour?

@ettersi (Author) commented Mar 29, 2018

This is intentional and consistent with Base.sqrt:

julia> sqrt(-1)
ERROR: DomainError:
sqrt will only return a complex result if called with a complex argument. Try sqrt(complex(x)).
Stacktrace:
 [1] sqrt(::Int64) at ./math.jl:434

We could and possibly should print a similar error message for ellipK, but I couldn't quite figure out how to do this on 0.6. Advice from a more experienced Julia user would be appreciated. I guess in 0.7 I should just call Base.Math.throw_complex_domainerror()?
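(A minimal sketch, names hypothetical, of raising a descriptive DomainError on Julia 0.7+; on 0.6 DomainError does not carry a message, so the text would have to go through a plain error() call instead.)

function check_ellipK_domain(m::Real)
    if m > 1
        throw(DomainError(m, "ellipK will only return a complex result if called with a complex argument. Try ellipK(complex(m))."))
    end
    return m
end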

@ettersi (Author) commented Jun 28, 2018

Repeating Mason Protter's question: Is there a timeline for this to merge?

@Datseris commented Sep 8, 2018

If you ever do 2nd-kind elliptic integrals, please tag me so I can benchmark and compare with https://github.com/nolta/Elliptic.jl, which is my current dependency.

(assuming of course they ever get merged)

@ettersi (Author) commented Feb 21, 2019

Once again, is there any timeline on getting this merged? If not, why?

@jebej commented Jul 14, 2020

bump

Comment on lines +101 to +110
Base.@pure function nsteps(m,ε)
    i = 0
    while abs(m) > ε
        m = descstep(m)^2
        i += 1
    end
    return i
end
Base.@pure nsteps(ε,::Type{<:Real}) = nsteps(0.5,ε) # Guarantees convergence in [-1,0.5]
Base.@pure nsteps(ε,::Type{<:Complex}) = nsteps(0.5+sqrt(3)/2im,ε) # This is heuristic.

Member

I still don't think the @pures here are necessary (unless you have benchmarks suggesting otherwise).

@MasonProtter (Contributor), Jul 14, 2020

These functions aren’t pure anyways...

Author

Right, this needs to be looked into.

# For all FloatXX types, this can be done at compile time, while for
# BigFloat this has to be done at runtime.
T = promote_type(typeof(u),typeof(m))
ε = puresqrt(Float64(eps(real(typeof(m)))))

Member

Suggested change
ε = puresqrt(Float64(eps(real(typeof(m)))))
ε = sqrt(Float64(eps(real(typeof(m)))))

should be sufficient in most cases.

@ettersi (Author) commented Jul 16, 2020

Here's a list of remaining issues with this PR:

Regarding the last point: the advantage of this implementation is that it supports complex arguments and arbitrary number types. On the other hand, #135 provides a uniform implementation for both ellipk and ellipe. Also, we should check whether there's a significant performance difference between the two codes.

Sorting this out might take a couple of days to read through the literature, benchmark, maybe implement ellipe in a similar style to the one in this PR, etc.

If anyone is willing to tackle these issues, please feel free to do so. I'd be happy to give a hand, but I won't invest any further time unless there's a clear path to how this PR might finally get merged.

@stevengj (Member)

Sorry I haven't chimed in here, but @simonbyrne's comments all seem pretty reasonable to me, and I had been waiting for them to be addressed before I looked at this PR closely.

Comment on lines +101 to +119
Base.@pure function nsteps(m,ε)
    i = 0
    while abs(m) > ε
        m = descstep(m)^2
        i += 1
    end
    return i
end
Base.@pure nsteps(ε,::Type{<:Real}) = nsteps(0.5,ε) # Guarantees convergence in [-1,0.5]
Base.@pure nsteps(ε,::Type{<:Complex}) = nsteps(0.5+sqrt(3)/2im,ε) # This is heuristic.
function ellipj_nsteps(u,m)
    # Compute the number of Landen steps required to reach machine precision.
    # For all FloatXX types, this can be done at compile time, while for
    # BigFloat this has to be done at runtime.
    T = promote_type(typeof(u),typeof(m))
    ε = puresqrt(Float64(eps(real(typeof(m)))))
    N = nsteps(ε,typeof(m))
    return ellipj_dispatch(u,m,Val{N}())::NTuple{3,T}
end

Hi. Sorry if this isn't in the correct place. I was hoping this would be finished. From my reading, the major stumbling block in this pull request is these lines that use a lot of @pure. When I tried to test this, I found that the @pures were needed to prevent dynamic dispatch. I tried to modify the code a bit to prevent this and wound up with the following

Suggested change
Base.@pure function nsteps(m,ε)
    i = 0
    while abs(m) > ε
        m = descstep(m)^2
        i += 1
    end
    return i
end
Base.@pure nsteps(ε,::Type{<:Real}) = nsteps(0.5,ε) # Guarantees convergence in [-1,0.5]
Base.@pure nsteps(ε,::Type{<:Complex}) = nsteps(0.5+sqrt(3)/2im,ε) # This is heuristic.
function ellipj_nsteps(u,m)
    # Compute the number of Landen steps required to reach machine precision.
    # For all FloatXX types, this can be done at compile time, while for
    # BigFloat this has to be done at runtime.
    T = promote_type(typeof(u),typeof(m))
    ε = puresqrt(Float64(eps(real(typeof(m)))))
    N = nsteps(ε,typeof(m))
    return ellipj_dispatch(u,m,Val{N}())::NTuple{3,T}
end
Base.@pure function nsteps(M)
    m = _convfac(M)
    ε = _vareps(M)
    i = 0
    while abs(m) > ε
        m = descstep(m)^2
        i += 1
    end
    return i
end
@inline _convfac(::Type{<:Real}) = 0.5
@inline _convfac(::Type{<:Complex}) = 0.5 + sqrt(3)/2im
@inline _vareps(::Type{<:Complex{T}}) where {T} = Float64(sqrt(eps(T)))
@inline _vareps(T::Type{<:Real}) = Float64(sqrt(eps(T)))
function ellipj_nsteps(u,m)
    # Compute the number of Landen steps required to reach machine precision.
    # For all FloatXX types, this can be done at compile time, while for
    # BigFloat this has to be done at runtime.
    T = promote_type(typeof(u),typeof(m))
    N = nsteps(T)
    return ellipj_dispatch(u,m,Val{N}())::NTuple{3,T}
end

This seems to work in my testing. For versions < 1.7 I still needed the @pure to prevent dynamic dispatch. I am not sure if this usage of @pure is alright, but it seems to be more in line with what I read in the docs. The good news is that on 1.7 everything infers correctly, so the @pure is not needed.

What are the chances that this gets implemented soonish? I can try to take over this pull request, @ettersi, if any additional work is needed. I just need this implementation of the Jacobi functions for some research I am doing.
