Commit

Making updates based on code review.
kmacdonald-stsci committed Sep 4, 2024
1 parent 2fbfe80 commit 9469506
Showing 5 changed files with 153 additions and 165 deletions.
25 changes: 13 additions & 12 deletions docs/stcal/ramp_fitting/description.rst
@@ -3,8 +3,8 @@ Description

This step determines the mean count rate, in units of counts per second, for
each pixel by performing a linear fit to the data in the input file. The default
-is done using the "ordinary least squares" method using based on the Fixsen fitting
-algorithm described by
+method uses "ordinary least squares" based on the Fixsen fitting algorithm
+described by
`Fixsen et al. (2011) <https://ui.adsabs.harvard.edu/abs/2000PASP..112.1350F>`_.

The count rate for each pixel is determined by a linear fit to the
@@ -18,9 +18,10 @@ saturation flags are found. Pixels are processed simultaneously in blocks
using the array-based functionality of numpy. The size of the block depends
on the image size and the number of groups.
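
As a purely illustrative aside (not the step's actual implementation), the array-based idea can be sketched with numpy: an ordinary least squares slope is fit to every pixel of a block at once by flattening the spatial axes. The array names and numbers below are hypothetical::

    import numpy as np

    # Hypothetical block of ramps: "ramp_cube" is (ngroups, ny, nx) and
    # "group_times" holds the group times in seconds.
    ngroups, ny, nx = 10, 4, 4
    group_times = np.arange(1, ngroups + 1) * 10.6
    rng = np.random.default_rng(0)
    ramp_cube = 5.0 * group_times[:, None, None] + rng.normal(0.0, 3.0, (ngroups, ny, nx))

    # Fit slope and intercept for all pixels simultaneously.
    design = np.vstack([group_times, np.ones_like(group_times)]).T   # (ngroups, 2)
    coeffs, *_ = np.linalg.lstsq(design, ramp_cube.reshape(ngroups, -1), rcond=None)
    slopes = coeffs[0].reshape(ny, nx)    # mean count rate, counts per second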

-There is a likelihood algorithm implemented based on Timoth Brandt's papers:
+There is a likelihood algorithm implemented based on Timothy Brandt's papers:
Optimal Fitting and Debiasing for Detectors Read Out Up-the-Ramp
-Likelihood-Based Jump Detection and Cosmic Ray Rejection for Detectors Read Out Up-the-Ramp
+Likelihood-Based Jump Detection and Cosmic Ray Rejection for Detectors Read Out Up-the-Ramp.
+This algorithm is currently in beta phase as an alternative to the OLS method.

.. _ramp_output_products:

@@ -323,12 +324,12 @@ product.
Likelihood Algorithm Details
----------------------------
As an alternative to the OLS algorithm, a likelihood algorithm can be selected
-with for ``--ramp_fitting.algorithm=LIKELY``. If this algorithm is selected,
-the normal pipeline jump detection algorithm is skipped because this algorithm
-has its own jump detection algorithm. The jump detection for this algorithm
-requires NGROUPS to be a minimum of four (4). If NGROUPS :math:`\le` 3, then
-this algorithm is deselected, defaulting to the above described OLS algorithm
-and the normal jump detection pipeline step is run.
+with the step argument ``--ramp_fitting.algorithm=LIKELY``. If this algorithm
+is selected, the normal pipeline jump detection algorithm is skipped because
+this algorithm has its own jump detection algorithm. The jump detection for
+this algorithm requires NGROUPS to be a minimum of four (4). If NGROUPS
+:math:`\le` 3, then this algorithm is deselected, defaulting to the above
+described OLS algorithm and the normal jump detection pipeline step is run.
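
To make the selection rule concrete, the fallback behavior described above can be sketched as follows; ``choose_algorithm`` and its arguments are hypothetical names used only for illustration, not the step's actual code::

    def choose_algorithm(requested, ngroups):
        # Sketch of the rule above: the LIKELY fit needs at least four
        # groups because it performs its own jump detection.
        if requested == "LIKELY" and ngroups >= 4:
            return "LIKELY", False   # use likelihood fit, skip normal jump step
        return "OLS", True           # default OLS fit, run normal jump step

    algorithm, run_jump_step = choose_algorithm("LIKELY", ngroups=3)
    # -> ("OLS", True): with three or fewer groups the likelihood algorithm
    #    is deselected and the usual OLS + jump detection path is taken.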

Each pixel is independently processed, but rather than operate on each
group/resultant directly, the likelihood algorithm is based on differences of
@@ -345,10 +346,10 @@ Differentiating, setting to zero, then solving for :math:`a` results in
The covariance matrix :math:`C` is a tridiagonal matrix, due to the nature of the
differences. Because the covariance matrix is tridiagonal, the computational
-complexity from :math:`O(n^3)` to :math:`O(n)`. To see the detailed derivation
+complexity reduces from :math:`O(n^3)` to :math:`O(n)`. To see the detailed derivation
and computations implemented, refer to
`Brandt (2024) <https://iopscience.iop.org/article/10.1088/1538-3873/ad38d9>`_ and
-`Brandt (2024) <https://iopscience.iop.org/article/10.1088/1538-3873/ad38da>`_.
+`Brandt (2024) <https://iopscience.iop.org/article/10.1088/1538-3873/ad38da>`__.
The Poisson and read noise computations are based on equations (27) and (28), defining
:math:`\alpha_i`, the diagonal of :math:`C`, and :math:`\beta_i`, the off diagonal.
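
To illustrate why the tridiagonal structure of :math:`C` brings the cost down to :math:`O(n)`, the weighted fit can be written with a banded solver. This is a sketch under the definitions above (:math:`\alpha_i` on the diagonal, :math:`\beta_i` off the diagonal) with hypothetical function names, not the code used by the step::

    import numpy as np
    from scipy.linalg import solve_banded

    def solve_tridiagonal(alpha, beta, rhs):
        # Solve C x = rhs for a symmetric tridiagonal C with main diagonal
        # ``alpha`` (length n) and off-diagonals ``beta`` (length n - 1).
        # The banded solve costs O(n) instead of O(n^3) for a dense solve.
        n = alpha.size
        ab = np.zeros((3, n))
        ab[0, 1:] = beta        # upper off-diagonal
        ab[1, :] = alpha        # main diagonal
        ab[2, :-1] = beta       # lower off-diagonal
        return solve_banded((1, 1), ab, rhs)

    def gls_rate(d, t, alpha, beta):
        # Generalized least squares estimate of the rate a minimizing
        # (d - a t)^T C^{-1} (d - a t), where d are the resultant
        # differences and t is the model vector (ones if d is already a rate).
        cinv_d = solve_tridiagonal(alpha, beta, d)
        cinv_t = solve_tridiagonal(alpha, beta, t)
        return (t @ cinv_d) / (t @ cinv_t)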

74 changes: 38 additions & 36 deletions src/stcal/ramp_fitting/likely_algo_classes.py
@@ -43,7 +43,7 @@ def get_results(self, result, integ, row):
Parameters
----------
-result : Ramp_Result
+result : RampResult
Holds computed ramp fitting information.
integ : int
@@ -58,7 +58,7 @@ def get_results(self, result, integ, row):
self.var_rnoise[integ, row, :] = result.var_rnoise


-class Ramp_Result:
+class RampResult:
def __init__(self):
"""
Contains the ramp fitting results.
@@ -73,15 +73,15 @@ def __init__(self):
self.uncert_pedestal = None
self.covar_countrate_pedestal = None

-self.countrate_twoomit = None
-self.chisq_twoomit = None
-self.uncert_twoomit = None
+self.countrate_two_omit = None
+self.chisq_two_omit = None
+self.uncert_two_omit = None

-self.countrate_oneomit = None
-self.jumpval_oneomit = None
-self.jumpsig_oneomit = None
-self.chisq_oneomit = None
-self.uncert_oneomit = None
+self.countrate_one_omit = None
+self.jumpval_one_omit = None
+self.jumpsig_one_omit = None
+self.chisq_one_omit = None
+self.uncert_one_omit = None

def __repr__(self):
"""
@@ -96,36 +96,38 @@ def __repr__(self):
ostring += f"\nuncert_pedestal = \n{self.uncert_pedestal}"
ostring += f"\ncovar_countrate_pedestal = \n{self.covar_countrate_pedestal}\n"
-ostring += f"\ncountrate_twoomit = \n{self.countrate_twoomit}"
-ostring += f"\nchisq_twoomit = \n{self.chisq_twoomit}"
-ostring += f"\nuncert_twoomit = \n{self.uncert_twoomit}"
+ostring += f"\ncountrate_two_omit = \n{self.countrate_two_omit}"
+ostring += f"\nchisq_two_omit = \n{self.chisq_two_omit}"
+ostring += f"\nuncert_two_omit = \n{self.uncert_two_omit}"
-ostring += f"\ncountrate_oneomit = \n{self.countrate_oneomit}"
-ostring += f"\njumpval_oneomit = \n{self.jumpval_oneomit}"
-ostring += f"\njumpsig_oneomit = \n{self.jumpsig_oneomit}"
-ostring += f"\nchisq_oneomit = \n{self.chisq_oneomit}"
-ostring += f"\nuncert_oneomit = \n{self.uncert_oneomit}"
+ostring += f"\ncountrate_one_omit = \n{self.countrate_one_omit}"
+ostring += f"\njumpval_one_omit = \n{self.jumpval_one_omit}"
+ostring += f"\njumpsig_one_omit = \n{self.jumpsig_one_omit}"
+ostring += f"\nchisq_one_omit = \n{self.chisq_one_omit}"
+ostring += f"\nuncert_one_omit = \n{self.uncert_one_omit}"
'''

return ostring

def fill_masked_reads(self, diffs2use):
"""
Mask groups to use for ramp fitting.
Replace countrates, uncertainties, and chi squared values that
are NaN because resultant differences were doubly omitted.
For these cases, revert to the corresponding values with
fewer omitted resultant differences to get the correct values
-without double-coundint omissions.
+without double-counting omissions.
This function replaces the relevant entries of
-self.countrate_twoomit, self.chisq_twoomit,
-self.uncert_twoomit, self.countrate_oneomit, and
-self.chisq_oneomit in place. It does not return a value.
+self.countrate_two_omit, self.chisq_two_omit,
+self.uncert_two_omit, self.countrate_one_omit, and
+self.chisq_one_omit in place. It does not return a value.
Parameters
----------
diffs2use : ndarray
-A 2D array matching self.countrate_oneomit in shape with zero
+A 2D array matching self.countrate_one_omit in shape with zero
for resultant differences that were masked and one for
differences that were not masked.
"""
@@ -134,21 +136,21 @@ def fill_masked_reads(self, diffs2use):
omit = diffs2use == 0
ones = np.ones(diffs2use.shape)

-self.countrate_oneomit[omit] = (self.countrate * ones)[omit]
-self.chisq_oneomit[omit] = (self.chisq * ones)[omit]
-self.uncert_oneomit[omit] = (self.uncert * ones)[omit]
+self.countrate_one_omit[omit] = (self.countrate * ones)[omit]
+self.chisq_one_omit[omit] = (self.chisq * ones)[omit]
+self.uncert_one_omit[omit] = (self.uncert * ones)[omit]

omit = diffs2use[1:] == 0

-self.countrate_twoomit[omit] = (self.countrate_oneomit[:-1])[omit]
-self.chisq_twoomit[omit] = (self.chisq_oneomit[:-1])[omit]
-self.uncert_twoomit[omit] = (self.uncert_oneomit[:-1])[omit]
+self.countrate_two_omit[omit] = (self.countrate_one_omit[:-1])[omit]
+self.chisq_two_omit[omit] = (self.chisq_one_omit[:-1])[omit]
+self.uncert_two_omit[omit] = (self.uncert_one_omit[:-1])[omit]

omit = diffs2use[:-1] == 0

-self.countrate_twoomit[omit] = (self.countrate_oneomit[1:])[omit]
-self.chisq_twoomit[omit] = (self.chisq_oneomit[1:])[omit]
-self.uncert_twoomit[omit] = (self.uncert_oneomit[1:])[omit]
+self.countrate_two_omit[omit] = (self.countrate_one_omit[1:])[omit]
+self.chisq_two_omit[omit] = (self.chisq_one_omit[1:])[omit]
+self.uncert_two_omit[omit] = (self.uncert_one_omit[1:])[omit]
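
For reference, a small usage sketch of fill_masked_reads with invented shapes and values (four resultant differences, two pixels); only the class and attribute names come from this module, everything else is hypothetical:

    import numpy as np
    from stcal.ramp_fitting.likely_algo_classes import RampResult

    res = RampResult()
    res.countrate = np.array([1.0, 2.0])     # full-ramp rate per pixel
    res.chisq = np.array([0.5, 0.7])
    res.uncert = np.array([0.1, 0.2])

    # One- and two-omit arrays as a fitter might leave them.
    res.countrate_one_omit = np.full((4, 2), 1.5)
    res.chisq_one_omit = np.full((4, 2), 0.6)
    res.uncert_one_omit = np.full((4, 2), 0.15)
    res.countrate_two_omit = np.full((3, 2), 1.4)
    res.chisq_two_omit = np.full((3, 2), 0.65)
    res.uncert_two_omit = np.full((3, 2), 0.16)

    diffs2use = np.ones((4, 2))
    diffs2use[2, 0] = 0.0                    # difference 2 of pixel 0 is masked

    res.fill_masked_reads(diffs2use)
    # countrate_one_omit[2, 0] is reset to the full-ramp value, and the
    # two-omit entries adjacent to the masked difference are overwritten
    # with the neighboring one-omit values, as described in the docstring.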


class Covar:
@@ -245,7 +247,7 @@ def _compute_means_and_taus(self, readtimes, pedestal):

def _compute_alphas_and_betas(self, mean_t, tau, N, delta_t):
"""
-Computes the means and taus of defined in EQNs 28 and 29 in paper 1.
+Computes the means and taus defined in EQNs 28 and 29 in paper 1.
Parameters
----------
@@ -269,7 +271,7 @@ def _compute_alphas_and_betas(self, mean_t, tau, N, delta_t):

def _compute_pedestal(self, mean_t, tau, N, delta_t):
"""
-Computes the means and taus of defined in EQNs 28 and 29 in paper 1.
+Computes the means and taus defined in EQNs 28 and 29 in paper 1.
Parameters
----------
@@ -303,7 +305,7 @@ def _compute_pedestal(self, mean_t, tau, N, delta_t):
def calc_bias(self, countrates, sig, cvec, da=1e-7):
"""
Calculate the bias in the best-fit count rate from estimating the
-covariance matrix. This calculation is derived in the paper.
+covariance matrix.
Section 5 of paper 1.
@@ -314,7 +316,7 @@ def calc_bias(self, countrates, sig, cvec, da=1e-7):
Array of count rates at which the bias is desired.
sig : float
-Single read noise]
+Single read noise.
cvec : ndarray
Weight vector on resultant differences for initial estimation
