Commit dfd1fc7

Authored by penaguerrero, drlaw1558, melanieclarke, tapastro
JP-3622 Update refpix step for NIR detectors to use convolution kernel (#8726)
Co-authored-by: David Law <[email protected]>
Co-authored-by: Melanie Clarke <[email protected]>
Co-authored-by: Tyler Pauly <[email protected]>
1 parent 2968d3f commit dfd1fc7

File tree

9 files changed: +493 -66 lines


changes/8726.refpix.rst

Lines changed: 1 addition & 0 deletions

@@ -0,0 +1 @@
Implemented the SIRS algorithm as an alternative to the running median for side pixels of NIR full-frame data. The running median is still the default.

docs/jwst/refpix/arguments.rst

Lines changed: 22 additions & 0 deletions

@@ -52,3 +52,25 @@ in IRS2 mode will be processed along with the normal pixels and preserved
in the output. This option is intended for calibration or diagnostic reductions
only. For normal science operation, this argument should always be False,
so that interleaved pixels are stripped before continuing processing.

* ``--refpix_algorithm``

  The ``refpix_algorithm`` argument is only relevant for NIR full-frame
  data, and can be set to 'median' (default) to use the running median or
  'sirs' to use the Simple Improved Reference Subtraction (SIRS) algorithm.

* ``--sigreject``

  The ``sigreject`` argument is the number of sigmas to reject as outliers
  in the SIRS algorithm. The value is expected to be a float.

* ``--gaussmooth``

  The ``gaussmooth`` argument is the width of the Gaussian smoothing kernel
  to use as a low-pass filter. The value is expected to be a float.

* ``--halfwidth``

  The ``halfwidth`` argument is the half-width of the convolution kernel to
  build. The value is expected to be an integer.
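For example, to select the SIRS algorithm when running the step from the command line (the input file name and parameter values below are purely illustrative, not recommended defaults):

```shell
strun jwst.refpix.RefPixStep jw01234001001_01101_00001_nrcb1_uncal.fits \
    --refpix_algorithm=sirs \
    --sigreject=4.0 \
    --gaussmooth=1.0 \
    --halfwidth=30
```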

docs/jwst/refpix/description.rst

Lines changed: 12 additions & 0 deletions

@@ -76,6 +76,18 @@ NIR Detector Data
(set by the step parameter ``side_gain``, which defaults to 1.0) is
subtracted from the full group on a row-by-row basis. Note that the ``odd_even_rows``
parameter is ignored for NIR data when the side reference pixels are processed.
If the ``--refpix_algorithm`` option is set to 'sirs', the Simple Improved
Reference Subtraction (SIRS) method is used instead of the running median.
SIRS uses the left and right side reference pixels as described in
https://doi.org/10.1117/1.JATIS.8.2.028002. This implementation uses a
mathematically equivalent formulation based on convolution kernels rather
than Fourier transforms, with each convolution kernel truncated where its
weights approach zero. There are two convolution kernels for each readout
channel, one each for the left and right reference pixels. These kernels are
the Fourier transforms of the weight coefficients (see the paper for details).
The approach implemented here makes nearly optimal use of the reference
pixels: it reduces read noise by 5-10% relative to a running median filter of
the reference pixels, and reduces 1/f "striping" noise by an even larger factor.
#. Transform the data back to the JWST focal plane, or DMS, frame.
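The stated equivalence between the convolution-kernel formulation and a Fourier-domain implementation can be checked numerically. The sketch below uses an arbitrary synthetic weight spectrum (not real SIRS coefficients) and a periodic signal:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Arbitrary smooth weight spectrum, standing in for SIRS Fourier coefficients.
freq = np.fft.rfftfreq(n)
weights = np.exp(-(freq / 0.1) ** 2)

# A random reference-pixel-like signal.
signal = rng.normal(size=n)

# Apply the weights in the Fourier domain...
filtered_fourier = np.fft.irfft(np.fft.rfft(signal) * weights, n)

# ...and compare with direct circular convolution by the kernel
# irfft(weights); by the convolution theorem the two agree to rounding.
kernel = np.fft.irfft(weights, n)
filtered_conv = np.array(
    [np.sum(signal * kernel[(k - np.arange(n)) % n]) for k in range(n)]
)

print(np.allclose(filtered_fourier, filtered_conv))  # True
```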

MIR Detector Data

jwst/refpix/optimized_convolution.py

Lines changed: 180 additions & 0 deletions
@@ -0,0 +1,180 @@
#
# Module for using the Simple Improved Reference Subtraction (SIRS) algorithm
# to reduce 1/f noise; used only for full-frame, non-IRS2 NIR data.
#

import logging

import numpy as np

log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)


def make_kernels(sirs_kernel_model, detector, gaussmooth, halfwidth):
    """
    Make convolution kernels from Fourier coefficients in the reference file.

    Parameters
    ----------
    sirs_kernel_model : `~jwst.datamodels.SIRSKernelModel`
        Data model containing the Fourier coefficients from the reference
        files for Simple Improved Reference Subtraction (SIRS).

    detector : str
        Name of the detector of the input data.

    gaussmooth : float
        Width of the Gaussian smoothing kernel to use as a low-pass filter
        on the reference file's coefficients.

    halfwidth : int
        Half-width of the convolution kernel to build from the reference
        file's coefficients.

    Returns
    -------
    kernels : list or None
        List of kernels appropriate for convolution with the left and right
        reference pixels, or None if no coefficients are found for this
        detector.
    """
    gamma, zeta = get_conv_kernel_coeffs(sirs_kernel_model, detector)
    if gamma is None or zeta is None:
        log.info(f'Optimized convolution kernel coefficients NOT found for detector {detector}')
        return None

    kernels_left = []
    kernels_right = []
    for chan in range(gamma.shape[0]):
        # Build the kernels as the (shifted) inverse Fourier transforms of
        # the coefficients, truncated to 2 * halfwidth + 1 samples.
        n = len(gamma[chan]) - 1
        kernel_left = np.fft.fftshift(np.fft.irfft(gamma[chan]))[n - halfwidth:n + halfwidth + 1]
        kernel_right = np.fft.fftshift(np.fft.irfft(zeta[chan]))[n - halfwidth:n + halfwidth + 1]

        # Smooth the truncated kernels with a normalized Gaussian window.
        x = np.arange(-halfwidth, halfwidth + 1)
        window = np.exp(-x ** 2 / (2 * gaussmooth ** 2))
        window /= np.sum(window)

        kernel_right = np.convolve(kernel_right, window, mode='same')
        kernel_left = np.convolve(kernel_left, window, mode='same')

        kernels_right.append(kernel_right)
        kernels_left.append(kernel_left)

    return [kernels_left, kernels_right]

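As a sanity check, the kernel construction in ``make_kernels`` can be exercised with synthetic Fourier coefficients; the flat spectrum below is a made-up stand-in for real reference-file values:

```python
import numpy as np

halfwidth = 8
gaussmooth = 1.0

# Made-up Fourier coefficients for one channel: a flat rfft spectrum,
# whose inverse real FFT is a unit impulse at index 0 (length-64 signal).
gamma_chan = np.ones(33)
n = len(gamma_chan) - 1  # 32

# Same construction as make_kernels: inverse rfft, center the impulse
# with fftshift, keep the central 2 * halfwidth + 1 samples.
kernel = np.fft.fftshift(np.fft.irfft(gamma_chan))[n - halfwidth:n + halfwidth + 1]

# Low-pass filter with a normalized Gaussian window.
x = np.arange(-halfwidth, halfwidth + 1)
window = np.exp(-x ** 2 / (2 * gaussmooth ** 2))
window /= np.sum(window)
kernel = np.convolve(kernel, window, mode='same')

# For an impulse input, the smoothed kernel is exactly the window itself.
print(kernel.shape)  # (17,)
```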

def get_conv_kernel_coeffs(sirs_kernel_model, detector):
    """
    Get the convolution kernel coefficients from the reference file.

    Parameters
    ----------
    sirs_kernel_model : `~jwst.datamodels.SIRSKernelModel`
        Data model containing the Fourier coefficients from the reference
        files for Simple Improved Reference Subtraction (SIRS).

    detector : str
        Name of the detector of the input data.

    Returns
    -------
    gamma : numpy array or None
        Fourier coefficients for the left reference pixels.

    zeta : numpy array or None
        Fourier coefficients for the right reference pixels.
    """
    mdl_dict = sirs_kernel_model.to_flat_dict()
    gamma, zeta = None, None
    for item in mdl_dict:
        det = item.split(sep='.')[0]
        if detector.lower() == det.lower():
            arr_name = item.split(sep='.')[1]
            if arr_name == 'gamma':
                gamma = np.array(mdl_dict[item])
            elif arr_name == 'zeta':
                zeta = np.array(mdl_dict[item])
        if gamma is not None and zeta is not None:
            break
    return gamma, zeta

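The lookup logic in ``get_conv_kernel_coeffs`` can be sketched against a plain dict that mimics the flattened ``to_flat_dict()`` key layout; the detector names and coefficient values below are hypothetical:

```python
import numpy as np

# A plain dict standing in for SIRSKernelModel.to_flat_dict(): keys are
# '<detector>.<array>' (values here are invented for the demo).
flat = {
    'nrcb1.gamma': [[1.0, 0.5, 0.25]],
    'nrcb1.zeta': [[0.9, 0.4, 0.2]],
    'nrs1.gamma': [[0.0, 0.0, 0.0]],
}

def lookup(flat_dict, detector):
    # Same matching logic as get_conv_kernel_coeffs: case-insensitive
    # detector match on the first dotted key component.
    gamma, zeta = None, None
    for key in flat_dict:
        det, arr_name = key.split(sep='.')[:2]
        if detector.lower() == det.lower():
            if arr_name == 'gamma':
                gamma = np.array(flat_dict[key])
            elif arr_name == 'zeta':
                zeta = np.array(flat_dict[key])
        if gamma is not None and zeta is not None:
            break
    return gamma, zeta

gamma, zeta = lookup(flat, 'NRCB1')
print(gamma.shape)  # (1, 3)
```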

def apply_conv_kernel(data, kernels, sigreject=4.0):
    """
    Apply the convolution kernels to the reference pixels.

    Parameters
    ----------
    data : 2-D numpy array
        Data to be corrected.

    kernels : list
        List containing the left and right kernels for each readout channel.

    sigreject : float
        Number of sigmas to reject as outliers.

    Returns
    -------
    data : 2-D numpy array
        Corrected data.
    """
    data = data.astype(float)
    npix = data.shape[-1]

    kernels_l, kernels_r = kernels
    nchan = len(kernels_l)

    L = data[:, :4]
    R = data[:, -4:]

    # Find the approximate standard deviations of the reference pixels
    # using an outlier-robust median approach. Mask pixels that differ
    # by more than sigreject sigma from this level.
    # NOTE: The Median Absolute Deviation (MAD) is the median of the
    # absolute differences between the data values and their median,
    # MAD = np.median(np.abs(x - np.median(x))). For a normal
    # distribution, 1.48 * MAD equals the standard deviation, but MAD
    # is a more robust estimate of the dispersion of the data values.
    # Reference: https://www.interstellarmedium.org/numerical_tools/mad/
    MAD = 1.48
    medL = np.median(L)
    sigL = MAD * np.median(np.abs(L - medL))
    medR = np.median(R)
    sigR = MAD * np.median(np.abs(R - medR))

    # nL and nR are the numbers of good reference pixels in the left and
    # right channels in each row. These will be used in lieu of replacing
    # the values of those pixels directly.
    goodL = 1 * (np.abs(L - medL) <= sigreject * sigL)
    nL = np.sum(goodL, axis=1)
    goodR = 1 * (np.abs(R - medR) <= sigreject * sigR)
    nR = np.sum(goodR, axis=1)

    # Average of the left and right channels, replacing masked pixels with
    # zeros. Appropriate normalization factors are computed later.
    L = np.sum(L * goodL, axis=1) / 4
    R = np.sum(R * goodR, axis=1) / 4
    for chan in range(nchan):
        kernel_l = kernels_l[chan]
        kernel_r = kernels_r[chan]

        # Compute normalizations so that we don't have to directly
        # replace the values of flagged/masked reference pixels.
        normL = np.convolve(np.ones(nL.shape), kernel_l, mode='same')
        normL /= np.convolve(nL / 4, kernel_l, mode='same')
        normR = np.convolve(np.ones(nR.shape), kernel_r, mode='same')
        normR /= np.convolve(nR / 4, kernel_r, mode='same')

        # Build the correction template from the convolved reference pixels.
        template = np.convolve(L, kernel_l, mode='same') * normL
        template += np.convolve(R, kernel_r, mode='same') * normR
        # Each readout channel spans npix // nchan columns of the array.
        data[:, chan * npix // nchan:(chan + 1) * npix // nchan] -= template[:, np.newaxis]

    log.debug('Optimized convolution kernel applied')
    return data
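The MAD-based outlier rejection used by ``apply_conv_kernel`` can be illustrated on its own with synthetic reference pixels; the noise level and the single outlier below are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(42)
sigreject = 4.0

# Synthetic left reference pixels: 32 rows x 4 columns of Gaussian noise
# around a level of 100 DN, plus one strong cosmic-ray-like outlier.
L = 100.0 + rng.normal(0.0, 2.0, size=(32, 4))
L[10, 2] = 500.0

# Same robust estimate as apply_conv_kernel: MAD scaled by 1.48
# approximates the standard deviation for Gaussian noise, but is
# barely affected by the outlier.
medL = np.median(L)
sigL = 1.48 * np.median(np.abs(L - medL))

# Mask pixels more than sigreject sigma from the median (1 = good, 0 = bad).
goodL = 1 * (np.abs(L - medL) <= sigreject * sigL)
nL = np.sum(goodL, axis=1)  # good pixels per row

print(goodL[10, 2])  # 0: the outlier is flagged
```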
