8 changes: 2 additions & 6 deletions docs/conf.py
@@ -65,8 +65,6 @@ def setup(app):
with open(Path(__file__).parent.parent / "pyproject.toml", "rb") as metadata_file:
metadata = tomllib.load(metadata_file)['project']

on_rtd = os.environ.get('READTHEDOCS', None) == 'True'

# Configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {
'python': ('https://docs.python.org/3/', None),
@@ -78,6 +76,7 @@ def setup(app):
'gwcs': ('https://gwcs.readthedocs.io/en/stable/', None),
'stdatamodels': ('https://stdatamodels.readthedocs.io/en/latest/', None),
'stcal': ('https://stcal.readthedocs.io/en/latest/', None),
'stpipe': ('https://stpipe.readthedocs.io/en/latest/', None),
'drizzle': ('https://drizzlepac.readthedocs.io/en/latest/', None),
'tweakwcs': ('https://tweakwcs.readthedocs.io/en/latest/', None),
}
@@ -104,12 +103,9 @@ def setup(app):
'sphinx_automodapi.automodsumm',
'sphinx_automodapi.autodoc_enhancements',
'sphinx_automodapi.smart_resolver',
'sphinx.ext.imgmath',
'sphinx.ext.mathjax',
]

if on_rtd:
extensions.append('sphinx.ext.mathjax')

# Add any paths that contain templates here, relative to this directory.
# templates_path = ['_templates']

118 changes: 56 additions & 62 deletions docs/jwst/ami_analyze/description.rst
@@ -1,7 +1,7 @@
Description
-----------

:Class: `jwst.ami.AmiAnalyzeStep`
:Class: `jwst.ami.ami_analyze_step.AmiAnalyzeStep`
:Alias: ami_analyze

The ``ami_analyze`` step is one of the AMI-specific steps in the ``ami``
@@ -20,44 +20,34 @@ the SUB80 subarray, in order to reduce execution time.

Arguments
---------
The ``ami_analyze`` step has several optional arguments. In most cases the
The ``ami_analyze`` step has several optional arguments. In most cases the
default arguments will be suitable but more advanced users may wish to test
other options:

:--oversample: The oversampling factor to be used in the model fit (default=3).

:--rotation: Initial guess for the rotation of the PSF in the input image, in
units of degrees (default=0.0).

:--psf_offset: List of PSF offset values to use when creating the model array
(default='0.0 0.0').

:--rotation_search: List of start, stop, and step values that define the list of
rotation search values. The default setting of '-3 3 1'
results in search values of [-3, -2, -1, 0, 1, 2, 3].

:--bandpass: ASDF file containing suitable array to override filter/source
(default=None)

:--usebp: If True, exclude pixels marked DO_NOT_USE from fringe fitting
(default=True)

:--firstfew: If not None, process only the first few integrations (default=None)

:--chooseholes: If not None, fit only certain fringes e.g. ['B4','B5','B6','C2']
(default=None)

:--affine2d: ASDF file containing user-defined affine parameters (default='commissioning')

:--run_bpfix: Run Fourier bad pixel fix on cropped data (default=True)


Note that the `affine2d` default argument is a special case; 'commissioning' is currently the only string other than an ASDF filename that is accepted. If `None` is passed, it will perform a rotation search (least-squares fit to a PSF model) and use that for the affine transform.
* ``--oversample``: The oversampling factor to be used in the model fit (default=3).
* ``--rotation``: Initial guess for the rotation of the PSF in the input image, in
units of degrees (default=0.0).
* ``--psf_offset``: List of PSF offset values to use when creating the model array
(default='0.0 0.0').
* ``--rotation_search``: List of start, stop, and step values that define the list of
rotation search values. The default setting of '-3 3 1'
results in search values of [-3, -2, -1, 0, 1, 2, 3].
* ``--bandpass``: ASDF file containing suitable array to override filter/source
(default=None)
* ``--usebp``: If True, exclude pixels marked DO_NOT_USE from fringe fitting
(default=True)
* ``--firstfew``: If not None, process only the first few integrations (default=None)
* ``--chooseholes``: If not None, fit only certain fringes e.g. ['B4','B5','B6','C2']
(default=None)
* ``--affine2d``: ASDF file containing user-defined affine parameters (default='commissioning')
* ``--run_bpfix``: Run Fourier bad pixel fix on cropped data (default=True)

Note that the ``affine2d`` default argument is a special case; 'commissioning' is currently the only string other than an ASDF filename that is accepted. If `None` is passed, it will perform a rotation search (least-squares fit to a PSF model) and use that for the affine transform.
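As an illustration of how a ``--rotation_search`` specification string expands into the trial rotation values, here is a minimal sketch; the helper name is hypothetical (the step parses the string internally):

```python
def rotation_search_values(spec="-3 3 1"):
    """Expand a 'start stop step' string into the list of trial
    rotation angles (degrees), endpoints inclusive."""
    start, stop, step = (float(v) for v in spec.split())
    values = []
    current = start
    while current <= stop:
        values.append(current)
        current += step
    return values

# The default '-3 3 1' yields search values [-3, -2, -1, 0, 1, 2, 3]
print(rotation_search_values("-3 3 1"))
```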


Creating ASDF files
^^^^^^^^^^^^^^^^^^^
The optional arguments `bandpass` and `affine2d` must be written to `ASDF <https://asdf-standard.readthedocs.io/>`_
The optional arguments ``bandpass`` and ``affine2d`` must be written to `ASDF <https://asdf-standard.readthedocs.io/>`_
files to be used by the step. The step expects the contents to be stored with particular keys but the format is not currently
enforced by a schema; incorrect ASDF file contents will cause the step to revert back to the defaults for each argument.

@@ -78,7 +68,7 @@ Examples of how to create ASDF files containing the properly formatted information
throughput_model = datamodels.open(throughput_file)

filt_spec = utils.get_filt_spec(throughput_model)
src_spec = SourceSpectrum.from_vega()
src_spec = SourceSpectrum.from_vega()
bandpass = utils.combine_src_filt(filt_spec,
src_spec,
trim=0.01,
@@ -100,20 +90,20 @@

import asdf
tree = {
'mx': 1., # dimensionless x-magnification
'my': 1., # dimensionless y-magnification
'sx': 0., # dimensionless x shear
'sy': 0., # dimensionless y shear
'xo': 0., # x-offset in pupil space
'yo': 0., # y-offset in pupil space
'rotradccw': None
}
'mx': 1., # dimensionless x-magnification
'my': 1., # dimensionless y-magnification
'sx': 0., # dimensionless x shear
'sy': 0., # dimensionless y shear
'xo': 0., # x-offset in pupil space
'yo': 0., # y-offset in pupil space
'rotradccw': None
}

affineasdf = 'affine.asdf'

with open(affineasdf, 'wb') as fh:
af = asdf.AsdfFile(tree)
af.write_to(fh)
af = asdf.AsdfFile(tree)
af.write_to(fh)
af.close()
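Because the ASDF contents are not validated against a schema, a mistyped key silently reverts the step to its defaults. A small, hypothetical sanity check of the affine tree before writing the file (key names taken from the example above) can catch this:

```python
REQUIRED_AFFINE_KEYS = {'mx', 'my', 'sx', 'sy', 'xo', 'yo', 'rotradccw'}

def check_affine_tree(tree):
    """Return a list of problems with a candidate affine2d tree;
    an empty list means all expected keys are present."""
    problems = []
    missing = REQUIRED_AFFINE_KEYS - tree.keys()
    extra = tree.keys() - REQUIRED_AFFINE_KEYS
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    if extra:
        problems.append(f"unexpected keys: {sorted(extra)}")
    return problems

tree = {'mx': 1., 'my': 1., 'sx': 0., 'sy': 0.,
        'xo': 0., 'yo': 0., 'rotradccw': None}
assert check_affine_tree(tree) == []
```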


@@ -123,15 +113,19 @@ Inputs

3D calibrated image
^^^^^^^^^^^^^^^^^^^
:Data model: `~jwst.datamodels.DataModel`
:Data model: `~stdatamodels.DataModel`
:File suffix: _calints

The ``ami_analyze`` step takes a single calibrated image cube as input, which should be
the "_calints" product resulting from :ref:`calwebb_image2 <calwebb_image2>` processing.
Multiple exposures can be processed via use of an ASN file that is used as input
to the :ref:`calwebb_ami3 <calwebb_ami3>` pipeline. **Note:** The ``ami_analyze`` step will also
accept a 2D "_cal" product but errors will not be computed in the output.
The ``ami_analyze`` step itself does not accept an ASN as input.
to the :ref:`calwebb_ami3 <calwebb_ami3>` pipeline.

.. note::

The ``ami_analyze`` step will also
accept a 2D "_cal" product but errors will not be computed in the output.
The ``ami_analyze`` step itself does not accept an ASN as input.

Outputs
-------
@@ -150,35 +144,35 @@ Interferometric observables
The interferometric observables are saved as OIFITS files, a registered FITS format
for optical interferometry, containing the following list of extensions:

1) ``OI_ARRAY``: AMI subaperture information
2) ``OI_TARGET``: target properties
3) ``OI_T3``: extracted closure amplitudes, triple-product phases
4) ``OI_VIS``: extracted visibility (fringe) amplitudes, phases
5) ``OI_VIS2``: squared visibility (fringe) amplitudes
6) ``OI_WAVELENGTH``: filter information
1. ``OI_ARRAY``: AMI subaperture information
2. ``OI_TARGET``: target properties
3. ``OI_T3``: extracted closure amplitudes, triple-product phases
4. ``OI_VIS``: extracted visibility (fringe) amplitudes, phases
5. ``OI_VIS2``: squared visibility (fringe) amplitudes
6. ``OI_WAVELENGTH``: filter information

For more information on the format and contents of OIFITS files, see the `OIFITS2 standard <https://doi.org/10.1051/0004-6361/201526405>`_.

The _ami-oi.fits file contains tables of observables averaged over all integrations of the input file. The error is taken to be the standard error of the mean, where the variance is the covariance between amplitudes and phases (e.g. fringe amplitudes and fringe phases, closure phases and triple-product amplitudes).
The _amimulti-oi.fits file contains observables for each integration, and does not contain error estimates. The
structure is the same as the _ami-oi.fits file, but the following data columns are 2D, with the second dimension being
structure is the same as the _ami-oi.fits file, but the following data columns are 2D, with the second dimension being
the number of integrations: "PISTONS", "PIST_ERR", "VISAMP", "VISAMPERR", "VISPHI", "VISPHIERR", "VIS2DATA", "VIS2ERR", "T3AMP", "T3AMPERR", "T3PHI", "T3PHIERR".
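A simplified sketch of the standard error of the mean across the integration axis, as described above (the function name and shapes are illustrative; the pipeline's actual errors also fold in the amplitude/phase covariance):

```python
import numpy as np

def sem_over_integrations(per_int_values):
    """Standard error of the mean along the integration axis.

    per_int_values: array of shape (n_integrations, n_quantities),
    e.g. a per-integration VISAMP column of an _amimulti-oi.fits table.
    """
    a = np.asarray(per_int_values, dtype=float)
    n = a.shape[0]
    # sample standard deviation over integrations, divided by sqrt(N)
    return a.std(axis=0, ddof=1) / np.sqrt(n)
```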

LG model parameters
^^^^^^^^^^^^^^^^^^^
:Data model: `~jwst.datamodels.AmiLgFitModel`
:File suffix: _amilg.fits

The _amilg.fits output file contains the cropped and cleaned data, model, and residuals (data - model) as well as
The _amilg.fits output file contains the cropped and cleaned data, model, and residuals (data - model) as well as
the parameters of the best-fit LG model. It contains the following extensions:

1) ``CTRD``: a 3D image of the centered, cropped data
2) ``N_CTRD``: a 3D image CTRD normalized by data peak
3) ``FIT``: a 3D image of the best-fit model
4) ``N_FIT``: a 3D image of FIT normalized by data peak
5) ``RESID``: a 3D image of the fit residuals
6) ``N_RESID``: a 3D image of RESID normalized by data peak
7) ``SOLNS``: table of fringe coefficients
1. ``CTRD``: a 3D image of the centered, cropped data
2. ``N_CTRD``: a 3D image CTRD normalized by data peak
3. ``FIT``: a 3D image of the best-fit model
4. ``N_FIT``: a 3D image of FIT normalized by data peak
5. ``RESID``: a 3D image of the fit residuals
6. ``N_RESID``: a 3D image of RESID normalized by data peak
7. ``SOLNS``: table of fringe coefficients

Reference Files
---------------
2 changes: 1 addition & 1 deletion docs/jwst/badpix_selfcal/description.rst
@@ -19,7 +19,7 @@ in the :ref:`calwebb_spec2 <calwebb_spec2>` pipeline.
Input details
-------------
The input data must be in the form of a `~jwst.datamodels.IFUImageModel` or
a `~jwst.datamodels.ModelContainer` containing exactly one
a `~jwst.datamodels.container.ModelContainer` containing exactly one
science exposure and any number of additional exposures.
A FITS or association file
that can be read into one of these data models is also acceptable.
2 changes: 1 addition & 1 deletion docs/jwst/outlier_detection/outlier_detection_ifu.rst
@@ -5,7 +5,7 @@ Integral Field Unit (IFU) Data

This module serves as the interface for applying ``outlier_detection`` to IFU
observations, like those taken with NIRSpec and MIRI. A :ref:`Stage 3 association <asn-level3-techspecs>`,
which is loaded into a :py:class:`~jwst.datamodels.ModelContainer` object,
which is loaded into a :py:class:`~jwst.datamodels.container.ModelContainer` object,
serves as the basic format for all processing performed by this step.

After launch it was discovered that the bad pixels on the MIRI detectors vary with time.
6 changes: 3 additions & 3 deletions docs/jwst/outlier_detection/outlier_detection_imaging.rst
@@ -6,7 +6,7 @@ Imaging Data
This module serves as the interface for applying ``outlier_detection`` to direct
image observations, like those taken with MIRI, NIRCam, and NIRISS.
A :ref:`Stage 3 association <asn-level3-techspecs>`,
which is loaded into a :py:class:`~jwst.datamodels.ModelLibrary` object,
which is loaded into a :py:class:`~jwst.datamodels.library.ModelLibrary` object,
serves as the basic format for all processing performed by this step.
This routine performs the following operations:

@@ -107,10 +107,10 @@ Control over this memory model happens
with the use of the ``in_memory`` parameter, which defaults to True.
The full impact of setting this parameter to `False` includes:

#. The input :py:class:`~jwst.datamodels.ModelLibrary` object is loaded with `on_disk=True`.
#. The input :py:class:`~jwst.datamodels.library.ModelLibrary` object is loaded with `on_disk=True`.
This ensures that input models are loaded into memory one at a time,
and saved to a temporary file when not in use; these read-write operations are handled internally by
the :py:class:`~jwst.datamodels.ModelLibrary` object.
the :py:class:`~jwst.datamodels.library.ModelLibrary` object.

#. Computing the median image works by writing the resampled data frames to appendable files
on disk that are split into sections spatially but contain the entire ``groups``
6 changes: 3 additions & 3 deletions docs/jwst/outlier_detection/outlier_detection_spec.rst
@@ -8,9 +8,9 @@ spectroscopic observations. The algorithm is very similar to the
:ref:`imaging algorithm <outlier-detection-imaging>`, and much of the same code is used.
Please refer to those docs for more information.
A :ref:`Stage 3 association <asn-level3-techspecs>`,
which is loaded into a :py:class:`~jwst.datamodels.ModelContainer` object,
serves as the input and output to this step, and the :py:class:`~jwst.datamodels.ModelContainer`
is converted into a :py:class:`~jwst.datamodels.ModelLibrary` object to allow sharing code
which is loaded into a :py:class:`~jwst.datamodels.container.ModelContainer` object,
serves as the input and output to this step, and the :py:class:`~jwst.datamodels.container.ModelContainer`
is converted into a :py:class:`~jwst.datamodels.library.ModelLibrary` object to allow sharing code
with the imaging mode.

This routine performs identical operations to the imaging mode, with the following exceptions:
2 changes: 1 addition & 1 deletion docs/jwst/pipeline/calwebb_ami3.rst
@@ -40,7 +40,7 @@ Inputs
3D calibrated images
^^^^^^^^^^^^^^^^^^^^

:Data model: `~jwst.datamodels.DataModel`
:Data model: `~stdatamodels.DataModel`
:File suffix: _calints

The inputs to ``calwebb_ami3`` need to be in the form of an ASN file that lists
12 changes: 6 additions & 6 deletions docs/jwst/skymatch/description.rst
@@ -1,7 +1,7 @@
Description
===========

:Class: `jwst.skymatch.SkymatchStep`
:Class: `jwst.skymatch.SkyMatchStep`
:Alias: skymatch

Overview
@@ -95,10 +95,10 @@ step.

Identification of images that belong to the same "exposure" and therefore
can be grouped together is based on several attributes described in
`jwst.datamodels.ModelContainer`. This grouping is performed automatically
`jwst.datamodels.container.ModelContainer`. This grouping is performed automatically
in the ``skymatch`` step using the
`jwst.datamodels.ModelContainer.models_grouped` property or
:py:meth:`jwst.datamodels.ModelLibrary.group_indices`.
:attr:`~jwst.datamodels.container.ModelContainer.models_grouped` or
:attr:`~stpipe.library.AbstractModelLibrary.group_indices` attribute.

However, when background across different detectors in a single "exposure"
(or "group") is dominated by unpredictable background components, we no longer
@@ -107,7 +107,7 @@ it may be desirable to match image backgrounds independently. This can be
achieved either by setting the ``image_model.meta.group_id`` attribute to a
unique string or integer value for each image, or by adding the ``group_id``
attribute to the ``members`` of the input ASN table - see
`~jwst.datamodels.ModelContainer` for more details.
`~jwst.datamodels.container.ModelContainer` for more details.

.. note::
Group ID (``group_id``) is used by both ``tweakreg`` and ``skymatch`` steps
@@ -179,7 +179,7 @@ instead provides additive corrections that can be used to equalize the signal
between overlapping images.

User-Supplied Sky Values
-------------------------
------------------------
The ``skymatch`` step can also accept user-supplied sky values for each image.
This is useful when sky values have been determined based on a custom workflow
outside the pipeline. To use this feature, the user must provide a list of sky
2 changes: 1 addition & 1 deletion docs/jwst/stack_refs/description.rst
@@ -32,7 +32,7 @@ multiple integrations within each exposure.
It is assumed that the ``stack_refs`` step will be called from the
:ref:`calwebb_coron3 <calwebb_coron3>` pipeline, which is given an ASN file as input,
specifying one or more PSF target exposures.
The actual input passed to the ``stack_refs`` step will be a `~jwst.datamodels.ModelContainer`
The actual input passed to the ``stack_refs`` step will be a `~jwst.datamodels.container.ModelContainer`
created by the :ref:`calwebb_coron3 <calwebb_coron3>` pipeline, containing a
`~jwst.datamodels.CubeModel` data model for each PSF "_calints" exposure listed in the
ASN file. See :ref:`calwebb_coron3 <calwebb_coron3>` for more details on the contents of