satpy package¶
Subpackages¶
- satpy.composites package
- satpy.demo package
- satpy.enhancements package
- satpy.readers package
- Submodules
- satpy.readers.aapp_l1b module
- satpy.readers.abi_base module
- satpy.readers.abi_l1b module
- satpy.readers.abi_l2_nc module
- satpy.readers.acspo module
- satpy.readers.agri_l1 module
- satpy.readers.ahi_hsd module
- satpy.readers.amsr2_l1b module
- satpy.readers.avhrr_l1b_gaclac module
- satpy.readers.caliop_l2_cloud module
- satpy.readers.clavrx module
- satpy.readers.electrol_hrit module
- satpy.readers.eps_l1b module
- satpy.readers.eum_base module
- satpy.readers.fci_l1c_fdhsi module
- satpy.readers.file_handlers module
- satpy.readers.generic_image module
- satpy.readers.geocat module
- satpy.readers.ghrsst_l3c_sst module
- satpy.readers.goes_imager_hrit module
- satpy.readers.goes_imager_nc module
- satpy.readers.grib module
- satpy.readers.hdf4_utils module
- satpy.readers.hdf5_utils module
- satpy.readers.hdfeos_base module
- satpy.readers.hrit_base module
- satpy.readers.hrit_jma module
- satpy.readers.hrpt module
- satpy.readers.hsaf_grib module
- satpy.readers.iasi_l2 module
- satpy.readers.li_l2 module
- satpy.readers.maia module
- satpy.readers.mersi2_l1b module
- satpy.readers.modis_l1b module
- satpy.readers.modis_l2 module
- satpy.readers.msi_safe module
- satpy.readers.netcdf_utils module
- satpy.readers.nucaps module
- satpy.readers.nwcsaf_nc module
- satpy.readers.olci_nc module
- satpy.readers.omps_edr module
- satpy.readers.safe_sar_l2_ocn module
- satpy.readers.sar_c_safe module
- satpy.readers.scatsat1_l2b module
- satpy.readers.scmi module
- satpy.readers.seviri_base module
- satpy.readers.seviri_l1b_hrit module
- satpy.readers.seviri_l1b_native module
- satpy.readers.seviri_l1b_native_hdr module
- satpy.readers.seviri_l1b_nc module
- satpy.readers.slstr_l1b module
- satpy.readers.tropomi_l2 module
- satpy.readers.utils module
- satpy.readers.vaisala_gld360 module
- satpy.readers.viirs_compact module
- satpy.readers.viirs_edr_active_fires module
- satpy.readers.viirs_edr_flood module
- satpy.readers.viirs_l1b module
- satpy.readers.viirs_sdr module
- satpy.readers.virr_l1b module
- satpy.readers.xmlformat module
- satpy.readers.yaml_reader module
- Module contents
- satpy.writers package
Submodules¶
satpy.config module¶
Satpy Configuration directory and file handling.
-
satpy.config.check_satpy(readers=None, writers=None, extras=None)[source]¶
Check the satpy readers and writers for correct installation.
Parameters: - readers (list or None) – Limit readers checked to those specified
- writers (list or None) – Limit writers checked to those specified
- extras (list or None) – Limit extras checked to those specified
Returns: True if all specified features were successfully loaded.
Return type: bool
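A minimal usage sketch; the reader and writer names below are illustrative, and passing None (the default) checks everything that is configured:
from satpy.config import check_satpy

# Check only a subset of readers/writers for this installation.
check_satpy(readers=['abi_l1b', 'viirs_sdr'], writers=['geotiff'])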
-
satpy.config.check_yaml_configs(configs, key)[source]¶
Get a diagnostic for the yaml configs.
key is the section to look for to get a name for the config at hand.
-
satpy.config.config_search_paths(filename, *search_dirs, **kwargs)[source]¶
Get the environment variable value every time (could be set dynamically).
-
satpy.config.get_config(filename, *search_dirs, **kwargs)[source]¶
Blend the different configs, from package defaults to the current directory (.).
-
satpy.config.get_config_path(filename, *search_dirs)[source]¶
Get the appropriate path for a filename, checking in this order: filename, the current directory (.), PPP_CONFIG_DIR, the package’s etc dir.
-
satpy.config.glob_config(pattern, *search_dirs)[source]¶
Return glob results for all possible configuration locations.
Note: This method does not check the configuration “base” directory if the pattern includes a subdirectory. This is done for performance since this is usually used to find all configs for a certain component.
satpy.dataset module¶
Dataset objects.
-
class satpy.dataset.DatasetID[source]¶
Bases: satpy.dataset.DatasetID
Identifier for all Dataset objects.
DatasetID is a namedtuple that holds identifying and classifying information about a Dataset. There are two identifying elements, name and wavelength. These can be used to generically refer to a Dataset. The other elements of a DatasetID are meant to further distinguish a Dataset from the possible variations it may have. For example, multiple Datasets may be called by one name but may exist in multiple resolutions or with different calibrations such as “radiance” and “reflectance”. If an element is None then it is considered not applicable.
A DatasetID can also be used in Satpy to query for a Dataset. This way a fully qualified DatasetID can be found even if some of the DatasetID elements are unknown. In this case a None signifies something that is unknown or not applicable to the requested Dataset.
Parameters: - name (str) – String identifier for the Dataset
- wavelength (float, tuple) – Single float wavelength when querying for a Dataset. Otherwise 3-element tuple of floats specifying the minimum, nominal, and maximum wavelength for a Dataset. None if not applicable.
- resolution (int, float) – Per data pixel/area resolution. If resolution varies across the Dataset then nadir view resolution is preferred. Usually this is in meters, but for lon/lat gridded data angle degrees may be used.
- polarization (str) – ‘V’ or ‘H’ polarizations of a microwave channel. None if not applicable.
- calibration (str) – String identifying the calibration level of the Dataset (ex. ‘radiance’, ‘reflectance’, etc). None if not applicable.
- level (int, float) – Pressure/altitude level of the dataset. This is typically in hPa, but may be in inverse meters for altitude datasets (1/meters).
- modifiers (tuple) – Tuple of strings identifying what corrections or other modifications have been performed on this Dataset (ex. ‘sunz_corrected’, ‘rayleigh_corrected’, etc). None or empty tuple if not applicable.
Create new DatasetID.
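A brief sketch of both uses; the channel name, wavelengths, and resolution below are illustrative only:
from satpy.dataset import DatasetID

# Fully specified identifier for a hypothetical 0.64 micron band at 500 m resolution.
ds_id = DatasetID(name='C02', wavelength=(0.59, 0.64, 0.69),
                  resolution=500, calibration='reflectance')

# Partially specified identifier used as a query; unspecified elements stay None.
query = DatasetID(name='C02', calibration='reflectance')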
-
class satpy.dataset.MetadataObject(**attributes)[source]¶
Bases: object
A general metadata object.
Initialize the class with attributes.
-
id¶
Return the DatasetID of the object.
-
satpy.dataset.average_datetimes(dt_list)[source]¶
Average a series of datetime objects.
Note
This function assumes all datetime objects are naive and in the same time zone (UTC).
Parameters: dt_list (iterable) – Datetime objects to average
Returns: Average datetime as a datetime object
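For example, averaging two observation times 30 seconds apart:
from datetime import datetime
from satpy.dataset import average_datetimes

times = [datetime(2019, 1, 1, 12, 0, 0), datetime(2019, 1, 1, 12, 0, 30)]
avg = average_datetimes(times)  # datetime(2019, 1, 1, 12, 0, 15)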
-
satpy.dataset.combine_metadata(*metadata_objects, **kwargs)[source]¶
Combine the metadata of two or more Datasets.
If any keys are not equal or do not exist in all provided dictionaries then they are not included in the returned dictionary. By default any keys with the word ‘time’ in them and consisting of datetime objects will be averaged. This is to handle cases where data were observed at almost the same time but not exactly.
Parameters: - *metadata_objects – MetadataObject or dict objects to combine
- average_times (bool) – Average any keys with ‘time’ in the name
Returns: the combined metadata
Return type: dict
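A small sketch with two hypothetical metadata dictionaries:
from datetime import datetime
from satpy.dataset import combine_metadata

meta1 = {'platform_name': 'NOAA-20', 'start_time': datetime(2019, 1, 1, 12, 0, 0)}
meta2 = {'platform_name': 'NOAA-20', 'start_time': datetime(2019, 1, 1, 12, 0, 30)}

# 'platform_name' is equal in both inputs and is kept;
# 'start_time' contains 'time' and holds datetimes, so it is averaged by default.
combined = combine_metadata(meta1, meta2)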
-
satpy.dataset.create_filtered_dsid(dataset_key, **dfilter)[source]¶
Create a DatasetID matching dataset_key and dfilter.
If a property is specified in both dataset_key and dfilter, the former has priority.
satpy.multiscene module¶
MultiScene object to work with multiple timesteps of satellite data.
-
class satpy.multiscene.MultiScene(scenes=None)[source]¶
Bases: object
Container for multiple Scene objects.
Initialize MultiScene and validate sub-scenes.
Parameters: scenes (iterable) – Scene objects to operate on (optional)
Note
If the scenes passed to this object are a generator then certain operations performed will try to preserve that generator state. This may limit what properties or methods are available to the user. To avoid this behavior, compute the passed generator by converting the passed scenes to a list first: MultiScene(list(scenes)).
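A minimal sketch of building a MultiScene from two individually created Scenes; the file paths and reader name are illustrative:
from glob import glob
from satpy import Scene, MultiScene

# One Scene per time step of hypothetical ABI data.
scn1 = Scene(filenames=glob('/data/abi/t1/*.nc'), reader='abi_l1b')
scn2 = Scene(filenames=glob('/data/abi/t2/*.nc'), reader='abi_l1b')
mscn = MultiScene([scn1, scn2])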
-
all_same_area¶
Determine if all contained Scenes have the same ‘area’.
-
blend(blend_function=<function stack>)[source]¶
Blend the datasets into one scene.
Note
Blending is not currently optimized for generator-based MultiScene.
-
first_scene¶
First Scene of this MultiScene object.
-
classmethod from_files(files_to_sort, reader=None, **kwargs)[source]¶
Create multiple Scene objects from multiple files.
This uses the satpy.readers.group_files() function to group files. See this function for more details on possible keyword arguments.
New in version 0.12.
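A short sketch, assuming a directory of files for a single reader (paths and reader name are illustrative):
from glob import glob
from satpy import MultiScene

# group_files() sorts the matched files into one Scene per time step.
mscn = MultiScene.from_files(glob('/data/abi/*.nc'), reader='abi_l1b')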
-
is_generator¶
Contained Scenes are stored as a generator.
-
loaded_dataset_ids¶
Union of all Dataset IDs loaded by all children.
-
save_animation(filename, datasets=None, fps=10, fill_value=None, batch_size=1, ignore_missing=False, client=True, **kwargs)[source]¶
Save series of Scenes to movie (MP4) or GIF formats.
Supported formats are dependent on the imageio library and are determined by filename extension by default.
Note
Starting with imageio 2.5.0, the use of FFMPEG depends on a separate imageio-ffmpeg package.
By default all datasets available will be saved to individual files using the first Scene’s datasets metadata to format the filename provided. If a dataset is not available from a Scene then a black array is used instead (np.zeros(shape)).
This function can use the dask.distributed library for improved performance by computing multiple frames at a time (see batch_size option below). If the distributed library is not available then frames will be generated one at a time, one product at a time.
Parameters: - filename (str) – Filename to save to. Can include python string formatting keys from dataset .attrs (ex. “{name}_{start_time:%Y%m%d_%H%M%S}.gif”)
- datasets (list) – DatasetIDs to save (default: all datasets)
- fps (int) – Frames per second for produced animation
- fill_value (int) – Value to use instead of creating an alpha band.
- batch_size (int) – Number of frames to compute at the same time. This only has effect if the dask.distributed package is installed. This will default to 1. Setting this to 0 or less will attempt to process all frames at once. This option should be used with care to avoid memory issues when trying to improve performance. Note that this is the total number of frames for all datasets, so when saving 2 datasets this will compute (batch_size / 2) frames for the first dataset and (batch_size / 2) frames for the second dataset.
- ignore_missing (bool) – Don’t include a black frame when a dataset is missing from a child scene.
- client (bool or dask.distributed.Client) – Dask distributed client to use for computation. If this is True (default) then any existing clients will be used. If this is False or None then a client will not be created and dask.distributed will not be used. If this is a dask Client object then it will be used for distributed computation.
- kwargs – Additional keyword arguments to pass to imageio.get_writer.
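A minimal sketch of producing an MP4 animation; the file paths, reader, dataset name, and filename pattern are illustrative only:
from glob import glob
from satpy import Scene, MultiScene

# Build one Scene per time step and load the same dataset in each.
scenes = [Scene(filenames=glob(p), reader='abi_l1b')
          for p in ('/data/abi/t1/*.nc', '/data/abi/t2/*.nc')]
for scn in scenes:
    scn.load(['C13'])

mscn = MultiScene(scenes)
# One MP4 per dataset; the pattern is filled from the first Scene's metadata.
mscn.save_animation('{name}_{start_time:%Y%m%d_%H%M%S}.mp4', fps=2)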
-
save_datasets(client=True, batch_size=1, **kwargs)[source]¶
Run save_datasets on each Scene.
Note that some writers may not be multi-process friendly and may produce unexpected results or fail by raising an exception. In these cases client should be set to False. This is currently a known issue for basic ‘geotiff’ writer work loads.
Parameters: - batch_size (int) – Number of scenes to compute at the same time. This only has effect if the dask.distributed package is installed. This will default to 1. Setting this to 0 or less will attempt to process all scenes at once. This option should be used with care to avoid memory issues when trying to improve performance.
- client (bool or dask.distributed.Client) – Dask distributed client to use for computation. If this is True (default) then any existing clients will be used. If this is False or None then a client will not be created and dask.distributed will not be used. If this is a dask Client object then it will be used for distributed computation.
- kwargs – Additional keyword arguments to pass to save_datasets(). Note: compute can not be provided.
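Continuing the earlier sketches, the call below writes every loaded dataset of every contained Scene to GeoTIFF without using dask.distributed; the filename pattern is illustrative:
mscn.save_datasets(writer='geotiff',
                   filename='{name}_{start_time:%Y%m%d_%H%M%S}.tif',
                   client=False)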
-
scenes¶
Get list of Scene objects contained in this MultiScene.
Note
If the Scenes contained in this object are stored in a generator (not list or tuple) then accessing this property will load/iterate through the generator, possibly consuming it.
-
shared_dataset_ids¶
Dataset IDs shared by all children.
satpy.node module¶
Nodes to build trees.
-
class satpy.node.DependencyTree(readers, compositors, modifiers, available_only=False)[source]¶
Bases: satpy.node.Node
Structure to discover and store Dataset dependencies.
Used primarily by the Scene object to organize dependency finding. Dependencies are stored using a series of Node objects, of which this class is a subclass.
Collect Dataset generating information.
Collect the objects that generate and have information about Datasets including objects that may depend on certain Datasets being generated. This includes readers, compositors, and modifiers.
Parameters: - readers (dict) – Reader name -> Reader Object
- compositors (dict) – Sensor name -> Composite ID -> Composite Object
- modifiers (dict) – Sensor name -> Modifier name -> (Modifier Class, modifier options)
- available_only (bool) – Whether only the reader’s available/loadable datasets should be used when searching for dependencies (True) or all known/configured datasets regardless of whether the necessary files were provided to the reader (False). Note that when False, loadable variations of a dataset will have priority over other known variations. Default is False.
-
copy()[source]¶
Copy this node tree.
Note all references to readers are removed. This is meant to avoid tree copies accessing readers that would return incompatible (Area) data. Theoretically it should be possible for tree copies to request compositor or modifier information as long as they don’t depend on any datasets not already existing in the dependency tree.
-
empty_node = <Node ('__EMPTY_LEAF_SENTINEL__')>¶
-
find_dependencies(dataset_keys, **dfilter)[source]¶
Create the dependency tree.
Parameters: - dataset_keys (iterable) – Strings or DatasetIDs to find dependencies for
- **dfilter (dict) – Additional filter parameters. See satpy.readers.get_key for more details.
Returns: Root node of the dependency tree and a set of unknown datasets
Return type: (Node, set)
-
class satpy.node.Node(name, data=None)[source]¶
Bases: object
A node object.
Init the node object.
-
flatten(d=None)[source]¶
Flatten tree structure to a one level dictionary.
Parameters: d (dict, optional) – output dictionary to update
Returns: Node.name -> Node. The returned dictionary includes the current Node and all its children.
Return type: dict
-
is_leaf¶
Check if the node is a leaf.
satpy.plugin_base module¶
The satpy.plugin_base module defines the plugin API.
-
class satpy.plugin_base.Plugin(ppp_config_dir=None, default_config_filename=None, config_files=None, **kwargs)[source]¶
Bases: object
Base plugin class for all dynamically loaded and configured objects.
Load configuration files related to this plugin.
This initializes a self.config dictionary that can be used to customize the subclass.
Parameters: - ppp_config_dir (str) – Base “etc” directory for all configuration files.
- default_config_filename (str) – Configuration filename to use if no other files have been specified with config_files.
- config_files (list or str) – Configuration files to load instead of those automatically found in ppp_config_dir and other default configuration locations.
- kwargs (dict) – Unused keyword arguments.
satpy.resample module¶
Satpy resampling module.
Satpy provides multiple resampling algorithms for resampling geolocated data to uniform projected grids. The easiest way to perform resampling in Satpy is through the Scene object’s resample() method. Additional utility functions are also available to assist in resampling data. Below is more information on resampling with Satpy as well as links to the relevant API documentation for available keyword arguments.
Resampling algorithms¶
Resampler | Description | Related |
---|---|---|
nearest | Nearest Neighbor | KDTreeResampler |
ewa | Elliptical Weighted Averaging | EWAResampler |
native | Native | NativeResampler |
bilinear | Bilinear | BilinearResampler |
bucket_avg | Average Bucket Resampling | BucketAvg |
bucket_sum | Sum Bucket Resampling | BucketSum |
bucket_count | Count Bucket Resampling | BucketCount |
bucket_fraction | Fraction Bucket Resampling | BucketFraction |
gradient_search | Gradient Search Resampling | GradientSearchResampler |
The resampling algorithm used can be specified with the resampler keyword argument and defaults to nearest:
>>> global_scene = Scene(...)
>>> euro_scn = global_scene.resample('euro4', resampler='nearest')
Warning
Some resampling algorithms expect certain forms of data. For example, the EWA resampling expects polar-orbiting swath data and prefers if the data can be broken in to “scan lines”. See the API documentation for a specific algorithm for more information.
Resampling for comparison and composites¶
While all the resamplers can be used to put datasets of different resolutions on to a common area, the ‘native’ resampler is designed to match datasets to one resolution in the dataset’s original projection. This is extremely useful when generating composites between bands of different resolutions.
>>> new_scn = scn.resample(resampler='native')
By default this resamples to the highest resolution area (smallest footprint per pixel) shared between the loaded datasets. You can easily specify the lower resolution area:
>>> new_scn = scn.resample(scn.min_area(), resampler='native')
Providing an area that is neither the minimum nor maximum resolution area may work, but behavior is currently undefined.
Caching for geostationary data¶
Satpy will do its best to reuse calculations performed to resample datasets, but it can only do this for the current processing and will lose this information when the process/script ends. Some resampling algorithms, like nearest and bilinear, can benefit from caching intermediate data on disk in the directory specified by cache_dir and using it next time. This is most beneficial with geostationary satellite data where the locations of the source data and the target pixels don’t change over time.
>>> new_scn = scn.resample('euro4', cache_dir='/path/to/cache_dir')
See the documentation for specific algorithms to see availability and limitations of caching for that algorithm.
Create custom area definition¶
See pyresample.geometry.AreaDefinition for information on creating areas that can be passed to the resample method:
>>> from pyresample.geometry import AreaDefinition
>>> my_area = AreaDefinition(...)
>>> local_scene = global_scene.resample(my_area)
Create dynamic area definition¶
See pyresample.geometry.DynamicAreaDefinition
for more information.
Examples coming soon…
Store area definitions¶
Area definitions can be added to a custom YAML file (see pyresample’s documentation for more information) and loaded using pyresample’s utility methods:
>>> from pyresample.utils import parse_area_file
>>> my_area = parse_area_file('my_areas.yaml', 'my_area')[0]
Examples coming soon…
-
class satpy.resample.BaseResampler(source_geo_def, target_geo_def)[source]¶
Bases: object
Base abstract resampler class.
Initialize resampler with geolocation information.
Parameters: - source_geo_def (SwathDefinition, AreaDefinition) – Geolocation definition for the data to be resampled
- target_geo_def (CoordinateDefinition, AreaDefinition) – Geolocation definition for the area to resample data to.
-
get_hash(source_geo_def=None, target_geo_def=None, **kwargs)[source]¶
Get hash for the current resample with the given kwargs.
-
precompute(**kwargs)[source]¶
Do the precomputation.
This is an optional step if the subclass wants to implement more complex features like caching or can share some calculations between multiple datasets to be processed.
-
resample(data, cache_dir=None, mask_area=None, **kwargs)[source]¶
Resample data by calling precompute and compute methods.
Only certain resampling classes may use cache_dir and the mask provided when mask_area is True. The return value of calling the precompute method is passed as the cache_id keyword argument of the compute method, but may not be used directly for caching. It is up to the individual resampler subclasses to determine how this is used.
Parameters: - data (xarray.DataArray) – Data to be resampled
- cache_dir (str) – directory to cache precomputed results (default False, optional)
- mask_area (bool) – Mask geolocation data where data values are invalid. This should be used when data values may affect what neighbors are considered valid.
Returns (xarray.DataArray): Data resampled to the target area
-
class satpy.resample.BilinearResampler(source_geo_def, target_geo_def)[source]¶
Bases: satpy.resample.BaseResampler
Resample using bilinear interpolation.
This resampler implements on-disk caching when the cache_dir argument is provided to the resample method. This should provide significant performance improvements on consecutive resampling of geostationary data.
Parameters: - cache_dir (str) – Long term storage directory for intermediate results.
- radius_of_influence (float) – Search radius cut off distance in meters
- epsilon (float) – Allowed uncertainty in meters. Increasing uncertainty reduces execution time.
- reduce_data (bool) – Reduce the input data to (roughly) match the target area.
Init BilinearResampler.
-
compute(data, fill_value=None, **kwargs)[source]¶
Resample the given data using bilinear interpolation.
-
class satpy.resample.BucketAvg(source_geo_def, target_geo_def)[source]¶
Bases: satpy.resample.BucketResamplerBase
Class for averaging bucket resampling.
Bucket resampling calculates the average of all the values that are closest to each bin and inside the target area.
Parameters: - fill_value (float (default: np.nan)) – Fill value for missing data
- mask_all_nans (boolean (default: False)) – Mask all locations with all-NaN values
Initialize bucket resampler.
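This resampler is normally selected through the Scene resample() method rather than instantiated directly; a minimal sketch, where scn is an already-loaded Scene and the area name and fill value are illustrative:
>>> new_scn = scn.resample('euro4', resampler='bucket_avg', fill_value=0)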
-
class satpy.resample.BucketCount(source_geo_def, target_geo_def)[source]¶
Bases: satpy.resample.BucketResamplerBase
Class for bucket resampling which implements hit-counting.
This resampler calculates the number of occurrences of the input data closest to each bin and inside the target area.
Initialize bucket resampler.
-
class satpy.resample.BucketFraction(source_geo_def, target_geo_def)[source]¶
Bases: satpy.resample.BucketResamplerBase
Class for bucket resampling to compute category fractions.
This resampler calculates the fraction of occurrences of the input data per category.
Initialize bucket resampler.
-
class satpy.resample.BucketResamplerBase(source_geo_def, target_geo_def)[source]¶
Bases: satpy.resample.BaseResampler
Base class for bucket resampling which implements averaging.
Initialize bucket resampler.
-
class satpy.resample.BucketSum(source_geo_def, target_geo_def)[source]¶
Bases: satpy.resample.BucketResamplerBase
Class for bucket resampling which implements accumulation (sum).
This resampler calculates the cumulative sum of all the values that are closest to each bin and inside the target area.
Parameters: - fill_value (float (default: np.nan)) – Fill value for missing data
- mask_all_nans (boolean (default: False)) – Mask all locations with all-NaN values
Initialize bucket resampler.
-
class satpy.resample.EWAResampler(source_geo_def, target_geo_def)[source]¶
Bases: satpy.resample.BaseResampler
Resample using an elliptical weighted averaging algorithm.
This algorithm does not use caching or any externally provided data mask (unlike the ‘nearest’ resampler).
This algorithm works under the assumption that the data is observed one scan line at a time. However, good results can still be achieved for non-scan based data provided rows_per_scan is set to the number of rows in the entire swath or by setting it to None.
Parameters: - rows_per_scan (int, None) – Number of data rows for every observed scanline. If None then the entire swath is treated as one large scanline.
- weight_count (int) – number of elements to create in the gaussian weight table. Default is 10000. Must be at least 2
- weight_min (float) – the minimum value to store in the last position of the weight table. Default is 0.01, which, with a weight_distance_max of 1.0 produces a weight of 0.01 at a grid cell distance of 1.0. Must be greater than 0.
- weight_distance_max (float) – distance in grid cell units at which to apply a weight of weight_min. Default is 1.0. Must be greater than 0.
- weight_delta_max (float) – maximum distance in grid cells in each grid dimension over which to distribute a single swath cell. Default is 10.0.
- weight_sum_min (float) – minimum weight sum value. Cells whose weight sums are less than weight_sum_min are set to the grid fill value. Default is EPSILON.
- maximum_weight_mode (bool) – If False (default), a weighted average of all swath cells that map to a particular grid cell is used. If True, the swath cell having the maximum weight of all swath cells that map to a particular grid cell is used. This option should be used for coded/category data, i.e. snow cover.
Init EWAResampler.
-
compute(data, cache_id=None, fill_value=0, weight_count=10000, weight_min=0.01, weight_distance_max=1.0, weight_delta_max=1.0, weight_sum_min=-1.0, maximum_weight_mode=False, grid_coverage=0, **kwargs)[source]¶
Resample the data according to the precomputed X/Y coordinates.
-
class satpy.resample.KDTreeResampler(source_geo_def, target_geo_def)[source]¶
Bases: satpy.resample.BaseResampler
Resample using a KDTree-based nearest neighbor algorithm.
This resampler implements on-disk caching when the cache_dir argument is provided to the resample method. This should provide significant performance improvements on consecutive resampling of geostationary data. It is not recommended to provide cache_dir when the mask keyword argument is provided to precompute which occurs by default for SwathDefinition source areas.
Parameters: - cache_dir (str) – Long term storage directory for intermediate results.
- mask (bool) – Force resampled data’s invalid pixel mask to be used when searching for nearest neighbor pixels. By default this is True for SwathDefinition source areas and False for all other area definition types.
- radius_of_influence (float) – Search radius cut off distance in meters
- epsilon (float) – Allowed uncertainty in meters. Increasing uncertainty reduces execution time.
Init KDTreeResampler.
-
compute(data, weight_funcs=None, fill_value=nan, with_uncert=False, **kwargs)[source]¶
Resample data.
-
load_neighbour_info(cache_dir, mask=None, **kwargs)[source]¶
Read index arrays from either the in-memory or disk cache.
-
class satpy.resample.NativeResampler(source_geo_def, target_geo_def)[source]¶
Bases: satpy.resample.BaseResampler
Expand or reduce input datasets to be the same shape.
If data is higher resolution (more pixels) than the destination area then data is averaged to match the destination resolution.
If data is lower resolution (fewer pixels) than the destination area then data is repeated to match the destination resolution.
This resampler does not perform any caching or masking due to the simplicity of the operations.
Initialize resampler with geolocation information.
Parameters: - source_geo_def (SwathDefinition, AreaDefinition) – Geolocation definition for the data to be resampled
- target_geo_def (CoordinateDefinition, AreaDefinition) – Geolocation definition for the area to resample data to.
-
satpy.resample.add_crs_xy_coords(data_arr, area)[source]¶
Add pyproj.crs.CRS and x/y or lons/lats to coordinates.
For SwathDefinition or GridDefinition areas this will add a crs coordinate and coordinates for the 2D arrays of lons and lats.
For AreaDefinition areas this will add a crs coordinate and the 1-dimensional x and y coordinate variables.
Parameters: - data_arr (xarray.DataArray) – DataArray to add the ‘crs’ coordinate.
- area (pyresample.geometry.AreaDefinition) – Area to get CRS information from.
-
satpy.resample.add_xy_coords(data_arr, area, crs=None)[source]¶
Assign x/y coordinates to DataArray from provided area.
If ‘x’ and ‘y’ coordinates already exist then they will not be added.
Parameters: - data_arr (xarray.DataArray) – data object to add x/y coordinates to
- area (pyresample.geometry.AreaDefinition) – area providing the coordinate data.
- crs (pyproj.crs.CRS or None) – CRS providing additional information about the area’s coordinate reference system if available. Requires pyproj 2.0+.
Returns (xarray.DataArray): Updated DataArray object
-
satpy.resample.get_area_def(area_name)[source]¶
Get the definition of area_name from file.
The file to use is to be placed in the $PPP_CONFIG_DIR directory, and its name is defined in satpy’s configuration file.
-
satpy.resample.get_area_file()[source]¶
Find area file(s) to use.
The files are to be named areas.yaml or areas.def.
-
satpy.resample.get_fill_value(dataset)[source]¶
Get the fill value of the dataset, defaulting to np.nan.
-
satpy.resample.prepare_resampler(source_area, destination_area, resampler=None, **resample_kwargs)[source]¶
Instantiate and return a resampler.
-
satpy.resample.resample(source_area, data, destination_area, resampler=None, **kwargs)[source]¶
Do the resampling.
-
satpy.resample.resample_dataset(dataset, destination_area, **kwargs)[source]¶
Resample dataset and return the resampled version.
Parameters: - dataset (xarray.DataArray) – Data to be resampled.
- destination_area – The destination onto which to project the data, either a full blown area definition or a string corresponding to the name of the area as defined in the area file.
- **kwargs – The extra parameters to pass to the resampler objects.
Returns: A resampled DataArray with updated .attrs["area"] field. The dtype of the array is preserved.
-
satpy.resample.update_resampled_coords(old_data, new_data, new_area)[source]¶
Add coordinate information to newly resampled DataArray.
Parameters: - old_data (xarray.DataArray) – Old data before resampling.
- new_data (xarray.DataArray) – New data after resampling.
- new_area (pyresample.geometry.BaseDefinition) – Area definition for the newly resampled data.
satpy.scene module¶
Scene object to hold satellite data.
-
exception satpy.scene.DelayedGeneration[source]¶
Bases: KeyError
Mark that a dataset can’t be generated without further modification.
-
class satpy.scene.Scene(filenames=None, reader=None, filter_parameters=None, reader_kwargs=None, ppp_config_dir=None, base_dir=None, sensor=None, start_time=None, end_time=None, area=None)[source]¶
Bases: satpy.dataset.MetadataObject
The Almighty Scene Class.
Example usage:
from satpy import Scene
from glob import glob

# create readers and open files
scn = Scene(filenames=glob('/path/to/files/*'), reader='viirs_sdr')

# load datasets from input files
scn.load(['I01', 'I02'])

# resample from satellite native geolocation to builtin 'eurol' Area
new_scn = scn.resample('eurol')

# save all resampled datasets to geotiff files in the current directory
new_scn.save_datasets()
Initialize Scene with Reader and Compositor objects.
To load data filenames and preferably reader must be specified. If filenames is provided without reader then the available readers will be searched for a Reader that can support the provided files. This can take a considerable amount of time so it is recommended that reader always be provided. Note without filenames the Scene is created with no Readers available requiring Datasets to be added manually:
scn = Scene()
scn['my_dataset'] = Dataset(my_data_array, **my_info)
Parameters: - filenames (iterable or dict) – A sequence of files that will be used to load data from. A dict object should map reader names to a list of filenames for that reader.
- reader (str or list) – The name of the reader to use for loading the data or a list of names.
- filter_parameters (dict) – Specify loaded file filtering parameters. Shortcut for reader_kwargs[‘filter_parameters’].
- reader_kwargs (dict) – Keyword arguments to pass to specific reader instances.
- ppp_config_dir (str) – The directory containing the configuration files for satpy.
- base_dir (str) – (DEPRECATED) The directory to search for files containing the data to load. If filenames is also provided, this is ignored.
- sensor (list or str) – (DEPRECATED: Use find_files_and_readers function) Limit used files by provided sensors.
- area (AreaDefinition) – (DEPRECATED: Use filter_parameters) Limit used files by geographic area.
- start_time (datetime) – (DEPRECATED: Use filter_parameters) Limit used files by starting time.
- end_time (datetime) – (DEPRECATED: Use filter_parameters) Limit used files by ending time.
-
aggregate(dataset_ids=None, boundary='exact', side='left', func='mean', **dim_kwargs)[source]¶
Create an aggregated version of the Scene.
Parameters: - dataset_ids (iterable) – DatasetIDs to include in the returned Scene. Defaults to all datasets.
- func (string) – Function to apply on each aggregation window. One of ‘mean’, ‘sum’, ‘min’, ‘max’, ‘median’, ‘argmin’, ‘argmax’, ‘prod’, ‘std’, ‘var’. ‘mean’ is the default.
- boundary – Not implemented.
- side – Not implemented.
- dim_kwargs – the size of the windows to aggregate.
Returns: A new aggregated scene
See also
xarray.DataArray.coarsen
Example
scn.aggregate(func='min', x=2, y=2) will aggregate 2x2 pixels by applying the min function.
-
all_dataset_ids(reader_name=None, composites=False)[source]¶
Get names of all datasets from loaded readers or reader_name if specified.
Returns: list of all dataset names
-
all_dataset_names(reader_name=None, composites=False)[source]¶
Get all known dataset names configured for the loaded readers.
Note that some readers dynamically determine what datasets are known by reading the contents of the files they are provided. This means that the list of datasets returned by this method may change depending on what files are provided even if a product/dataset is a “standard” product for a particular reader.
-
all_same_area¶
All contained data arrays are on the same area.
-
all_same_proj¶
All contained data arrays are in the same projection.
-
available_composite_ids()[source]¶
Get names of composites that can be generated from the available datasets.
-
available_dataset_ids(reader_name=None, composites=False)[source]¶
Get DatasetIDs of loadable datasets.
This can be for all readers loaded by this Scene or just for reader_name if specified.
Available dataset names are determined by what each individual reader can load. This is normally determined by what files are needed to load a dataset and what files have been provided to the scene/reader. Some readers dynamically determine what is available based on the contents of the files provided.
Returns: list of available dataset names
-
available_dataset_names(reader_name=None, composites=False)[source]¶
Get the list of the names of the available datasets.
-
copy(datasets=None)[source]¶
Create a copy of the Scene including dependency information.
Parameters: datasets (list, tuple) – DatasetID objects for the datasets to include in the new Scene object.
-
create_reader_instances(filenames=None, reader=None, reader_kwargs=None)[source]¶
Find readers and return their instances.
-
crop(area=None, ll_bbox=None, xy_bbox=None, dataset_ids=None)[source]¶
Crop Scene to a specific Area boundary or bounding box.
Parameters: - area (AreaDefinition) – Area to crop the current Scene to
- ll_bbox (tuple, list) – 4-element tuple where values are in lon/lat degrees. Elements are (xmin, ymin, xmax, ymax) where X is longitude and Y is latitude.
- xy_bbox (tuple, list) – Same as ll_bbox but elements are in projection units.
- dataset_ids (iterable) – DatasetIDs to include in the returned Scene. Defaults to all datasets.
This method will attempt to intelligently slice the data to preserve relationships between datasets. For example, if we are cropping two DataArrays of 500m and 1000m pixel resolution then this method will assume that exactly 4 pixels of the 500m array cover the same geographic area as a single 1000m pixel. It handles these cases based on the shapes of the input arrays, adjusting slicing indexes accordingly. This method will have trouble handling cases where data arrays seem related but don’t cover the same geographic area or if the coarsest resolution data is not related to the other arrays which are related.
It can be useful to follow cropping with a call to the native resampler to resolve all datasets to the same resolution and compute any composites that could not be generated previously:
>>> cropped_scn = scn.crop(ll_bbox=(-105., 40., -95., 50.))
>>> remapped_scn = cropped_scn.resample(resampler='native')
Note
The resample method automatically crops input data before resampling to save time/memory.
-
end_time¶
Return the end time of the file.
-
classmethod get_writer_by_ext(extension)[source]¶
Find the writer matching the extension.
Defaults to “simple_image”.
Example Mapping:
- geotiff: .tif, .tiff
- cf: .nc
- mitiff: .mitiff
- simple_image: .png, .jpeg, .jpg, …
Parameters: extension (str) – Filename extension starting with “.” (ex. “.png”). Returns: The name of the writer to use for this extension. Return type: str
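A quick sketch of the mapping; the second call shows the documented fallback for unknown extensions:
>>> Scene.get_writer_by_ext('.tif')
'geotiff'
>>> Scene.get_writer_by_ext('.foo')
'simple_image'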
-
iter_by_area()[source]¶
Generate datasets grouped by Area.
Returns: generator of (area_obj, list of dataset objects)
-
load(wishlist, calibration=None, resolution=None, polarization=None, level=None, generate=True, unload=True, **kwargs)[source]¶
Read and generate requested datasets.
When the wishlist contains DatasetID objects they can either be fully-specified DatasetID objects with every parameter specified or they can not provide certain parameters and the “best” parameter will be chosen. For example, if a dataset is available in multiple resolutions and no resolution is specified in the wishlist’s DatasetID then the highest (smallest number) resolution will be chosen.
Loaded DataArray objects are created and stored in the Scene object.
Parameters: - wishlist (iterable) – Names (str), wavelengths (float), or DatasetID objects of the requested datasets to load. See available_dataset_ids() for what datasets are available.
- calibration (list, str) – Calibration levels to limit available datasets. This is a shortcut to having to list each DatasetID in wishlist.
- resolution (list | float) – Resolution to limit available datasets. This is a shortcut similar to calibration.
- polarization (list | str) – Polarization (‘V’, ‘H’) to limit available datasets. This is a shortcut similar to calibration.
- level (list | str) – Pressure level to limit available datasets. Pressure should be in hPa or mb. If an altitude is used it should be specified in inverse meters (1/m). The units of this parameter ultimately depend on the reader.
- generate (bool) – Generate composites from the loaded datasets (default: True)
- unload (bool) – Unload datasets that were required to generate the requested datasets (composite dependencies) but are no longer needed.
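A short sketch assuming hypothetical VIIRS SDR files; any supported reader follows the same pattern:
from glob import glob
from satpy import Scene

scn = Scene(filenames=glob('/data/viirs/*.h5'), reader='viirs_sdr')

# Load one band by name and another by approximate wavelength (micrometers),
# limiting matches to reflectance calibration where several calibrations exist.
scn.load(['I01', 0.86], calibration='reflectance')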
-
max_area(datasets=None)[source]¶
Get highest resolution area for the provided datasets.
Parameters: datasets (iterable) – Datasets whose areas will be compared. Can be either xarray.DataArray objects or identifiers to get the DataArrays from the current Scene. Defaults to all datasets.
-
min_area(datasets=None)[source]¶
Get lowest resolution area for the provided datasets.
Parameters: datasets (iterable) – Datasets whose areas will be compared. Can be either xarray.DataArray objects or identifiers to get the DataArrays from the current Scene. Defaults to all datasets.
-
missing_datasets¶
Set of DatasetIDs that have not been successfully loaded.
-
read(nodes=None, **kwargs)[source]¶
Load datasets from the necessary reader.
Parameters: - nodes (iterable) – DependencyTree Node objects
- **kwargs – Keyword arguments to pass to the reader’s load method.
Returns: DatasetDict of loaded datasets
-
resample(destination=None, datasets=None, generate=True, unload=True, resampler=None, reduce_data=True, **resample_kwargs)[source]¶
Resample datasets and return a new scene.
Parameters: - destination (AreaDefinition, GridDefinition) – area definition to resample to. If not specified then the area returned by Scene.max_area() will be used.
- datasets (list) – Limit datasets to resample to these specified DatasetID objects . By default all currently loaded datasets are resampled.
- generate (bool) – Generate any requested composites that could not be generated previously due to incompatible areas (default: True).
- unload (bool) – Remove any datasets no longer needed after requested composites have been generated (default: True).
- resampler (str) – Name of resampling method to use. By default, this is a nearest neighbor KDTree-based resampling (‘nearest’). Other possible values include ‘native’, ‘ewa’, etc. See the resample documentation for more information.
- reduce_data (bool) – Reduce data by matching the input and output areas and slicing the data arrays (default: True)
- resample_kwargs – Remaining keyword arguments to pass to individual resampler classes. See the individual resampler class documentation for available arguments.
-
save_dataset(dataset_id, filename=None, writer=None, overlay=None, decorate=None, compute=True, **kwargs)[source]¶
Save the dataset_id to file using writer.
Parameters: - dataset_id (str or Number or DatasetID) – Identifier for the dataset to save to disk.
- filename (str) – Optionally specify the filename to save this dataset to. It may include string formatting patterns that will be filled in by dataset attributes.
- writer (str) – Name of writer to use when writing data to disk. Default is "geotiff". If not provided, but filename is provided, then the filename’s extension is used to determine the best writer to use. See Scene.get_writer_by_ext() for details.
- overlay (dict) – See satpy.writers.add_overlay(). Only valid for “image” writers like geotiff or simple_image.
- decorate (dict) – See satpy.writers.add_decorate(). Only valid for “image” writers like geotiff or simple_image.
- compute (bool) – If True (default), compute all of the saves to disk. If False then the return value is either a dask:delayed object or two lists to be passed to a dask.array.store call. See return values below for more details.
- kwargs – Additional writer arguments. See Writers for more information.
Returns: Value returned depends on compute. If compute is True then the return value is the result of computing a dask:delayed object or running dask.array.store(). If compute is False then the returned value is either a dask:delayed object that can be computed using delayed.compute() or a tuple of (source, target) that should be passed to dask.array.store(). If target is provided then the caller is responsible for calling target.close() if the target has this method.
-
save_datasets(writer=None, filename=None, datasets=None, compute=True, **kwargs)[source]¶
Save all the datasets present in a scene to disk using writer.
Parameters: - writer (str) – Name of writer to use when writing data to disk. Default is "geotiff". If not provided, but filename is provided, then the filename’s extension is used to determine the best writer to use. See Scene.get_writer_by_ext() for details.
- filename (str) – Optionally specify the filename to save this dataset to. It may include string formatting patterns that will be filled in by dataset attributes.
- datasets (iterable) – Limit written products to these datasets
- compute (bool) – If True (default), compute all of the saves to disk. If False then the return value is either a dask:delayed object or two lists to be passed to a dask.array.store call. See return values below for more details.
- kwargs – Additional writer arguments. See Writers for more information.
Returns: Value returned depends on the compute keyword argument. If compute is True the value is the result of either a dask.array.store operation or a dask:delayed compute; typically this is None. If compute is False then the result is either a dask:delayed object that can be computed with delayed.compute() or a two-element tuple of sources and targets to be passed to dask.array.store(). If targets is provided then it is the caller’s responsibility to close any objects that have a “close” method.
Default to
-
show
(dataset_id, overlay=None)[source]¶ Show the dataset on screen as an image.
Show dataset on screen as an image, possibly with an overlay.
Parameters: - dataset_id (DatasetID or str) – Either a DatasetID or a string representing a DatasetID, that has been previously loaded using Scene.load.
- overlay (dict, optional) – Add an overlay before showing the image. The keys/values for
this dictionary are as the arguments for
add_overlay()
. The dictionary should contain at least the key"coast_dir"
, which should refer to a top-level directory containing shapefiles. See the pycoast package documentation for coastline shapefile installation instructions.
-
slice(key)[source]¶
Slice Scene by dataset index.
Note
DataArrays that do not have an area attribute will not be sliced.
-
start_time¶
Return the start time of the file.
-
to_geoviews(gvtype=None, datasets=None, kdims=None, vdims=None, dynamic=False)[source]¶
Convert satpy Scene to geoviews.
Parameters: - gvtype (gv plot type) – One of gv.Image, gv.LineContours, gv.FilledContours, gv.Points. Default is geoviews.Image. See Geoviews documentation for details.
- datasets (list) – Limit included products to these datasets
- kdims (list of str) – Key dimensions. See geoviews documentation for more information.
- vdims – list of str, optional Value dimensions. See geoviews documentation for more information. If not given defaults to first data variable
- dynamic – boolean, optional, default False
Returns: geoviews object
-
to_xarray_dataset(datasets=None)[source]¶
Merge all xr.DataArrays of a scene into an xr.Dataset.
Parameters: datasets (list) – List of products to include in the xarray.Dataset
Returns: xarray.Dataset
-
unload(keepables=None)[source]¶
Unload all unneeded datasets.
Datasets are considered unneeded if they weren’t directly requested or added to the Scene by the user or they are no longer needed to generate composites that have yet to be generated.
Parameters: keepables (iterable) – DatasetIDs to keep whether they are needed or not.
satpy.utils module¶
Module defining various utilities.
-
class satpy.utils.OrderedConfigParser(*args, **kwargs)[source]¶
Bases: object
Intercepts read and stores ordered section names.
Cannot use inheritance and super as ConfigParser uses old-style classes.
Initialize the instance.
-
satpy.utils.atmospheric_path_length_correction(data, cos_zen, limit=88.0, max_sza=95.0)[source]¶
Perform Sun zenith angle correction.
This function uses the correction method proposed by Li and Shibata (2006): https://doi.org/10.1175/JAS3682.1
The correction is limited to limit degrees (default: 88.0 degrees). For larger zenith angles, the correction is the same as at the limit if max_sza is None. The default behavior is to gradually reduce the correction past limit degrees up to max_sza where the correction becomes 0. Both data and cos_zen should be 2D arrays of the same shape.
-
satpy.utils.get_satpos(dataset)[source]¶
Get satellite position from dataset attributes.
Preferences are:
- Longitude & Latitude: Nadir, actual, nominal, projection
- Altitude: Actual, nominal, projection
A warning is issued when projection values have to be used because nothing else is available.
Returns: Geodetic longitude, latitude, altitude
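A brief sketch, assuming scn['C13'] is a hypothetical DataArray loaded by a reader that stores orbital metadata in .attrs:
>>> from satpy.utils import get_satpos
>>> sat_lon, sat_lat, sat_alt = get_satpos(scn['C13'])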
-
satpy.utils.proj_units_to_meters(proj_str)[source]¶
Convert projection units from kilometers to meters.
-
satpy.utils.sunzen_corr_cos(data, cos_zen, limit=88.0, max_sza=95.0)[source]¶
Perform Sun zenith angle correction.
The correction is based on the provided cosine of the zenith angle (cos_zen). The correction is limited to limit degrees (default: 88.0 degrees). For larger zenith angles, the correction is the same as at the limit if max_sza is None. The default behavior is to gradually reduce the correction past limit degrees up to max_sza where the correction becomes 0. Both data and cos_zen should be 2D arrays of the same shape.
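A self-contained sketch with synthetic inputs; real use would pass a reflectance band and a matching cosine of the solar zenith angle, assumed here to be xarray DataArrays:
import numpy as np
import xarray as xr
from satpy.utils import sunzen_corr_cos

# Hypothetical 5x5 reflectance field and a constant 60-degree solar zenith angle.
data = xr.DataArray(np.random.rand(5, 5), dims=('y', 'x'))
cos_zen = xr.DataArray(np.full((5, 5), np.cos(np.deg2rad(60.0))), dims=('y', 'x'))

# Below the 88-degree limit this is approximately data / cos_zen.
corrected = sunzen_corr_cos(data, cos_zen)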
Module contents¶
Satpy Package initializer.