# Directly loading DICOM objects from Google Cloud or AWS in Python

DICOM files in the IDC are stored as "blobs" on the cloud, with one copy housed on Google Cloud Storage (GCS) and another on Amazon Web Services (AWS) S3 storage. By using the right tools, these blobs can be wrapped to appear as "file-like" objects to Python DICOM libraries, enabling intelligent loading of DICOM files directly from cloud storage as if they were local files without having to first download them onto a local drive.
{% hint style="success" %}
Code snippets included in this article are also replicated in this Google Colab tutorial notebook for your convenience: [https://github.com/ImagingDataCommons/IDC-Tutorials/blob/master/notebooks/advanced\_topics/gcs\_aws\_direct\_access.ipynb](https://github.com/ImagingDataCommons/IDC-Tutorials/blob/master/notebooks/advanced_topics/gcs_aws_direct_access.ipynb)
{% endhint %}
### Reading files with Pydicom

[Pydicom](https://pydicom.github.io/pydicom/stable/index.html) is a popular library for working with DICOM files in Python. Its [dcmread](https://pydicom.github.io/pydicom/stable/reference/generated/pydicom.filereader.dcmread.html#pydicom.filereader.dcmread) function accepts any "file-like" object, meaning you can read a file straight from a cloud blob if you know its path. See [this page](../organization-of-data/files-and-metadata.md#storage-buckets) for information on finding the paths of the blobs for DICOM objects in IDC. The `dcmread` function also has options that control what is read: for example, you can choose to read only the metadata and not the pixel data, or read only certain attributes. In the following two sections, we demonstrate these abilities using first Google Cloud Storage blobs and then AWS S3 blobs.

**Mapping IDC DICOM series to bucket URLs**

All of the image data available from IDC is replicated between public Google Cloud Storage (GCS) and AWS buckets. The pip-installable [idc-index](https://github.com/imagingdatacommons/idc-index) package provides convenience functions to get the URLs of the files corresponding to a given DICOM series.

```python
...
aws_file_urls = idc_client.get_series_file_URLs(
    ...
)
```

**From Google Cloud Storage blobs**

The [official Python SDK for Google Cloud Storage](https://cloud.google.com/python/docs/reference/storage/latest/) (installable from pip and PyPI as `google-cloud-storage`) provides a "file-like" interface allowing other Python libraries, such as Pydicom, to work with blobs as if they were "normal" files on the local filesystem.

To read from a GCS blob with Pydicom, first create a storage client and a blob object representing the remote blob stored in the cloud, then use the `.open('rb')` method to create a readable file-like object that can be passed to the `dcmread` function.


Under some circumstances, reading only the metadata or only specific attributes will reduce the amount of data that needs to be pulled down and therefore make loading faster. Whether it does depends on the size of the attributes being retrieved, the `chunk_size` (a parameter of the `open()` method that controls how much data is pulled in each HTTP request to the server), and the position of the requested element within the file (since it is necessary to seek through the file until the requested attributes are found, but any data after the requested attributes need not be pulled).

This works because running the [open](https://cloud.google.com/python/docs/reference/storage/latest/google.cloud.storage.blob.Blob#google_cloud_storage_blob_Blob_open) method on a Blob object returns a [BlobReader](https://cloud.google.com/python/docs/reference/storage/latest/google.cloud.storage.fileio.BlobReader) object, which has a "file-like" interface (specifically the `seek`, `read`, and `tell` methods).

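Any object that implements these three methods can stand in for a local file; `io.BytesIO` from the standard library is the simplest example, which makes the contract easy to see without any cloud access:

```python
import io

# io.BytesIO provides the same seek/read/tell interface as BlobReader,
# which is why either can be handed to pydicom's dcmread
buf = io.BytesIO(bytes(range(16)))

buf.seek(4)            # jump to byte offset 4, as a parser would
chunk = buf.read(8)    # pull the next 8 bytes (offsets 4 through 11)
position = buf.tell()  # now at offset 12

print(chunk.hex(), position)  # 0405060708090a0b 12
```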
**From AWS S3 blobs**

The `boto3` package provides a Python API for accessing S3 blobs. It can be installed with `pip install boto3`. To access open IDC data without providing AWS credentials, you must configure your own client object so that it does not require request signing. This is demonstrated in the following example, which repeats the example above using the AWS S3 counterpart of the same blob. If you want to read an entire file, we recommend using a temporary buffer like this:

```python
...
with BytesIO() as buf:
    ...
    dcm = dcmread(buf)
```

Unlike `google-cloud-storage`, `boto3` does not provide a file-like interface to access data in blobs. Instead, the third-party [`smart_open`](https://github.com/piskvorky/smart_open) package wraps an S3 client to expose a "file-like" interface. It can be installed with `pip install 'smart_open[s3]'`. However, we have found that the buffering behavior of this package (which is intended for streaming) is not well matched to the use case of reading DICOM metadata, resulting in many unnecessary requests while reading the metadata of DICOM files (see [this issue](https://github.com/piskvorky/smart_open/issues/712)). Therefore, while the following will work, in most cases we recommend the approach in the example above (downloading the whole file) even if you only want to read the metadata, as it will likely be much faster. The exception is when reading only the metadata of very large images where the total amount of pixel data dwarfs the amount of metadata (or when using frame-level access to such images, see below).

```python
from pydicom import dcmread
...
with smart_open.open(url, mode="rb", transport_params=dict(client=s3_client)) as reader:
    dcm = dcmread(reader, stop_before_pixels=True)
```

You may want to look into the other options of `smart_open`'s `open` [method](https://github.com/piskvorky/smart_open/blob/master/help.txt) to improve performance (in particular the `buffering` parameter).

In the remainder of the examples, we will use only the GCS access method for brevity. However, you should be able to swap in an opened AWS S3 blob for the opened GCS blob to achieve the same effect.

### Frame-level access with Highdicom

[Highdicom](https://highdicom.readthedocs.io) is a higher-level library providing several features for working with images and image-derived DICOM objects. As of release 0.25.1, its various reading methods (including [imread](https://highdicom.readthedocs.io/en/latest/package.html#highdicom.imread), [segread](https://highdicom.readthedocs.io/en/latest/package.html#highdicom.seg.segread), [annread](https://highdicom.readthedocs.io/en/latest/package.html#highdicom.ann.annread), and [srread](https://highdicom.readthedocs.io/en/latest/package.html#highdicom.sr.srread)) can read any file-like object, including Google Cloud blobs and anything opened with `smart_open` (including S3 blobs).

A particularly useful feature when working with blobs is ["lazy" frame retrieval](https://highdicom.readthedocs.io/en/latest/image.html#lazy) for images and segmentations. This downloads only the image metadata when the file is initially loaded, uses it to create a frame-level index, and downloads specific frames as and when they are requested by the user. This is especially useful for large multiframe files (such as those found in slide microscopy or multi-segment binary or fractional segmentations) as it can significantly reduce the amount of data that needs to be downloaded to access a subset of the frames.

In this first example, we use lazy frame retrieval to load only a specific spatial patch from a large whole slide image from the IDC.

```python
...
plt.show()
```

Running this code should produce an output that looks like this:

<div align="center"><img src="../../.gitbook/assets/slide_screenshot.png" alt="Screenshot of slide region" height="454" width="524"></div>

As a further example, we use lazy frame retrieval to load only a specific set of segments from a large multi-organ segmentation of a CT image in the IDC stored in binary format (in binary segmentations, each segment is stored using a separate set of frames).

```python
import highdicom as hd
from google.cloud import storage

...
with blob.open(mode="rb") as reader:
    ...
    print(volume.shape)
```

See [this](https://highdicom.readthedocs.io/en/latest/image.html) page for more information on highdicom's `Image` class, and [this](https://highdicom.readthedocs.io/en/latest/seg.html) page for the `Segmentation` class.

### The importance of offset tables for slide microscopy (SM) images

Achieving good performance for slide microscopy frame-level retrievals requires the presence of either a "Basic Offset Table" or an "Extended Offset Table" in the file. These tables specify the starting position of each frame within the file's byte stream. Without an offset table, libraries such as highdicom have to parse through the pixel data looking for markers that indicate where the frame boundaries are, which involves pulling down significantly more data and is therefore very slow. This mostly eliminates the potential speed benefits of frame-level retrieval. Unfortunately, there is no simple way to know whether a file has an offset table without downloading the pixel data and checking it. If you find that an image takes a long time to load initially, it is probably because the file did not include an offset table and highdicom is constructing one itself.

Most IDC images do include an offset table, but some of the older pathology slide images do not. [This page](https://github.com/ImagingDataCommons/idc-wsi-conversion?tab=readme-ov-file#overview) contains some notes about whether individual collections include offset tables.

You can also check whether an image file (including pixel data) has an offset table using pydicom like this:

```python
...
print("Has Extended Offset Table:", "ExtendedOffsetTable" in dcm)
print("Has Basic Offset Table:", dcm.PixelData[4:8] != b'\x00\x00\x00\x00')
```
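The `PixelData[4:8]` test works because encapsulated pixel data always begins with a Basic Offset Table item, and bytes 4:8 hold that item's 4-byte length, which is zero when the table is empty. The check can be exercised on synthetic bytes using only the standard library:

```python
import struct


def has_basic_offset_table(pixel_data: bytes) -> bool:
    # Bytes 0:4 are the (FFFE,E000) item tag of the Basic Offset Table;
    # bytes 4:8 are its length, which is zero when no offsets are stored
    return pixel_data[4:8] != b"\x00\x00\x00\x00"


ITEM_TAG = b"\xfe\xff\x00\xe0"  # (FFFE,E000) item tag, little-endian

empty_bot = ITEM_TAG + struct.pack("<I", 0)  # zero-length table
one_frame_bot = ITEM_TAG + struct.pack("<I", 4) + struct.pack("<I", 0)

print(has_basic_offset_table(empty_bot))      # False
print(has_basic_offset_table(one_frame_bot))  # True
```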
