Read / write access to the basic Mayo Analyze format
This is a binary header format and inherits from WrapStruct
Apart from the attributes and methods of WrapStruct:
Class attributes are:
.default_x_flip
with methods:
.get/set_data_shape
.get/set_data_dtype
.get/set_zooms
.get/set_data_offset
.get_base_affine()
.get_best_affine()
.data_to_fileobj
.data_from_fileobj
and class methods:
.from_header(hdr)
More sophisticated headers can add more methods and attributes.
This basic Analyze header cannot encode full affines (only diagonal affines), and cannot do integer scaling.
The inability to store affines means that we have to guess what orientation the image has. Most Analyze images are stored on disk in (fastest-changing to slowest-changing) R->L, P->A and I->S order. That is, the first voxel is the rightmost, most posterior and most inferior voxel location in the image, and the next voxel is one voxel towards the left of the image.
Most people refer to this disk storage format as ‘radiological’, on the basis that, if you load up the data as an array img_arr where the first axis is the fastest changing, then take a slice in the I->S axis - img_arr[:,:,10] - then the right part of the brain will be on the left of your displayed slice. Radiologists like looking at images where the left of the brain is on the right side of the image.
Conversely, if the image has the voxels stored with the left voxels first - L->R, P->A, I->S, then this would be ‘neurological’ format. Neurologists like looking at images where the left side of the brain is on the left of the image.
When we are guessing at an affine for Analyze, this translates to deciding whether the affine should treat movement along an X line of the data as going from left to right, or from right to left.
By default we assume that the image is stored in R->L format. We encode this choice in the default_x_flip flag that can be True or False. True means assume radiological.
If the image is 3D, and the X, Y and Z zooms are x, y, and z, then:

    if default_x_flip is True:
        affine = np.diag((-x, y, z, 1))
    else:
        affine = np.diag((x, y, z, 1))
In our implementation, there is no way of saving this assumed flip into the header. One way of doing this, that we have not used, is to allow negative zooms, in particular, negative X zooms. We did not do this because the image can be loaded with and without a default flip, so the saved zoom will not constrain the affine.
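The rule above can be sketched directly with NumPy. The helper below is purely illustrative (it is not a nibabel function); it just encodes the default described above:

```python
import numpy as np

def guessed_affine(zooms, x_flip=True):
    # Illustrative helper (not part of nibabel): build the diagonal
    # affine that the default_x_flip rule above describes for a 3D image.
    x, y, z = zooms
    sign = -1.0 if x_flip else 1.0  # True: assume radiological (R->L) storage
    return np.diag((sign * x, y, z, 1.0))

print(guessed_affine((3.0, 2.0, 1.0)))         # X zoom negated
print(guessed_affine((3.0, 2.0, 1.0), False))  # plain diagonal
```

Note that only the sign of the X zoom changes with the flip; the zoom magnitudes are the same either way, which is why a saved (positive) zoom cannot record the choice.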
AnalyzeHeader([binaryblock, endianness, check]) | Class for basic analyze header
AnalyzeImage(dataobj, affine[, header, ...]) | Class for basic Analyze format image
load | class method to create image from filename `filename`
Bases: nibabel.wrapstruct.LabeledWrapStruct
Class for basic analyze header
Implements zoom-only setting of affine transform, and no image scaling
Initialize header from binary data block
Parameters:
    binaryblock : {None, string}, optional
    endianness : {None, ‘<’, ‘>’, other endian code} string, optional
    check : bool, optional
Examples
>>> hdr1 = AnalyzeHeader() # an empty header
>>> hdr1.endianness == native_code
True
>>> hdr1.get_data_shape()
(0,)
>>> hdr1.set_data_shape((1,2,3)) # now with some content
>>> hdr1.get_data_shape()
(1, 2, 3)
We can set the binary block directly via this initialization. Here we get it from the header we have just made
>>> binblock2 = hdr1.binaryblock
>>> hdr2 = AnalyzeHeader(binblock2)
>>> hdr2.get_data_shape()
(1, 2, 3)
Empty headers are native endian by default
>>> hdr2.endianness == native_code
True
You can pass valid opposite endian headers with the endianness parameter. Even empty headers can have endianness
>>> hdr3 = AnalyzeHeader(endianness=swapped_code)
>>> hdr3.endianness == swapped_code
True
If you do not pass an endianness, and you pass some data, we will try to guess from the passed data.
>>> binblock3 = hdr3.binaryblock
>>> hdr4 = AnalyzeHeader(binblock3)
>>> hdr4.endianness == swapped_code
True
Return header as mapping for conversion to Analyze types
Collect data from custom header type to fill in fields for Analyze and derived header types (such as Nifti1 and Nifti2).
When Analyze types convert another header type to their own type, they call this method to check whether there are other Analyze / NIfTI fields that the source header would like to set.
Returns:
    analyze_map : mapping
Notes
You can also return a Nifti header with the relevant fields set.
Your header still needs get_data_dtype, get_data_shape and get_zooms methods for the conversion; these get called after the analyze map is used, so their values override values set in the map.
Read scaled data array from fileobj
Use this routine to get the scaled image data from an image file fileobj, given a header self. “Scaled” means with any header scaling factors applied to the raw data in the file. Use raw_data_from_fileobj to get the raw data.
Parameters:
    fileobj : file-like

Returns:
    arr : ndarray
Notes
We use the header to get any scale or intercept values to apply to the data. Raw Analyze files don’t have scale factors or intercepts, but this routine also works with formats based on Analyze that do have scaling, such as the SPM Analyze variants and NIfTI.
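For the scaling-capable formats just mentioned, the “scaled” read reduces to raw * slope + inter. A minimal NumPy sketch of that arithmetic (the slope and inter values here are made up for illustration, not read from a real header):

```python
import numpy as np
from io import BytesIO

# Raw on-disk integers, as an SPM-Analyze or NIfTI writer might store them
raw = np.array([0, 1, 2, 3], dtype=np.int16)
fileobj = BytesIO(raw.tobytes())

slope, inter = 2.5, -1.0  # scale factors a scaling-capable header would carry
arr = np.frombuffer(fileobj.getvalue(), dtype=np.int16) * slope + inter
print(arr.tolist())  # [-1.0, 1.5, 4.0, 6.5]
```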
Write data to fileobj, maybe rescaling data, modifying self
In writing the data, we match the header to the written data by setting the header scaling factors, if and only if rescale is True. Thus we modify self in the process of writing the data.
Parameters:
    data : array-like
    fileobj : file-like object
    rescale : {True, False}, optional
Examples
>>> import numpy as np
>>> from nibabel.analyze import AnalyzeHeader
>>> hdr = AnalyzeHeader()
>>> hdr.set_data_shape((1, 2, 3))
>>> hdr.set_data_dtype(np.float64)
>>> from io import BytesIO
>>> str_io = BytesIO()
>>> data = np.arange(6).reshape(1,2,3)
>>> hdr.data_to_fileobj(data, str_io)
>>> data.astype(np.float64).tobytes('F') == str_io.getvalue()
True
Return header data for empty header with given endianness
Class method to create header from another header
Parameters:
    header : Header instance or mapping
    check : {True, False}

Returns:
    hdr : header instance
Get affine from basic (shared) header fields
Note that we get the translations from the center of the image.
Examples
>>> hdr = AnalyzeHeader()
>>> hdr.set_data_shape((3, 5, 7))
>>> hdr.set_zooms((3, 2, 1))
>>> hdr.default_x_flip
True
>>> hdr.get_base_affine() # from center of image
array([[-3., 0., 0., 3.],
[ 0., 2., 0., -4.],
[ 0., 0., 1., -3.],
[ 0., 0., 0., 1.]])
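The translation column above follows from centering: each offset is -zoom * (shape - 1) / 2 along its axis, using the negated X zoom when default_x_flip is True. A quick NumPy reconstruction of the same affine:

```python
import numpy as np

shape = np.array((3, 5, 7))
zooms = np.array((3.0, 2.0, 1.0))
zooms[0] *= -1  # default_x_flip is True, so the X zoom is negated

aff = np.diag(np.append(zooms, 1.0))
aff[:3, 3] = -zooms * (shape - 1) / 2.0  # put the origin at the image center
print(aff)
```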
Get numpy dtype for data
For examples see set_data_dtype
Return offset into data file to read data
Examples
>>> hdr = AnalyzeHeader()
>>> hdr.get_data_offset()
0
>>> hdr['vox_offset'] = 12
>>> hdr.get_data_offset()
12
Get shape of data
Examples
>>> hdr = AnalyzeHeader()
>>> hdr.get_data_shape()
(0,)
>>> hdr.set_data_shape((1,2,3))
>>> hdr.get_data_shape()
(1, 2, 3)
Expanding number of dimensions gets default zooms
>>> hdr.get_zooms()
(1.0, 1.0, 1.0)
Get scalefactor and intercept
These are not implemented for basic Analyze
Get zooms from header
Returns:
    z : tuple
Examples
>>> hdr = AnalyzeHeader()
>>> hdr.get_zooms()
(1.0,)
>>> hdr.set_data_shape((1,2))
>>> hdr.get_zooms()
(1.0, 1.0)
>>> hdr.set_zooms((3, 4))
>>> hdr.get_zooms()
(3.0, 4.0)
Guess intended endianness from mapping-like hdr
Parameters:
    hdr : mapping-like

Returns:
    endianness : {‘<’, ‘>’}
Examples
Zeros header, no information, guess native
>>> hdr = AnalyzeHeader()
>>> hdr_data = np.zeros((), dtype=header_dtype)
>>> AnalyzeHeader.guessed_endian(hdr_data) == native_code
True
A valid native header is guessed native
>>> hdr_data = hdr.structarr.copy()
>>> AnalyzeHeader.guessed_endian(hdr_data) == native_code
True
And, when swapped, is guessed as swapped
>>> sw_hdr_data = hdr_data.byteswap(swapped_code)
>>> AnalyzeHeader.guessed_endian(sw_hdr_data) == swapped_code
True
The algorithm is as follows:
First, look at the first value in the dim field; this should be between 0 and 7. If it is between 1 and 7, then this must be a native endian header.
>>> hdr_data = np.zeros((), dtype=header_dtype) # blank binary data
>>> hdr_data['dim'][0] = 1
>>> AnalyzeHeader.guessed_endian(hdr_data) == native_code
True
>>> hdr_data['dim'][0] = 6
>>> AnalyzeHeader.guessed_endian(hdr_data) == native_code
True
>>> hdr_data['dim'][0] = -1
>>> AnalyzeHeader.guessed_endian(hdr_data) == swapped_code
True
If the first dim value is zero, we need a tie-breaker. In that case we check the sizeof_hdr field. This should be 348. If it looks like the byteswapped value of 348, assume swapped. Otherwise assume native.
>>> hdr_data = np.zeros((), dtype=header_dtype) # blank binary data
>>> AnalyzeHeader.guessed_endian(hdr_data) == native_code
True
>>> hdr_data['sizeof_hdr'] = 1543569408
>>> AnalyzeHeader.guessed_endian(hdr_data) == swapped_code
True
>>> hdr_data['sizeof_hdr'] = -1
>>> AnalyzeHeader.guessed_endian(hdr_data) == native_code
True
This is overridden by the dim[0] value though:
>>> hdr_data['sizeof_hdr'] = 1543569408
>>> hdr_data['dim'][0] = 1
>>> AnalyzeHeader.guessed_endian(hdr_data) == native_code
True
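Putting the two steps together, the heuristic can be sketched as a standalone function. The byte offsets below assume the standard Analyze 7.5 layout (sizeof_hdr as an int32 at offset 0, dim[0] as an int16 at offset 40); guess_endian is an illustration of the algorithm, not nibabel's implementation:

```python
import struct

def guess_endian(hdr_bytes, native='<'):
    # Illustrative sketch of the algorithm above, not nibabel's code.
    swapped = '>' if native == '<' else '<'
    # Step 1: dim[0], an int16 at offset 40 in the Analyze 7.5 layout
    dim0 = struct.unpack_from(native + 'h', hdr_bytes, 40)[0]
    if 1 <= dim0 <= 7:
        return native      # plausible as-is: native endian
    if dim0 != 0:
        return swapped     # implausible (e.g. -1): assume swapped
    # Step 2 (tie-breaker): sizeof_hdr, an int32 at offset 0, should be 348
    size = struct.unpack_from(native + 'i', hdr_bytes, 0)[0]
    flipped_348 = struct.unpack(swapped + 'i', struct.pack(native + 'i', 348))[0]
    return swapped if size == flipped_348 else native

hdr = bytearray(348)
struct.pack_into('<h', hdr, 40, 1)   # dim[0] = 1: plausible native
print(guess_endian(bytes(hdr)))      # '<'
```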
Read unscaled data array from fileobj
Parameters:
    fileobj : file-like

Returns:
    arr : ndarray
Set numpy dtype for data from code or dtype or type
Examples
>>> hdr = AnalyzeHeader()
>>> hdr.set_data_dtype(np.uint8)
>>> hdr.get_data_dtype()
dtype('uint8')
>>> hdr.set_data_dtype(np.dtype(np.uint8))
>>> hdr.get_data_dtype()
dtype('uint8')
>>> hdr.set_data_dtype('implausible')
Traceback (most recent call last):
...
HeaderDataError: data dtype "implausible" not recognized
>>> hdr.set_data_dtype('none')
Traceback (most recent call last):
...
HeaderDataError: data dtype "none" known but not supported
>>> hdr.set_data_dtype(np.void)
Traceback (most recent call last):
...
HeaderDataError: data dtype "<type 'numpy.void'>" known but not supported
Set offset into data file to read data
Set shape of data
If the new shape has more dimensions than the current one, the zooms for the added dimensions are set to 1.0.
Parameters:
    shape : sequence
Set slope and / or intercept into header
Set slope and intercept for image data, such that, if the image data is arr, then the scaled image data will be (arr * slope) + inter
In this case, for Analyze images, we can’t store the slope or the intercept, so this method only checks that slope is None or NaN or 1.0, and that inter is None or NaN or 0.
Parameters:
    slope : None or float
    inter : None or float, optional
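The check this describes is small enough to mimic in a few lines. check_slope_inter below is an illustrative stand-in (a plain ValueError substitutes for nibabel's own header error type):

```python
import math

def check_slope_inter(slope=1.0, inter=None):
    # Illustrative stand-in for the Analyze constraint described above:
    # slope must be None, NaN or 1.0; inter must be None, NaN or 0.
    def ok(value, stored_default):
        return (value is None
                or (isinstance(value, float) and math.isnan(value))
                or value == stored_default)
    if not (ok(slope, 1.0) and ok(inter, 0.0)):
        raise ValueError('basic Analyze cannot store slope or intercept')

check_slope_inter(1.0, 0.0)      # fine: nothing needs storing
check_slope_inter(float('nan'))  # fine: NaN means "no scaling to store"
```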
Set zooms into header fields
See docstring for get_zooms for examples
Bases: nibabel.spatialimages.SpatialImage
Class for basic Analyze format image
Initialize image
The image is a combination of (array, affine matrix, header), with optional metadata in extra, and filename / file-like objects contained in the file_map mapping.
Parameters:
    dataobj : object
    affine : None or (4,4) array-like
    header : None or mapping or header instance, optional
    extra : None or mapping, optional
    file_map : mapping, optional
alias of ArrayProxy
class method to create image from mapping in file_map
Parameters:
    file_map : dict
    mmap : {True, False, ‘c’, ‘r’}, optional, keyword only

Returns:
    img : AnalyzeImage instance
class method to create image from filename `filename`
Parameters:
    filename : str
    mmap : {True, False, ‘c’, ‘r’}, optional, keyword only

Returns:
    img : AnalyzeImage instance
alias of AnalyzeHeader
Write image to file_map or contained self.file_map
Parameters:
    file_map : None or mapping, optional