
Module: io.imageformats.volumeutils

Inheritance diagram for nipy.io.imageformats.volumeutils:

Utility functions for analyze-like formats

Classes

HeaderDataError

class nipy.io.imageformats.volumeutils.HeaderDataError

Bases: exceptions.Exception

Class to indicate error in getting or setting header data

__init__()

x.__init__(...) initializes x; see help(type(x)) for signature

args
message

HeaderTypeError

class nipy.io.imageformats.volumeutils.HeaderTypeError

Bases: exceptions.Exception

Class to indicate error in parameters into header functions

__init__()

x.__init__(...) initializes x; see help(type(x)) for signature

args
message

Recoder

class nipy.io.imageformats.volumeutils.Recoder(codes, fields=('code', ))

Bases: object

Class to return canonical code(s) from code or aliases

The concept is a lot easier to read in the implementation and tests than it is to explain, so...

>>> # If you have some codes, and several aliases, like this:
>>> code1 = 1; aliases1=['one', 'first']
>>> code2 = 2; aliases2=['two', 'second']
>>> # You might want to do this:
>>> codes = [[code1]+aliases1,[code2]+aliases2]
>>> recodes = Recoder(codes)
>>> recodes.code['one']
1
>>> recodes.code['second']
2
>>> recodes.code[2]
2
>>> # Or maybe you have a code, a label and some aliases
>>> codes=((1,'label1','one', 'first'),(2,'label2','two'))
>>> # you might want to get back the code or the label
>>> recodes = Recoder(codes, fields=('code','label'))
>>> recodes.code['first']
1
>>> recodes.code['label1']
1
>>> recodes.label[2]
'label2'
>>> # For convenience, you can get the first entered name by
>>> # indexing the object directly
>>> recodes[2]
2
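
A minimal sketch of the idea (illustration only, not the actual nipy implementation): keep one dictionary per field, and map every entry in a row to that row's value for the field.

>>> class RecoderSketch(object):
...     def __init__(self, codes, fields=('code',)):
...         self.fields = tuple(fields)
...         for field in self.fields:
...             setattr(self, field, {})
...         self.add_codes(codes)
...     def add_codes(self, codes):
...         # every entry in a row becomes a key for each field's value
...         for row in codes:
...             for ind, field in enumerate(self.fields):
...                 for entry in row:
...                     getattr(self, field)[entry] = row[ind]
...     def __getitem__(self, key):
...         # indexing the object returns the first field's value
...         return getattr(self, self.fields[0])[key]
>>> sketch = RecoderSketch(((1, 'label1', 'one'), (2, 'label2', 'two')),
...                        fields=('code', 'label'))
>>> sketch.code['one'], sketch.label[1], sketch[2]
(1, 'label1', 2)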

Methods

add_codes
keys
value_set
__init__(codes, fields=('code', ))

Create recoder object

codes gives a sequence of (code, alias) sequences; fields gives the names by which the entries in these sequences can be accessed.

By default fields gives the first column the name “code”. The first column is the vector of first entries in each of the sequences found in codes. You can then get the equivalent first-column value with ob.code[value], where value can be a first-column value, or a value in any of the other columns in that sequence.

You can give other columns names too, and access them in the same way - see the examples in the class docstring.

Parameters :

codes : sequence of sequences

Each sequence defines values (codes) that are equivalent

fields : {(‘code’,) string sequence}, optional

names by which elements in sequences can be accessed

add_codes(codes)

Add codes to object

>>> codes = ((1, 'one'), (2, 'two'))
>>> rc = Recoder(codes)
>>> rc.value_set() == set((1,2))
True
>>> rc.add_codes(((3, 'three'), (1, 'first')))
>>> rc.value_set() == set((1,2,3))
True
keys()

Return all available code and alias values

Returns the same value as obj.field1.keys() and, with the default fields argument of fields=(‘code’,), the same as obj.code.keys()

>>> codes = ((1, 'one'), (2, 'two'), (1, 'repeat value'))
>>> k = Recoder(codes).keys()
>>> k.sort() # Just to guarantee order for doctest output
>>> k
[1, 2, 'one', 'repeat value', 'two']
value_set(name=None)

Return set of possible returned values for column

By default, the column is the first column.

Returns the same values as set(obj.field1.values()) and, with the default fields argument of fields=(‘code’,), the same as set(obj.code.values())

Parameters :

name : {None, string}

Default of None gives the result for the first column

>>> codes = ((1, 'one'), (2, 'two'), (1, 'repeat value'))
>>> vs = Recoder(codes).value_set()
>>> vs == set([1, 2]) # Sets are not ordered, hence this test
True
>>> rc = Recoder(codes, fields=('code', 'label'))
>>> rc.value_set('label') == set(('one', 'two', 'repeat value'))
True

UnsupportedDataType

class nipy.io.imageformats.volumeutils.UnsupportedDataType

Bases: object

Class to indicate a data type is not supported

__init__()

x.__init__(...) initializes x; see help(type(x)) for signature

Functions

nipy.io.imageformats.volumeutils.allopen(fname, *args, **kwargs)

Generic file-like object open

If the input fname already looks like a file, pass it through. If fname ends with a recognizable compressed suffix, use the matching Python library to open it as a file-like object (read or write). Otherwise, use the standard open.
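
A hedged sketch of the dispatch described above (a hypothetical re-implementation for illustration; the actual set of recognized suffixes isn't listed here, so .gz and .bz2 are assumptions):

>>> def allopen_sketch(fname, *args, **kwargs):
...     # already file-like? pass it through unchanged
...     if hasattr(fname, 'read') or hasattr(fname, 'write'):
...         return fname
...     # recognizable compressed suffixes: use the matching library
...     if fname.endswith('.gz'):
...         import gzip
...         return gzip.open(fname, *args, **kwargs)
...     if fname.endswith('.bz2'):
...         import bz2
...         return bz2.BZ2File(fname, *args, **kwargs)
...     # otherwise fall back to the standard open
...     return open(fname, *args, **kwargs)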

nipy.io.imageformats.volumeutils.array_from_file(shape, dtype, infile, offset=0, order='F')

Get array from file with specified shape, dtype and file offset

Parameters :

shape : sequence

sequence specifying output array shape

dtype : numpy dtype

fully specified numpy dtype, including correct endianness

infile : file-like

open file-like object implementing at least read() and seek()

offset : int, optional

offset in bytes into infile to start reading array data. Default is 0

order : {‘F’, ‘C’} string

order in which the data is stored in the file. Default is ‘F’ (fortran order).

Returns :

arr : array-like

array like object that can be sliced, containing data

Examples

>>> import StringIO
>>> str_io = StringIO.StringIO()
>>> arr = np.arange(6).reshape(1,2,3)
>>> str_io.write(arr.tostring('F'))
>>> arr2 = array_from_file((1,2,3), arr.dtype, str_io)
>>> np.all(arr == arr2)
True
>>> str_io = StringIO.StringIO()
>>> str_io.write(' ' * 10)
>>> str_io.write(arr.tostring('F'))
>>> arr2 = array_from_file((1,2,3), arr.dtype, str_io, 10)
>>> np.all(arr == arr2)
True
nipy.io.imageformats.volumeutils.array_to_file(data, out_dtype, fileobj, intercept=0.0, divslope=1.0, mn=None, mx=None, order='F', nan2zero=True)

Helper function for writing possibly scaled arrays to disk

Parameters :

data : array

array to write

out_dtype : dtype

dtype to write array as

fileobj : file-like

file-like object implementing write method. The fileobj should be initialized to start writing at the correct location

intercept : scalar, optional

scalar to subtract from data, before dividing by divslope. Default is 0.0

divslope : scalar, optional

scalefactor to divide data by before writing. Default is 1.0.

mn : scalar, optional

minimum threshold in (unscaled) data, such that all data below this value are set to this value. Default is None (no threshold)

mx : scalar, optional

maximum threshold in (unscaled) data, such that all data above this value are set to this value. Default is None (no threshold)

order : {‘F’, ‘C’}, optional

memory order to write array. Default is ‘F’

nan2zero : {True, False}, optional

Whether to set NaN values to 0 when writing integer output. Defaults to True. If False, NaNs will be represented as numpy does when casting, and this can be odd (often the lowest available integer value)

Examples

>>> from StringIO import StringIO
>>> sio = StringIO()
>>> data = np.arange(10, dtype=np.float)
>>> array_to_file(data, np.float, sio)
>>> sio.getvalue() == data.tostring('F')
True
>>> sio.truncate(0)
>>> array_to_file(data, np.int16, sio)
>>> sio.getvalue() == data.astype(np.int16).tostring()
True
>>> sio.truncate(0)
>>> array_to_file(data.byteswap(), np.float, sio)
>>> sio.getvalue() == data.byteswap().tostring('F')
True
>>> sio.truncate(0)
>>> array_to_file(data, np.float, sio, order='C')
>>> sio.getvalue() == data.tostring('C')
True
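
A hedged round-trip sketch combining array_to_file with array_from_file above; float64 output is used so that no rounding occurs, and it assumes intercept and divslope are applied as documented regardless of output type:

>>> sio = StringIO()
>>> array_to_file(data, np.float64, sio, intercept=1.0, divslope=2.0)
>>> stored = array_from_file(data.shape, np.dtype(np.float64), sio)
>>> # invert the documented scaling: data ~= stored * divslope + intercept
>>> np.allclose(np.asarray(stored) * 2.0 + 1.0, data)
True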
nipy.io.imageformats.volumeutils.calculate_scale(data, out_dtype, allow_intercept)

Calculate scaling and optional intercept for data

Parameters :

data : array

out_dtype : dtype

output data type

allow_intercept : bool

If True allow non-zero intercept

Returns :

scaling : None or float

scalefactor by which to divide the data. None if no valid data

intercept : None or float

intercept to subtract from data. None if no valid data

mn : None or float

minimum of finite values in data, or None if this will not be used to threshold data

mx : None or float

maximum of finite values in data, or None if this will not be used to threshold data
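
A hedged sketch of how these four return values might feed array_to_file (hypothetical glue code; data and fileobj are assumed to exist, and the handling of None is a guess):

scaling, intercept, mn, mx = calculate_scale(data, np.int16, True)
if scaling is not None:  # None signals there was no valid (finite) data
    array_to_file(data, np.int16, fileobj,
                  intercept=intercept, divslope=scaling, mn=mn, mx=mx)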

nipy.io.imageformats.volumeutils.can_cast(in_type, out_type, has_intercept=False, has_slope=False)

Return True if we can safely cast in_type to out_type

Parameters :

in_type : numpy type

type of data we will cast from

out_type : numpy type

type that we want to cast to

has_intercept : bool, optional

Whether we can subtract a constant from the data (before scaling) before casting to out_type. Default is False

has_slope : bool, optional

Whether we can use a scaling factor to adjust slope of relationship of data to data in cast array. Default is False

Returns :

tf : bool

True if we can safely cast, False otherwise

Examples

>>> can_cast(np.float64, np.float32)
True
>>> can_cast(np.complex128, np.float32)
False
>>> can_cast(np.int64, np.float32)
True
>>> can_cast(np.float32, np.int16)
False
>>> can_cast(np.float32, np.int16, False, True)
True
>>> can_cast(np.int16, np.uint8)
False
>>> can_cast(np.int16, np.uint8, False, True)
False
>>> can_cast(np.int16, np.uint8, True, True)
True
nipy.io.imageformats.volumeutils.finite_range(arr)

Return range (min, max) of finite values of arr

Parameters :

arr : array

Returns :

mn : scalar

minimum of values in (flattened) array

mx : scalar

maximum of values in (flattened) array

Examples

>>> a = np.array([[-1, 0, 1],[np.inf, np.nan, -np.inf]])
>>> finite_range(a)
(-1.0, 1.0)
>>> a = np.array([[np.nan],[np.nan]])
>>> finite_range(a)
(inf, -inf)
>>> a = np.array([[-3, 0, 1],[2,-1,4]], dtype=np.int)
>>> finite_range(a)
(-3, 4)
>>> a = np.array([[1, 0, 1],[2,3,4]], dtype=np.uint)
>>> finite_range(a)
(0, 4)
>>> a = a + 1j
>>> finite_range(a)
Traceback (most recent call last):
   ...
TypeError: Can only handle floats and (u)ints
nipy.io.imageformats.volumeutils.hdr_getterfunc(obj, key)

Getter function for keys, or for methods of the form ‘get_<key>’
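
Presumably (a hedged sketch, mirroring the getter in the pretty_mapping example below), this looks for a get_<key> method on obj and falls back to item access:

>>> def hdr_getter_sketch(obj, key):
...     # prefer an explicit 'get_<key>' accessor method if present
...     try:
...         return getattr(obj, 'get_' + key)()
...     except AttributeError:
...         return obj[key]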

nipy.io.imageformats.volumeutils.make_dt_codes(codes)

Create full dt codes object from datatype codes
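
A hedged usage guess, assuming the input is a sequence of (code, label, numpy type) rows in the same spirit as the Recoder inputs above; the row layout here is hypothetical:

>>> dt_defs = ((2, 'uint8', np.uint8),   # hypothetical rows: code, label, type
...            (4, 'int16', np.int16))
>>> data_type_codes = make_dt_codes(dt_defs)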

nipy.io.imageformats.volumeutils.pretty_mapping(mapping, getterfunc=None)

Make pretty string from mapping

Adjusts the text column for printing values on the basis of the longest key. Probably only sensible if the keys are mainly strings.

You can pass in a callable that does clever things to get the values out of the mapping, given the names. By default, we just use __getitem__

Parameters :

mapping : mapping

implementing iterator returning keys and .items()

getterfunc : None or callable

callable taking two arguments, obj and key where obj is the passed mapping. If None, just use lambda obj, key: obj[key]

Returns :

str : string

Examples

>>> d = {'a key': 'a value'}
>>> print pretty_mapping(d)
a key  : a value
>>> class C(object): # to control ordering, show get_ method
...     def __iter__(self):
...         return iter(('short_field','longer_field'))
...     def __getitem__(self, key):
...         if key == 'short_field':
...             return 0
...         if key == 'longer_field':
...             return 'str'
...     def get_longer_field(self):
...         return 'method string'
>>> def getter(obj, key):
...     # Look for any 'get_<name>' methods
...     try:
...         return obj.__getattribute__('get_' + key)()
...     except AttributeError:
...         return obj[key]
>>> print pretty_mapping(C(), getter)
short_field   : 0
longer_field  : method string
nipy.io.imageformats.volumeutils.scale_min_max(mn, mx, out_type, allow_intercept)

Return scaling and intercept min, max of data, given output type

Returns a scalefactor and intercept to best fit the data, with given minimum and maximum values mn and mx, into the range of the output data type, which has minimum and maximum values type_min and type_max.

The calculated scaling is therefore:

scaled_data = (data-intercept) / scalefactor
Parameters :

mn : scalar

data minimum value

mx : scalar

data maximum value

out_type : numpy type

numpy type of output

allow_intercept : bool

If true, allow calculation of non-zero intercept. Otherwise, returned intercept is always 0.0

Returns :

scalefactor : numpy scalar, dtype=np.maximum_sctype(np.float)

scalefactor by which to divide data after subtracting intercept

intercept : numpy scalar, dtype=np.maximum_sctype(np.float)

value to subtract from data before dividing by scalefactor

>>> scale_min_max(0, 255, np.uint8, False)
(1.0, 0.0)
>>> scale_min_max(-128, 127, np.int8, False)
(1.0, 0.0)
>>> scale_min_max(0, 127, np.int8, False)
(1.0, 0.0)
>>> scaling, intercept = scale_min_max(0, 127, np.int8, True)
>>> np.allclose((0 - intercept) / scaling, -128)
True
>>> np.allclose((127 - intercept) / scaling, 127)
True
>>> scaling, intercept = scale_min_max(-10, -1, np.int8, True)
>>> np.allclose((-10 - intercept) / scaling, -128)
True
>>> np.allclose((-1 - intercept) / scaling, 127)
True
>>> scaling, intercept = scale_min_max(1, 10, np.int8, True)
>>> np.allclose((1 - intercept) / scaling, -128)
True
>>> np.allclose((10 - intercept) / scaling, 127)
True
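
The values above are consistent with the natural construction (inferred here, not quoted from the implementation): scalefactor = (mx - mn) / (type_max - type_min) and intercept = mn - scalefactor * type_min, which sends mn to type_min and mx to type_max:

>>> mn, mx = -10.0, -1.0
>>> type_min, type_max = -128, 127  # np.int8 range
>>> scalefactor = (mx - mn) / float(type_max - type_min)
>>> intercept = mn - scalefactor * type_min
>>> round((mn - intercept) / scalefactor, 6)
-128.0
>>> round((mx - intercept) / scalefactor, 6)
127.0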

Notes

The large integers lead to Python long types as the max / min for the type. To contain the rounding error, we need to use the maximum numpy float types when casting to float.