nipy.io.imageformats.volumeutils
Utility functions for analyze-like formats.
class Recoder
Bases: object
Class to return canonical code(s) from a code or its aliases.
The concept is easier to follow in the examples below than to explain in the abstract, so...
>>> # If you have some codes, and several aliases, like this:
>>> code1 = 1; aliases1=['one', 'first']
>>> code2 = 2; aliases2=['two', 'second']
>>> # You might want to do this:
>>> codes = [[code1]+aliases1,[code2]+aliases2]
>>> recodes = Recoder(codes)
>>> recodes.code['one']
1
>>> recodes.code['second']
2
>>> recodes.code[2]
2
>>> # Or maybe you have a code, a label and some aliases
>>> codes=((1,'label1','one', 'first'),(2,'label2','two'))
>>> # you might want to get back the code or the label
>>> recodes = Recoder(codes, fields=('code','label'))
>>> recodes.code['first']
1
>>> recodes.code['label1']
1
>>> recodes.label[2]
'label2'
>>> # For convenience, you can get the first entered name by
>>> # indexing the object directly
>>> recodes[2]
2
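The behaviour in the examples above can be sketched in a few lines. This is a hypothetical minimal version for illustration, not the actual nipy implementation (which also supports add_codes, keys and value_set):

```python
# Minimal sketch of a Recoder-like class (hypothetical; the real
# Recoder has more methods and error handling).
class MiniRecoder(object):
    def __init__(self, codes, fields=('code',)):
        self.fields = fields
        for field in fields:
            setattr(self, field, {})
        for row in codes:
            # row = (code[, label], alias, alias, ...); every entry in
            # the row maps back to the row's value for each named field
            for i, field in enumerate(fields):
                mapping = getattr(self, field)
                for key in row:
                    mapping[key] = row[i]

    def __getitem__(self, key):
        # Indexing the object directly returns the first-column value
        return getattr(self, self.fields[0])[key]
```

With `codes = ((1, 'label1', 'one', 'first'), (2, 'label2', 'two'))` and `fields=('code', 'label')`, this sketch reproduces the lookups shown in the doctests above.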
Methods
  add_codes
  keys
  value_set
Create recoder object
codes gives a sequence of (code, alias, ...) sequences; fields gives the names by which the entries in these sequences can be accessed.
By default, fields gives the first column the name 'code'. The first column is the vector of first entries in each of the sequences found in codes. You can then get the equivalent first-column value with ob.code[value], where value can be a first-column value, or a value from any of the other columns in that sequence.
You can give other columns names too, and access them in the same way - see the examples in the class docstring.
Parameters:
  codes : sequence of sequences
  fields : {('code',), string sequence}, optional
Add codes to object
>>> codes = ((1, 'one'), (2, 'two'))
>>> rc = Recoder(codes)
>>> rc.value_set() == set((1,2))
True
>>> rc.add_codes(((3, 'three'), (1, 'first')))
>>> rc.value_set() == set((1,2,3))
True
Return all available code and alias values
Returns the same value as obj.field1.keys() and, with the default initializing fields argument of fields=('code',), this will return the same as obj.code.keys().
>>> codes = ((1, 'one'), (2, 'two'), (1, 'repeat value'))
>>> k = Recoder(codes).keys()
>>> k.sort() # Just to guarantee order for doctest output
>>> k
[1, 2, 'one', 'repeat value', 'two']
Return set of possible returned values for column
By default, the column is the first column.
Returns the same values as set(obj.field1.values()) and, with the default initializing fields argument of fields=('code',), this will return the same as set(obj.code.values()).

Parameters:
  name : {None, string}, optional

>>> codes = ((1, 'one'), (2, 'two'), (1, 'repeat value'))
>>> vs = Recoder(codes).value_set()
>>> vs == set([1, 2])  # Sets are not ordered, hence this test
True
>>> rc = Recoder(codes, fields=('code', 'label'))
>>> rc.value_set('label') == set(('one', 'two', 'repeat value'))
True
Generic file-like object open
If the input fname already looks like a file, pass it through. If fname ends with a recognizable compressed type, use the relevant python library to open it as a file-like object (for read or write). Otherwise, use the standard open.
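The dispatch described above can be sketched as follows. This is a hypothetical helper illustrating the idea, not the exact nipy function:

```python
import gzip
import bz2

# Hypothetical sketch of a generic opener: pass file-likes through,
# dispatch on compressed suffixes, fall back to plain open.
def open_maybe_compressed(fname, mode='rb'):
    if hasattr(fname, 'read') or hasattr(fname, 'write'):
        return fname                     # already file-like: pass through
    if fname.endswith('.gz'):
        return gzip.open(fname, mode)    # gzip-compressed file
    if fname.endswith('.bz2'):
        return bz2.BZ2File(fname, mode)  # bzip2-compressed file
    return open(fname, mode)             # plain file
```

Note that the file-like check here is duck-typed on read/write attributes; the real routine may test differently.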
Get array from file with specified shape, dtype and file offset
Parameters:
  shape : sequence
  dtype : numpy dtype
  infile : file-like
  offset : int, optional
  order : {'F', 'C'} string, optional

Returns:
  arr : array-like
Examples
>>> import StringIO
>>> str_io = StringIO.StringIO()
>>> arr = np.arange(6).reshape(1,2,3)
>>> str_io.write(arr.tostring('F'))
>>> arr2 = array_from_file((1,2,3), arr.dtype, str_io)
>>> np.all(arr == arr2)
True
>>> str_io = StringIO.StringIO()
>>> str_io.write(' ' * 10)
>>> str_io.write(arr.tostring('F'))
>>> arr2 = array_from_file((1,2,3), arr.dtype, str_io, 10)
>>> np.all(arr == arr2)
True
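The mechanics behind these examples are simple: seek to the offset, read the expected number of bytes, and view them as an array. A hypothetical sketch (mirroring the documented behavior, not the exact nipy code):

```python
import numpy as np

# Hypothetical sketch of reading an array from a file-like object at a
# given byte offset, in Fortran or C memory order.
def read_array(shape, dtype, infile, offset=0, order='F'):
    infile.seek(offset)                               # skip header bytes
    n_bytes = int(np.prod(shape)) * np.dtype(dtype).itemsize
    data = infile.read(n_bytes)                       # raw array bytes
    return np.ndarray(shape, dtype, buffer=data, order=order)
```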
Helper function for writing possibly scaled arrays to disk
Parameters:
  data : array
  out_dtype : dtype
  fileobj : file-like
  intercept : scalar, optional
  divslope : scalar, optional
  mn : scalar, optional
  mx : scalar, optional
  order : {'F', 'C'}, optional
  nan2zero : {True, False}, optional
Examples
>>> from StringIO import StringIO
>>> sio = StringIO()
>>> data = np.arange(10, dtype=np.float)
>>> array_to_file(data, np.float, sio)
>>> sio.getvalue() == data.tostring('F')
True
>>> sio.truncate(0)
>>> array_to_file(data, np.int16, sio)
>>> sio.getvalue() == data.astype(np.int16).tostring()
True
>>> sio.truncate(0)
>>> array_to_file(data.byteswap(), np.float, sio)
>>> sio.getvalue() == data.byteswap().tostring('F')
True
>>> sio.truncate(0)
>>> array_to_file(data, np.float, sio, order='C')
>>> sio.getvalue() == data.tostring('C')
True
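The core of such a scaled write can be sketched as below. This is a hypothetical simplification; the real routine also handles mn/mx clipping and the nan2zero option:

```python
import numpy as np

# Hypothetical sketch of a scaled array write: undo the (intercept,
# divslope) scaling, round when the target is an integer type, and
# write the cast bytes in the requested memory order.
def write_scaled(data, out_dtype, fileobj, intercept=0.0, divslope=1.0,
                 order='F'):
    scaled = (data - intercept) / divslope     # apply inverse scaling
    out_dtype = np.dtype(out_dtype)
    if out_dtype.kind in 'iu':                 # round before int cast
        scaled = np.round(scaled)
    fileobj.write(scaled.astype(out_dtype).tobytes(order))
```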
Calculate scaling and optional intercept for data
Parameters:
  data : array
  out_dtype : dtype
  allow_intercept : bool

Returns:
  scaling : None or float
  intercept : None or float
  mn : None or float
  mx : None or float
Return True if we can safely cast in_type to out_type
Parameters:
  in_type : numpy type
  out_dtype : numpy type
  has_intercept : bool, optional
  has_slope : bool, optional

Returns:
  tf : bool
Examples
>>> can_cast(np.float64, np.float32)
True
>>> can_cast(np.complex128, np.float32)
False
>>> can_cast(np.int64, np.float32)
True
>>> can_cast(np.float32, np.int16)
False
>>> can_cast(np.float32, np.int16, False, True)
True
>>> can_cast(np.int16, np.uint8)
False
>>> can_cast(np.int16, np.uint8, False, True)
False
>>> can_cast(np.int16, np.uint8, True, True)
True
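The examples suggest the underlying rule: a cast is acceptable if numpy can do it directly, or if slope and intercept scaling can absorb the range mismatch. A hypothetical sketch of that rule (not the nipy source):

```python
import numpy as np

# Hypothetical sketch of the casting rule implied by the examples
# above; the real nipy routine may differ in detail.
def can_safely_cast(in_type, out_type, has_intercept=False,
                    has_slope=False):
    if np.can_cast(in_type, out_type, casting='same_kind'):
        return True              # e.g. float64 -> float32, int -> float
    kin = np.dtype(in_type).kind
    kout = np.dtype(out_type).kind
    if kin == 'f' and kout in 'iu':
        # floats can be squeezed into an integer range with a slope
        return has_slope
    if kin in 'iu' and kout in 'iu':
        # signed -> unsigned also needs an intercept to shift negatives
        needs_intercept = kin == 'i' and kout == 'u'
        return has_slope and (has_intercept or not needs_intercept)
    return False                 # e.g. complex -> float never works
```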
Return range (min, max) of finite values of arr
Parameters:
  arr : array

Returns:
  mn : scalar
  mx : scalar
Examples
>>> a = np.array([[-1, 0, 1],[np.inf, np.nan, -np.inf]])
>>> finite_range(a)
(-1.0, 1.0)
>>> a = np.array([[np.nan],[np.nan]])
>>> finite_range(a)
(inf, -inf)
>>> a = np.array([[-3, 0, 1],[2,-1,4]], dtype=np.int)
>>> finite_range(a)
(-3, 4)
>>> a = np.array([[1, 0, 1],[2,3,4]], dtype=np.uint)
>>> finite_range(a)
(0, 4)
>>> a = a + 1j
>>> finite_range(a)
Traceback (most recent call last):
...
TypeError: Can only handle floats and (u)ints
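The behavior above amounts to masking out non-finite values before taking the min and max. A hypothetical sketch:

```python
import numpy as np

# Hypothetical sketch of finite_range: ignore NaN/inf when finding the
# (min, max) of an array; integer arrays cannot contain non-finite
# values, so use their min/max directly.
def finite_minmax(arr):
    arr = np.asarray(arr)
    if arr.dtype.kind in 'iu':
        return arr.min(), arr.max()
    if arr.dtype.kind == 'c':
        raise TypeError('Can only handle floats and (u)ints')
    finite = arr[np.isfinite(arr)]
    if finite.size == 0:
        return np.inf, -np.inf       # matches the all-NaN case above
    return finite.min(), finite.max()
```

The (inf, -inf) return for an all-NaN array looks odd, but it keeps downstream min/max comparisons well-behaved.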
Getter function for keys or methods of the form 'get_<key>'
Create full dt codes object from datatype codes
Make pretty string from mapping
Adjusts text column to print values on basis of longest key. Probably only sensible if keys are mainly strings.
You can pass in a callable that does clever things to get the values out of the mapping, given the names. By default, we just use __getitem__
Parameters:
  mapping : mapping
  getterfunc : None or callable, optional

Returns:
  str : string
Examples
>>> d = {'a key': 'a value'}
>>> print pretty_mapping(d)
a key : a value
>>> class C(object): # to control ordering, show get_ method
... def __iter__(self):
... return iter(('short_field','longer_field'))
... def __getitem__(self, key):
... if key == 'short_field':
... return 0
... if key == 'longer_field':
... return 'str'
... def get_longer_field(self):
... return 'method string'
>>> def getter(obj, key):
... # Look for any 'get_<name>' methods
... try:
... return obj.__getattribute__('get_' + key)()
... except AttributeError:
... return obj[key]
>>> print pretty_mapping(C(), getter)
short_field : 0
longer_field : method string
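The formatting described above, with the value column aligned to the longest key, can be sketched like this (hypothetical simplification of the routine):

```python
# Hypothetical sketch of pretty_mapping: left-pad each key to the
# width of the longest key, then join 'key : value' lines.
def pretty_map(mapping, getterfunc=None):
    if getterfunc is None:
        getterfunc = lambda obj, key: obj[key]   # default: plain indexing
    keys = list(mapping)
    width = max(len(str(k)) for k in keys)       # longest key sets column
    lines = ['%-*s : %s' % (width, k, getterfunc(mapping, k))
             for k in keys]
    return '\n'.join(lines)
```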
Return scaling and intercept min, max of data, given output type
Returns a scalefactor and intercept that best fit data with minimum mn and maximum mx into the range of a data type with minimum type_min and maximum type_max.
The calculated scaling is therefore:
scaled_data = (data-intercept) / scalefactor
Parameters:
  mn : scalar
  mx : scalar
  out_type : numpy type
  allow_intercept : bool

Returns:
  scalefactor : numpy scalar, dtype=np.maximum_sctype(np.float)
  intercept : numpy scalar, dtype=np.maximum_sctype(np.float)

>>> scale_min_max(0, 255, np.uint8, False)
(1.0, 0.0)
>>> scale_min_max(-128, 127, np.int8, False)
(1.0, 0.0)
>>> scale_min_max(0, 127, np.int8, False)
(1.0, 0.0)
>>> scaling, intercept = scale_min_max(0, 127, np.int8, True)
>>> np.allclose((0 - intercept) / scaling, -128)
True
>>> np.allclose((127 - intercept) / scaling, 127)
True
>>> scaling, intercept = scale_min_max(-10, -1, np.int8, True)
>>> np.allclose((-10 - intercept) / scaling, -128)
True
>>> np.allclose((-1 - intercept) / scaling, 127)
True
>>> scaling, intercept = scale_min_max(1, 10, np.int8, True)
>>> np.allclose((1 - intercept) / scaling, -128)
True
>>> np.allclose((10 - intercept) / scaling, 127)
True
Notes
The large integers lead to python long types as max / min for type. To contain the rounding error, we need to use the maximum numpy float types when casting to float.
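Solving scaled_data = (data - intercept) / scalefactor so that mn maps to type_min and mx maps to type_max gives scalefactor = (mx - mn) / (type_max - type_min) and intercept = mn - type_min * scalefactor. A hypothetical sketch of the intercept case, using a wide float as the notes suggest (the real routine also handles the no-intercept and degenerate mn == mx cases):

```python
import numpy as np

# Hypothetical sketch of the intercept branch of scale_min_max: map
# the data range [mn, mx] onto the full output integer range
# [t_min, t_max], computing in a wide float to contain rounding error.
def scale_with_intercept(mn, mx, out_type):
    info = np.iinfo(out_type)
    t_min, t_max = info.min, info.max
    big = np.longdouble                       # widest available float
    scaling = (big(mx) - big(mn)) / (big(t_max) - big(t_min))
    intercept = big(mn) - big(t_min) * scaling
    return scaling, intercept
```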