Eigen-unsupported 3.3.4
Tensors are multidimensional arrays of elements. Elements are typically scalars, but more complex types such as strings are also supported.
You can manipulate a tensor with one of the following classes. They all are in the namespace Eigen.
This is the class to use to create a tensor and allocate memory for it. The class is templatized with the tensor datatype, such as float or int, and the tensor rank. The rank is the number of dimensions, for example rank 2 is a matrix.
Tensors of this class are resizable. For example, if you assign a tensor of a different size to a Tensor, that tensor is resized to match its new value.
Constructor for a Tensor. The constructor must be passed rank integers indicating the sizes of the instance along each of the rank dimensions.
```cpp
// Create a tensor of rank 3 of sizes 2, 3, 4.  This tensor owns
// memory to hold 24 floating point values (24 = 2 x 3 x 4).
Tensor<float, 3> t_3d(2, 3, 4);

// Resize t_3d by assigning a tensor of different sizes, but same rank.
t_3d = Tensor<float, 3>(3, 4, 3);
```
Constructor where the sizes for the constructor are specified as an array of values instead of an explicit list of parameters. The array type to use is Eigen::array<Eigen::Index>.
is actually the tree of tensor operators that will compute the addition of
expressions ensures that the sum is computed before any updates to Y
are done.
```cpp
Y = Y / (Y.sum(depth_dim).eval().reshape(dims2d).broadcast(bcast));
```
Note that an eval() around the full right hand side expression is not needed because the generated code has to compute the i-th value of the right hand side before assigning it to the left hand side.
However, if you were assigning the expression value to a shuffle of Y
then you would need to force an eval for correctness by adding an eval()
call for the right hand side:
```cpp
Y.shuffle(...) = (Y / (Y.sum(depth_dim).eval().reshape(dims2d).broadcast(bcast))).eval();
```
If you need to access only a few elements from the value of an expression you can avoid materializing the value in a full tensor by using a TensorRef.
A TensorRef is a small wrapper class for any Eigen Operation. It provides overloads for the () operator that let you access individual values in the expression. TensorRef is convenient, because the Operations themselves do not provide a way to access individual elements.
```cpp
// Create a TensorRef for the expression.  The expression is not
// evaluated yet.
TensorRef<Tensor<float, 3> > ref = ((t1 + t2) * 0.2f).exp();

// Use "ref" to access individual elements.  The expression is evaluated
// on the fly.
float at_0 = ref(0, 0, 0);
cout << ref(0, 1, 0);
```
Only use TensorRef when you need a subset of the values of the expression. TensorRef only computes the values you access. However note that if you are going to access all the values it will be much faster to materialize the results in a Tensor first.
In some cases, if the full Tensor result would be very large, you may save memory by accessing it as a TensorRef. But not always. So don't count on it.
The tensor library provides several implementations of the various operations such as contractions and convolutions. The implementations are optimized for different environments: single-threaded on CPU, multithreaded on CPU, or on a GPU using CUDA. Additional implementations may be added later.
You can choose which implementation to use with the device()
call. If you do not choose an implementation explicitly, the default implementation, which uses a single thread on the CPU, is used.
The default implementation has been optimized for recent Intel CPUs, taking advantage of SSE, AVX, and FMA instructions. Work is ongoing to tune the library on ARM CPUs. Note that you need to pass compiler-dependent flags to enable the use of SSE, AVX, and other instructions.
For example, the following code adds two tensors using the default single-threaded CPU implementation:
```cpp
Tensor<float, 2> a(30, 40);
Tensor<float, 2> b(30, 40);
Tensor<float, 2> c = a + b;
```
To choose a different implementation you have to insert a device()
call before the assignment of the result. For technical C++ reasons this requires that the Tensor for the result be declared on its own. This means that you have to know the size of the result.
```cpp
Eigen::Tensor<float, 2> c(30, 40);
c.device(...) = a + b;
```
The call to device()
must be the last call on the left of the operator=.
You must pass to the device()
call an Eigen device object. There are presently three devices you can use: DefaultDevice, ThreadPoolDevice and GpuDevice.
This is exactly the same as not inserting a device()
call.
```cpp
DefaultDevice my_device;
c.device(my_device) = a + b;
```
```cpp
// Create the Eigen ThreadPoolDevice.
Eigen::ThreadPoolDevice my_device(4 /* number of threads to use */);

// Now just use the device when evaluating expressions.
Eigen::Tensor<float, 2> c(30, 50);
c.device(my_device) = a.contract(b, dot_product_dims);
```
This is presently a bit more complicated than just using a thread pool device. You need to create a GPU device but you also need to explicitly allocate the memory for tensors with cuda.
In the documentation of the tensor methods and Operations we mention datatypes that are tensor-type specific:
Acts like an array of ints. Has an int size attribute, and can be indexed like an array to access individual values. Used to represent the dimensions of a tensor. See dimensions().
Acts like an int. Used for indexing tensors along their dimensions.
is the type of data stored in the tensor. You can pass any value that is convertible to that type.
Returns the tensor itself in case you want to chain another call.
```cpp
a.setConstant(12.3f);
cout << "Constant: " << endl << a << endl << endl;
=>
Constant:
12.3 12.3 12.3 12.3 12.3 12.3 12.3 12.3 12.3 12.3 12.3 12.3
```
Note that setConstant() can be used on any tensor where the element type has a copy constructor and an operator=():
```cpp
Eigen::Tensor<string, 2> a(2, 3);
a.setConstant("yolo");
cout << "String tensor: " << endl << a << endl << endl;
=>
String tensor:
yolo yolo yolo
yolo yolo yolo
```
Fills the tensor with zeros. Equivalent to setConstant(Scalar(0))
. Returns the tensor itself in case you want to chain another call.
```cpp
a.setZero();
cout << "Zeros: " << endl << a << endl << endl;
=>
Zeros:
0 0 0 0 0 0 0 0 0 0 0 0
```
Fills the tensor with explicit values specified in a std::initializer_list. The type of the initializer list depends on the type and rank of the tensor.
If the tensor has rank N, the initializer list must be nested N times. The most deeply nested lists must contain P scalars of the Tensor type where P is the size of the last dimension of the Tensor.
For example, for a TensorFixedSize<float, Sizes<2, 3>> the initializer list must contain 2 lists of 3 floats each.
and <Tensor-Type>::Index.
See struct UniformRandomGenerator in TensorFunctors.h for an example.
```cpp
// Custom number generator for use with setRandom().
struct MyRandomGenerator {
  // Default and copy constructors. Both are needed.
  MyRandomGenerator() { }
  MyRandomGenerator(const MyRandomGenerator& ) { }

  // Return a random value to be used.  "element_location" is the
  // location of the entry to set in the tensor, it can typically
  // be ignored.
  Scalar operator()(Eigen::DenseIndex element_location,
                    Eigen::DenseIndex /*unused*/ = 0) const {
    return <randomly generated value of type T>;
  }

  // Same as above but generates several numbers at a time.
  typename internal::packet_traits<Scalar>::type packetOp(
      Eigen::DenseIndex packet_location,
      Eigen::DenseIndex /*unused*/ = 0) const {
    return <a packet of randomly generated values>;
  }
};
```
You can also use one of the 2 random number generators that are part of the tensor library:
The Tensor, TensorFixedSize, and TensorRef classes provide the following accessors to access the tensor coefficients:
```cpp
const Scalar& operator()(const array<Index, NumIndices>& indices)
const Scalar& operator()(Index firstIndex, IndexTypes... otherIndices)
Scalar& operator()(const array<Index, NumIndices>& indices)
Scalar& operator()(Index firstIndex, IndexTypes... otherIndices)
```
The number of indices must be equal to the rank of the tensor. Moreover, these accessors are not available on tensor expressions. In order to access the values of a tensor expression, the expression must either be evaluated or wrapped in a TensorRef.
Returns a pointer to the storage for the tensor. The pointer is const if the tensor was const. This allows direct access to the data. The layout of the data depends on the tensor layout: RowMajor or ColMajor.
This access is usually only needed for special cases, for example when mixing Eigen Tensor code with other libraries.
Scalar is the type of data stored in the tensor.
```cpp
Eigen::Tensor<float, 2> a(3, 4);
float* a_data = a.data();
a_data[0] = 123.45f;
cout << "a(0, 0): " << a(0, 0);
=> a(0, 0): 123.45
```
All the methods documented below return non-evaluated tensor Operations. These can be chained: you can apply another Tensor Operation to the value returned by the method.
The chain of Operations is evaluated lazily, typically when it is assigned to a tensor. See "Controlling When Expressions Are Evaluated" for more details about their evaluation.
Returns a tensor of the same type and dimensions as the original tensor but where all elements have the value val
.
This is useful, for example, when you want to add or subtract a constant from a tensor, or multiply every element of a tensor by a scalar.
```cpp
Eigen::Tensor<float, 2> a(2, 3);
a.setConstant(1.0f);
Eigen::Tensor<float, 2> b = a + a.constant(2.0f);
Eigen::Tensor<float, 2> c = b * b.constant(0.2f);
cout << "a" << endl << a << endl << endl;
cout << "b" << endl << b << endl << endl;
cout << "c" << endl << c << endl << endl;
=>
a
1 1 1
1 1 1

b
3 3 3
3 3 3

c
0.6 0.6 0.6
0.6 0.6 0.6
```
Returns a tensor of the same type and dimensions as the current tensor but where all elements have random values.
This is for example useful to add random values to an existing tensor. The generation of random values can be customized in the same manner as for setRandom()
.
```cpp
Eigen::Tensor<float, 2> a(2, 3);
a.setConstant(1.0f);
Eigen::Tensor<float, 2> b = a + a.random();
cout << "a" << endl << a << endl << endl;
cout << "b" << endl << b << endl << endl;
=>
a
1 1 1
1 1 1

b
1.68038  1.5662   1.82329
0.788766 1.59688  0.395103
```
All these operations take a single input tensor as argument and return a tensor of the same type and dimensions as the tensor to which they are applied. The requested operations are applied to each element independently.
Returns a tensor of the same type and dimensions as the original tensor containing the opposite values of the original tensor.
```cpp
Eigen::Tensor<float, 2> a(2, 3);
a.setConstant(1.0f);
Eigen::Tensor<float, 2> b = -a;
cout << "a" << endl << a << endl << endl;
cout << "b" << endl << b << endl << endl;
=>
a
1 1 1
1 1 1

b
-1 -1 -1
-1 -1 -1
```
Returns a tensor of the same type and dimensions as the original tensor containing the square roots of the original tensor.
Returns a tensor of the same type and dimensions as the original tensor containing the inverse square roots of the original tensor.
Returns a tensor of the same type and dimensions as the original tensor containing the squares of the original tensor values.
Returns a tensor of the same type and dimensions as the original tensor containing the inverse of the original tensor values.
Returns a tensor of the same type and dimensions as the original tensor containing the exponential of the original tensor.
Returns a tensor of the same type and dimensions as the original tensor containing the natural logarithms of the original tensor.
Returns a tensor of the same type and dimensions as the original tensor containing the absolute values of the original tensor.
Returns a tensor of the same type and dimensions as the original tensor containing the coefficients of the original tensor to the power of the exponent.
The type of the exponent, Scalar, is always the same as the type of the tensor coefficients. For example, only integer exponents can be used in conjunction with tensors of integer values.
You can use cast() to lift this restriction. For example this computes cubic roots of an int Tensor:
```cpp
Eigen::Tensor<int, 2> a(2, 3);
a.setValues({{0, 1, 8}, {27, 64, 125}});
Eigen::Tensor<double, 2> b = a.cast<double>().pow(1.0 / 3.0);
cout << "a" << endl << a << endl << endl;
cout << "b" << endl << b << endl << endl;
=>
a
 0  1   8
27 64 125

b
0 1 2
3 4 5
```
Multiplies all the coefficients of the input tensor by the provided scale.
TODO
TODO
TODO
These operations take two input tensors as arguments. The 2 input tensors should be of the same type and dimensions. The result is a tensor of the same dimensions as the tensors to which they are applied, and unless otherwise specified it is also of the same type. The requested operations are applied to each pair of elements independently.
Returns a tensor of the same type and dimensions as the input tensors containing the coefficient wise sums of the inputs.
Returns a tensor of the same type and dimensions as the input tensors containing the coefficient wise differences of the inputs.
Returns a tensor of the same type and dimensions as the input tensors containing the coefficient wise products of the inputs.
Returns a tensor of the same type and dimensions as the input tensors containing the coefficient wise quotients of the inputs.
This operator is not supported for integer types.
Returns a tensor of the same type and dimensions as the input tensors containing the coefficient wise maximums of the inputs.
Returns a tensor of the same type and dimensions as the input tensors containing the coefficient wise minimums of the inputs.
The following logical operators are supported as well:
They all return a tensor of boolean values.
Selection is a coefficient-wise ternary operator that is the tensor equivalent to the if-then-else operation.
```cpp
Tensor<bool, 3> if_tensor = ...;
Tensor<float, 3> then_tensor = ...;
Tensor<float, 3> else_tensor = ...;
Tensor<float, 3> result = if_tensor.select(then_tensor, else_tensor);
```
The 3 arguments must be of the same dimensions, which will also be the dimension of the result. The 'if' tensor must be of type boolean, the 'then' and the 'else' tensor must be of the same type, which will also be the type of the result.
Each coefficient in the result is equal to the corresponding coefficient in the 'then' tensor if the corresponding value in the 'if' tensor is true. If not, the resulting coefficient will come from the 'else' tensor.
Tensor contractions are a generalization of the matrix product to the multidimensional case.
```cpp
// Create 2 matrices using tensors of rank 2
Eigen::Tensor<int, 2> a(2, 3);
a.setValues({{1, 2, 3}, {6, 5, 4}});
Eigen::Tensor<int, 2> b(3, 2);
b.setValues({{1, 2}, {4, 5}, {5, 6}});

// Compute the traditional matrix product
array<IndexPair<int>, 1> product_dims = { IndexPair<int>(1, 0) };
Eigen::Tensor<int, 2> AB = a.contract(b, product_dims);

// Compute the product of the transpose of the matrices
array<IndexPair<int>, 1> transposed_product_dims = { IndexPair<int>(0, 1) };
Eigen::Tensor<int, 2> AtBt = a.contract(b, transposed_product_dims);
```
A Reduction operation returns a tensor with fewer dimensions than the original tensor. The values in the returned tensor are computed by applying a reduction operator to slices of values from the original tensor. You specify the dimensions along which the slices are made.
The Eigen Tensor library provides a set of predefined reduction operators such as maximum() and sum(), and lets you define additional operators by implementing a few methods from a reductor template.
All reduction operations take a single parameter of type
in TensorFunctors.h for information on how to implement a reduction operator.
A Scan operation returns a tensor with the same dimensions as the original tensor. The operation performs an inclusive scan along the specified axis, which means it computes a running total along the axis for a given reduction operation. If the reduction operation corresponds to summation, then this computes the prefix sum of the tensor along the given axis.
Example:
```cpp
// Create a tensor of 2 dimensions
Eigen::Tensor<int, 2> a(2, 3);
a.setValues({{1, 2, 3}, {4, 5, 6}});

// Scan it along the second dimension (1) using summation
Eigen::Tensor<int, 2> b = a.cumsum(1);

// The result is a tensor with the same size as the input
cout << "a" << endl << a << endl << endl;
cout << "b" << endl << b << endl << endl;
=>
a
1 2 3
4 5 6

b
1 3  6
4 9 15
```
Perform a scan by summing consecutive entries.
Perform a scan by multiplying consecutive entries.
Returns a tensor that is the output of the convolution of the input tensor with the kernel, along the specified dimensions of the input tensor. The dimension size for dimensions of the output tensor which were part of the convolution will be reduced by the formula: output_dim_size = input_dim_size - kernel_dim_size + 1 (requires: input_dim_size >= kernel_dim_size). The dimension sizes for dimensions that were not part of the convolution will remain the same. Performance of the convolution can depend on the length of the stride(s) of the input tensor dimension(s) along which the convolution is computed (the first dimension has the shortest stride for ColMajor, whereas RowMajor's shortest stride is for the last dimension).
```cpp
// Compute convolution along the second and third dimension.
Tensor<float, 4, DataLayout> input(3, 3, 7, 11);
Tensor<float, 2, DataLayout> kernel(2, 2);
Tensor<float, 4, DataLayout> output(3, 2, 6, 11);
input.setRandom();
kernel.setRandom();

Eigen::array<ptrdiff_t, 2> dims({1, 2});  // Specify second and third dimension for convolution.
output = input.convolve(kernel, dims);

for (int i = 0; i < 3; ++i) {
  for (int j = 0; j < 2; ++j) {
    for (int k = 0; k < 6; ++k) {
      for (int l = 0; l < 11; ++l) {
        const float result = output(i,j,k,l);
        const float expected = input(i,j+0,k+0,l) * kernel(0,0) +
                               input(i,j+1,k+0,l) * kernel(1,0) +
                               input(i,j+0,k+1,l) * kernel(0,1) +
                               input(i,j+1,k+1,l) * kernel(1,1);
        VERIFY_IS_APPROX(result, expected);
      }
    }
  }
}
```
These operations return a Tensor with different dimensions than the original Tensor. They can be used to access slices of tensors, see them with different dimensions, or pad tensors with additional data.
Returns a view of the input tensor that has been reshaped to the specified new dimensions. The argument new_dims is an array of Index values. The rank of the resulting tensor is equal to the number of elements in new_dims.
The product of all the sizes in the new dimension array must be equal to the number of elements in the input tensor.
```cpp
// Increase the rank of the input tensor by introducing a new dimension
// of size 1.
Tensor<float, 2> input(7, 11);
array<int, 3> three_dims{{7, 11, 1}};
Tensor<float, 3> result = input.reshape(three_dims);

// Decrease the rank of the input tensor by merging 2 dimensions.
array<int, 1> one_dim{{7 * 11}};
Tensor<float, 1> result2 = input.reshape(one_dim);
```
This operation does not move any data in the input tensor, so the resulting contents of a reshaped Tensor depend on the data layout of the original Tensor.
For example this is what happens when you reshape()
a 2D ColMajor tensor to one dimension:
```cpp
Eigen::Tensor<float, 2, Eigen::ColMajor> a(2, 3);
a.setValues({{0.0f, 100.0f, 200.0f}, {300.0f, 400.0f, 500.0f}});
Eigen::array<Eigen::DenseIndex, 1> one_dim({3 * 2});
Eigen::Tensor<float, 1, Eigen::ColMajor> b = a.reshape(one_dim);
cout << "b" << endl << b << endl;
=>
b
  0
300
100
400
200
500
```
This is what happens when the 2D Tensor is RowMajor:
```cpp
Eigen::Tensor<float, 2, Eigen::RowMajor> a(2, 3);
a.setValues({{0.0f, 100.0f, 200.0f}, {300.0f, 400.0f, 500.0f}});
Eigen::array<Eigen::DenseIndex, 1> one_dim({3 * 2});
Eigen::Tensor<float, 1, Eigen::RowMajor> b = a.reshape(one_dim);
cout << "b" << endl << b << endl;
=>
b
  0
100
200
300
400
500
```
The reshape operation is an lvalue. In other words, it can be used on the left side of the assignment operator.
The previous example can be rewritten as follows:
```cpp
Eigen::Tensor<float, 2, Eigen::ColMajor> a(2, 3);
a.setValues({{0.0f, 100.0f, 200.0f}, {300.0f, 400.0f, 500.0f}});
Eigen::array<Eigen::DenseIndex, 2> two_dim({2, 3});
Eigen::Tensor<float, 1, Eigen::ColMajor> b(6);
b.reshape(two_dim) = a;
cout << "b" << endl << b << endl;
=>
b
  0
300
100
400
200
500
```
Note that "b" itself was not reshaped but that instead the assignment is done to the reshape view of b.
Returns a copy of the input tensor whose dimensions have been reordered according to the specified permutation. The argument shuffle is an array of Index values. Its size is the rank of the input tensor. It must contain a permutation of 0, 1, ..., rank - 1. The i-th dimension of the output tensor equals the size of the shuffle[i]-th dimension of the input tensor. For example:
```cpp
// Shuffle all dimensions to the left by 1.
Tensor<float, 3> input(20, 30, 50);
// ... set some values in input.
Tensor<float, 3> output = input.shuffle({1, 2, 0});

eigen_assert(output.dimension(0) == 30);
eigen_assert(output.dimension(1) == 50);
eigen_assert(output.dimension(2) == 20);
```
Indices into the output tensor are shuffled accordingly to formulate indices into the input tensor. For example, one can assert in the above code snippet that:
```cpp
eigen_assert(output(3, 7, 11) == input(11, 3, 7));
```
In general, one can assert that
```cpp
eigen_assert(output(..., indices[shuffle[i]], ...) == input(..., indices[i], ...))
```
The shuffle operation results in an lvalue, which means that it can be assigned to. In other words, it can be used on the left side of the assignment operator.
Let's rewrite the previous example to take advantage of this feature:
```cpp
// Shuffle all dimensions to the left by 1.
Tensor<float, 3> input(20, 30, 50);
// ... set some values in input.
Tensor<float, 3> output(30, 50, 20);
output.shuffle({2, 0, 1}) = input;
```
Returns a view of the input tensor that strides (skips stride-1 elements) along each of the dimensions. The argument strides is an array of Index values. The dimensions of the resulting tensor are ceil(input_dimensions[i] / strides[i]).
For example this is what happens when you stride()
a 2D tensor:
```cpp
Eigen::Tensor<int, 2> a(4, 3);
a.setValues({{0, 100, 200}, {300, 400, 500},
             {600, 700, 800}, {900, 1000, 1100}});
Eigen::array<Eigen::DenseIndex, 2> strides({3, 2});
Eigen::Tensor<int, 2> b = a.stride(strides);
cout << "b" << endl << b << endl;
=>
b
  0  200
900 1100
```
It is possible to assign a tensor to a stride:
```cpp
Tensor<float, 3> input(20, 30, 50);
// ... set some values in input.
Tensor<float, 3> output(40, 90, 200);
output.stride({2, 3, 4}) = input;
```
Returns a sub-tensor of the given tensor. For each dimension i, the slice is made of the coefficients stored between offset[i] and offset[i] + extents[i] in the input tensor.
```cpp
Eigen::Tensor<int, 2> a(4, 3);
a.setValues({{0, 100, 200}, {300, 400, 500},
             {600, 700, 800}, {900, 1000, 1100}});
Eigen::array<int, 2> offsets = {1, 0};
Eigen::array<int, 2> extents = {2, 2};
Eigen::Tensor<int, 2> slice = a.slice(offsets, extents);
cout << "a" << endl << a << endl;
=>
a
   0  100  200
 300  400  500
 600  700  800
 900 1000 1100

cout << "slice" << endl << slice << endl;
=>
slice
300 400
600 700
```
A chip is a special kind of slice. It is the subtensor at the given offset in the dimension dim. The returned tensor has one fewer dimension than the input tensor: the dimension dim is removed.
For example, a matrix chip would be either a row or a column of the input matrix.
```cpp
Eigen::Tensor<int, 2> a(4, 3);
a.setValues({{0, 100, 200}, {300, 400, 500},
             {600, 700, 800}, {900, 1000, 1100}});
Eigen::Tensor<int, 1> row_3 = a.chip(2, 0);
Eigen::Tensor<int, 1> col_2 = a.chip(1, 1);
cout << "a" << endl << a << endl;
=>
a
   0  100  200
 300  400  500
 600  700  800
 900 1000 1100

cout << "row_3" << endl << row_3 << endl;
=>
row_3
600 700 800

cout << "col_2" << endl << col_2 << endl;
=>
col_2
100 400 700 1000
```
It is possible to assign values to a tensor chip since the chip operation is an lvalue. For example:
```cpp
Eigen::Tensor<int, 1> a(3);
a.setValues({100, 200, 300});
Eigen::Tensor<int, 2> b(2, 3);
b.setZero();
b.chip(0, 0) = a;
cout << "a" << endl << a << endl;
=>
a
100 200 300
cout << "b" << endl << b << endl;
=>
b
100 200 300
  0   0   0
```
Returns a view of the input tensor that reverses the order of the coefficients along a subset of the dimensions. The argument reverse is an array of boolean values that indicates whether or not the order of the coefficients should be reversed along each of the dimensions. This operation preserves the dimensions of the input tensor.
For example this is what happens when you reverse()
the first dimension of a 2D tensor:
```cpp
Eigen::Tensor<int, 2> a(4, 3);
a.setValues({{0, 100, 200}, {300, 400, 500},
             {600, 700, 800}, {900, 1000, 1100}});
Eigen::array<bool, 2> reverse({true, false});
Eigen::Tensor<int, 2> b = a.reverse(reverse);
cout << "a" << endl << a << endl << "b" << endl << b << endl;
=>
a
   0  100  200
 300  400  500
 600  700  800
 900 1000 1100

b
 900 1000 1100
 600  700  800
 300  400  500
   0  100  200
```
Returns a view of the input tensor in which the input is replicated one to many times. The broadcast argument specifies how many copies of the input tensor need to be made in each of the dimensions.
```cpp
Eigen::Tensor<int, 2> a(2, 3);
a.setValues({{0, 100, 200}, {300, 400, 500}});
Eigen::array<int, 2> bcast({3, 2});
Eigen::Tensor<int, 2> b = a.broadcast(bcast);
cout << "a" << endl << a << endl << "b" << endl << b << endl;
=>
a
  0 100 200
300 400 500

b
  0 100 200   0 100 200
300 400 500 300 400 500
  0 100 200   0 100 200
300 400 500 300 400 500
  0 100 200   0 100 200
300 400 500 300 400 500
```
TODO
Returns a view of the input tensor in which the input is padded with zeros.
```cpp
Eigen::Tensor<int, 2> a(2, 3);
a.setValues({{0, 100, 200}, {300, 400, 500}});
Eigen::array<pair<int, int>, 2> paddings;
paddings[0] = make_pair(0, 1);
paddings[1] = make_pair(2, 3);
Eigen::Tensor<int, 2> b = a.pad(paddings);
cout << "a" << endl << a << endl << "b" << endl << b << endl;
=>
a
  0 100 200
300 400 500

b
  0   0   0   0
  0   0   0   0
  0 100 200   0
300 400 500   0
  0   0   0   0
  0   0   0   0
  0   0   0   0
```
Returns a tensor of coefficient patches extracted from the input tensor, where each patch is of dimension specified by 'patch_dims'. The returned tensor has one greater dimension than the input tensor, which is used to index each patch. The patch index in the output tensor depends on the data layout of the input tensor: the patch index is the last dimension in ColMajor layout, and the first dimension in RowMajor layout.
For example, given the following input tensor:
```cpp
Eigen::Tensor<float, 2, DataLayout> tensor(3, 4);
tensor.setValues({{0.0f, 1.0f, 2.0f, 3.0f},
                  {4.0f, 5.0f, 6.0f, 7.0f},
                  {8.0f, 9.0f, 10.0f, 11.0f}});

cout << "tensor: " << endl << tensor << endl;
=>
tensor:
 0  1  2  3
 4  5  6  7
 8  9 10 11
```
Six 2x2 patches can be extracted and indexed using the following code:
```cpp
Eigen::Tensor<float, 3, DataLayout> patch;
Eigen::array<ptrdiff_t, 2> patch_dims;
patch_dims[0] = 2;
patch_dims[1] = 2;
patch = tensor.extract_patches(patch_dims);
for (int k = 0; k < 6; ++k) {
  cout << "patch index: " << k << endl;
  for (int i = 0; i < 2; ++i) {
    for (int j = 0; j < 2; ++j) {
      if (DataLayout == ColMajor) {
        cout << patch(i, j, k) << " ";
      } else {
        cout << patch(k, i, j) << " ";
      }
    }
    cout << endl;
  }
}
```
This code results in the following output when the data layout is ColMajor:
```
patch index: 0
0 1
4 5
patch index: 1
4 5
8 9
patch index: 2
1 2
5 6
patch index: 3
5 6
9 10
patch index: 4
2 3
6 7
patch index: 5
6 7
10 11
```
This code results in the following output when the data layout is RowMajor (note: the set of patches is the same as in ColMajor, but they are indexed differently):
```
patch index: 0
0 1
4 5
patch index: 1
1 2
5 6
patch index: 2
2 3
6 7
patch index: 3
4 5
8 9
patch index: 4
5 6
9 10
patch index: 5
6 7
10 11
```
const Index row_stride, const Index col_stride, const PaddingType padding_type)
Returns a tensor of coefficient image patches extracted from the input tensor, which is expected to have dimensions ordered as follows (depending on the data layout of the input tensor, and the number of additional dimensions 'N'):
*) ColMajor
   1st dimension: channels (of size d)
   2nd dimension: rows (of size r)
   3rd dimension: columns (of size c)
   4th-Nth dimension: time (for video) or batch (for bulk processing).
*) RowMajor (reverse order of ColMajor)
   1st-Nth dimension: time (for video) or batch (for bulk processing).
   N+1'th dimension: columns (of size c)
   N+2'th dimension: rows (of size r)
   N+3'th dimension: channels (of size d)
The returned tensor has one greater dimension than the input tensor, which is used to index each patch. The patch index in the output tensor depends on the data layout of the input tensor: the patch index is the 4th dimension in ColMajor layout, and the 4th from the last dimension in RowMajor layout.
For example, given the following input tensor with the following dimension sizes:
   *) depth: 2
   *) rows: 3
   *) columns: 5
   *) batch: 7
```cpp
Tensor<float, 4> tensor(2, 3, 5, 7);
Tensor<float, 4, RowMajor> tensor_row_major = tensor.swap_layout();
```
2x2 image patches can be extracted and indexed using the following code:
*) 2D patch: ColMajor (patch indexed by second-to-last dimension)
```cpp
Tensor<float, 5> twod_patch;
twod_patch = tensor.extract_image_patches<2, 2>();
// twod_patch.dimension(0) == 2
// twod_patch.dimension(1) == 2
// twod_patch.dimension(2) == 2
// twod_patch.dimension(3) == 3*5
// twod_patch.dimension(4) == 7
```
*) 2D patch: RowMajor (patch indexed by the second dimension)
```cpp
Tensor<float, 5, RowMajor> twod_patch_row_major;
twod_patch_row_major = tensor_row_major.extract_image_patches<2, 2>();
// twod_patch_row_major.dimension(0) == 7
// twod_patch_row_major.dimension(1) == 3*5
// twod_patch_row_major.dimension(2) == 2
// twod_patch_row_major.dimension(3) == 2
// twod_patch_row_major.dimension(4) == 2
```
Returns a tensor of type T with the same dimensions as the original tensor. The returned tensor contains the values of the original tensor converted to type T.
```cpp
Eigen::Tensor<float, 2> a(2, 3);
Eigen::Tensor<int, 2> b = a.cast<int>();
```
This can be useful for example if you need to do element-wise division of Tensors of integers. This is not currently supported by the Tensor library but you can easily cast the tensors to floats to do the division:
```cpp
Eigen::Tensor<int, 2> a(2, 3);
a.setValues({{0, 1, 2}, {3, 4, 5}});
Eigen::Tensor<int, 2> b =
    (a.cast<float>() / a.constant(2).cast<float>()).cast<int>();
cout << "a" << endl << a << endl << endl;
cout << "b" << endl << b << endl << endl;
=>
a
0 1 2
3 4 5

b
0 0 1
1 2 2
```
TODO
Scalar values are often represented by tensors of size 1 and rank 1. It would be more logical and user friendly to use tensors of rank 0 instead. For example, Tensor<T, N>::maximum() currently returns a Tensor<T, 1>. Similarly, the inner product of two 1d tensors (through contractions) returns a 1d tensor. In the future these operations might be updated to return 0d tensors instead.