MatrixStorage returning a null pointer). For instance, this is very
useful to make Tridiagonalization compile for 1x1 matrices
* fix LLT and eigensolver for 1x1 matrix
* Add Eigen/StdVector header.
Including it #includes <vector> and "Core" and generates a partial
specialization of std::vector<T> for T=Eigen::Matrix<...>
that will work even with vectorizable fixed-size Eigen types
(working around a design issue in the C++ STL); see the usage sketch below.
* Add a unit test.
CCMAIL: alex.stapleton@gmail.com
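A minimal usage sketch, assuming the specialization is generated just by
including the header as described above (the exact mechanics changed in later
Eigen releases, e.g. via Eigen::aligned_allocator, so treat this as illustrative):

    #include <Eigen/StdVector>  // pulls in <vector> and Eigen/Core

    int main()
    {
        // Vector4f is a vectorizable fixed-size type with 16-byte alignment
        // requirements; the specialized std::vector keeps every element
        // correctly aligned, which a stock std::vector could not guarantee.
        std::vector<Eigen::Vector4f> v(10, Eigen::Vector4f::Zero());
        v.push_back(Eigen::Vector4f::Ones());
        return v.size() == 11 ? 0 : 1;
    }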
ei_aligned_malloc now really behaves like malloc
(untyped, doesn't call any constructor)
ei_aligned_new is the typed variant, which does call the constructors
EIGEN_MAKE_ALIGNED_OPERATOR_NEW now takes the class name as parameter
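A sketch of how the pieces fit together; the ei_* helpers are internal and
their signatures here are only as described above, and the class-name
parameter of the macro follows this entry (later releases dropped it):

    #include <Eigen/Core>

    struct Node
    {
        Eigen::Vector4f pos;   // fixed-size vectorizable member, needs 16-byte alignment

        // Overloads operator new/delete so heap-allocated Nodes keep 'pos'
        // aligned; per this entry the macro takes the class name.
        EIGEN_MAKE_ALIGNED_OPERATOR_NEW(Node)
    };

    void example()
    {
        Node* n = new Node;    // routed through the aligned operator new
        delete n;
        // For raw buffers, ei_aligned_malloc gives untyped, ctor-free memory
        // and ei_aligned_new<T>(n) is the typed, ctor-calling variant
        // (internal helpers; behavior as described above).
    }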
order, one bit for enabling/disabling auto-alignment. If you want to
disable, do:
Matrix<float,4,1,Matrix_DontAlign>
The Matrix_ prefix is the only way I can see to avoid
ambiguity/pollution. The old RowMajor, ColMajor constants are
deprecated but remain for now.
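A hypothetical illustration of the new option bit; the constant name and its
position in the template follow this entry (later releases spell it differently):

    #include <Eigen/Core>

    // Default: auto-alignment on, the coefficients are kept 16-byte aligned
    // so the type stays vectorizable.
    typedef Eigen::Matrix<float, 4, 1>                            AlignedVec4;

    // Auto-alignment disabled via the new option bit: handy when the object
    // must live in a packed struct or a memory-mapped buffer, at the cost of
    // losing vectorization for it.
    typedef Eigen::Matrix<float, 4, 1, Eigen::Matrix_DontAlign>   UnalignedVec4;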
* this prompted several improvements in matrix_storage: ei_aligned_array
was renamed to ei_matrix_array and moved there, and the %16==0 tests are
now centralized in one place there.
* unalignedassert test: updated
* update FindEigen2.cmake from KDElibs
* determinant test: use VERIFY_IS_APPROX to fix false positives; add
testing of 1 big matrix
* remove the automatic resizing feature of operator=
* add function Matrix::set() to be used when the previous
behavior is wanted (see the sketch below)
* the default constructor of dynamic-size matrices now
creates a "null" matrix (data=0, rows = cols = 0)
instead of a 1x1 matrix
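A sketch of the new semantics described by the three items above; the names
are as given in this entry (set() is era-specific, so treat it as illustrative):

    #include <Eigen/Core>

    void example(const Eigen::MatrixXf& src)
    {
        Eigen::MatrixXf a;      // default ctor: "null" matrix, data=0, rows=cols=0
        // a = src;             // operator= no longer resizes 'a' to match src
        a.set(src);             // set() resizes 'a' to src's dimensions, then copies

        Eigen::MatrixXf b(src.rows(), src.cols());
        b = src;                // fine: the sizes already match
    }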
* fix UnixX typos ;)
* Improve the efficiency of matrix*vector in unaligned cases
* Trivial fixes in the destructors of MatrixStorage
* Removed the matrixNorm check in test/product.cpp (the test is now twice
as fast, and that check assumed the matrix product was correct while that
is exactly what the test verifies!)
It basically performs 4 dot products at once, reducing loads of the vector and improving
instruction scheduling. With 3 cache-friendly algorithms, we now handle all product
configurations with outstanding performance for large matrices.
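One possible shape of that trick, shown as a plain scalar sketch on a
row-major layout; this is not Eigen's actual kernel, only the idea:

    // Each x[j] is loaded once and reused by four independent accumulators,
    // so vector loads drop by 4x and the CPU gets independent dependency
    // chains to schedule.
    void gemv_rowmajor_4(const float* A, const float* x, float* y, int rows, int cols)
    {
        int i = 0;
        for (; i + 4 <= rows; i += 4)
        {
            float a0 = 0.f, a1 = 0.f, a2 = 0.f, a3 = 0.f;
            for (int j = 0; j < cols; ++j)
            {
                const float xj = x[j];
                a0 += A[(i + 0) * cols + j] * xj;
                a1 += A[(i + 1) * cols + j] * xj;
                a2 += A[(i + 2) * cols + j] * xj;
                a3 += A[(i + 3) * cols + j] * xj;
            }
            y[i] = a0; y[i + 1] = a1; y[i + 2] = a2; y[i + 3] = a3;
        }
        for (; i < rows; ++i)   // leftover rows, one dot product at a time
        {
            float acc = 0.f;
            for (int j = 0; j < cols; ++j)
                acc += A[i * cols + j] * x[j];
            y[i] = acc;
        }
    }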
- support dynamic sizes
- support arbitrary matrix sizes when the matrix can be seen as a 1D array
(except for fixed-size matrices, where the size in bytes must be a multiple of 16;
this is to allow compact storage of a vector of matrices)
Note that the explicit vectorization is still experimental and far from completely tested.
Currently only the following platform/operations are supported:
- SSE2 compatible architecture
- compiler compatible with intel's SSE2 intrinsics
- float, double and int data types
- fixed size matrices with a storage major dimension multiple of 4 (or 2 for double)
- scalar-matrix product, component wise: +,-,*,min,max
- matrix-matrix product only if the left matrix is vectorizable and column major
or the right matrix is vectorizable and row major, e.g.:
a.transpose() * b is not vectorized with the default column major storage.
To use it you must define EIGEN_VECTORIZE and EIGEN_INTEL_PLATFORM.
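A small example of which expressions fall on the vectorized path under the
constraints listed above; the preprocessor defines are the ones named in this
entry (they were renamed in later releases):

    // Build with SSE2 enabled and, per the note above, with
    // -DEIGEN_VECTORIZE -DEIGEN_INTEL_PLATFORM.
    #include <Eigen/Core>

    void example(const Eigen::Matrix4f& a, const Eigen::Matrix4f& b)
    {
        Eigen::Matrix4f c;

        c = a + b;               // component-wise op on a fixed-size float type: vectorized
        c = 0.5f * a;            // scalar-matrix product: vectorized
        c = a * b;               // left operand vectorizable and column major: vectorized
        c = a.transpose() * b;   // not vectorized with the default column-major storage
    }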
internal classes: AaBb -> ei_aa_bb
IntAtRunTimeIfDynamic -> ei_int_if_dynamic
unify UNROLLING_LIMIT (there was no reason for operator= to use
a higher limit)
etc...
* functors are no longer passed as template template parameters
(this makes it possible to write templated functors!)
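A rough before/after sketch of that change; the class names are illustrative,
not Eigen's actual internals:

    // Before: the functor had to be passed as a template template parameter,
    // so the expression could only instantiate Functor<Scalar> itself.
    template<template<typename> class Functor, typename MatrixType>
    struct OldCwiseOp { /* ... */ };

    // After: the functor is an ordinary type parameter, so a caller can pass
    // any functor type, including one that is itself a template instance.
    template<typename Scalar>
    struct ScalarClampOp
    {
        ScalarClampOp(Scalar lo, Scalar hi) : m_lo(lo), m_hi(hi) {}
        Scalar operator()(Scalar x) const { return x < m_lo ? m_lo : (x > m_hi ? m_hi : x); }
        Scalar m_lo, m_hi;
    };

    template<typename Functor, typename MatrixType>
    struct NewCwiseOp { /* ... */ };

    // e.g. NewCwiseOp<ScalarClampOp<float>, SomeMatrix> now just works.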
* Main page: extended compiler discussion
* A small hack to support gcc 3.4 and 4.0 (see the main page)
* Fix a cast type issue in Cast
* Various doxygen updates (mainly Cwise stuff and added doxygen groups
in MatrixBase to split the huge member list, still not perfect though)
* Updated Gael's email address
and long double.
-define scalar-multiple operators only for the current Scalar type;
thanks to Gael for explaining how to make the compiler understand
when automatic casting is needed.
-make ScalarMultiple take only 1 template param, again.
We lose some flexibility, especially when dealing with complex numbers,
but we gain a lot of extensibility to new scalar types.
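A sketch of what that design buys in practice; the calls are illustrative and
the operator signatures are only as implied by the description above:

    #include <Eigen/Core>

    void example(const Eigen::Matrix3d& m)
    {
        // operator* is declared only for the matrix's own Scalar (double here);
        // other arithmetic types reach it through the usual implicit conversions.
        Eigen::Matrix3d a = 2.0 * m;    // native Scalar multiple
        Eigen::Matrix3d b = 2 * m;      // int literal converts to double first
        Eigen::Matrix3d c = m * 0.5f;   // float converts to double as well
        (void)a; (void)b; (void)c;
    }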
Rework the matrix storage to ensure optimal sizeof in all cases, while
keeping the decoupling of matrix sizes versus storage sizes.
Also fixing (recently introduced) bugs caused by unwanted
reallocations of the buffers.
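What "optimal sizeof" means here, as a rough check; the compile-time
assertion is only an illustration and assumes the usual 4-byte float:

    #include <Eigen/Core>

    // A fixed-size matrix stores nothing but its coefficients...
    static_assert(sizeof(Eigen::Matrix<float, 3, 1>) == 3 * sizeof(float),
                  "no hidden size/pointer members in the fixed-size case");
    // ...while a dynamic-size matrix stores a pointer plus only the
    // dimensions that are actually dynamic (both of them for MatrixXf,
    // only the column count for Matrix<float, 3, Eigen::Dynamic>).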
dimension. The advantage is that evaluating a dynamic-sized block in a fixed-size
matrix no longer causes a dynamic memory allocation. Another new thing:
IntAtRunTimeIfDynamic allows storing an integer at zero cost if it is known at
compile time.
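A minimal re-implementation of that idea for illustration only, not Eigen's
actual code (the names Dynamic and IntIfDynamic are stand-ins):

    const int Dynamic = -1;        // sentinel meaning "known only at run time"

    template<int Value>
    struct IntIfDynamic            // compile-time case: an empty class,
    {                              // so it can be stored at essentially zero cost
        explicit IntIfDynamic(int) {}
        int value() const { return Value; }
        void setValue(int) {}
    };

    template<>
    struct IntIfDynamic<Dynamic>   // run-time case: really holds the int
    {
        explicit IntIfDynamic(int v) : m_value(v) {}
        int value() const { return m_value; }
        void setValue(int v) { m_value = v; }
        int m_value;
    };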