order, one bit for enabling/disabling auto-alignment. If you want to
disable, do:
Matrix<float,4,1,Matrix_DontAlign>
The Matrix_ prefix is the only way I can see to avoid
ambiguity/pollution. The old RowMajor, ColMajor constants are
deprecated but remain for now.
* this prompted several improvements in matrix_storage. ei_aligned_array
renamed to ei_matrix_array and moved there. The %16==0 tests are now
centralized in a single place there.
* unalignedassert test: updated
* update FindEigen2.cmake from KDElibs
* determinant test: use VERIFY_IS_APPROX to fix false positives; add
testing of 1 big matrix
* Matrix: always inherit WithAlignedOperatorNew, regardless of
vectorization or not
* rename ei_alloc_stack to ei_aligned_stack_alloc
* mixingtypes test: disable vectorization as SSE intrinsics don't allow
mixing types and we just get compile errors there.
* use _mm_malloc/_mm_free on platforms other than Linux or MSVC (e.g., Cygwin, OS X)
* replace a lot of inline keywords by EIGEN_STRONG_INLINE to compensate for
poor MSVC inlining
Derived to MatrixBase.
* the optimization of eval() for Matrix now consists in a partial
specialization of ei_eval, which returns a reference type for Matrix.
No overriding of eval() in Matrix anymore. Consequence: careful,
ei_eval is no longer guaranteed to give a plain matrix type!
For that, use ei_plain_matrix_type, or the PlainMatrixType typedef.
* so lots of changes to adapt to that everywhere. Hope this doesn't
break (too much) MSVC compilation.
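  Example (a minimal sketch of the intended usage in generic code; foo and PlainType
  are hypothetical names, ei_plain_matrix_type is the helper described above):
    template<typename Derived>
    void foo(const MatrixBase<Derived>& x)
    {
      // ei_eval<Derived>::type may now be a reference type when Derived is a Matrix;
      // ei_plain_matrix_type always gives an actual plain Matrix type.
      typedef typename ei_plain_matrix_type<Derived>::type PlainType;
      PlainType tmp(x);
    }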
* add code examples for the new image() stuff.
* slightly lower the precision for floats in the unit tests, as
  we were already doing some workarounds in inverse.cpp and we got some
  failing tests.
- in matrix-matrix product, static assert on the two scalar types to be the same.
- Similarly in CwiseBinaryOp. POTENTIALLY CONTROVERSIAL: we no longer allow binary
  ops to take two different scalar types. The functors that we defined take two args
  of the same type anyway; also, we still allow the return type to be different.
  Again, the reason is that different scalar types are incompatible with vectorization.
  Better to have the user realize explicitly what mixing different numeric types costs
  in terms of performance.
  See the comment in the CwiseBinaryOp constructor, and the sketch after this list.
- This allowed fixing a small mistake in test/regression.cpp that mixed float and double
- Remove redundant semicolon (;) after static asserts
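  Example (a minimal sketch of what the restriction means in user code, assuming
  the usual MatrixXf/MatrixXd typedefs and the cast<>() member):
    MatrixXd d(2,2);
    MatrixXf f(2,2);
    // d = d + f;              // now triggers a static assertion: the scalar types differ
    d = d + f.cast<double>();  // make the mixing explicit (and pay the conversion cost)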
* remove the automatic resizing feature of operator =
* add function Matrix::set() to be used when the previous
behavior is wanted
* the default constructor of dynamic-size matrices now
creates a "null" matrix (data=0, rows = cols = 0)
instead of a 1x1 matrix
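  Example (a minimal sketch of the difference; MatrixXf is the usual dynamic-size
  typedef, set() is the new function mentioned above):
    MatrixXf a(2,2), b(3,3);
    a = b;      // no longer resizes a: the sizes must already match
    a.set(b);   // resizes a to 3x3 and copies b (the old operator= behavior)
    MatrixXf c; // default constructor: a "null" 0x0 matrix, not 1x1 anymore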
* fix UnixX typos ;)
* add a WithAlignedOperatorNew class with overloaded operator new
* make Matrix (and Quaternion, Transform, Hyperplane, etc.) use it
if needed, such that "*(new Vector4) = xpr" does not fail anymore.
* Please: make sure that your classes with fixed-size Eigen vector
or matrix members inherit WithAlignedOperatorNew
* add a ei_new_allocator STL memory allocator to use with STL containers.
This allocator really calls operator new on your types (unlike GCC's
new_allocator). Example:
std::vector<Vector4f> data(10);
will segfault if the vectorization is enabled, instead use:
std::vector<Vector4f,ei_new_allocator<Vector4f> > data(10);
NOTE: you only have to worry if you deal with fixed-size matrix types
with "sizeof(matrix_type)%16==0"...
* replaced the Flags template parameter of Matrix by StorageOrder
and moved it back to the 4th position such that we don't have to
worry about the two Max* template parameters
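  For example (a sketch, using the still-accepted RowMajor constant; the two Max*
  parameters keep their defaults):
    Matrix<float,3,4,RowMajor> m;  // StorageOrder is the 4th template parameter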
* extended EIGEN_USING_MATRIX_TYPEDEFS with the ei_* math functions
* add some explanations in the typedefs page
* expand a bit the new QuickStartGuide. Some placeholders (not a problem since
it's not even yet linked to from other pages). The point I want to make is
that it's super important to have fully compilable short programs (even
with compile instructions for the first one) not just small snippets, at
least at the beginning. Let's start with examples of compilable programs.
* remove the cast operators in the Geometry module: they are replaced by constructors
and new operator= in Matrix
* extended the operations supported by Rotation2D
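  A sketch of what such a conversion now looks like (values are arbitrary):
    Rotation2D<float> r(0.78f);
    Matrix2f m(r);  // conversion through a Matrix constructor...
    m = r;          // ...or through the new operator= in Matrix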
* rewrite in solveTriangular:
- merge the Upper and Lower specializations
- big optimization of the path for row-major triangular matrices
* fix .normalized() so that Random().normalized() works; since the return
type became complicated to write down, I just let it return an actual
vector, which is perhaps not optimal.
* add Sparse/CMakeLists.txt. I suppose that it was intentional that it
didn't have CMakeLists, but in <=2.0 releases I'll just manually remove
Sparse.
* Improve the efficiency of matrix*vector in unaligned cases
* Trivial fixes in the destructors of MatrixStorage
* Removed the matrixNorm check in test/product.cpp (makes the test twice as fast,
and it assumed the matrix product was correct while that is exactly what is being tested!)
* rework PacketMath and DummyPacketMath, make these actual template
specializations instead of just overriding by non-template inline
functions
* introduce ei_ploadt and ei_pstoret, make use of them in Map and Matrix
* remove Matrix::map() methods, use Map constructors instead.
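  For example (a sketch), where one previously wrote Vector4f::map(data):
    float data[4] = {1, 2, 3, 4};
    Map<Vector4f> v(data);  // use a Map constructor instead of the removed map() methods
    v *= 2.0f;              // the Map is writable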
* added some glue to Eigen/Core (SparseBit, ei_eval, Matrix)
* add two new sparse matrix types:
HashMatrix: based on std::map (for random writes)
LinkedVectorMatrix: array of linked vectors
(for outer coherent writes, e.g. to transpose a matrix)
* add a SparseSetter class to easily set/update any kind of matrices, e.g.:
{ SparseSetter<MatrixType,RandomAccessPattern> wrapper(mymatrix);
for (...) wrapper->coeffRef(rand(),rand()) = rand(); }
* automatic shallow copy for RValue
* and a lot of mess !
plus:
* remove the remaining ArrayBit related stuff
* don't use alloca in product for very large memory allocation
to "public:method()" i.e. reimplementing the generic method()
from MatrixBase.
improves compilation speed by 7%, reduces almost by half the call depth
of trivial functions, making gcc errors and application backtraces
nicer...
* introduce packet(int), make use of it in linear vectorized paths
--> completely fixes the slowdown noticed in benchVecAdd.
* generalize coeff(int) to linear-access xprs
* clarify the access flag bits
* rework api dox in Coeffs.h and util/Constants.h
* improve certain expressions' flags, allowing more vectorization
* fix bug in Block: start(int) and end(int) returned dyn*dyn size
* fix bug in Block: just because the Eval type has packet access
doesn't imply the block xpr should have it too.
** Much better organization
** Fix a few bugs
** Add the ability to unroll only the inner loop
** Add an unrolled path to the Like1D vectorization. Not well tested.
** Add placeholder for sliced vectorization. Unimplemented.
* Rework of corrected_flags:
** improve rules determining vectorizability
** for vectors, the storage order is irrelevant, so we tweak it
to allow vectorization of row-vectors.
* fix compilation in benchmark, and a warning in Transpose.
them in the ei_traits, so that they're guaranteed even if the user
specified his own non-default flags (like before).
Measured to not make compilation any slower.
flags. This ensures that unless explicitly messed up otherwise,
a Matrix type is equal to its own Eval type. This seriously reduces
the number of types instantiated. Measured +13% compile speed, -7%
binary size.
* Improve doc of Matrix template parameters.
(does not support complex and does not re-use the QR decomposition)
* Rewrite the cache friendly product to have only one instance per scalar type !
This significantly speeds up compilation time and reduces executable size.
The current drawback is that some trivial expressions, such as
conjugate or negate, might get evaluated.
* Renamed "cache optimal" to "cache friendly"
* Added the ability to directly access matrix data of some expressions via:
- the stride()/_stride() methods
- DirectAccessBit flag (replaces ReferencableBit)
* Introduce a new highly optimized matrix-matrix product for large
matrices. The code is still highly experimental and it is activated
only if you define EIGEN_WIP_PRODUCT at compile time.
Currently the third dimension of the product must be a multiple of
the packet size (4 for floats) and the right-hand side matrix
must be column-major.
Moreover, currently c = a*b; actually computes c += a*b !!
Therefore, the code is provided for experimentation purposes only!
These limitations will be fixed sooner or later so that this becomes
the default product implementation.
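  A sketch of how to try it under the current limitations (built with
  -DEIGEN_WIP_PRODUCT; setZero() and the sizes, multiples of the packet size,
  are just illustrative):
    MatrixXf a(512,512), b(512,512), c(512,512);
    // ... fill a and b ...
    c.setZero();  // needed for now, since c = a*b actually accumulates (c += a*b)
    c = a * b;    // uses the new experimental product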
- support dynamic sizes
- support arbitrary matrix sizes when the matrix can be seen as a 1D array
(except for fixed-size matrices, whose size in bytes must be a multiple of 16;
this is to allow compact storage of a vector of matrices)
Note that the explicit vectorization is still experimental and far from being completely tested.