* Add short documentation for Array class
* Put all classes explicitly in Core module (where applicable)
* Section on Modules in Quick Reference Guide
* Put Page 7 after Page 6 in Contents :)
* Introduction of strides-at-compile-time so that, for example, the optimized code
really knows when it needs to evaluate to a temporary (sketch below)
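Not part of the original entry: a minimal sketch of how compile-time strides surface
in current Eigen releases (a Map with a fixed InnerStride), assuming Eigen 3 headers:

    #include <Eigen/Dense>

    // The inner stride is baked into the Map type, so expression code can
    // reason about it at compile time instead of inspecting it at runtime.
    typedef Eigen::Map<const Eigen::VectorXf, 0, Eigen::InnerStride<2> > Strided;

    int main()
    {
      float data[8] = {0, 1, 2, 3, 4, 5, 6, 7};
      Strided every_other(data, 4);  // views 0, 2, 4, 6
      static_assert(Strided::InnerStrideAtCompileTime == 2,
                    "stride known to the compiler");
      return every_other.sum() == 12.0f ? 0 : 1;
    }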
* StorageKind / XprKind
* Quaternion::setFromTwoVectors: use JacobiSVD instead of SVD
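For context (not from the original commit), typical usage in current Eigen; the
SVD-based path matters when the two vectors are nearly opposite:

    #include <Eigen/Geometry>

    int main()
    {
      Eigen::Vector3f a = Eigen::Vector3f::UnitX();
      Eigen::Vector3f b = -a + Eigen::Vector3f(0, 1e-4f, 0);  // almost opposite to a
      Eigen::Quaternionf q;
      q.setFromTwoVectors(a, b);  // rotation mapping a onto the direction of b
      return 0;
    }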
* ComplexSchur: support the 1x1 case
* use the new compile-time strides (big simplification in Assign.h)
* axe (Inner|Outer)StrideAtCompileTime that were just introduced
* ei_int_if_dynamic now asserts that the size is the expected one; adapt Block.h accordingly
* add rowStride() / colStride() in DenseBase
* implement innerStride() / outerStride() everywhere needed
because thanks to the previous commit this is not needed anymore
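A hedged illustration of these accessors as spelled in current Eigen releases:

    #include <Eigen/Dense>
    #include <iostream>

    int main()
    {
      Eigen::MatrixXf m(4, 6);  // column-major by default
      // innerStride: step between consecutive entries of a column;
      // outerStride: step between two consecutive columns.
      std::cout << m.innerStride() << " " << m.outerStride() << "\n";  // 1 4
      // rowStride()/colStride() express the same information per dimension:
      std::cout << m.rowStride() << " " << m.colStride() << "\n";      // 1 4
      return 0;
    }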
* add a more general ForceAlignedAccess expression which can be used for any expression.
It is already used by StableNorm.h.
* let the user tell Map whether the passed pointer
is aligned or not. This is done using the Aligned constant:
Map<MatrixType,Aligned>(data);
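In current releases the same promise reads as follows (Aligned survives as an
alias of Aligned16); a minimal sketch:

    #include <Eigen/Dense>

    int main()
    {
      alignas(16) float buffer[4] = {1, 2, 3, 4};
      // The Aligned option is the user's promise that buffer is 16-byte
      // aligned, enabling aligned packet loads/stores on the mapped data.
      Eigen::Map<Eigen::Vector4f, Eigen::Aligned> v(buffer);
      v *= 2.0f;
      return 0;
    }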
* rename ForceAligned to EnforceAlignedAccess, update its doc,
and emphasize that this is mainly internal
* try to be clever in matrix ctors and operator=: be lazy when we can, always allow
copying a row vector into a column vector (sketch below), check the template parameters,
and try to factor the code better
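A minimal sketch of the row-into-column copy, which still works in current Eigen:

    #include <Eigen/Dense>

    int main()
    {
      Eigen::RowVector3f r(1, 2, 3);
      Eigen::Vector3f c;
      c = r;  // allowed: vectors of equal size may change orientation on copy
      return c(2) == 3.0f ? 0 : 1;
    }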
* add missing copy ctor in UnalignedType
* fix bug in the traits of DiagonalProduct
* renaming: EIGEN_TUNE_FOR_CPU_CACHE_SIZE
* update the dox a little
- in matrix-matrix product, static assert that the two scalar types are the same.
- Similarly in CwiseBinaryOp. POTENTIALLY CONTROVERSIAL: we no longer allow binary
ops to take two different scalar types. The functors that we defined take two args
of the same type anyway; also, we still allow the return type to be different.
Again, the reason is that different scalar types are incompatible with vectorization.
Better to have the user realize explicitly what mixing different numeric types costs
in terms of performance (see the sketch after this list).
See the comment in the CwiseBinaryOp constructor.
- This allowed fixing a small mistake in test/regression.cpp, which mixed float and double
- Remove redundant semicolon (;) after static asserts
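A sketch of what this looks like from the user side, in current Eigen spelling
(the actual static assert message differs):

    #include <Eigen/Dense>

    int main()
    {
      Eigen::Vector3f f(1, 2, 3);
      Eigen::Vector3d d(4, 5, 6);
      // Eigen::Vector3d bad = f + d;  // compile error: mixed numeric types
      Eigen::Vector3d ok = f.cast<double>() + d;  // the cast makes the cost explicit
      return ok(0) == 5.0 ? 0 : 1;
    }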
* replaced the Flags template parameter of Matrix with StorageOrder,
and moved it back to the 4th position so that we don't have to
worry about the two Max* template parameters (see below)
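The resulting signature as it still reads today; Options (carrying the storage
order) sits 4th, ahead of the Max* sizes:

    #include <Eigen/Core>

    // Matrix<Scalar, Rows, Cols, Options, MaxRows, MaxCols>
    typedef Eigen::Matrix<float, 3, 3, Eigen::RowMajor> RowMajor3x3;
    // The Max* sizes only need spelling out when they differ from Rows/Cols:
    typedef Eigen::Matrix<float, Eigen::Dynamic, Eigen::Dynamic,
                          Eigen::ColMajor, 4, 4> UpTo4x4;  // no heap allocation

    int main() { RowMajor3x3 a; UpTo4x4 b(2, 3); return 0; }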
* extended EIGEN_USING_MATRIX_TYPEDEFS with the ei_* math functions
- added a MapBase base xpr on top of which Map and the specialization
of Block are implemented
- MapBase forces both aligned loads (and aligned stores, see below) in expressions
such as "x.block(...) += other_expr"
* Significant vectorization improvement:
- added an AlignedBit flag meaning the first coeff/packet is aligned,
which avoids generating extra code to deal with a first unaligned part
- removed all unaligned stores when not unrolling
- removed unaligned loads in Sum when the input has the DirectAccessBit flag
* Some code simplification in CacheFriendly product
* Some minor documentation improvements
* faster matrix-matrix and matrix-vector products (especially in unaligned cases)
* faster tridiagonalization (make it use our matrix-vector implementation)
Others:
* fix Flags of Map
* split the test_product to two smaller ones
* Improve the efficiency of matrix*vector in unaligned cases
* Trivial fixes in the destructors of MatrixStorage
* Removed the matrixNorm check in test/product.cpp (the test is twice as fast now,
and that check assumed the matrix product was correct while verifying exactly that!!)
* rework PacketMath and DummyPacketMath, making them actual template
specializations instead of just overriding them with non-template inline
functions
* introduce ei_ploadt and ei_pstoret, make use of them in Map and Matrix
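These are internal (today the ei_ prefix is the internal namespace), but a rough
sketch of their contract, assuming a current Eigen checkout:

    #include <Eigen/Core>

    int main()
    {
      typedef Eigen::internal::packet_traits<float>::type Packet;
      alignas(64) float data[32] = {0};
      // The Alignment template parameter tells the load/store whether it may
      // assume an aligned address; Map and Matrix pick it per expression.
      Packet p = Eigen::internal::ploadt<Packet, Eigen::Aligned>(data);
      Eigen::internal::pstoret<float, Packet, Eigen::Unaligned>(data + 1, p);
      return 0;
    }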
* remove Matrix::map() methods, use Map constructors instead (see the example
at the end of this section).
to "public:method()" i.e. reimplementing the generic method()
from MatrixBase.
improves compilation speed by 7%, reduces the call depth of trivial
functions by almost half, and makes gcc errors and application backtraces
nicer...
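The surviving idiom for the Map-constructor replacement mentioned above
(current Eigen spelling):

    #include <Eigen/Dense>

    int main()
    {
      float data[6] = {1, 2, 3, 4, 5, 6};
      // Instead of a Matrix::map() factory, construct a Map directly:
      Eigen::Map<Eigen::MatrixXf> m(data, 2, 3);
      m(0, 0) = 42.0f;  // writes through to data[0]
      return 0;
    }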