significantly simplify the code of these checks while extending them
to catch many more expressions!
* move the enabling/disabling of vectorized sin/cos to the architecture traits
* renaming, e.g. LU ---> FullPivLU
* split test framework: more robust, e.g. don't generate empty tests if a number is skipped
* make all remaining tests use that splitting, as needed.
* Fix 4x4 inversion (see stable branch)
* Transform::inverse() and geo_transform test: adapt to the new inverse() API; the test was also trying to instantiate inverse() for 3x4 matrices.
* CMakeLists: more robust regexp to parse the version number
* misc fixes in unit tests
* support resize() to the same size (no-op). FFT was another case where this makes life far easier (see the sketch below).
Hope that's OK with you, Gael, but indeed I don't use it in the ReturnByValue stuff.
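For illustration, a minimal sketch of the no-op resize() behavior mentioned above, assuming current Eigen semantics where resizing to the current size leaves the coefficients untouched:

```cpp
#include <Eigen/Core>
#include <iostream>

int main() {
    Eigen::MatrixXd m = Eigen::MatrixXd::Random(3, 3);
    // Resizing to the current size is a no-op: storage and coefficients
    // are left untouched, so generic code can call resize() unconditionally.
    m.resize(3, 3);
    std::cout << m << std::endl;
}
```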
FFT:
* Support MatrixBase (well, for types with direct memory access such as Map); see the sketch below
* adapt unit test
This allows limiting the number of instantiations of the big template method evalTo.
It also allows getting rid of the dummy MatrixBase::resize().
See "TODO" comment.
-- simplify by removing the 2nd template parameter
-- rename Functor to Derived, as it is now a usual CRTP
* Homogeneous:
-- in products, honor the Max sizes etc.
- support complex numbers
- big rewrite of the 2x2 kernel, much more robust
* Jacobi:
- fix weirdness in initial design, e.g. applyJacobiOnTheRight actually did the inverse transformation
- fully support complex numbers
- fix logic to decide whether to vectorize
- remove several clumsy methods
fix for complex numbers
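For reference, a small sketch of applying a Givens/Jacobi rotation with the present-day Eigen API (JacobiRotation together with applyOnTheLeft/applyOnTheRight; the item above refers to the older applyJacobiOnTheRight naming):

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::Vector2d v(3.0, 4.0);
    Eigen::JacobiRotation<double> G;
    // Build a Givens rotation G such that G^* applied to (3, 4)^T
    // zeroes the second entry.
    G.makeGivens(v.x(), v.y());
    // Apply it to rows 0 and 1 of v (here, the whole vector).
    v.applyOnTheLeft(0, 1, G.adjoint());
    std::cout << v.transpose() << std::endl;  // approximately (5, 0)
}
```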
- rename EvalBeforeAssignBit to MayAliasBit
- make .lazy() remove the MayAliasBit only, and mark it as deprecated
- add a NoAlias pseudo expression and a MatrixBase::noalias() function (see the sketch below)
Todo:
- we have to decide whether += and -= assume no aliasing by default
- once we agree on the API, update the Sparse module and the unit tests accordingly.
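A sketch of the intended usage of the new noalias() call, assuming the API described above:

```cpp
#include <Eigen/Dense>

int main() {
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(4, 4);
    Eigen::MatrixXd B = Eigen::MatrixXd::Random(4, 4);
    Eigen::MatrixXd C(4, 4);

    // Default assignment: the product may be evaluated into a temporary,
    // which stays correct even if C aliases A or B.
    C = A * B;

    // noalias(): the user guarantees that the destination does not alias
    // the right-hand side, so the product is evaluated directly into C.
    C.noalias() = A * B;
}
```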
to guarantee the precision of the output, which is very valuable.
Here, we guarantee that the diagonal matrix returned by the SVD is
actually diagonal, to machine precision.
Performance isn't bad at all, at 50% of the current Householder SVD
performance for a 200x200 matrix (no vectorization), and we have
lots of room for improvement.
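To make the precision claim concrete, here is a small check one could run with the JacobiSVD class as it exists in current Eigen (the text above predates the final API): reconstruct the matrix from U * S * V^* and look at the relative error, which should be on the order of machine precision:

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(200, 200);
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeThinU | Eigen::ComputeThinV);

    // Reconstruct A from the factorization and measure the relative error;
    // it should be close to machine precision.
    Eigen::MatrixXd reconstructed =
        svd.matrixU() * svd.singularValues().asDiagonal() * svd.matrixV().adjoint();
    std::cout << "relative error: "
              << (reconstructed - A).norm() / A.norm() << std::endl;
}
```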
- all specialized products now inherit ProductBase
- the default product evaluated by Assign is still here,
but it is currently enabled for small fixed sizes only
- => this significantly speeds up compilation for large matrices
- I left the OuterProduct specialization empty as an exercise...
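For the curious, a hypothetical, simplified sketch of what such an outer-product evaluation could look like: dst = lhs * rhs^T evaluated column by column as scaled copies of lhs. Names and structure are illustrative only, not Eigen's actual ProductBase code:

```cpp
#include <Eigen/Core>
#include <iostream>

// Hypothetical helper: evaluate the outer product of two column vectors
// one destination column at a time (each column is lhs scaled by rhs(j)).
template <typename Dst, typename Lhs, typename Rhs>
void evalOuterProductTo(Dst& dst, const Lhs& lhs, const Rhs& rhs) {
    dst.resize(lhs.size(), rhs.size());
    for (Eigen::Index j = 0; j < rhs.size(); ++j)
        dst.col(j) = lhs * rhs.coeff(j);
}

int main() {
    Eigen::Vector3d u(1, 2, 3), v(4, 5, 6);
    Eigen::MatrixXd result;
    evalOuterProductTo(result, u, v);
    std::cout << result << std::endl;  // same as u * v.transpose()
}
```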