* try to be clever in matrix ctors and operator=: be lazy when we can, always allow
  copying a row-vector into a column-vector, check the template parameters,
  and try to factor the code better
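For instance, the row-vector to column-vector copy now reads like this (a minimal
sketch; the fixed-size Matrix types used here are standard Eigen, but the exact
behavior is only what the item above states):

    #include <Eigen/Core>
    void example()
    {
      Eigen::Matrix<float,1,3> r;      // a row-vector
      r << 1.f, 2.f, 3.f;
      Eigen::Matrix<float,3,1> c = r;  // copying it into a column-vector is allowed
    }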
* add missing copy ctor in UnalignedType
* fix bug in the traits of DiagonalProduct
* renaming: EIGEN_TUNE_FOR_CPU_CACHE_SIZE
* update the dox a little
* ei_aligned_malloc now really behaves like a malloc (untyped, doesn't call ctor)
* ei_aligned_new is the typed variant calling ctor
* EIGEN_MAKE_ALIGNED_OPERATOR_NEW now takes the class name as parameter
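Putting the three items together, usage looks roughly like this (a sketch only:
ei_aligned_free, ei_aligned_delete and the exact signatures/namespacing are
assumptions, not spelled out in this log):

    #include <Eigen/Core>
    struct MyClass
    {
      Eigen::Vector4f v;                        // fixed-size, vectorizable member
      EIGEN_MAKE_ALIGNED_OPERATOR_NEW(MyClass)  // macro now takes the class name
    };
    void example()
    {
      // untyped, malloc-like: no constructor runs (free variant assumed)
      void* raw = Eigen::ei_aligned_malloc(64);
      Eigen::ei_aligned_free(raw);
      // typed variant: constructs 4 floats (signature assumed)
      float* data = Eigen::ei_aligned_new<float>(4);
      Eigen::ei_aligned_delete(data, 4);
      MyClass* p = new MyClass;                 // goes through the aligned operator new
      delete p;
    }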
* Matrix: always inherit WithAlignedOperatorNew, regardless of whether
  vectorization is enabled or not
* rename ei_alloc_stack to ei_aligned_stack_alloc
* mixingtypes test: disable vectorization as SSE intrinsics don't allow
mixing types and we just get compile errors there.
* use _mm_malloc/_mm_free on platforms other than Linux or MSVC (e.g., Cygwin, OS X)
* replace many inline keywords with EIGEN_STRONG_INLINE to compensate for
  MSVC's poor inlining
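The macro presumably maps to the compiler's forced-inline attribute; a plausible
definition (an assumption, not quoted from the sources):

    #ifdef _MSC_VER
      #define EIGEN_STRONG_INLINE __forceinline  // MSVC: force inlining
    #else
      #define EIGEN_STRONG_INLINE inline         // elsewhere, plain inline
    #endif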
* added a meta.cpp unit test
* EIGEN_TUNE_FOR_L2_CACHE_SIZE now represents the L2 block size in bytes (whence the ei_meta_sqrt...)
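A compile-time integer square root is what turns a byte count into a per-dimension
block size. A self-contained sketch in the spirit of ei_meta_sqrt (binary search
over template parameters; the real implementation may differ):

    // sketch of a compile-time integer square root
    template<int Y, int InfX = 0, int SupX = (Y <= 1 ? Y : Y/2),
             bool Done = ((SupX - InfX) <= 1)>
    struct meta_sqrt
    {
      enum {
        MidX    = (InfX + SupX) / 2,
        TakeInf = (MidX * MidX > Y),        // mid point overshoots: keep lower half
        NewInf  = TakeInf ? InfX : MidX,
        NewSup  = TakeInf ? MidX : SupX,
        ret     = meta_sqrt<Y, NewInf, NewSup>::ret
      };
    };
    template<int Y, int InfX, int SupX>
    struct meta_sqrt<Y, InfX, SupX, true>   // interval narrowed to <= 1: done
    {
      enum { ret = (SupX*SupX <= Y) ? SupX : InfX };
    };
    // e.g. a float block dimension could be
    // meta_sqrt<EIGEN_TUNE_FOR_L2_CACHE_SIZE / sizeof(float)>::ret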
* added a CustomizeEigen.dox page
* added a TOC to QuickStartGuide.dox
* formatted output via IoFormat, e.g.:
    IoFormat OctaveFmt(4, AlignCols, ", ", ";\n", "", "", "[", "]");
    cout << mat.format(OctaveFmt);
  The first "4" is the precision. Documentation missing.
* Some compilation fixes
- added a MapBase base xpr on top of which Map and the specialization
  of Block are implemented
- MapBase forces aligned loads (and, see below, aligned stores) in expressions
  such as "x.block(...) += other_expr" (see the sketch after this list)
* Significant vectorization improvement:
  - added an AlignedBit flag meaning the first coeff/packet is aligned,
    which avoids generating extra code to deal with a first unaligned part
  - removed all unaligned stores when not unrolling
  - removed unaligned loads in Sum when the input has the DirectAccessBit flag
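As referenced above, the kind of expression where MapBase and the AlignedBit
flag now pay off (standard Eigen API; the alignment handling itself is internal):

    #include <Eigen/Core>
    void accumulate(Eigen::MatrixXf& x, const Eigen::MatrixXf& other)
    {
      // a Block assigned through MapBase: when the block's first coefficient
      // is aligned, AlignedBit lets the evaluator use aligned loads and
      // stores with no scalar prologue for a first unaligned part
      x.block(0, 0, other.rows(), other.cols()) += other;
    }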
* Some code simplification in CacheFriendly product
* Some minor documentation improvements
* faster matrix-matrix and matrix-vector products (especially for unaligned cases)
* faster tridiagonalization (made it use our matrix-vector impl.)
Others:
* fix Flags of Map
* split the test_product into two smaller ones
* Improve the efficiency of matrix*vector in unaligned cases
* Trivial fixes in the destructors of MatrixStorage
* Removed the matrixNorm in test/product.cpp (it assumed the matrix product
  was ok while checking exactly that !!)
* Added a very efficient special path for matrix * vector and
  vector * row-major products (twice faster). It basically performs 4 dot
  products at once, reducing loads of the vector and improving instruction
  scheduling. Currently, it is enabled only if the matrix has the
  DirectAccessBit flag and the product is "large enough". With 3 cache
  friendly algorithms, we now handle all product configurations with
  outstanding performance for large matrices.
  Added the respective unit tests in test/product.cpp.
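The 4-at-once idea, as a plain scalar sketch (not the actual Eigen kernel,
which is vectorized and cache-blocked): each loaded x[j] feeds four
independent dot-product accumulators.

    #include <cstddef>
    // y = A*x with A stored row-major; 4 rows per pass so each x[j] load is
    // reused by 4 dot products, and the 4 accumulators can be scheduled
    // independently
    void gemv_4rows(const float* A, std::size_t rows, std::size_t cols,
                    const float* x, float* y)
    {
      std::size_t i = 0;
      for (; i + 4 <= rows; i += 4)
      {
        const float *r0 = A + (i+0)*cols, *r1 = A + (i+1)*cols,
                    *r2 = A + (i+2)*cols, *r3 = A + (i+3)*cols;
        float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        for (std::size_t j = 0; j < cols; ++j)
        {
          const float xj = x[j];            // one load, four uses
          s0 += r0[j]*xj; s1 += r1[j]*xj;
          s2 += r2[j]*xj; s3 += r3[j]*xj;
        }
        y[i] = s0; y[i+1] = s1; y[i+2] = s2; y[i+3] = s3;
      }
      for (; i < rows; ++i)                 // leftover rows, one at a time
      {
        float s = 0;
        for (std::size_t j = 0; j < cols; ++j)
          s += A[i*cols + j] * x[j];
        y[i] = s;
      }
    }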
* added some glue to Eigen/Core (SparseBit, ei_eval, Matrix)
* add two new sparse matrix types:
  - HashMatrix: based on std::map (for random writes)
  - LinkedVectorMatrix: array of linked vectors
    (for outer coherent writes, e.g. to transpose a matrix)
* add a SparseSetter class to easily set/update any kind of matrix, e.g.:
{ SparseSetter<MatrixType,RandomAccessPattern> wrapper(mymatrix);
for (...) wrapper->coeffRef(rand(),rand()) = rand(); }
* automatic shallow copy for RValue
* and a lot of mess !
plus:
* remove the remaining ArrayBit related stuff
* don't use alloca in product for very large memory allocation
(does not support complex and does not re-use the QR decomposition)
* Rewrite the cache friendly product to have only one instance per scalar type !
  This significantly speeds up compilation time and reduces executable size.
  The current drawback is that some trivial expressions, such as conjugate or
  negate, might have to be evaluated into temporaries.
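The idea, sketched with hypothetical names (the real kernel is blocked and
vectorized): the product is templated only on the scalar and takes raw
pointers, so every matrix expression funnels into the same instantiation,
at the price of materializing wrappers like conjugate() or negate() first.

    #include <cstddef>
    // hypothetical sketch: one instantiation per scalar type, fed with raw
    // pointers; a naive triple loop stands in for the real blocked kernel
    template <typename Scalar>
    void cacheFriendlyProduct(const Scalar* lhs, const Scalar* rhs, Scalar* res,
                              std::size_t rows, std::size_t cols, std::size_t depth)
    {
      for (std::size_t i = 0; i < rows; ++i)
        for (std::size_t j = 0; j < cols; ++j)
        {
          Scalar s = Scalar(0);
          for (std::size_t k = 0; k < depth; ++k)
            s += lhs[i*depth + k] * rhs[k*cols + j]; // row-major operands assumed
          res[i*cols + j] = s;
        }
    }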
* Renamed "cache optimal" to "cache friendly"
* Added the ability to directly access matrix data of some expressions via:
- the stride()/_stride() methods
- DirectAccessBit flag (replaces ReferencableBit)
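For instance (stride() as described in this log; it was later replaced by
innerStride()/outerStride() in Eigen 3):

    #include <Eigen/Core>
    void rawAccess(Eigen::MatrixXf& m)   // assumes m is at least 1x2
    {
      float* p = m.data();   // raw pointer, available thanks to DirectAccessBit
      int s = m.stride();    // distance between the starts of two columns
      p[0 + 1*s] = 42.f;     // writes coefficient (0,1) directly
    }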