introduce ei_is_diagonal to check whether an expression is diagonal
DiagonalCoeffs ---> Diagonal and allow Index to be Dynamic
-> add MatrixBase::diagonal(int) with unit test and doc
* add Homogeneous expression for a vector and a set of vectors (aka a matrix)
=> the next step will be to overload operator*
* add homogeneous normalization (again for a vector and a set of vectors)
* add a Replicate expression (with uni-directional replication facilities)
=> for all of them I'll add examples once we agree on the API; a tentative
sketch follows this list
* fix gcc-4.4 warnings
* rename reverse.cpp to array_reverse.cpp
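A tentative sketch of these accessors: diagonal(int) matches the entry above, while the homogeneous() and replicate() spellings are assumptions, since the API was still under discussion at this point.

  #include <Eigen/Core>
  #include <Eigen/Geometry>   // module layout for the new expressions is assumed
  using namespace Eigen;

  void sketch()
  {
    Matrix4f m = Matrix4f::Random();
    Vector4f d  = m.diagonal();     // main diagonal
    VectorXf d1 = m.diagonal(1);    // first super-diagonal (size 3); -1 would be the sub-diagonal

    Vector3f v(1, 2, 3);
    Vector4f h = v.homogeneous();   // assumed spelling: appends a 1 -> (1, 2, 3, 1)
    MatrixXf r = v.replicate(1, 4); // assumed spelling: 3x1 vector replicated into 3x4
    (void)d; (void)d1; (void)h; (void)r;
  }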
That means a lot of features which were available for sparse matrices
via the dense (and super slow) implementation are no longer available.
All features which make sense for sparse matrices (i.e., can be implemented
efficiently) will be implemented soon, but don't expect an API as rich as
the dense path's.
Other changes:
* no block(), row(), col() anymore.
* instead, use .innerVector() to get a column or row of a matrix
  (see the sketch after this list).
* .segment(), start() and end() will be back soon; not sure about block().
* faster cwise product
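A minimal sketch of the innerVector() replacement, written against the sparse API spelling that later stabilized; the point is only that the j-th inner vector is the j-th column of a column-major matrix (or the j-th row of a row-major one).

  #include <Eigen/Sparse>
  using namespace Eigen;

  void sketch(const SparseMatrix<double>& A)  // column-major storage by default
  {
    // previously A.col(0); now:
    SparseVector<double> col0 = A.innerVector(0);
    // for a row-major matrix, innerVector(i) would be the i-th row instead
    (void)col0;
  }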
order, one bit for enabling/disabling auto-alignment. If you want to
disable it, do:
Matrix<float,4,1,Matrix_DontAlign>
(a fuller sketch follows this list). The Matrix_ prefix is the only way
I can see to avoid ambiguity/pollution. The old RowMajor and ColMajor
constants are deprecated but remain for now.
* this prompted several improvements in matrix_storage: ei_aligned_array
was renamed to ei_matrix_array and moved there, and the %16==0 tests are
now centralized in one place there.
* unalignedassert test: updated
* update FindEigen2.cmake from KDElibs
* determinant test: use VERIFY_IS_APPROX to fix false positives; add
testing of one big matrix
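A fuller sketch of the auto-alignment switch; the surrounding struct is hypothetical and only illustrates the typical reason for opting out, namely embedding a fixed-size vector in storage whose alignment you don't control.

  #include <Eigen/Core>
  using namespace Eigen;

  // A 4x1 float vector normally requires 16-byte alignment so it can be
  // vectorized; Matrix_DontAlign in the options parameter turns that
  // requirement off, trading away aligned (vectorized) loads/stores.
  typedef Matrix<float, 4, 1, Matrix_DontAlign> UnalignedVector4f;

  struct Packet               // hypothetical unaligned container
  {
    char tag;
    UnalignedVector4f v;      // safe even if Packet is not 16-byte aligned
  };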
- in matrix-matrix product, static assert that the two scalar types are the same.
- Similarly in CwiseBinaryOp. POTENTIALLY CONTROVERSIAL: we no longer allow binary
ops to take two different scalar types. The functors that we defined take two args
of the same type anyway; also, we still allow the return type to be different.
Again, the reason is that different scalar types are incompatible with vectorization.
Better to have the user realize explicitly what mixing different numeric types
costs in terms of performance (see the sketch after this list).
See comment in CwiseBinaryOp constructor.
- This allowed fixing a small mistake in test/regression.cpp that mixed float and double.
- Remove redundant semicolons (;) after static asserts
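A sketch of what the restriction means in user code; the cast<>() call is the usual explicit conversion, while the commented-out line only shows the kind of expression that now hits the static assert.

  #include <Eigen/Core>
  using namespace Eigen;

  void sketch()
  {
    Matrix3f f = Matrix3f::Random();
    Matrix3d d = Matrix3d::Random();

    // Matrix3d bad = d + f;            // rejected: mixed float/double operands
    Matrix3d ok = d + f.cast<double>(); // the cost of the conversion is explicit
    (void)ok;
  }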
* replaced the Flags template parameter of Matrix by StorageOrder
and moved it back to the 4th position so that we don't have to
worry about the two Max* template parameters (see the sketch below)
* extended EIGEN_USING_MATRIX_TYPEDEFS with the ei_* math functions
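A minimal sketch of the new parameter order; the typedef name is made up, and RowMajor is the (deprecated but still available) constant mentioned above.

  #include <Eigen/Core>
  using namespace Eigen;

  // Matrix<Scalar, Rows, Cols, StorageOrder, MaxRows, MaxCols>:
  // with StorageOrder in the 4th slot, the two Max* parameters can be
  // left at their defaults.
  typedef Matrix<float, Dynamic, Dynamic, RowMajor> RowMajorMatrixXf;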
- added a MapBase base xpr on top of which Map and the specialization
of Block are implemented
- MapBase forces both aligned loads (and aligned stores, see below) in expressions
such as "x.block(...) += other_expr"
* Significant vectorization improvement:
- added an AlignedBit flag meaning the first coeff/packet is aligned,
which avoids generating extra code to deal with an initial unaligned part
- removed all unaligned stores when not unrolling
- removed unaligned loads in Sum when the input has the DirectAccessBit flag
* Some code simplification in CacheFriendly product
* Some minor documentation improvements
not allow to easily get the rank), fix a bug (which could have been
triggered by matrices having coefficients of very different
magnitudes).
Part: add an assert to prevent hard-to-find bugs
Swap: update comments
Note: in fact, inverse() always uses partial pivoting because the algorithm
currently used doesn't make sense with complete pivoting. No numerical
stability issue so far, even at size 200x200. If any problem shows up, we
can of course reimplement inverse() on top of LU (see the sketch below).
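For reference, a sketch of "inverse on top of LU": solve A*X = I with the LU decomposition. It is written against the lu().solve() spelling of later Eigen releases, so treat the exact calls as assumptions.

  #include <Eigen/Core>
  #include <Eigen/LU>
  using namespace Eigen;

  // Invert by solving A * X = I instead of using the dedicated inverse() path.
  MatrixXd inverse_via_lu(const MatrixXd& A)
  {
    return A.lu().solve(MatrixXd::Identity(A.rows(), A.cols()));
  }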
might be twice as fast for small fixed-size matrices
* added a sparse triangular solver (sparse version
of inverseProduct)
* various other improvements in the Sparse module
* rework PacketMath and DummyPacketMath, make these actual template
specializations instead of just overloading with non-template inline
functions
* introduce ei_ploadt and ei_pstoret, make use of them in Map and Matrix
* remove Matrix::map() methods, use Map constructors instead (see the Map sketch below).
* added some glue to Eigen/Core (SparseBit, ei_eval, Matrix)
* add two new sparse matrix types:
HashMatrix: based on std::map (for random writes)
LinkedVectorMatrix: array of linked vectors
(for outer coherent writes, e.g. to transpose a matrix)
* add a SparseSetter class to easily set/update any kind of matrix, e.g.:
{ SparseSetter<MatrixType,RandomAccessPattern> wrapper(mymatrix);
for (...) wrapper->coeffRef(rand(),rand()) = rand(); }
* automatic shallow copy for RValue
* and a lot of mess!
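As noted above, the static Matrix::map() helpers are gone; a minimal sketch of the Map constructor that replaces them, written against the Map signature of released Eigen versions (the raw buffer is just for illustration).

  #include <Eigen/Core>
  using namespace Eigen;

  void sketch()
  {
    float data[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };

    // construct a Map expression directly over the existing buffer
    Map<MatrixXf> m(data, 2, 4);
    m(0, 0) = 42.f;   // writes through to data[0]
  }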
plus:
* remove the remaining ArrayBit related stuff
* don't use alloca in product for very large memory allocation
* introduce packet(int), make use of it in linear vectorized paths
--> completely fixes the slowdown noticed in benchVecAdd.
* generalize coeff(int) to linear-access xprs
* clarify the access flag bits
* rework api dox in Coeffs.h and util/Constants.h
* improve certain expressions' flags, allowing more vectorization
* fix bug in Block: start(int) and end(int) returned dyn*dyn size
* fix bug in Block: just because the Eval type has packet access
doesn't imply the block xpr should have it too.
* make the conj functor vectorizable: it is just the identity in the real case,
and complex doesn't use the vectorized path anyway.
* fix bug in Block: a 3x1 block in a 4x4 matrix (all fixed-size)
should not be vectorizable, since in fixed-size we are assuming
the size to be a multiple of packet size. (Or would you prefer
Vector3d to be flagged "packetaccess" even though no packet access
is possible on vectors of that type?)
* rename:
isOrtho for vectors ---> isOrthogonal
isOrtho for matrices ---> isUnitary
* add normalize() (see the sketch after this list)
* reimplement normalized() with the quotient1 functor
(could come back to redux after it has been vectorized,
and could serve as a starting point for that);
also make the abs2 functor vectorizable (for real types).
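A quick sketch of the renamed and newly added calls; the operands are arbitrary.

  #include <Eigen/Core>
  using namespace Eigen;

  void sketch()
  {
    Vector3d a(1, 0, 0), b(0, 2, 0);

    bool ortho   = a.isOrthogonal(b);                 // was isOrtho for vectors
    bool unitary = Matrix3d::Identity().isUnitary();  // was isOrtho for matrices

    Vector3d c = b.normalized();   // unit-length copy (leaves b untouched)
    b.normalize();                 // new: normalizes b in place
    (void)ortho; (void)unitary; (void)c;
  }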
* use ProductReturnType<>::Type to get the correct Product xpr type
(see the sketch after this list)
* Product is no longer instantiated for xpr types which are evaluated
* vectorization of "a.transpose() * b" for the normal product (small and fixed-size matrices)
* some cleanup
* removed ArrayBase
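A sketch of where ProductReturnType<>::Type helps: generic code that wants to name the product expression type without evaluating it. The function name is made up, and the exact spelling of the trait at this revision is an assumption.

  #include <Eigen/Core>
  using namespace Eigen;

  // Return the product *expression* of two matrix expressions without
  // evaluating it; ProductReturnType names the correct Product xpr type.
  // The returned expression references a and b, so they must outlive it.
  template<typename Lhs, typename Rhs>
  typename ProductReturnType<Lhs, Rhs>::Type
  lazy_product(const MatrixBase<Lhs>& a, const MatrixBase<Rhs>& b)
  {
    return a.derived() * b.derived();
  }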
** Much better organization
** Fix a few bugs
** Add the ability to unroll only the inner loop
** Add an unrolled path to the Like1D vectorization. Not well tested.
** Add placeholder for sliced vectorization. Unimplemented.
* Rework of corrected_flags:
** improve rules determining vectorizability
** for vectors, the storage order is irrelevant, so we tweak it
to allow vectorization of row-vectors.
* fix compilation in benchmark, and a warning in Transpose.
to optimize matrix-diagonal and diagonal-matrix products without
making Product overcomplicated.
* compilation fixes in Tridiagonalization and HessenbergDecomposition
in the case of 2x2 matrices.
* added a small Orientation2D class with an interface similar to Quaternion's
(used by Transform to handle 2D and 3D orientations seamlessly)
* added a couple of features to Transform (see the sketch below).
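A hedged sketch of the seamless 2D/3D handling, written with the names that later stabilized (Rotation2D rather than Orientation2D, plus the Affine typedefs); every identifier here should be read as an assumption about the eventual API, not as this revision's spelling.

  #include <Eigen/Geometry>
  using namespace Eigen;

  void sketch()
  {
    // 2D: a planar rotation composed with a translation
    Affine2f t2 = Translation2f(1.f, 2.f) * Rotation2Df(0.785f);

    // 3D: same pattern, with a Quaternion providing the orientation
    Affine3f t3 = Translation3f(0.f, 0.f, 1.f)
                * Quaternionf(AngleAxisf(0.5f, Vector3f::UnitZ()));

    Vector2f p2 = t2 * Vector2f(1.f, 0.f);      // apply the transforms to points
    Vector3f p3 = t3 * Vector3f(1.f, 0.f, 0.f);
    (void)p2; (void)p3;
  }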
- get the doc of the flags in Constants right
- finally give up on SEPARATE_MEMBER_PAGES: it triggers too severe
Doxygen bugs and produces too many small pages. That leaves one huge
page for MatrixBase, currently 300kb and growing, so the solution,
especially for users with low bandwidth, will be to provide an
archive of the html documentation.
* added MatrixBase::real()
* added the ability to extract a selfadjoint matrix from the
lower or upper part of a matrix, e.g.:
m.extract<Upper|SelfAdjoint>()
will ignore the strictly lower part and return a selfadjoint matrix
(see the sketch below).
This is compatible with ZeroDiag and UnitDiag.
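A small sketch of the extraction; the values are arbitrary, and only the extract<Upper|SelfAdjoint>() call itself is taken from the entry above.

  #include <Eigen/Core>
  using namespace Eigen;

  void sketch()
  {
    Matrix3d m;
    m << 1, 2, 3,
         9, 4, 5,   // the strictly lower coefficients (the 9s) are ignored
         9, 9, 6;

    Matrix3d s = m.extract<Upper|SelfAdjoint>();
    // s is symmetric: its strictly lower part mirrors the upper part:
    //   1 2 3
    //   2 4 5
    //   3 5 6
    (void)s;
  }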