Put code in monospace (typewriter) style.

This commit is contained in:
parent 02a0e79c70
commit c86ac71b4f
@@ -22,11 +22,11 @@ We will explain the program after telling you how to compile it.
 
 \section GettingStartedCompiling Compiling and running your first program
 
-There is no library to link to. The only thing that you need to keep in mind when compiling the above program is that the compiler must be able to find the Eigen header files. The directory in which you placed Eigen's source code must be in the include path. With GCC you use the -I option to achieve this, so you can compile the program with a command like this:
+There is no library to link to. The only thing that you need to keep in mind when compiling the above program is that the compiler must be able to find the Eigen header files. The directory in which you placed Eigen's source code must be in the include path. With GCC you use the \c -I option to achieve this, so you can compile the program with a command like this:
 
 \code g++ -I /path/to/eigen/ my_program.cpp -o my_program \endcode
 
-On Linux or Mac OS X, another option is to symlink or copy the Eigen folder into /usr/local/include/. This way, you can compile the program with:
+On Linux or Mac OS X, another option is to symlink or copy the Eigen folder into \c /usr/local/include/. This way, you can compile the program with:
 
 \code g++ my_program.cpp -o my_program \endcode
 
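For reference, a complete program in the spirit of this page looks like the following (a minimal sketch; the matrix size and values are arbitrary):

\code
// my_program.cpp -- compile with: g++ -I /path/to/eigen/ my_program.cpp -o my_program
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::MatrixXd m(2, 2);        // a dynamic-size 2x2 matrix of doubles
  m(0, 0) = 3;
  m(1, 0) = 2.5;
  m(0, 1) = -1;
  m(1, 1) = m(1, 0) + m(0, 1);    // coefficients can be read back while filling
  std::cout << m << std::endl;    // prints the matrix
}
\endcode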
@@ -137,7 +137,7 @@ x1 = solver.solve(b1);
 x2 = solver.solve(b2);
 ...
 \endcode
-The compute() method is equivalent to calling both analyzePattern() and factorize().
+The `compute()` method is equivalent to calling both `analyzePattern()` and `factorize()`.
 
 Each solver provides some specific features, such as the determinant, access to the factors, control of the iterations, and so on.
 More details are available in the documentation of the respective classes.
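To make the equivalence concrete, here is a sketch of this pattern with one factorization reused for several right-hand sides (\c SparseLU is just one possible solver; the names \c A, \c b1 and \c b2 are placeholders):

\code
#include <Eigen/Sparse>

void solveTwoRhs(const Eigen::SparseMatrix<double>& A,
                 const Eigen::VectorXd& b1, const Eigen::VectorXd& b2)
{
  Eigen::SparseLU<Eigen::SparseMatrix<double> > solver;
  solver.compute(A);                            // same as analyzePattern(A) followed by factorize(A)
  if (solver.info() != Eigen::Success) return;  // the decomposition failed
  Eigen::VectorXd x1 = solver.solve(b1);        // first right-hand side
  Eigen::VectorXd x2 = solver.solve(b2);        // the factorization is reused
}
\endcode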
@@ -145,9 +145,9 @@ More details are available in the documentations of the respective classes.
 Finally, most of the iterative solvers can also be used in a \b matrix-free context; see the following \link MatrixfreeSolverExample example \endlink.
 
 \section TheSparseCompute The Compute Step
-In the compute() function, the matrix is generally factorized: LLT for self-adjoint matrices, LDLT for general hermitian matrices, LU for non hermitian matrices and QR for rectangular matrices. These are the results of using direct solvers. For this class of solvers precisely, the compute step is further subdivided into analyzePattern() and factorize().
+In the `compute()` function, the matrix is generally factorized: LLT for self-adjoint matrices, LDLT for general Hermitian matrices, LU for non-Hermitian matrices and QR for rectangular matrices. These are the results of using direct solvers. For this class of solvers precisely, the compute step is further subdivided into `analyzePattern()` and `factorize()`.
 
-The goal of analyzePattern() is to reorder the nonzero elements of the matrix, such that the factorization step creates less fill-in. This step exploits only the structure of the matrix. Hence, the results of this step can be used for other linear systems where the matrix has the same structure. Note however that sometimes, some external solvers (like SuperLU) require that the values of the matrix are set in this step, for instance to equilibrate the rows and columns of the matrix. In this situation, the results of this step should not be used with other matrices.
+The goal of `analyzePattern()` is to reorder the nonzero elements of the matrix, such that the factorization step creates less fill-in. This step exploits only the structure of the matrix. Hence, the results of this step can be used for other linear systems where the matrix has the same structure. Note however that sometimes, some external solvers (like SuperLU) require that the values of the matrix are set in this step, for instance to equilibrate the rows and columns of the matrix. In this situation, the results of this step should not be used with other matrices.
 
 Eigen provides a limited set of methods to reorder the matrix in this step, either built-in (COLAMD, AMD) or external (METIS). These methods are set in the template parameter list of the solver:
 \code
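For instance, a sketch of a direct solver reusing the analysis across two matrices with the same sparsity pattern (\c A and \c A2 are placeholders; COLAMD is selected through the template parameter list as described above):

\code
#include <Eigen/Sparse>

void refactorize(const Eigen::SparseMatrix<double>& A,
                 const Eigen::SparseMatrix<double>& A2)
{
  Eigen::SparseLU<Eigen::SparseMatrix<double>, Eigen::COLAMDOrdering<int> > solver;
  solver.analyzePattern(A);  // depends only on the sparsity structure
  solver.factorize(A);       // numerical factorization of A
  // ... later, the values changed but the structure did not:
  solver.factorize(A2);      // the analysis computed for A is reused
}
\endcode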
@@ -156,21 +156,21 @@ DirectSolverClassName<SparseMatrix<double>, OrderingMethod<IndexType> > solver;
 
 See the \link OrderingMethods_Module OrderingMethods module \endlink for the list of available methods and the associated options.
 
-In factorize(), the factors of the coefficient matrix are computed. This step should be called each time the values of the matrix change. However, the structural pattern of the matrix should not change between multiple calls.
+In `factorize()`, the factors of the coefficient matrix are computed. This step should be called each time the values of the matrix change. However, the structural pattern of the matrix should not change between multiple calls.
 
 For iterative solvers, the compute step is used to eventually set up a preconditioner. For instance, with the ILUT preconditioner, the incomplete factors L and U are computed in this step. Remember that the goal of the preconditioner is to speed up the convergence of an iterative method by solving a modified linear system where the coefficient matrix has more clustered eigenvalues. For real problems, an iterative solver should always be used with a preconditioner. In Eigen, a preconditioner is selected by simply adding it as a template parameter to the iterative solver object.
 \code
 IterativeSolverClassName<SparseMatrix<double>, PreconditionerName<SparseMatrix<double> > > solver;
 \endcode
-The member function preconditioner() returns a read-write reference to the preconditioner
+The member function `preconditioner()` returns a read-write reference to the preconditioner
 to directly interact with it. See the \link IterativeLinearSolvers_Module Iterative solvers module \endlink and the documentation of each class for the list of available methods.
 
 \section TheSparseSolve The Solve step
-The solve() function computes the solution of the linear systems with one or many right hand sides.
+The `solve()` function computes the solution of the linear systems with one or many right hand sides.
 \code
 X = solver.solve(B);
 \endcode
-Here, B can be a vector or a matrix where the columns form the different right hand sides. The solve() function can be called several times as well, for instance when all the right hand sides are not available at once.
+Here, `B` can be a vector or a matrix where the columns form the different right hand sides. The `solve()` function can be called several times as well, for instance when all the right hand sides are not available at once.
 \code
 x1 = solver.solve(b1);
 // Get the second right hand side b2
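Putting the preconditioner selection and the solve step together, a sketch with \c BiCGSTAB and the ILUT preconditioner might read as follows (the drop tolerance and the names \c A and \c b are illustrative):

\code
#include <Eigen/Sparse>

Eigen::VectorXd iterativeSolve(const Eigen::SparseMatrix<double>& A,
                               const Eigen::VectorXd& b)
{
  Eigen::BiCGSTAB<Eigen::SparseMatrix<double>, Eigen::IncompleteLUT<double> > solver;
  solver.preconditioner().setDroptol(1e-5);  // tune the incomplete LU factors
  solver.compute(A);                         // the preconditioner is built here
  return solver.solve(b);
}
\endcode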
@@ -180,7 +180,7 @@ x2 = solver.solve(b2);
 For direct methods, the solution is computed at machine precision. Sometimes, the solution need not be that accurate. In this case, the iterative methods are more suitable and the desired accuracy can be set before the solve step using \b setTolerance(). For all the available functions, please refer to the documentation of the \link IterativeLinearSolvers_Module Iterative solvers module \endlink.
 
 \section BenchmarkRoutine
-Most of the time, all you need is to know how much time it will take to solve your system, and hopefully, what is the most suitable solver. In Eigen, we provide a benchmark routine that can be used for this purpose. It is very easy to use. In the build directory, navigate to bench/spbench and compile the routine by typing \b make \e spbenchsolver. Run it with --help option to get the list of all available options. Basically, the matrices to test should be in <a href="http://math.nist.gov/MatrixMarket/formats.html">MatrixMarket Coordinate format</a>, and the routine returns the statistics from all available solvers in Eigen.
+Most of the time, all you need is to know how much time it will take to solve your system, and hopefully, which solver is the most suitable. In Eigen, we provide a benchmark routine that can be used for this purpose. It is very easy to use. In the build directory, navigate to `bench/spbench` and compile the routine by typing `make spbenchsolver`. Run it with the `--help` option to get the list of all available options. Basically, the matrices to test should be in <a href="http://math.nist.gov/MatrixMarket/formats.html">MatrixMarket Coordinate format</a>, and the routine returns the statistics from all available solvers in Eigen.
 
 To export your matrices and right-hand-side vectors in the matrix-market format, you can use the unsupported SparseExtra module:
 \code
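For instance, exporting a system for spbenchsolver might look like this (a sketch; the file names are arbitrary):

\code
#include <Eigen/Sparse>
#include <unsupported/Eigen/SparseExtra>

void exportSystem(const Eigen::SparseMatrix<double>& A, const Eigen::VectorXd& b)
{
  Eigen::saveMarket(A, "A.mtx");          // matrix in MatrixMarket coordinate format
  Eigen::saveMarketVector(b, "A_b.mtx");  // right-hand-side vector
}
\endcode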
@@ -249,7 +249,7 @@ sm1.outerIndexPtr(); // Pointer to the beginning of each inner vector
 \endcode
 </td>
 <td>
-If the matrix is not in compressed form, makeCompressed() should be called before.\n
+If the matrix is not in compressed form, `makeCompressed()` should be called first.\n
 Note that these functions are mostly provided for interoperability purposes with external libraries.\n
 The values of the matrix are best accessed through the InnerIterator class, as described in \link TutorialSparse the sparse tutorial \endlink section</td>
 </tr>
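The iterator-based access mentioned above looks like this (a sketch summing the non-zeros of a column-major matrix):

\code
#include <Eigen/Sparse>

double sumNonZeros(const Eigen::SparseMatrix<double>& sm)
{
  double sum = 0;
  for (int k = 0; k < sm.outerSize(); ++k)    // one inner vector (here: column) at a time
    for (Eigen::SparseMatrix<double>::InnerIterator it(sm, k); it; ++it)
      sum += it.value();                      // it.row(), it.col() and it.index() are also available
  return sum;
}
\endcode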
@@ -151,14 +151,14 @@ The numbering starts at 0. This example is self-explanatory:
 \verbinclude tut_matrix_coefficient_accessors.out
 </td></tr></table>
 
-Note that the syntax <tt> m(index) </tt>
+Note that the syntax `m(index)`
 is not restricted to vectors; it is also available for general matrices, meaning index-based access
 in the array of coefficients. This however depends on the matrix's storage order. All Eigen matrices default to
 column-major storage order, but this can be changed to row-major, see \ref TopicStorageOrders "Storage orders".
 
-The operator[] is also overloaded for index-based access in vectors, but keep in mind that C++ doesn't allow operator[] to
-take more than one argument. We restrict operator[] to vectors, because an awkwardness in the C++ language
-would make matrix[i,j] compile to the same thing as matrix[j] !
+The `operator[]` is also overloaded for index-based access in vectors, but keep in mind that C++ doesn't allow `operator[]` to
+take more than one argument. We restrict `operator[]` to vectors, because an awkwardness in the C++ language
+would make `matrix[i,j]` compile to the same thing as `matrix[j]`!
 
 \section TutorialMatrixCommaInitializer Comma-initialization
 
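For example (a small sketch; the values are arbitrary):

\code
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::VectorXd v(3);
  v(0) = 1; v[1] = 2; v[2] = 3;   // on vectors, operator() and operator[] are equivalent
  Eigen::MatrixXd m(2, 2);
  m << 1, 2,
       3, 4;
  std::cout << m(2) << std::endl; // prints 2: the third coefficient in column-major storage
}
\endcode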
@@ -186,8 +186,8 @@ The current size of a matrix can be retrieved by \link EigenBase::rows() rows()\
 <td>\verbinclude tut_matrix_resize.out </td>
 </tr></table>
 
-The resize() method is a no-operation if the actual matrix size doesn't change; otherwise it is destructive: the values of the coefficients may change.
-If you want a conservative variant of resize() which does not change the coefficients, use \link PlainObjectBase::conservativeResize() conservativeResize()\endlink, see \ref TopicResizing "this page" for more details.
+The `resize()` method is a no-operation if the actual matrix size doesn't change; otherwise it is destructive: the values of the coefficients may change.
+If you want a conservative variant of `resize()` which does not change the coefficients, use \link PlainObjectBase::conservativeResize() conservativeResize()\endlink; see \ref TopicResizing "this page" for more details.
 
 All these methods are still available on fixed-size matrices, for the sake of API uniformity. Of course, you can't actually
 resize a fixed-size matrix. Trying to change a fixed size to an actually different value will trigger an assertion failure;
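For instance (a sketch):

\code
#include <Eigen/Dense>

int main()
{
  Eigen::MatrixXd m = Eigen::MatrixXd::Constant(2, 5, 1.0);
  m.resize(4, 3);              // the size changes: destructive, the old values are lost
  m.conservativeResize(5, 3);  // grows the matrix while keeping the existing coefficients
}
\endcode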
@@ -234,7 +234,7 @@ is always allocated on the heap, so doing
 \code MatrixXf mymatrix(rows,columns); \endcode
 amounts to doing
 \code float *mymatrix = new float[rows*columns]; \endcode
-and in addition to that, the MatrixXf object stores its number of rows and columns as
+and in addition to that, the \c MatrixXf object stores its number of rows and columns as
 member variables.
 
 The limitation of using fixed sizes, of course, is that this is only possible
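In code, the difference looks like this (a sketch):

\code
#include <Eigen/Dense>

int main()
{
  Eigen::Matrix4f f;        // fixed 4x4: a plain array of 16 floats, typically on the stack
  Eigen::MatrixXf d(4, 4);  // dynamic 4x4: heap-allocated buffer plus rows/columns members
  f.setZero();
  d.setZero();
}
\endcode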
@@ -276,14 +276,14 @@ Matrix<typename Scalar,
 \section TutorialMatrixTypedefs Convenience typedefs
 
 Eigen defines the following Matrix typedefs:
-\li MatrixNt for Matrix<type, N, N>. For example, MatrixXi for Matrix<int, Dynamic, Dynamic>.
-\li VectorNt for Matrix<type, N, 1>. For example, Vector2f for Matrix<float, 2, 1>.
-\li RowVectorNt for Matrix<type, 1, N>. For example, RowVector3d for Matrix<double, 1, 3>.
+\li \c MatrixNt for `Matrix<type, N, N>`. For example, \c MatrixXi for `Matrix<int, Dynamic, Dynamic>`.
+\li \c VectorNt for `Matrix<type, N, 1>`. For example, \c Vector2f for `Matrix<float, 2, 1>`.
+\li \c RowVectorNt for `Matrix<type, 1, N>`. For example, \c RowVector3d for `Matrix<double, 1, 3>`.
 
 Where:
-\li N can be any one of \c 2, \c 3, \c 4, or \c X (meaning \c Dynamic).
-\li t can be any one of \c i (meaning int), \c f (meaning float), \c d (meaning double),
-\c cf (meaning complex<float>), or \c cd (meaning complex<double>). The fact that typedefs are only
+\li \c N can be any one of \c 2, \c 3, \c 4, or \c X (meaning \c Dynamic).
+\li \c t can be any one of \c i (meaning \c int), \c f (meaning \c float), \c d (meaning \c double),
+\c cf (meaning `complex<float>`), or \c cd (meaning `complex<double>`). The fact that `typedef`s are only
 defined for these five types doesn't mean that they are the only supported scalar types. For example,
 all standard integer types are supported, see \ref TopicScalarTypes "Scalar types".
 
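These conventions can be checked directly (a sketch using `static_assert`):

\code
#include <type_traits>
#include <Eigen/Dense>

static_assert(std::is_same<Eigen::Matrix3d,    Eigen::Matrix<double, 3, 3> >::value, "MatrixNt");
static_assert(std::is_same<Eigen::Vector2f,    Eigen::Matrix<float, 2, 1> >::value, "VectorNt");
static_assert(std::is_same<Eigen::RowVector3d, Eigen::Matrix<double, 1, 3> >::value, "RowVectorNt");
static_assert(std::is_same<Eigen::MatrixXi,
                           Eigen::Matrix<int, Eigen::Dynamic, Eigen::Dynamic> >::value, "X means Dynamic");
\endcode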
@@ -3,7 +3,7 @@ namespace Eigen {
 /** \eigenManualPage TutorialReshape Reshape
 
 Since version 3.4, %Eigen exposes convenient methods to reshape a matrix to another matrix of a different size, or to a vector.
-All cases are handled via the DenseBase::reshaped(NRowsType,NColsType) and DenseBase::reshaped() functions.
+All cases are handled via the `DenseBase::reshaped(NRowsType,NColsType)` and `DenseBase::reshaped()` functions.
 Those functions do not perform in-place reshaping, but instead return a <i> view </i> on the input expression.
 
 \eigenAutoToc
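For example (a sketch; requires %Eigen 3.4 or later):

\code
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::Matrix4i m = Eigen::Matrix4i::Random();
  std::cout << m.reshaped(2, 8) << "\n\n";        // 2x8 view of the same coefficients, no copy
  std::cout << m.reshaped().transpose() << "\n";  // flattened to a 1D (column) vector view
}
\endcode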
@@ -23,7 +23,7 @@ Here is an example reshaping a 4x4 matrix to a 2x8 one:
 </td></tr></table>
 
 By default, the input coefficients are always interpreted in column-major order regardless of the storage order of the input expression.
-For more control on ordering, compile-time sizes, and automatic size deduction, please see de documentation of DenseBase::reshaped(NRowsType,NColsType) that contains all the details with many examples.
+For more control on ordering, compile-time sizes, and automatic size deduction, please see the documentation of `DenseBase::reshaped(NRowsType,NColsType)`, which contains all the details with many examples.
 
 
 \section TutorialReshapeMat2Vec 1D linear views
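For instance, assuming the template form `reshaped<Order>(nrows, ncols)` of %Eigen 3.4, a row-major interpretation of the input coefficients can be requested as follows (a sketch):

\code
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::Matrix4i m = Eigen::Matrix4i::Random();
  std::cout << m.reshaped(2, 8) << "\n\n";                // column-major interpretation (the default)
  std::cout << m.reshaped<Eigen::RowMajor>(2, 8) << "\n"; // row-major interpretation
}
\endcode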
@@ -54,13 +54,13 @@ and one of its possible sparse, \b column \b major representation:
 
 Currently the elements of a given inner vector are guaranteed to be always sorted by increasing inner indices.
 The \c "_" indicates available free space to quickly insert new elements.
-Assuming no reallocation is needed, the insertion of a random element is therefore in O(nnz_j) where nnz_j is the number of nonzeros of the respective inner vector.
-On the other hand, inserting elements with increasing inner indices in a given inner vector is much more efficient since this only requires to increase the respective \c InnerNNZs entry that is a O(1) operation.
+Assuming no reallocation is needed, the insertion of a random element is therefore in `O(nnz_j)` where `nnz_j` is the number of nonzeros of the respective inner vector.
+On the other hand, inserting elements with increasing inner indices in a given inner vector is much more efficient since this only requires increasing the respective \c InnerNNZs entry, which is an `O(1)` operation.
 
 The case where no empty space is available is a special case, and is referred to as the \em compressed mode.
 It corresponds to the widely used Compressed Column (or Row) Storage schemes (CCS or CRS).
 Any SparseMatrix can be turned into this form by calling the SparseMatrix::makeCompressed() function.
-In this case, one can remark that the \c InnerNNZs array is redundant with \c OuterStarts because we have the equality: \c InnerNNZs[j] = \c OuterStarts[j+1]-\c OuterStarts[j].
+In this case, one can remark that the \c InnerNNZs array is redundant with \c OuterStarts because we have the equality: `InnerNNZs[j] == OuterStarts[j+1] - OuterStarts[j]`.
 Therefore, in practice a call to SparseMatrix::makeCompressed() frees this buffer.
 
 It is worth noting that most of our wrappers to external libraries require compressed matrices as inputs.
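This identity can be checked on a compressed matrix (a sketch; `checkCompressed` is a hypothetical helper):

\code
#include <cassert>
#include <Eigen/Sparse>

void checkCompressed(Eigen::SparseMatrix<double>& sm)
{
  sm.makeCompressed();                    // drop the free space; InnerNNZs becomes redundant
  assert(sm.isCompressed());
  const int* outer = sm.outerIndexPtr();  // the OuterStarts array
  for (int j = 0; j < sm.outerSize(); ++j)
    assert(outer[j + 1] >= outer[j]);     // nonzeros of inner vector j: outer[j+1] - outer[j]
}
\endcode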
@@ -221,9 +221,9 @@ A typical scenario of this approach is illustrated below:
 5: mat.makeCompressed(); // optional
 \endcode
 
-- The key ingredient here is the line 2 where we reserve room for 6 non-zeros per column. In many cases, the number of non-zeros per column or row can easily be known in advance. If it varies significantly for each inner vector, then it is possible to specify a reserve size for each inner vector by providing a vector object with an operator[](int j) returning the reserve size of the \c j-th inner vector (e.g., via a VectorXi or std::vector<int>). If only a rought estimate of the number of nonzeros per inner-vector can be obtained, it is highly recommended to overestimate it rather than the opposite. If this line is omitted, then the first insertion of a new element will reserve room for 2 elements per inner vector.
+- The key ingredient here is line 2, where we reserve room for 6 non-zeros per column. In many cases, the number of non-zeros per column or row can easily be known in advance. If it varies significantly for each inner vector, then it is possible to specify a reserve size for each inner vector by providing a vector object with an `operator[](int j)` returning the reserve size of the \c j-th inner vector (e.g., via a `VectorXi` or `std::vector<int>`). If only a rough estimate of the number of nonzeros per inner-vector can be obtained, it is highly recommended to overestimate it rather than the opposite. If this line is omitted, then the first insertion of a new element will reserve room for 2 elements per inner vector.
 - Line 4 performs a sorted insertion. In this example, the ideal case is when the \c j-th column is not full and contains non-zeros whose inner-indices are smaller than \c i. In this case, this operation boils down to a trivial O(1) operation.
-- When calling insert(i,j) the element \c i \c ,j must not already exists, otherwise use the coeffRef(i,j) method that will allow to, e.g., accumulate values. This method first performs a binary search and finally calls insert(i,j) if the element does not already exist. It is more flexible than insert() but also more costly.
+- When calling `insert(i,j)`, the element `(i,j)` must not already exist; otherwise use the `coeffRef(i,j)` method, which allows one to, e.g., accumulate values. This method first performs a binary search and finally calls `insert(i,j)` if the element does not already exist. It is more flexible than `insert()` but also more costly.
 - Line 5 suppresses the remaining empty space and transforms the matrix into compressed column storage.
 
 
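Written out as compilable code, the scenario above reads (a sketch; the sizes and values are placeholders):

\code
#include <Eigen/Sparse>

Eigen::SparseMatrix<double> fillMatrix(int rows, int cols)
{
  Eigen::SparseMatrix<double> mat(rows, cols);      // default storage is column-major
  mat.reserve(Eigen::VectorXi::Constant(cols, 6));  // room for 6 non-zeros per column
  for (int j = 0; j < cols; ++j)
    mat.insert(j % rows, j) = 1.0;                  // insert each element only once
  mat.makeCompressed();                             // optional: suppress the remaining empty space
  return mat;
}
\endcode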
@@ -259,7 +259,7 @@ sm2 = sm1.cwiseProduct(dm1);
 dm2 = sm1 + dm1;
 dm2 = dm1 - sm1;
 \endcode
-Performance-wise, the adding/subtracting sparse and dense matrices is better performed in two steps. For instance, instead of doing <tt>dm2 = sm1 + dm1</tt>, better write:
+Performance-wise, adding/subtracting sparse and dense matrices is better performed in two steps. For instance, instead of doing `dm2 = sm1 + dm1`, better write:
 \code
 dm2 = dm1;
 dm2 += sm1;
@@ -272,7 +272,7 @@ This version has the advantage to fully exploit the higher performance of dense
 sm1 = sm2.transpose();
 sm1 = sm2.adjoint();
 \endcode
-However, there is no transposeInPlace() method.
+However, there is no `transposeInPlace()` method.
 
 
 \subsection TutorialSparse_Products Matrix products
@@ -284,18 +284,18 @@ dv2 = sm1 * dv1;
 dm2 = dm1 * sm1.adjoint();
 dm2 = 2. * sm1 * dm1;
 \endcode
-- \b symmetric \b sparse-dense. The product of a sparse symmetric matrix with a dense matrix (or vector) can also be optimized by specifying the symmetry with selfadjointView():
+- \b symmetric \b sparse-dense. The product of a sparse symmetric matrix with a dense matrix (or vector) can also be optimized by specifying the symmetry with `selfadjointView()`:
 \code
-dm2 = sm1.selfadjointView<>() * dm1; // if all coefficients of A are stored
-dm2 = A.selfadjointView<Upper>() * dm1; // if only the upper part of A is stored
-dm2 = A.selfadjointView<Lower>() * dm1; // if only the lower part of A is stored
+dm2 = sm1.selfadjointView<>() * dm1; // if all coefficients of sm1 are stored
+dm2 = sm1.selfadjointView<Upper>() * dm1; // if only the upper part of sm1 is stored
+dm2 = sm1.selfadjointView<Lower>() * dm1; // if only the lower part of sm1 is stored
 \endcode
 - \b sparse-sparse. For sparse-sparse products, two different algorithms are available. The default one is conservative and preserves the explicit zeros that might appear:
 \code
 sm3 = sm1 * sm2;
 sm3 = 4 * sm1.adjoint() * sm2;
 \endcode
-The second algorithm prunes on the fly the explicit zeros, or the values smaller than a given threshold. It is enabled and controlled through the prune() functions:
+The second algorithm prunes on the fly the explicit zeros, or the values smaller than a given threshold. It is enabled and controlled through the `prune()` functions:
 \code
 sm3 = (sm1 * sm2).pruned(); // removes numerical zeros
 sm3 = (sm1 * sm2).pruned(ref); // removes elements much smaller than ref
@@ -314,7 +314,7 @@ sm2 = sm1.transpose() * P;
 \subsection TutorialSparse_SubMatrices Block operations
 
 Regarding read-access, sparse matrices expose the same API as dense matrices to access sub-matrices such as blocks, columns, and rows. See \ref TutorialBlockOperations for a detailed introduction.
-However, for performance reasons, writing to a sub-sparse-matrix is much more limited, and currently only contiguous sets of columns (resp. rows) of a column-major (resp. row-major) SparseMatrix are writable. Moreover, this information has to be known at compile-time, leaving out methods such as <tt>block(...)</tt> and <tt>corner*(...)</tt>. The available API for write-access to a SparseMatrix are summarized below:
+However, for performance reasons, writing to a sub-sparse-matrix is much more limited, and currently only contiguous sets of columns (resp. rows) of a column-major (resp. row-major) SparseMatrix are writable. Moreover, this information has to be known at compile-time, leaving out methods such as `block(...)` and `corner*(...)`. The available API for write-access to a SparseMatrix is summarized below:
 \code
 SparseMatrix<double,ColMajor> sm1;
 sm1.col(j) = ...;
@@ -329,22 +329,22 @@ sm2.middleRows(i,nrows) = ...;
 sm2.bottomRows(nrows) = ...;
 \endcode
 
-In addition, sparse matrices expose the SparseMatrixBase::innerVector() and SparseMatrixBase::innerVectors() methods, which are aliases to the col/middleCols methods for a column-major storage, and to the row/middleRows methods for a row-major storage.
+In addition, sparse matrices expose the `SparseMatrixBase::innerVector()` and `SparseMatrixBase::innerVectors()` methods, which are aliases to the `col`/`middleCols` methods for a column-major storage, and to the `row`/`middleRows` methods for a row-major storage.
 
 \subsection TutorialSparse_TriangularSelfadjoint Triangular and selfadjoint views
 
-Just as with dense matrices, the triangularView() function can be used to address a triangular part of the matrix, and perform triangular solves with a dense right hand side:
+Just as with dense matrices, the `triangularView()` function can be used to address a triangular part of the matrix, and perform triangular solves with a dense right hand side:
 \code
 dm2 = sm1.triangularView<Lower>().solve(dm1);
 dv2 = sm1.transpose().triangularView<Upper>().solve(dv1);
 \endcode
 
-The selfadjointView() function permits various operations:
+The `selfadjointView()` function permits various operations:
 - optimized sparse-dense matrix products:
 \code
-dm2 = sm1.selfadjointView<>() * dm1; // if all coefficients of A are stored
-dm2 = A.selfadjointView<Upper>() * dm1; // if only the upper part of A is stored
-dm2 = A.selfadjointView<Lower>() * dm1; // if only the lower part of A is stored
+dm2 = sm1.selfadjointView<>() * dm1; // if all coefficients of sm1 are stored
+dm2 = sm1.selfadjointView<Upper>() * dm1; // if only the upper part of sm1 is stored
+dm2 = sm1.selfadjointView<Lower>() * dm1; // if only the lower part of sm1 is stored
 \endcode
 - copy of triangular parts:
 \code