From c86ac71b4f557c8a9506146f0664fa5d5762c7c7 Mon Sep 17 00:00:00 2001
From: pvcStillinGradSchool
Date: Tue, 3 Aug 2021 01:48:32 +0000
Subject: [PATCH] Put code in monospace (typewriter) style.

---
 doc/QuickStartGuide.dox      |  4 ++--
 doc/SparseLinearSystems.dox  | 16 +++++++--------
 doc/SparseQuickReference.dox |  2 +-
 doc/TutorialMatrixClass.dox  | 26 ++++++++++++------------
 doc/TutorialReshape.dox      |  4 ++--
 doc/TutorialSparse.dox       | 38 ++++++++++++++++++------------------
 6 files changed, 45 insertions(+), 45 deletions(-)

diff --git a/doc/QuickStartGuide.dox b/doc/QuickStartGuide.dox
index 4192b28b7..037269474 100644
--- a/doc/QuickStartGuide.dox
+++ b/doc/QuickStartGuide.dox
@@ -22,11 +22,11 @@ We will explain the program after telling you how to compile it.

\section GettingStartedCompiling Compiling and running your first program

-There is no library to link to. The only thing that you need to keep in mind when compiling the above program is that the compiler must be able to find the Eigen header files. The directory in which you placed Eigen's source code must be in the include path. With GCC you use the -I option to achieve this, so you can compile the program with a command like this:
+There is no library to link to. The only thing that you need to keep in mind when compiling the above program is that the compiler must be able to find the Eigen header files. The directory in which you placed Eigen's source code must be in the include path. With GCC you use the \c -I option to achieve this, so you can compile the program with a command like this:

\code g++ -I /path/to/eigen/ my_program.cpp -o my_program \endcode

-On Linux or Mac OS X, another option is to symlink or copy the Eigen folder into /usr/local/include/. This way, you can compile the program with:
+On Linux or Mac OS X, another option is to symlink or copy the Eigen folder into \c /usr/local/include/. This way, you can compile the program with:

\code g++ my_program.cpp -o my_program \endcode
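The hunk above refers to "the above program" without quoting it. For context, a minimal stand-in that compiles with the commands shown (this particular listing is illustrative, not taken from the patch):

\code
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::MatrixXd m(2,2);        // a dynamic-size 2x2 matrix of doubles
  m(0,0) = 3;
  m(1,0) = 2.5;
  m(0,1) = -1;
  m(1,1) = m(1,0) + m(0,1);
  std::cout << m << std::endl;
}
\endcode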
diff --git a/doc/SparseLinearSystems.dox b/doc/SparseLinearSystems.dox
index 38754e4af..1f7db6fa2 100644
--- a/doc/SparseLinearSystems.dox
+++ b/doc/SparseLinearSystems.dox
@@ -137,7 +137,7 @@ x1 = solver.solve(b1);
x2 = solver.solve(b2);
...
\endcode
-The compute() method is equivalent to calling both analyzePattern() and factorize().
+The `compute()` method is equivalent to calling both `analyzePattern()` and `factorize()`.

Each solver provides some specific features, such as the determinant, access to the factors, control of the iterations, and so on.
More details are available in the documentation of the respective classes.

@@ -145,9 +145,9 @@
Finally, most of the iterative solvers can also be used in a \b matrix-free context, see the following \link MatrixfreeSolverExample example \endlink.

\section TheSparseCompute The Compute Step
-In the compute() function, the matrix is generally factorized: LLT for self-adjoint matrices, LDLT for general hermitian matrices, LU for non hermitian matrices and QR for rectangular matrices. These are the results of using direct solvers. For this class of solvers precisely, the compute step is further subdivided into analyzePattern() and factorize().
+In the `compute()` function, the matrix is generally factorized: LLT for self-adjoint matrices, LDLT for general hermitian matrices, LU for non-hermitian matrices and QR for rectangular matrices. These are the results of using direct solvers. For this class of solvers precisely, the compute step is further subdivided into `analyzePattern()` and `factorize()`.

-The goal of analyzePattern() is to reorder the nonzero elements of the matrix, such that the factorization step creates less fill-in. This step exploits only the structure of the matrix. Hence, the results of this step can be used for other linear systems where the matrix has the same structure. Note however that sometimes, some external solvers (like SuperLU) require that the values of the matrix are set in this step, for instance to equilibrate the rows and columns of the matrix. In this situation, the results of this step should not be used with other matrices.
+The goal of `analyzePattern()` is to reorder the nonzero elements of the matrix, such that the factorization step creates less fill-in. This step exploits only the structure of the matrix. Hence, the results of this step can be used for other linear systems where the matrix has the same structure. Note however that sometimes, some external solvers (like SuperLU) require that the values of the matrix are set in this step, for instance to equilibrate the rows and columns of the matrix. In this situation, the results of this step should not be used with other matrices.

Eigen provides a limited set of methods to reorder the matrix in this step, either built-in (COLAMD, AMD) or external (METIS). These methods are set in the template parameter list of the solver:
\code
DirectSolverClassName<SparseMatrix<double>, OrderingMethod<...> > solver;
\endcode
See the \link OrderingMethods_Module OrderingMethods module \endlink for the list of available methods and the associated options.
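A sketch of how this two-phase compute step is typically driven (the solver choice, sizes, and values here are illustrative assumptions, not part of the patch):

\code
#include <vector>
#include <Eigen/Sparse>

int main()
{
  typedef Eigen::Triplet<double> T;
  std::vector<T> entries = { T(0,0,4.0), T(1,1,4.0), T(2,2,4.0),
                             T(0,1,-1.0), T(1,0,-1.0) };
  Eigen::SparseMatrix<double> A(3,3);
  A.setFromTriplets(entries.begin(), entries.end());
  Eigen::VectorXd b = Eigen::VectorXd::Ones(3);

  Eigen::SparseLU<Eigen::SparseMatrix<double> > solver;
  solver.analyzePattern(A);   // structure only: reusable for matrices with the same pattern
  solver.factorize(A);        // numerical factorization
  Eigen::VectorXd x = solver.solve(b);

  A.coeffRef(2,2) = 5.0;      // same pattern, new values:
  solver.factorize(A);        // no new analyzePattern() needed
  x = solver.solve(b);
}
\endcode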
@@ -156,21 +156,21 @@
-In factorize(), the factors of the coefficient matrix are computed. This step should be called each time the values of the matrix change. However, the structural pattern of the matrix should not change between multiple calls.
+In `factorize()`, the factors of the coefficient matrix are computed. This step should be called each time the values of the matrix change. However, the structural pattern of the matrix should not change between multiple calls.

For iterative solvers, the compute step is used to eventually set up a preconditioner. For instance, with the ILUT preconditioner, the incomplete factors L and U are computed in this step. Remember that, basically, the goal of the preconditioner is to speed up the convergence of an iterative method by solving a modified linear system where the coefficient matrix has more clustered eigenvalues. For real problems, an iterative solver should always be used with a preconditioner. In Eigen, a preconditioner is selected by simply adding it as a template parameter to the iterative solver object.
\code
IterativeSolverClassName<SparseMatrix<double>, PreconditionerName<...> > solver;
\endcode
-The member function preconditioner() returns a read-write reference to the preconditioner
+The member function `preconditioner()` returns a read-write reference to the preconditioner
to directly interact with it. See the \link IterativeLinearSolvers_Module Iterative solvers module \endlink and the documentation of each class for the list of available methods.

\section TheSparseSolve The Solve step
-The solve() function computes the solution of the linear systems with one or many right hand sides.
+The `solve()` function computes the solution of the linear systems with one or many right hand sides.
\code
X = solver.solve(B);
\endcode
-Here, B can be a vector or a matrix where the columns form the different right hand sides. The solve() function can be called several times as well, for instance when all the right hand sides are not available at once.
+Here, B can be a vector or a matrix where the columns form the different right hand sides. The `solve()` function can be called several times as well, for instance when all the right hand sides are not available at once.
\code
x1 = solver.solve(b1);
// Get the second right hand side b2
x2 = solver.solve(b2);
...
\endcode
For direct methods, the solutions are computed at machine precision. Sometimes, the solution need not be that accurate. In this case, the iterative methods are more suitable, and the desired accuracy can be set before the solve step using \b setTolerance(). For all the available functions, please refer to the documentation of the \link IterativeLinearSolvers_Module Iterative solvers module \endlink.

\section BenchmarkRoutine
-Most of the time, all you need is to know how much time it will take to solve your system, and hopefully, what is the most suitable solver. In Eigen, we provide a benchmark routine that can be used for this purpose. It is very easy to use. In the build directory, navigate to bench/spbench and compile the routine by typing \b make \e spbenchsolver. Run it with --help option to get the list of all available options. Basically, the matrices to test should be in MatrixMarket Coordinate format, and the routine returns the statistics from all available solvers in Eigen.
+Most of the time, all you need to know is how much time it will take to solve your system and, hopefully, which is the most suitable solver. In Eigen, we provide a benchmark routine that can be used for this purpose. It is very easy to use. In the build directory, navigate to `bench/spbench` and compile the routine by typing `make spbenchsolver`. Run it with the `--help` option to get the list of all available options. Basically, the matrices to test should be in MatrixMarket Coordinate format, and the routine returns the statistics from all available solvers in Eigen.

To export your matrices and right-hand-side vectors in the matrix-market format, you can use the unsupported SparseExtra module:
\code

diff --git a/doc/SparseQuickReference.dox b/doc/SparseQuickReference.dox
index 9779f3f9c..b8264a444 100644
--- a/doc/SparseQuickReference.dox
+++ b/doc/SparseQuickReference.dox
@@ -249,7 +249,7 @@
sm1.outerIndexPtr();      // Pointer to the beginning of each inner vector
\endcode

-If the matrix is not in compressed form, makeCompressed() should be called before.\n
+If the matrix is not in compressed form, `makeCompressed()` should be called before.\n
Note that these functions are mostly provided for interoperability purposes with external libraries.\n
Better access to the values of the matrix is achieved by using the InnerIterator class as described in \link TutorialSparse the Tutorial Sparse \endlink section
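To make the raw-buffer accessors above concrete, a small sketch (the matrix contents are made up; `valuePtr()` and `innerIndexPtr()` are the companions of `outerIndexPtr()` from the same quick-reference table, not shown in this hunk):

\code
#include <iostream>
#include <Eigen/Sparse>

int main()
{
  Eigen::SparseMatrix<double> sm1(4,4);
  sm1.insert(0,0) = 1.0;
  sm1.insert(1,2) = 3.0;
  sm1.makeCompressed();                        // required before raw-buffer access

  const double* values = sm1.valuePtr();       // the nonzero values
  const int*    inner  = sm1.innerIndexPtr();  // row index of each value
  const int*    outer  = sm1.outerIndexPtr();  // start of each column
  std::cout << values[0] << ' ' << inner[0] << ' ' << outer[1] << '\n';

  // Preferred, storage-order-agnostic access:
  for (int k = 0; k < sm1.outerSize(); ++k)
    for (Eigen::SparseMatrix<double>::InnerIterator it(sm1,k); it; ++it)
      std::cout << it.row() << ',' << it.col() << " -> " << it.value() << '\n';
}
\endcode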
diff --git a/doc/TutorialMatrixClass.dox b/doc/TutorialMatrixClass.dox
index 2c452220f..3e0785a1f 100644
--- a/doc/TutorialMatrixClass.dox
+++ b/doc/TutorialMatrixClass.dox
@@ -151,14 +151,14 @@ The numbering starts at 0. This example is self-explanatory:
\verbinclude tut_matrix_coefficient_accessors.out

-Note that the syntax m(index)
+Note that the syntax `m(index)`
is not restricted to vectors, it is also available for general matrices, meaning index-based access
in the array of coefficients. This however depends on the matrix's storage order. All Eigen matrices default to
column-major storage order, but this can be changed to row-major, see \ref TopicStorageOrders "Storage orders".

-The operator[] is also overloaded for index-based access in vectors, but keep in mind that C++ doesn't allow operator[] to
-take more than one argument. We restrict operator[] to vectors, because an awkwardness in the C++ language
-would make matrix[i,j] compile to the same thing as matrix[j] !
+The `operator[]` is also overloaded for index-based access in vectors, but keep in mind that C++ doesn't allow `operator[]` to
+take more than one argument. We restrict `operator[]` to vectors, because an awkwardness in the C++ language
+would make `matrix[i,j]` compile to the same thing as `matrix[j]`!

\section TutorialMatrixCommaInitializer Comma-initialization

@@ -186,8 +186,8 @@ The current size of a matrix can be retrieved by \link EigenBase::rows() rows()\
\verbinclude tut_matrix_resize.out

-The resize() method is a no-operation if the actual matrix size doesn't change; otherwise it is destructive: the values of the coefficients may change.
-If you want a conservative variant of resize() which does not change the coefficients, use \link PlainObjectBase::conservativeResize() conservativeResize()\endlink, see \ref TopicResizing "this page" for more details.
+The `resize()` method is a no-operation if the actual matrix size doesn't change; otherwise it is destructive: the values of the coefficients may change.
+If you want a conservative variant of `resize()` which does not change the coefficients, use \link PlainObjectBase::conservativeResize() conservativeResize()\endlink; see \ref TopicResizing "this page" for more details.

All these methods are still available on fixed-size matrices, for the sake of API uniformity. Of course, you can't actually resize a fixed-size matrix. Trying to change a fixed size to an actually different value will trigger an assertion failure;

@@ -234,7 +234,7 @@ is always allocated on the heap, so doing
\code MatrixXf mymatrix(rows,columns); \endcode
amounts to doing
\code float *mymatrix = new float[rows*columns]; \endcode
-and in addition to that, the MatrixXf object stores its number of rows and columns as
+and in addition to that, the \c MatrixXf object stores its number of rows and columns as
member variables.

The limitation of using fixed sizes, of course, is that this is only possible

@@ -276,14 +276,14 @@
-\li MatrixNt for Matrix<type, N, N>. For example, MatrixXi for Matrix<int, Dynamic, Dynamic>.
-\li VectorNt for Matrix<type, N, 1>. For example, Vector2f for Matrix<float, 2, 1>.
-\li RowVectorNt for Matrix<type, 1, N>. For example, RowVector3d for Matrix<double, 1, 3>.
+\li \c MatrixNt for `Matrix<type, N, N>`. For example, \c MatrixXi for `Matrix<int, Dynamic, Dynamic>`.
+\li \c VectorNt for `Matrix<type, N, 1>`. For example, \c Vector2f for `Matrix<float, 2, 1>`.
+\li \c RowVectorNt for `Matrix<type, 1, N>`. For example, \c RowVector3d for `Matrix<double, 1, 3>`.

Where:
-\li N can be any one of \c 2, \c 3, \c 4, or \c X (meaning \c Dynamic).
-\li t can be any one of \c i (meaning int), \c f (meaning float), \c d (meaning double),
-    \c cf (meaning complex<float>), or \c cd (meaning complex<double>). The fact that typedefs are only
+\li \c N can be any one of \c 2, \c 3, \c 4, or \c X (meaning \c Dynamic).
+\li \c t can be any one of \c i (meaning \c int), \c f (meaning \c float), \c d (meaning \c double),
+    \c cf (meaning `complex<float>`), or \c cd (meaning `complex<double>`). The fact that `typedef`s are only
defined for these five types doesn't mean that they are the only supported scalar types. For example, all standard integer types are supported, see \ref TopicScalarTypes "Scalar types".
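A few of these typedefs and the resizing methods in action (a minimal sketch; the sizes and values are arbitrary):

\code
#include <Eigen/Dense>

int main()
{
  Eigen::Matrix3d m = Eigen::Matrix3d::Identity();  // Matrix<double, 3, 3>
  Eigen::Vector2f v(1.0f, 2.0f);                    // Matrix<float, 2, 1>
  Eigen::RowVector3d r(1.0, 2.0, 3.0);              // Matrix<double, 1, 3>

  Eigen::MatrixXi a(2,3);                           // Matrix<int, Dynamic, Dynamic>
  a.setZero();
  a.resize(3,2);               // destructive: coefficient values may change
  a.conservativeResize(4,2);   // keeps the existing coefficients
}
\endcode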
diff --git a/doc/TutorialReshape.dox b/doc/TutorialReshape.dox
index 5b4022a3b..07e5c3c0b 100644
--- a/doc/TutorialReshape.dox
+++ b/doc/TutorialReshape.dox
@@ -3,7 +3,7 @@ namespace Eigen {
/** \eigenManualPage TutorialReshape Reshape

Since the version 3.4, %Eigen exposes convenient methods to reshape a matrix to another matrix of a different size, or to a vector.
-All cases are handled via the DenseBase::reshaped(NRowsType,NColsType) and DenseBase::reshaped() functions.
+All cases are handled via the `DenseBase::reshaped(NRowsType,NColsType)` and `DenseBase::reshaped()` functions.
Those functions do not perform in-place reshaping, but instead return a view on the input expression.

\eigenAutoToc

@@ -23,7 +23,7 @@ Here is an example reshaping a 4x4 matrix to a 2x8 one:

By default, the input coefficients are always interpreted in column-major order regardless of the storage order of the input expression.

-For more control on ordering, compile-time sizes, and automatic size deduction, please see de documentation of DenseBase::reshaped(NRowsType,NColsType) that contains all the details with many examples.
+For more control on ordering, compile-time sizes, and automatic size deduction, please see the documentation of `DenseBase::reshaped(NRowsType,NColsType)`, which contains all the details with many examples.

\section TutorialReshapeMat2Vec 1D linear views
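A small sketch of the two reshaping flavours mentioned above (the matrix contents are illustrative):

\code
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::Matrix4i m = Eigen::Matrix4i::Random();
  std::cout << m.reshaped(2, 8) << "\n\n";        // 2x8 view, coefficients read column-major
  std::cout << m.reshaped().transpose() << "\n";  // 1D linear view of all coefficients
}
\endcode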
diff --git a/doc/TutorialSparse.dox b/doc/TutorialSparse.dox
index c69171ec5..77a08da6e 100644
--- a/doc/TutorialSparse.dox
+++ b/doc/TutorialSparse.dox
@@ -54,13 +54,13 @@ and one of its possible sparse, \b column \b major representation:

Currently the elements of a given inner vector are guaranteed to be always sorted by increasing inner indices.
The \c "_" indicates available free space to quickly insert new elements.
-Assuming no reallocation is needed, the insertion of a random element is therefore in O(nnz_j) where nnz_j is the number of nonzeros of the respective inner vector.
-On the other hand, inserting elements with increasing inner indices in a given inner vector is much more efficient since this only requires to increase the respective \c InnerNNZs entry that is a O(1) operation.
+Assuming no reallocation is needed, the insertion of a random element is therefore in `O(nnz_j)` where `nnz_j` is the number of nonzeros of the respective inner vector.
+On the other hand, inserting elements with increasing inner indices in a given inner vector is much more efficient since this only requires increasing the respective \c InnerNNZs entry, which is a `O(1)` operation.

The case where no empty space is available is a special case, and is referred to as the \em compressed mode.
It corresponds to the widely used Compressed Column (or Row) Storage schemes (CCS or CRS).
Any SparseMatrix can be turned into this form by calling the SparseMatrix::makeCompressed() function.
-In this case, one can remark that the \c InnerNNZs array is redundant with \c OuterStarts because we have the equality: \c InnerNNZs[j] = \c OuterStarts[j+1]-\c OuterStarts[j].
+In this case, one can remark that the \c InnerNNZs array is redundant with \c OuterStarts because we have the equality: `InnerNNZs[j] == OuterStarts[j+1] - OuterStarts[j]`.
Therefore, in practice a call to SparseMatrix::makeCompressed() frees this buffer.
It is worth noting that most of our wrappers to external libraries require compressed matrices as inputs.

@@ -221,9 +221,9 @@ A typical scenario of this approach is illustrated below:
5: mat.makeCompressed();                        // optional
\endcode

-- The key ingredient here is the line 2 where we reserve room for 6 non-zeros per column. In many cases, the number of non-zeros per column or row can easily be known in advance. If it varies significantly for each inner vector, then it is possible to specify a reserve size for each inner vector by providing a vector object with an operator[](int j) returning the reserve size of the \c j-th inner vector (e.g., via a VectorXi or std::vector). If only a rought estimate of the number of nonzeros per inner-vector can be obtained, it is highly recommended to overestimate it rather than the opposite. If this line is omitted, then the first insertion of a new element will reserve room for 2 elements per inner vector.
+- The key ingredient here is line 2, where we reserve room for 6 non-zeros per column. In many cases, the number of non-zeros per column or row can easily be known in advance. If it varies significantly for each inner vector, then it is possible to specify a reserve size for each inner vector by providing a vector object with an `operator[](int j)` returning the reserve size of the \c j-th inner vector (e.g., via a `VectorXi` or `std::vector<int>`); see the sketch after this list. If only a rough estimate of the number of nonzeros per inner-vector can be obtained, it is highly recommended to overestimate it rather than the opposite. If this line is omitted, then the first insertion of a new element will reserve room for 2 elements per inner vector.
- The line 4 performs a sorted insertion. In this example, the ideal case is when the \c j-th column is not full and contains non-zeros whose inner-indices are smaller than \c i. In this case, this operation boils down to a trivial O(1) operation.
-- When calling insert(i,j) the element \c i \c ,j must not already exists, otherwise use the coeffRef(i,j) method that will allow to, e.g., accumulate values. This method first performs a binary search and finally calls insert(i,j) if the element does not already exist. It is more flexible than insert() but also more costly.
+- When calling `insert(i,j)`, the element (`i`,`j`) must not already exist; otherwise use the `coeffRef(i,j)` method, which allows one to, e.g., accumulate values. This method first performs a binary search and finally calls `insert(i,j)` if the element does not already exist. It is more flexible than `insert()` but also more costly.
- The line 5 suppresses the remaining empty space and transforms the matrix into a compressed column storage.
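A sketch of the per-inner-vector reserve variant mentioned in the first bullet (the sizes and values are made up):

\code
#include <Eigen/Sparse>

int main()
{
  const int n = 100;
  Eigen::SparseMatrix<double> mat(n,n);        // column-major by default

  Eigen::VectorXi reserveSizes = Eigen::VectorXi::Constant(n, 6);
  reserveSizes(0) = 1;                         // e.g., column 0 is known to be nearly empty
  mat.reserve(reserveSizes);                   // any type exposing operator[](int) works here

  for (int j = 0; j < n; ++j)
    mat.insert(j,j) = 1.0;                     // sorted insertion
  mat.coeffRef(0,0) += 2.0;                    // accumulate into an existing element
  mat.makeCompressed();                        // optional
}
\endcode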
@@ -259,7 +259,7 @@ sm2 = sm1.cwiseProduct(dm1);
dm2 = sm1 + dm1;
dm2 = dm1 - sm1;
\endcode
-Performance-wise, the adding/subtracting sparse and dense matrices is better performed in two steps. For instance, instead of doing dm2 = sm1 + dm1, better write:
+Performance-wise, adding/subtracting sparse and dense matrices is better performed in two steps. For instance, instead of doing `dm2 = sm1 + dm1`, better write:
\code
dm2 = dm1;
dm2 += sm1;
\endcode

@@ -272,7 +272,7 @@ This version has the advantage to fully exploit the higher performance of dense
\code
sm1 = sm2.transpose();
sm1 = sm2.adjoint();
\endcode
-However, there is no transposeInPlace() method.
+However, there is no `transposeInPlace()` method.

\subsection TutorialSparse_Products Matrix products

@@ -284,18 +284,18 @@ dv2 = sm1 * dv1;
dm2 = dm1 * sm1.adjoint();
dm2 = 2. * sm1 * dm1;
\endcode
-  - \b symmetric \b sparse-dense. The product of a sparse symmetric matrix with a dense matrix (or vector) can also be optimized by specifying the symmetry with selfadjointView():
+  - \b symmetric \b sparse-dense. The product of a sparse symmetric matrix with a dense matrix (or vector) can also be optimized by specifying the symmetry with `selfadjointView()`:
\code
-dm2 = sm1.selfadjointView<>() * dm1;        // if all coefficients of A are stored
-dm2 = A.selfadjointView<Upper>() * dm1;     // if only the upper part of A is stored
-dm2 = A.selfadjointView<Lower>() * dm1;     // if only the lower part of A is stored
+dm2 = sm1.selfadjointView<>() * dm1;        // if all coefficients of sm1 are stored
+dm2 = sm1.selfadjointView<Upper>() * dm1;   // if only the upper part of sm1 is stored
+dm2 = sm1.selfadjointView<Lower>() * dm1;   // if only the lower part of sm1 is stored
\endcode
  - \b sparse-sparse. For sparse-sparse products, two different algorithms are available. The default one is conservative and preserves the explicit zeros that might appear:
\code
sm3 = sm1 * sm2;
sm3 = 4 * sm1.adjoint() * sm2;
\endcode
-  The second algorithm prunes on the fly the explicit zeros, or the values smaller than a given threshold. It is enabled and controlled through the prune() functions:
+  The second algorithm prunes on the fly the explicit zeros, or the values smaller than a given threshold. It is enabled and controlled through the `prune()` functions:
\code
sm3 = (sm1 * sm2).pruned();       // removes numerical zeros
sm3 = (sm1 * sm2).pruned(ref);    // removes elements much smaller than ref
\endcode

@@ -314,7 +314,7 @@ sm2 = sm1.transpose() * P;

\subsection TutorialSparse_SubMatrices Block operations

Regarding read-access, sparse matrices expose the same API as for dense matrices to access sub-matrices such as blocks, columns, and rows. See \ref TutorialBlockOperations for a detailed introduction.
-However, for performance reasons, writing to a sub-sparse-matrix is much more limited, and currently only contiguous sets of columns (resp. rows) of a column-major (resp. row-major) SparseMatrix are writable. Moreover, this information has to be known at compile-time, leaving out methods such as block(...) and corner*(...). The available API for write-access to a SparseMatrix are summarized below:
+However, for performance reasons, writing to a sub-sparse-matrix is much more limited, and currently only contiguous sets of columns (resp. rows) of a column-major (resp. row-major) SparseMatrix are writable. Moreover, this information has to be known at compile-time, leaving out methods such as `block(...)` and `corner*(...)`. The available API for write-access to a SparseMatrix is summarized below:
\code
SparseMatrix<double,ColMajor> sm1;
sm1.col(j) = ...;
sm1.leftCols(ncols) = ...;
sm1.middleCols(j,ncols) = ...;
sm1.rightCols(ncols) = ...;

SparseMatrix<double,RowMajor> sm2;
sm2.row(i) = ...;
sm2.topRows(nrows) = ...;
sm2.middleRows(i,nrows) = ...;
sm2.bottomRows(nrows) = ...;
\endcode

-In addition, sparse matrices expose the SparseMatrixBase::innerVector() and SparseMatrixBase::innerVectors() methods, which are aliases to the col/middleCols methods for a column-major storage, and to the row/middleRows methods for a row-major storage.
+In addition, sparse matrices expose the `SparseMatrixBase::innerVector()` and `SparseMatrixBase::innerVectors()` methods, which are aliases to the `col`/`middleCols` methods for a column-major storage, and to the `row`/`middleRows` methods for a row-major storage.
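A sketch of writing one column and then visiting it through its inner vector, following the rules above for a column-major matrix (the data is made up):

\code
#include <iostream>
#include <Eigen/Sparse>

int main()
{
  Eigen::SparseMatrix<double> sm1(5,5), sm2(5,5);   // column-major by default
  sm1.insert(0,1) = 3.0;
  sm1.insert(4,1) = -1.0;

  sm2.col(1) = sm1.col(1);        // contiguous-column write access

  // innerVector(1) aliases col(1) here; iterate over its nonzeros:
  for (Eigen::SparseMatrix<double>::InnerIterator it(sm2,1); it; ++it)
    std::cout << it.row() << " -> " << it.value() << '\n';
}
\endcode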
\subsection TutorialSparse_TriangularSelfadjoint Triangular and selfadjoint views

-Just as with dense matrices, the triangularView() function can be used to address a triangular part of the matrix, and perform triangular solves with a dense right hand side:
+Just as with dense matrices, the `triangularView()` function can be used to address a triangular part of the matrix, and perform triangular solves with a dense right hand side:
\code
dm2 = sm1.triangularView<Lower>().solve(dm1);
dv2 = sm1.transpose().triangularView<Upper>().solve(dv1);
\endcode

-The selfadjointView() function permits various operations:
+The `selfadjointView()` function permits various operations:
 - optimized sparse-dense matrix products:
\code
-dm2 = sm1.selfadjointView<>() * dm1;        // if all coefficients of A are stored
-dm2 = A.selfadjointView<Upper>() * dm1;     // if only the upper part of A is stored
-dm2 = A.selfadjointView<Lower>() * dm1;     // if only the lower part of A is stored
+dm2 = sm1.selfadjointView<>() * dm1;        // if all coefficients of sm1 are stored
+dm2 = sm1.selfadjointView<Upper>() * dm1;   // if only the upper part of sm1 is stored
+dm2 = sm1.selfadjointView<Lower>() * dm1;   // if only the lower part of sm1 is stored
\endcode
 - copy of triangular parts:
\code
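Putting the two views together, a minimal sketch (an upper-stored symmetric matrix is assumed; the values are made up):

\code
#include <Eigen/Dense>
#include <Eigen/Sparse>

int main()
{
  const int n = 3;
  Eigen::SparseMatrix<double> sm1(n,n);
  sm1.insert(0,0) = 4.0;        // only the upper triangular part is stored
  sm1.insert(0,1) = 1.0;
  sm1.insert(1,1) = 3.0;
  sm1.insert(2,2) = 2.0;

  Eigen::MatrixXd dm1 = Eigen::MatrixXd::Identity(n,n);
  Eigen::MatrixXd dm2 = sm1.selfadjointView<Eigen::Upper>() * dm1;  // symmetric product

  Eigen::VectorXd b = Eigen::VectorXd::Ones(n);
  Eigen::VectorXd x = sm1.triangularView<Eigen::Upper>().solve(b);  // triangular solve
}
\endcode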