
Parallelism in Matrix Computations (2016)

Efstratios Gallopoulos

Price: zł 498.90

Ordered from remote warehouse

Expected delivery Dec 6 - 19

Jacket Description/Back:
This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix Functions and Characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of parallel iterative linear system solvers with emphasis on scalable preconditioners, (b) parallel schemes for obtaining a few of the extreme eigenpairs or those contained in a given interval in the spectrum of a standard or generalized symmetric eigenvalue problem, and (c) parallel methods for computing a few of the extreme singular triplets. Part IV focuses on the development of parallel algorithms for matrix functions and special characteristics such as the matrix pseudospectrum and the determinant. The book also reviews the theoretical and practical background necessary when designing these algorithms and includes an extensive bibliography that will be useful to researchers and students alike. The book brings together many existing algorithms for the fundamental matrix computations that have a proven track record of efficient implementation in terms of data locality and data transfer on state-of-the-art systems, as well as several algorithms that are presented for the first time, focusing on the opportunities for parallelism and algorithm robustness.

Biographical Note:
Efstratios Gallopoulos, University of Patras, Patras, Greece
Bernard Philippe, INRIA/IRISA, Rennes Cedex, France
Ahmed H. Sameh, Purdue University, West Lafayette, IN, USA

Table of Contents:
List of Figures
List of Tables
List of Algorithms
Notations used in the book

Part I: Basics
Parallel Programming Paradigms
Computational Models
Principles of parallel programming
Fundamental kernels
Vector operations
Higher level BLAS
General organization for dense matrix factorizations
Sparse matrix computations

Part II: Dense and special matrix computations
Recurrences and triangular systems
Definitions and examples
Linear recurrences
Implementations for a given number of processors
Nonlinear recurrences
General linear systems
Gaussian elimination
Pairwise pivoting
Block LU factorization
Remarks
Banded linear systems
LU-based schemes with partial pivoting
The Spike family of algorithms
The Spike balance scheme
A tearing-based banded solver
Tridiagonal systems
Special linear systems
Vandermonde solvers
Banded Toeplitz linear systems solvers
Symmetric and Antisymmetric Decomposition (SAS)
Rapid elliptic solvers
Orthogonal factorization and linear least squares problems
Definitions
QR factorization via Givens rotations
QR factorization via Householder reductions
Gram-Schmidt orthogonalization
Normal equations vs. orthogonal reductions
Hybrid algorithms when m >> n
Orthogonal factorization of block angular matrices
Rank-deficient linear least squares problems
The symmetric eigenvalue and singular value problems
The Jacobi algorithms
Tridiagonalization-based schemes
Bidiagonalization via Householder reduction

Part III: Sparse matrix computations
Iterative schemes for large linear systems
An example
Classical splitting methods
Polynomial methods
Preconditioners
A tearing-based solver for generalized banded preconditioners
Row projection methods for large nonsymmetric linear systems
Multiplicative Schwarz preconditioner with GMRES
Large symmetric eigenvalue problems
Computing dominant eigenpairs and spectral transformations
The Lanczos method
A block Lanczos approach for solving symmetric perturbed standard eigenvalue problems
The Davidson methods
The trace minimization method for the symmetric generalized eigenvalue problem
The sparse singular value problem

Part IV: Matrix functions and characteristics
Matrix functions and the determinant
Matrix functions
Determinants
Computing the matrix pseudospectrum
Grid-based methods
Dimensionality reduction on the domain: Methods based on path following
Dimensionality reduction on the matrix: Methods based on projection
Notes
References

Media: Books · Hardcover Book (book with hard spine and cover)
Released: August 12, 2015
ISBN-13: 9789401771870
Publisher: Springer
Pages: 473
Dimensions: 156 × 234 × 27 mm · 875 g
