A subset of LAPACK routines redesigned for heterogeneous computing

Package scalapack

The ScaLAPACK (or Scalable LAPACK) library includes a subset of LAPACK routines redesigned for distributed memory MIMD parallel computers. It is currently written in a Single-Program-Multiple-Data style using explicit message passing for interprocessor communication. It assumes matrices are laid out in a two-dimensional block cyclic decomposition.
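To make the two-dimensional block cyclic decomposition concrete, the following is a minimal, self-contained sketch (not part of the library) of the mapping it implies: a global matrix is cut into mb x nb blocks, and block (I, J) is assigned to process (I mod nprow, J mod npcol) of a nprow x npcol process grid. All names here are illustrative.

    /* Sketch of ScaLAPACK's 2-D block-cyclic layout: print which
     * process of a 2x2 grid owns each entry of an 8x8 matrix that
     * is split into 2x2 blocks.  Illustrative only. */
    #include <stdio.h>

    int main(void)
    {
        int n = 8, mb = 2, nb = 2;     /* 8x8 matrix in 2x2 blocks */
        int nprow = 2, npcol = 2;      /* 2x2 process grid */

        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                int pi = (i / mb) % nprow;   /* owning process row */
                int pj = (j / nb) % npcol;   /* owning process column */
                printf("(%d,%d)%c", pi, pj, j == n - 1 ? '\n' : ' ');
            }
        }
        return 0;
    }

Blocks cycle around the grid in both dimensions, which is what balances work across processes for the block-partitioned algorithms described below.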

ScaLAPACK is designed for heterogeneous computing and is portable to any computer that supports MPI or PVM.

Like LAPACK, the ScaLAPACK routines are based on block-partitioned algorithms in order to minimize the frequency of data movement between different levels of the memory hierarchy. (For such machines, the memory hierarchy includes the off-processor memory of other processors, in addition to the hierarchy of registers, cache, and local memory on each processor.) The fundamental building blocks of the ScaLAPACK library are distributed memory versions (PBLAS) of the Level 1, 2 and 3 BLAS, and a set of Basic Linear Algebra Communication Subprograms (BLACS) for communication tasks that arise frequently in parallel linear algebra computations. In the ScaLAPACK routines, all interprocessor communication occurs within the PBLAS and the BLACS. One of the design goals of ScaLAPACK was to have the ScaLAPACK routines resemble their LAPACK equivalents as much as possible.
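The resemblance to LAPACK shows in the typical call sequence: set up a BLACS process grid, compute local storage sizes, then call a driver whose name and arguments mirror the LAPACK equivalent (e.g. pdgesv_ versus dgesv_, with a descriptor replacing the leading dimension). The sketch below assumes the common C BLACS interface (Cblacs_*) and trailing-underscore Fortran symbols; exact headers and link lines vary between builds, so treat it as illustrative rather than canonical. It is launched under mpirun/mpiexec.

    /* Sketch of the usual ScaLAPACK setup, assuming the common
     * C BLACS interface and Fortran symbol conventions. */
    #include <stdio.h>

    extern void Cblacs_pinfo(int *mypnum, int *nprocs);
    extern void Cblacs_get(int ctxt, int what, int *val);
    extern void Cblacs_gridinit(int *ctxt, char *order, int nprow, int npcol);
    extern void Cblacs_gridinfo(int ctxt, int *nprow, int *npcol,
                                int *myrow, int *mycol);
    extern void Cblacs_gridexit(int ctxt);
    extern void Cblacs_exit(int notdone);
    extern int  numroc_(int *n, int *nb, int *iproc, int *isrcproc, int *nprocs);

    int main(void)
    {
        int iam, nprocs, ctxt, myrow, mycol;
        int nprow = 2, npcol = 2;            /* assumed 2x2 grid */
        int n = 8, nb = 2, izero = 0;

        Cblacs_pinfo(&iam, &nprocs);         /* my rank, process count */
        Cblacs_get(-1, 0, &ctxt);            /* default system context */
        Cblacs_gridinit(&ctxt, "Row", nprow, npcol);
        Cblacs_gridinfo(ctxt, &nprow, &npcol, &myrow, &mycol);

        if (myrow >= 0) {                    /* this process is in the grid */
            /* local dimensions of an n x n matrix in nb x nb blocks */
            int mloc = numroc_(&n, &nb, &myrow, &izero, &nprow);
            int nloc = numroc_(&n, &nb, &mycol, &izero, &npcol);
            printf("process (%d,%d) holds a %d x %d local array\n",
                   myrow, mycol, mloc, nloc);
            /* a descinit_/pdgesv_ call would follow here, mirroring
               LAPACK's dgesv but operating on the distributed matrix */
            Cblacs_gridexit(ctxt);
        }
        Cblacs_exit(0);
        return 0;
    }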

Source Files
Filename                            Size
_multibuild                         234 Bytes
scalapack-2.0.2-shared-blacs.patch  878 Bytes
scalapack-2.0.2-shared-lib.patch    1.71 KB
scalapack-2.0.2.tgz                 4.56 MB
scalapack.changes                   4 KB
scalapack.spec                      15.6 KB
Revision 4 (latest revision is 6)
Stefan Behlert (sbehlert) committed (revision 4)
- The HPC build of scalapack requires OpenBLAS. OpenBLAS is not
  supported on s390: skip building on s390 for HPC (bsc#1079513).

- Don't set the module package to noarch. It contains arch-specific
  directory paths (boo#1076443).

- Disable the openmpi3 flavor in some products.

- Switch from gcc6 to gcc7 as additional compiler flavor for HPC on SLES.
- Add support for openmpi3 and mpich to HPC build.