Fast Matrix Multiplication via Compiler-only Layered Data Reorganization and Intrinsic Lowering

The resurgence of machine learning has increased the demand for high-performance basic linear algebra subroutines (BLAS), which have long depended on libraries to achieve peak performance on commodity hardware. High-performance BLAS implementations rely on a layered approach that consists of tiling and packing layers for data (re)organization and micro kernels that perform the actual computations. The creation of high-performance micro kernels requires significant development effort to write tailored assembly code for each architecture. This hand-optimization task is complicated by the recent introduction of matrix engines, such as IBM POWER10's MMA, Intel's AMX, and Arm's ME, which deliver high-performance matrix operations. This paper presents a compiler-only alternative to the use of high-performance libraries by incorporating, to the best of our knowledge for the first time, the automatic generation of the layered approach into LLVM, a production compiler. The modular design of the algorithm, such as the use of LLVM's matrix-multiply intrinsic as a clear interface between the tiling and packing layers and the micro kernel, makes it easy to retarget the code generation to multiple accelerators. The use of intrinsics also enables a comprehensive performance study. On processors without hardware matrix engines, the tiling and packing deliver performance up to 22x faster than PLuTo, a widely used polyhedral optimizer, for small matrices on Intel, and more than 6x faster for large matrices on POWER9. The performance also approaches that of high-performance libraries: it is only 34% slower than OpenBLAS and on par with Eigen for large matrices. With MMA in POWER10 this solution is, for large matrices, over 2.6x faster than the vector-extension solution, matches Eigen's performance, and achieves up to 96% of BLAS peak performance.


INTRODUCTION
New machine-learning algorithms, combined with the ever-increasing demands of scientific and business analytics applications, highlight the importance of improving the performance of matrix algorithms, and matrix multiplication in particular 1 . General Matrix Multiplication (GEMM) is a routine heavily used in high-performance computing and neural networks 2 , both as a standalone operation and as a crucial component of other linear-algebra algorithms, such as LU decomposition. As the typical matrix sizes in GEMM operations for deep-learning workloads approach the order of 10,000 3 , efficiently partitioning the matrices to fit into the cache hierarchy is key to performance. The state-of-the-art approach in high-performance numerical libraries uses a layered approach for matrix multiplication that consists of (re)organizing the data to improve data locality as it moves from main memory through the memory hierarchy, and then relying on specialized assembly code to execute the multiplications efficiently on each targeted architecture.
This layered method, explained by Goto and Geijn 4 and used in both proprietary (e.g., Intel MKL, IBM ESSL) and open-source (e.g., OpenBLAS 5 , BLIS 6 , Eigen 7 ) highly optimized libraries, consists of two layers. First, a macro kernel performs tiling and packing of the operand matrices across the caches. In a second layer, a micro kernel, implemented by means of compiler builtins or direct assembly instructions, extracts blocks from the packed tiles and executes the block multiplication. This work contributes new ideas to both the macro and micro kernels.
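The two-layer structure described above can be sketched in a few lines of Python. This is an illustrative model, not the paper's implementation: the macro layer copies tiles into small buffers (standing in for packing) and a plain triple loop stands in for the assembly or intrinsic micro kernel. All function names are hypothetical, and matrix dimensions are assumed to be multiples of the tile sizes.

```python
def micro_kernel(a_tile, b_tile, c_tile, mr, nr, kr):
    # Stand-in for the assembly/intrinsic micro kernel: multiply one
    # mr x kr tile by one kr x nr tile, accumulating into an mr x nr tile.
    for i in range(mr):
        for j in range(nr):
            for k in range(kr):
                c_tile[i][j] += a_tile[i][k] * b_tile[k][j]

def gemm(A, B, C, mr=2, nr=2, kr=2):
    # Macro layer: walk tiles of C and, for each, copy ("pack") the needed
    # tiles of A and B into contiguous buffers before the micro kernel runs.
    m, k, n = len(A), len(B), len(B[0])
    for i0 in range(0, m, mr):
        for j0 in range(0, n, nr):
            for k0 in range(0, k, kr):
                a_tile = [[A[i0 + i][k0 + p] for p in range(kr)] for i in range(mr)]
                b_tile = [[B[k0 + p][j0 + j] for j in range(nr)] for p in range(kr)]
                c_tile = [[C[i0 + i][j0 + j] for j in range(nr)] for i in range(mr)]
                micro_kernel(a_tile, b_tile, c_tile, mr, nr, kr)
                for i in range(mr):
                    for j in range(nr):
                        C[i0 + i][j0 + j] = c_tile[i][j]
```

Real implementations pack whole blocks once and reuse them across many micro-kernel invocations; the point here is only the division of labor between the two layers.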
Libraries have been successful in exploiting the memory hierarchy to perform efficient matrix operations. However, they all share some drawbacks: 1. users must download and install architecture-specific libraries; 2. each library needs to be tailored by writing assembly code for every new architecture design; 3. manual changes to the users' code are necessary to call the libraries; and 4. often there is a time lag between the introduction of a new architecture and the creation of a specialized micro kernel for that architecture in a library. Overcoming these drawbacks would lead to broader utilization of the layered approach and of specialized micro kernels 2 .
Linear-algebra libraries share similar code for matrix multiplication because they all make use of the ideas described by Goto and Geijn 4 . This insight leads to the first contribution of this paper, which aims at capturing the layered strategy described by Goto and Geijn in a compiler-only LLVM-based optimization pass. Implementing the layered strategy as a general-purpose compiler pass brings three benefits: 1. programs written in all languages supported by an LLVM frontend (e.g., C, Fortran, Go, Rust) can leverage the strategy when there is no library interface or a library implementation is outdated; 2. the tiling and packing code, which is common to all libraries, is automatically generated by the new compiler pass; 3. the algorithm to lower the LLVM intrinsic to architecture-specific micro-kernel code only needs to be implemented once for each architecture.
The resurgence of machine learning also led to the introduction of application-specific accelerators into general-purpose CPUs. Examples include Intel's Advanced Matrix Extension (AMX) 8 , Arm's Matrix Extension (ME) 9 , and IBM POWER10's Matrix Multiply Assist (MMA) 10,11 . AMX is an off-core accelerator with a dedicated register file that employs inner-product operations to compute in-register matrix multiplication using novel tile registers 8 . New instructions allow the CPU to communicate with AMX through an accelerator command queue 8 . Both Arm's ME and IBM's MMA unit are on-core extensions that use SIMD vector registers for input and output operands. However, ME and AMX rely on inner products for multiplication, while MMA uses outer products.
This heterogeneity of accelerators further complicates the hand optimization of micro kernels that library developers must perform. In a compiler-only code-generation path, the adoption of LLVM's intermediate representation (IR) llvm.matrix.multiply intrinsic abstracts target-specific operations under a clear interface, reducing the need for specialized micro kernels to a single implementation of the intrinsic. This LLVM intrinsic computes the product of two fixed-size matrices. LLVM provides a generic lowering algorithm that unrolls the matrix-multiply computation into target-independent IR code. LLVM's backends can then further lower the IR code to target-specific machine code for any of the many supported backends. For instance, another contribution of this paper is a lowering algorithm for the LLVM llvm.matrix.multiply intrinsic that efficiently utilizes the new MMA unit in POWER10. When generating code for MMA, this intrinsic computes a fast outer-product-based matrix multiplication for the micro kernel.
Summarizing, this paper makes the following contributions: • An algorithm for a compiler-only, architecture-independent tiling & packing strategy for the macro kernel that improves upon the strategy described by Goto et al. 4 (Section 3.1). The incorporation of this algorithm in LLVM leverages all available backends for processor architectures by building upon a compiler-intrinsic micro kernel instead of a hand-crafted assembly micro kernel as in most high-performance BLAS libraries;
• An algorithm to lower the LLVM llvm.matrix.multiply intrinsic (micro kernel) to the new IBM MMA extension (Section 3.2). The specialized code generated for MMA benefits not only the code generated by the macro-level algorithm, but any compilation path that uses the intrinsic.
• A thorough experimental evaluation that shows that: 1. the proposed macro-level algorithm, even when coupled with a generic intrinsic lowering, can perform more than 22× faster (for small matrices on Intel) and more than 6× faster (for large matrices on POWER9) than PLuTo 12 , a widely used polyhedral optimizer; 2. this compiler-only approach generates code that is on par with Eigen and 34% slower than BLAS on POWER9; 3. coupling the macro-level algorithm with an MMA-specific lowering of the llvm.matrix.multiply intrinsic in POWER10 achieves: (a) more than 2.6× the performance of the VSX micro kernel; (b) 10% more performance than BLAS for a small SGEMM and up to 96% of BLAS peak performance for large SGEMM; (c) 83% faster code than Eigen for large matrices.
An overview of the POWER10 Matrix-Multiply Assist (MMA) facility follows in Section 2. Then, Section 3 describes the proposed code-generation approach, detailing the algorithms for the macro and micro kernels. Section 4 presents and analyzes the experimental results. Section 6 describes and puts in perspective the work related to this paper, and finally, Section 7 presents conclusions.

MATRIX-MULTIPLY ASSIST IN POWER10
The Matrix-Multiply Assist (MMA) instructions were introduced in PowerISA 3.1 13 as an extension of the Vector-Scalar Extension (VSX) facility. IBM's POWER10 processor is the first to implement these new instructions and the functional units that perform two-dimensional matrix operations. MMA relies on 512-bit accumulator (ACC) registers to represent matrices, which can be manipulated by BLAS-like rank-update operations that consume vector registers as inputs. Each accumulator register is associated with four of the architecture's 128-bit vector-scalar registers (VSRs). While an accumulator is being used by MMA instructions, the associated VSRs are blocked from use. Up to eight of these accumulators can be used simultaneously, leaving 32 of the 64 VSRs available for use as vector registers.
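The register budget described above follows from simple arithmetic, sketched here for clarity (all constants come from the text; the variable names are illustrative):

```python
# Each 512-bit accumulator aliases four 128-bit VSRs; with all eight
# accumulators live, half of the 64 VSRs remain usable as vector registers.
NUM_VSRS = 64
ACC_WIDTH_BITS = 512
VSR_WIDTH_BITS = 128
VSRS_PER_ACC = ACC_WIDTH_BITS // VSR_WIDTH_BITS  # 4 VSRs blocked per accumulator
MAX_ACCS = 8
free_vsrs = NUM_VSRS - MAX_ACCS * VSRS_PER_ACC   # VSRs left for operand data
```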
Matrix multiplication and other linear-algebra operations can be expressed as a series of rank-k update operations. These operations compute the product of an m × k matrix by a k × n matrix, accumulating the result into an m × n matrix. When k = 1, the operation reduces to an outer product of two vectors, of m and n elements, respectively, as illustrated in Figure 1. A rank-k update can itself be decomposed into a sequence of k outer products.
Outer products have high computational density since they are two-dimensional operations that compute m × n element-wise operations from m + n input values. Therefore, they are the standard building block of high-performance linear-algebra frameworks such as OpenBLAS and Eigen. In processors with one-dimensional vector instructions, outer products are emulated using a combination of splatting and element-wise multiply-add instructions. MMA bypasses this emulation step by directly supporting outer products.
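The decomposition of a rank-k update into k outer products can be verified with a small pure-Python sketch (illustrative names; not tied to any MMA instruction):

```python
def outer(u, v):
    # One m x n outer product from m + n inputs.
    return [[ui * vj for vj in v] for ui in u]

def rank_k_update(C, A, B):
    # C += A @ B, computed as a sum of k outer products: the k-th column
    # of A times the k-th row of B.
    m, kk = len(A), len(A[0])
    n = len(B[0])
    for k in range(kk):
        col = [A[i][k] for i in range(m)]
        row = B[k]
        op = outer(col, row)
        for i in range(m):
            for j in range(n):
                C[i][j] += op[i][j]
```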
In the MMA rank-update instructions, the updated matrix is stored in an ACC, while the operand vectors (or matrices) are provided through VSRs. The result matrix is either a 4 × 4 matrix of 32-bit elements (floating-point or integer) or a 4 × 2 matrix of 64-bit floating-point elements. The rank k is a function of the input data type, which can vary from 4-bit integers (k = 8) to 32- or 64-bit floating-point numbers (k = 1). For all input data types of 32 bits or less, the multiplying operands are represented by one VSR each. For 64-bit inputs, one operand is represented by a pair of VSRs (4 × 64-bit elements) and the other by a single VSR (2 × 64-bit elements). A summary of the MMA rank-update instructions is shown in Table 1.
MMA can operate with several data types that have different sizes. Moreover, a single MMA instruction can accumulate multiple outer products, depending on the size of the data elements. For instance, for a 32-bit data type, each 128-bit VSR contains four elements and the MMA instruction computes and accumulates a single outer product into the 512-bit accumulator that contains 4 × 4 matrix elements 14 . Following BLAS terminology, such an instruction is called a rank-1 update 15 .

CODE GENERATION FOR GEMM
Algorithm 1 provides an overview of the computation of GEMM within our compiler pass. The code generation can be divided into two abstract levels: macro, target-independent blocking and packing of matrices for faster memory access; and micro, a small target-dependent kernel lowered from compiler-intrinsic calls. A compiler-intrinsic-based micro kernel, instead of a hand-crafted assembly micro kernel, enables the micro-kernel code to be (i) automatically optimized (for instance via vectorization) and mapped, via instruction selection, to efficient instructions on the target, or (ii) generated by a lowering algorithm that aims to exploit target-specific instructions (see Section 3.2). The macro-level strategy is inspired by the memory-hierarchy modelling described by Goto and Geijn 4 . Nevertheless, our compiler-only macro-level algorithm differs from Goto and Geijn's seminal work in key aspects that better capture modern CPUs' and accelerators' features (see Section 3.1). Implementing this strategy fully inside the compiler has advantages: 1. compile-time-known features of the target architecture, e.g., the minimum vector-register length, guide the automatic generation of tiling & packing code, contrasting with hand-crafted and target-specific implementations in BLAS libraries; 2. a flexible packing layout enables the use of either row- or column-major tiles to match the access order in the micro kernel; 3. defining tile sizes for more levels or for different cache/memory organizations only requires changes in the heuristic code itself, while the remaining algorithm code remains untouched; and 4. the packing code, generated by the compiler instead of hardcoded as in libraries, can be retargeted to accelerators with, for instance, explicit cache/memory management (e.g., software-managed caches or scratch-pad memories).
In Algorithm 1, j, k, and i are the offsets of the blocks in the matrices, counted in terms of elements. The values lda, ldb, and ldc are the leading dimensions, and thus the access strides, of the matrices as they are originally stored in memory. The blocking parameters mc, kc, and nc divide the input matrices A and B into blocks properly sized for the caches. The pack functions in lines 3 and 5 each create a buffer in main memory containing one copy of an entire block of matrix B or of matrix A. The last argument of the function pack specifies the data layout within each tile. The storage order of the tiles after packing, which is independent of the original storage order of the elements of A and B, is selected to benefit the tiled multiplication (see Section 3.1).
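A minimal sketch of what a pack function for a block of A might look like, under stated assumptions: tiles of A are laid out in rows within the block, with a column-major element layout inside each tile (the layout Section 3.1 describes for MMA's A operand), and mc and kc are multiples of mr and kr. The function name and signature are hypothetical, not the compiler's actual interface.

```python
def pack_A_block(A, i0, k0, mc, kc, mr, kr):
    # Copy the mc x kc block of A starting at (i0, k0) into a contiguous
    # buffer, tile by tile, in the order the micro kernel will read it.
    buf = []
    for it in range(0, mc, mr):        # tiles laid out in rows of the block
        for kt in range(0, kc, kr):
            for k in range(kr):        # column-major inside each mr x kr tile
                for i in range(mr):
                    buf.append(A[i0 + it + i][k0 + kt + k])
    return buf
```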
The tiling parameters mr, kr, and nr ensure optimal resource utilization in the micro kernel. ATile, BTile, AccTile, ABTile, CTile, and CNewTile are virtual LLVM IR vectors. These vectors are allocated and loaded to physical registers by subsequent code-generation passes. Lines 10 and 11 each load a single tile of an input matrix into an LLVM IR vector from the packed buffers. The algorithm invokes the intrinsic on line 12, which multiplies ATile and BTile and produces a new tile (ABTile) that is accumulated into AccTile. AccTile is kept in vector registers for all iterations of the loop on line 9. On the MMA code-generation path, AccTile is mapped to accumulator registers. A tile of matrix C is loaded into CTile, in line 15, and scaled by the constant β on the first iteration of the loop on line 2, satisfying GEMM's requirement that the matrix multiplication updates the values of the destination matrix. Likewise, the accumulation produced in AccTile is multiplied by the constant α in line 19.

Macro-level Algorithm: blocking, tiling and packing
Assuming large matrices, portions of each matrix must be brought, through the memory hierarchy, to the registers. These portions become operands to a multiplication intrinsic. At the micro level, a code-generation pass lowers the intrinsic to specific hardware instructions (Section 3.2). Contrary to other approaches, no manual vectorization or hand-written assembly code is required. The whole micro-kernel development happens inside the compiler as the intrinsic takes advantage of the target's available matrix-engine operations.
In the example shown in Figure 2 (a), matrix A has M × K elements and matrix B has K × N elements. Both matrices are stored in memory in column-major order. A block of matrix A has mc × kc elements and a block of matrix B has kc × nc elements. Each block of A is divided into tiles of mr × kr elements and each block of B is divided into kr × nr tiles. Figure 2 (b) presents the order of the elements and tiles in the packed block when mr = 2, kr = 4, and nr = 2. These small values are used to keep the figure at a reasonable size. The actual values for mr, kr, and nr are selected at compilation time to result in a performant micro-level computation for each data-type size on each architecture.
Figure 2 (b) illustrates the partition of a block into tiles. The tiles are packed within the block in the order in which they will be accessed in the innermost tiling loop. That is, disregarding the preferred layout of elements within each tile for a particular architecture, the tiles will be placed in rows in the mc × kc block of A and in columns in the kc × nc block of B. The sequential numbers inside the tiles simply indicate the order in which the elements will be accessed when micro-level tiles are loaded into vector registers. Letters are used at the end of the first row (column) and at the start of the second row (column) of tiles to indicate the relative order of these elements. Figure 2 contrasts the order of the elements of A and B as originally stored in column-major order in memory with the order of the elements of A and B as stored into the APack and BPack buffers after packing. The layout of elements within the tiles is tailored to the needs of the underlying architecture. For example, Arm's ME expects a row-major A and a column-major B for its input operands and accumulates the result in a row-major C 9 . Figure 2 (c) presents an example for IBM POWER10 MMA, which uses column-major (A), row-major (B), and row-major (C) layouts.
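The complementary packing of a block of B can be sketched the same way, again under stated assumptions: tiles of B are laid out in columns within the block, with a row-major element layout inside each tile (the MMA layout for the B operand), and kc and nc are multiples of kr and nr. Names are illustrative.

```python
def pack_B_block(B, k0, j0, kc, nc, kr, nr):
    # Copy the kc x nc block of B starting at (k0, j0) into a contiguous
    # buffer: tiles stacked down each column strip, row-major inside a tile.
    buf = []
    for jt in range(0, nc, nr):        # one column strip of tiles at a time
        for kt in range(0, kc, kr):
            for k in range(kr):        # row-major inside each kr x nr tile
                for j in range(nr):
                    buf.append(B[k0 + kt + k][j0 + jt + j])
    return buf
```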
Constraints 1-7 model how the tiling & packing factors are computed by the compiler-only macro-level algorithm. LXSizeInBytes is the number of bytes in cache level X, TypeSizeInBytes is the number of bytes of the elements in the matrix operations (e.g., 4 for single-precision floating-point numbers), and VL is the minimum vector-length size on the target architecture (e.g., 4 for 128-bit vector registers). Similar to Goto & Van Geijn 4 , the macro-level algorithm allocates a significant portion of L1 for a piece of each kc × nc block of B. However, different from Goto & Van Geijn, who allocate half of L1 for a kc × nr piece, the macro-level algorithm allocates half of L1 only for a kc × VL piece of B's kc × nc block (Constraint 1). This strategy produces a larger value for kc to exploit the fact that most modern architectures have enough vector registers to hold nr multiples of VL. The algorithm considers that the remaining half of L1 will be used to hold VL × VL elements of C from memory and VL elements of A and B, as per Constraint 2. kl is used to maximize the use of L2 for an mc × kl piece of an mc × kc block of A and to determine how much to allocate in L3 for the kl × nc piece of the kc × nc block of B.
Constraints 3 and 4 consider the effective sizes of L2 and L3, respectively, and take into account cache inclusion as in most modern architectures. The inclusion property of caches is not explicitly modelled by Goto & Van Geijn. A final constraint is to make kc, mc, and nc multiples of their respective register-tiling factors in the micro kernel, kr, mr, and nr (Constraints 5-7). POWER10's MMA features eight 512-bit accumulators and thirty-two 128-bit vector registers to support outer-product computations. Therefore, for 32-bit matrix elements the performant choice is mr = 8 and nr = 16, as explained in Section 3.2, while kr is selected to maximize the number of in-accumulator operations.

Listing 1: An example usage of llvm.matrix.multiply.

%CNewTile = call <128 x float> @llvm.matrix.multiply.v128f32.v40f32.v80f32(%ATile, %BTile, 8, 5, 16)
The effective sizes of the L2 and L3 caches may depend on the machine load. For instance, L3 caches can be shared on some configurations of POWER10 (Section 4.1.1), and this allows a single-threaded computation to use all of the core's caches. To account for this effect, the macro algorithm has command-line options to provide the effective L2 and L3 cache sizes available per core. Parameters mr, kr, and nr define the size of the GEMM computed by the micro kernel at each innermost-loop iteration in Algorithm 1. The values of these parameters are selected by the intrinsic implementation, based on the architectural design of the target. Section 3.2 explains how optimal mr, kr, and nr are chosen for the micro kernel on POWER10 MMA. The values of mr, kr, and nr are the only parameters that need tuning, as the cache-blocking parameters mc, kc, and nc are chosen based on target-specific cache-size information available in LLVM. Following the blocking strategy described by Goto and Geijn, the value of kc is selected in such a way that an entire row of tiles of matrix A (mr × kc elements) and an entire column of tiles of matrix B (kc × nr elements) fit simultaneously in the L1 data cache. The L1 cache must also have space for an mr × nr tile of matrix C, but such a tile is quite small in comparison with the space needed for the tiles of A and B.
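As a concrete illustration of Constraints 1 and 5 only, the selection of kc could be sketched as below. This is a simplified reading of the constraints, not the compiler's actual heuristic; the cache and vector parameters are example values.

```python
def pick_kc(l1_bytes, type_bytes, vl, kr):
    # Constraint 1: half of L1 holds a kc x VL piece of a block of B.
    kc = (l1_bytes // 2) // (vl * type_bytes)
    # Constraint 5: round kc down to a multiple of the register-tiling
    # factor kr used by the micro kernel.
    return kc - (kc % kr)

# e.g., a 32 KiB L1, f32 elements, 128-bit vectors (VL = 4), kr = 5
kc = pick_kc(32 * 1024, 4, 4, 5)
```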
Figure 2 (b) shows the order of the matrix elements in the buffers after packing, while Figure 2 (a) shows the original matrices. Arrays A and B in Figure 2 (c) reflect that the original matrices are stored in column-major layout. Buffers APack and BPack in Figure 2 (c) illustrate that, after packing, the elements of matrices A and B are stored into these buffers in the order in which they will be accessed by the micro-level computation. Packing provides two benefits. First, the tiles of matrices A and B lie in the copied memory in the order in which they will be loaded for the multiply intrinsic in lines 10 and 11. Second, each row of tiles of A residing in a part of the L1 cache is used in ⌈nc/nr⌉ multiplications (⌈mc/mr⌉ times for a column of tiles of B).
When a dimension of CNewTile is not a multiple of the micro-tile size, the remainder elements are filled with zeroes in the packing buffers and the result is stored with scalar store instructions instead of the efficient matrix-store intrinsic used for full tiles. This brings the overhead of using slow, element-by-element stores instead of fast strided stores. For the remaining, zero-padded elements, the micro kernel still performs a full computation.

Micro-Level Algorithm
The micro-level algorithm is centered on the LLVM intrinsic llvm.matrix.multiply. Intrinsics encapsulate a computational idiom and enable specialization to specific architectures, thus preventing the duplication of code within a compiler infrastructure. An intrinsic can be viewed as a function call that is eventually replaced by inlined generated code. Like a function call, an intrinsic has input parameters and a return value. An intrinsic can be transparently lowered to target-agnostic or to target-specific code. As expected, the target-agnostic code has lower performance but is available for all LLVM-supported backends. This section describes the design process to create a target-specific lowering pass for the llvm.matrix.multiply intrinsic that utilizes the MMA instructions to deliver high-performance matrix multiplication.
The LLVM IR fragment in Listing 1 shows how the llvm.matrix.multiply intrinsic, called in line 12 of Algorithm 1, takes ATile and BTile to produce CNewTile. The mangled function name shows the types of %ATile, %BTile, and %CNewTile. The return value, %CNewTile, is <128 x float>, an 8 × 16 result, while %ATile is <40 x float>, an 8 × 5 tile, and %BTile is <80 x float>, a 5 × 16 tile. The three remaining parameters represent the dimensions of the matrices, mr = 8, kr = 5, and nr = 16, indicating that this intrinsic computes an 8×16 = (8×5) ⋅ (5×16) multiplication. These tile sizes must be known at compilation time and must match the dimensions of the packed input and output vectors. The requirement that the tile sizes are known at compile time allows both the target-agnostic lowering and the MMA target-specific lowering to completely unroll the matrix-multiplication loop nest in the IR, giving the backend full control over instruction rescheduling.
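The semantics of the intrinsic on flat vectors can be modeled in a few lines. This sketch assumes the column-major element order that LLVM's matrix intrinsics use by default; it mirrors the shape of the call in Listing 1 (an m × k times k × n product returned as an m × n flat vector) but is only an emulation, not the lowering itself.

```python
def matrix_multiply_flat(a, b, m, k, n):
    # a: m x k matrix as a flat column-major list (len m*k)
    # b: k x n matrix as a flat column-major list (len k*n)
    # returns: m x n product as a flat column-major list (len m*n)
    c = [0.0] * (m * n)
    for j in range(n):
        for i in range(m):
            acc = 0.0
            for p in range(k):
                acc += a[p * m + i] * b[j * k + p]  # column-major indexing
            c[j * m + i] = acc
    return c
```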
In addition to these software constraints, scheduling decisions in the algorithm that lowers the intrinsic computation to execute on an MMA backend must also adhere to the following hardware constraints: (1) there are at most eight accumulators available per thread, and for each accumulator that is used, four VSRs are blocked from use; (2) there are 64 VSRs, thus if eight accumulators are used, there are 32 VSRs remaining to contain the data from the input matrices; (3) two multiply-and-accumulate outer-product instructions can be issued in a single cycle; (4) the issue-to-issue latency for the same accumulator is four cycles; and (5) spilling an accumulator to memory is an expensive operation because it requires an instruction to disassemble the accumulator into four VSRs, four vector store instructions, and, later, four vector load instructions.

Figure 3 illustrates how CNewTile is divided into portions that are assigned to the MMA accumulators. ATile, BTile, and CNewTile are represented in two dimensions to illustrate the position of the elements in the matrices. Each small square in the figure represents one 32-bit element of a matrix. A circled number indicates that the corresponding portion of CNewTile is assigned to that accumulator number. When the intrinsic is executed, each accumulator computes kr outer products using a multiply-and-add operation. The two tones of gray colour in Figure 3 illustrate that a strip of ATile and a strip of BTile are used for the accumulation of each portion of CNewTile. Each strip is reused for all the accumulations in the same row or column of accumulators. Each outer-product computation needs two four-element operands, one from ATile and one from BTile. These operands are surrounded by dashed lines for the two accumulations highlighted in gray. The arrows indicate how the loop indices in Algorithm 2 iterate for the example in the figure. Algorithm 2 describes the lowering of the intrinsic computation for MMA. The compile-time constants VAccs and HAccs (used in lines 5 and 8) specify the layout of the accumulators for the computation. For the example in Figure 3, VAccs = 2 and HAccs = 4. These constants in the compiler generalize the lowering and make it applicable to future architectures where the ideal arrangement to increase data reuse may be different from the 2 × 4 arrangement in the POWER10 processor.
After creating CNewTile (line 2) and assembling and zeroing all the accumulators (line 3), Algorithm 2 iterates k from 0 to kr − 1 (line 4) to extract operands from ATile and BTile into virtual IR registers. For each value of k, using the accumulator assignment shown in Figure 3, the algorithm extracts two operands from ATile (lines 5-6) and four operands from BTile (lines 8-9). For k = 0 the operands are extracted from the leftmost column of ATile and from the top row of BTile in Figure 3.

The algorithmic presentation in Algorithm 2 uses the notation AOps[v][k] and BOps[h][k] to show the connection between the operands extracted from ATile and BTile and the use of these operands in the MMABuiltin on line 15. In the compiler, at this point in the lowering, each four-element operand is extracted into a virtual IR register. The actual VSRs used for each operand are determined later by a register-allocation pass.
In Figure 3, each operand is formed by four elements and, once extracted, occupies one 128-bit VSR. Given constraints (1) and (2), with the choice of kr = 5, there are enough non-blocked VSRs to contain all thirty operands needed for the computation illustrated in Figure 3. Thus, laying out the accumulators in this 2 × 4 pattern maximizes the reuse of values loaded into the VSRs: operands extracted from ATile are reused four times and operands extracted from BTile are reused two times.
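The arithmetic behind this register-budget and reuse argument can be made explicit (constants taken from the text; names are illustrative):

```python
# 2 x 4 accumulator layout with kr = 5: per k step, one operand per
# accumulator row (from ATile) and one per accumulator column (from BTile).
VACCS, HACCS, KR = 2, 4, 5
operands = (VACCS + HACCS) * KR   # VSRs needed to hold all extracted operands
free_vsrs = 64 - 8 * 4            # VSRs not blocked by the eight accumulators
reuse_a = HACCS                   # each ATile operand feeds a row of 4 accumulators
reuse_b = VACCS                   # each BTile operand feeds a column of 2 accumulators
```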
Once all operands are extracted into VSRs, the algorithm again iterates over the dimension kr to compute each piece of C, avoiding spilling any accumulator to memory. Following constraint (3), two outer-product instructions are issued in each cycle. Four pairs of accumulators can be scheduled before circling back to the first pair, thus satisfying constraint (4). The assignment of a portion of CNewTile to a single accumulator eliminates the need to spill accumulators, thus increasing performance according to constraint (5).

The lowering of the intrinsic for execution on POWER10 is based on a set of builtins that encapsulate the computation of an outer product. There is a set of builtins for each data type to allow the code generator to select a multiply-and-add that either initializes or updates an accumulator. All combinations of positive/negative multiplication with positive/negative accumulation are available as well. For some data types there are also builtins that perform saturating arithmetic instead of overflow for accumulation. Thus, when lowering the intrinsic for the GEMM computation, the compiler selects the appropriate positive-multiply positive-accumulate builtin for the specified data type, which is then used on line 15.

Other Data Types
The presentation so far assumed 32-bit data types, where each operand VSR contains 4 elements and an MMA instruction computes a rank-1 update, computing and accumulating a single outer product. Halving the data-type size doubles the number of elements in each VSR and doubles the rank of the update. For example, for a 16-bit data type MMA computes a rank-2 update, while for an 8-bit data type it computes a rank-4 update. The packing of more elements into a single VSR and the accumulation of multiple outer products by a single MMA instruction require changes to Algorithm 1 and Algorithm 2. Let r be the number of outer products performed by an MMA instruction, i.e., the rank of the update. Now the step size of the loops on lines 4 and 12 must both be r because, in Figure 3, r rows of BTile and r columns of ATile are packed into each VSR. The extraction of operands in lines 6 and 9 is now a strided access. For instance, for r = 2 (16-bit data types), four consecutive elements are extracted from row k and four consecutive elements are extracted from row k + 1 to form the 128-bit VSR. The length of kr must increase by r times to provide enough data to populate the VSRs. The effect is that more partial-product accumulations can be computed per micro-kernel invocation given the same number of assemblies and disassemblies, because the number of multiplications per outer-product instruction increases by r.
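The rank-r behavior can be modeled as below: a single rank-r instruction accumulates the same result as r successive rank-1 updates into a 4 × 4 accumulator. This is a numerical sketch of the semantics only; real MMA instructions take the r columns/rows packed into VSRs, whereas here they are passed as explicit lists.

```python
def rank_r_update(acc, a_cols, b_rows):
    # acc: 4 x 4 accumulator (list of lists)
    # a_cols: r four-element columns of ATile
    # b_rows: r four-element rows of BTile
    # Accumulates the sum of r outer products, like one rank-r MMA op.
    for col, row in zip(a_cols, b_rows):
        for i in range(4):
            for j in range(4):
                acc[i][j] += col[i] * row[j]
```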
For the double-precision floating-point data type f64, an accumulator contains 4 × 2 64-bit elements. The operand extracted from ATile is placed into a combination of two VSRs that together contain four elements, while the operand extracted from BTile is placed into a single 128-bit VSR containing two elements. Therefore, for f64 the value of nr should be halved to reflect the number of VSRs available. With this reduction, an ATile tile occupies 16 VSRs and a BTile tile also occupies 16 VSRs. The extraction of operands into vector registers in lines 6 and 9 of Algorithm 2 must be changed accordingly. Until now, the algorithms have used values of mr and nr selected such that a micro kernel with the accumulator arrangement shown in Figure 3 could be computed with a single set of assemble and disassemble instructions. However, the implementation of Algorithm 2 in LLVM must handle any llvm.matrix.multiply intrinsic created by any compilation path and thus must handle arbitrary values for nr, mr, and kr. The code-lowering algorithm also supports inputs and outputs in any access order through modifications to the functions that extract operands and that store the results from the accumulators to memory.
To handle larger values of mr and nr, the micro-level code-lowering algorithm has an additional outer double-nested loop that logically divides the CVec tile into 8 × 16-element sections as shown in Figure 3. Each of these sections can then be handled as shown in Algorithm 2. The disadvantage of a tile size that spans multiple accumulator sections is that the extraction of data into vector registers becomes more complex. For example, consider a 32-bit data multiplication as shown in Figure 3 but with values of nr and mr double what is shown in the figure. The rows of ATile and BTile shown in Figure 3 are now a portion of the rows of larger tiles, and the data extraction must gather the correct data into the vector registers that will be used by the accumulators. This data gathering adds additional code and may impact access locality if the tiles are large enough. Moreover, if Algorithm 2 is used in combination with Algorithm 1, then the packing work done earlier in lines 3 and 5 of that algorithm may not result in optimal locality. As well, if kr is smaller than kc (see Figure 2), multiple invocations of the intrinsic are needed to compute each element of the result matrix. Therefore, accumulators must be assembled and disassembled multiple times, creating an issue with constraint (5).

A PERFORMANT COMPILER-ONLY SOLUTION
Experimental results support the following claims: 1. the macro-level algorithm is architecture-independent: it is performant across four different architectures (Intel's and AMD®'s x86; IBM's POWER9 and POWER10); 2. the algorithm, when fully implemented inside the compiler, surpasses the performance of PLuTo, a widely used polyhedral-based compiler-only approach; 3. the macro-level algorithm, even when coupled with a generic lowering of LLVM's matrix-multiply intrinsic, approaches the performance of Eigen 7 and OpenBLAS 5; 4. for small GEMMs the compiler-only approach can surpass the performance of library calls; 5. the MMA lowering delivers more than 2.6x the performance of VSX on POWER10; and 6. our compiler-only solution boosts GEMM's performance by exploiting target-specific matrix engines: the micro-level algorithm lowers multiply-add GEMM reductions to efficient IBM MMA instructions, resulting in better performance than Eigen and up to 96% of OpenBLAS's peak performance.

Machine Setup
Experimental evaluation used the four platforms shown in Table 2 1. All platforms run Linux with a 64-bit kernel at version 4.15.0-155-generic, except POWER10, which runs version 4.18.0-277.el8.ppc64le. L1 and L2 cache sizes are per core, while L3 is shared among all cores on Intel's and AMD's machines. The L3 cache is shared between pairs of cores on POWER9 but is local to a core on POWER10.

Code Generation and Libraries
Section 4.3 and Section 4.4 show results for the following code-generation strategies: • Intrinsic: GEMM loops are replaced with a single call to LLVM's matrix-multiply intrinsic. This option causes all GEMM loops to be unrolled and multiply-add computations to be lowered with either generic or MMA lowering (Section 3.2).
• Tiling: This option tiles the GEMM loops iterating over each dimension (M, K, and N) and calls the matrix-multiply intrinsic in the body of the innermost loop (depth 3). Tile sizes are computed following Goto et al.'s strategy as described in Section 3.1.
• Tiling+Packing: This option tiles the GEMM loops and packs the input matrices A and B as described in Section 3.1. The matrix-multiply intrinsic is called in the body of the innermost loop (depth 6) to compute a block of C from tiles of A and B.
• PLuTo: This option applies a source-to-source transformation using PLuTo 2, a polyhedral-based parallelism and locality optimizer, that automatically tiles and reorders loops annotated with #pragma scop. PLuTo's auto-tiling 3 for both the first- and second-level caches is enabled.
• BLAS: This option replaces the GEMM loops with a single call to the GEMM routine of the CBLAS interface. The OpenBLAS 4 5 library is the implementation of the CBLAS interface evaluated. The library is compiled and linked as described in Section 4.1.2.
• Eigen: This option replaces the GEMM loops with Eigen 5 7 code to compute the general matrix-matrix multiply. Eigen code is compiled as described in Section 4.1.2.
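As a concrete illustration of the Tiling+Packing strategy, the loop structure can be sketched as a scalar C++ model. The tile sizes MC/KC/NC below are hypothetical stand-ins for the cache-derived blocking factors, and the innermost triple loop stands in for what the compiler emits as a single llvm.matrix.multiply call per pair of tiles.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Scalar sketch of a Goto-style tiled-and-packed SGEMM, column-major:
// A is M x K, B is K x N, C is M x N. MC/KC/NC are illustrative only.
constexpr std::size_t MC = 64, KC = 64, NC = 64;

void sgemmTiledPacked(const float *A, const float *B, float *C,
                      std::size_t M, std::size_t N, std::size_t K) {
  std::vector<float> Ablk(MC * KC), Bblk(KC * NC);
  for (std::size_t jc = 0; jc < N; jc += NC) {
    std::size_t nb = std::min(NC, N - jc);
    for (std::size_t pc = 0; pc < K; pc += KC) {
      std::size_t kb = std::min(KC, K - pc);
      // Pack a KC x NC block of B so its tiles lie contiguously in the
      // order they will be accessed.
      for (std::size_t j = 0; j < nb; ++j)
        for (std::size_t p = 0; p < kb; ++p)
          Bblk[j * kb + p] = B[(pc + p) + (jc + j) * K];
      for (std::size_t ic = 0; ic < M; ic += MC) {
        std::size_t mb = std::min(MC, M - ic);
        // Pack an MC x KC block of A in the layout the kernel reads.
        for (std::size_t p = 0; p < kb; ++p)
          for (std::size_t i = 0; i < mb; ++i)
            Ablk[p * mb + i] = A[(ic + i) + (pc + p) * M];
        // Micro kernel: in the generated code this is one
        // llvm.matrix.multiply invocation per tile pair.
        for (std::size_t j = 0; j < nb; ++j)
          for (std::size_t i = 0; i < mb; ++i) {
            float acc = 0.0f;
            for (std::size_t p = 0; p < kb; ++p)
              acc += Ablk[p * mb + i] * Bblk[j * kb + p];
            C[(ic + i) + (jc + j) * M] += acc;
          }
      }
    }
  }
}
```

The Tiling strategy corresponds to the same blocked loop nest without the two packing copy loops, reading A and B in place.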
The experiments use versions of BLAS and Eigen, which support both VSX and MMA instructions, that were contributed to the open-source community by internal teams from IBM. All versions listed above are compiled and set to execute single-threaded code. A naive implementation of an M × K × N SGEMM in C++ serves as the base code for our algorithm. Carvalho et al.'s pattern-matching algorithm 16 is used to automatically identify and replace these GEMM loops in the source code with the code generated by the macro-level algorithm. The input matrices are stored and accessed in column-major order, and the values mr = 16, nr = 4, and kr = 64 are used for all platforms, except on POWER10, which used mr = 16, nr = 8, and kr = 128.
Even though PLuTo lacks critical optimizations, such as packing, it is the only compiler-only solution available that produced correct results for the programs used in the experimental evaluation. Polly, another compiler-only solution, either failed to optimize the input code or generated code that computes incorrect results. 6 This paper presents an end-to-end solution that compiles a C/C++ input program and replaces identified GEMM loops with the high-performance code generated by the macro-level algorithm. Another compiler-only solution, introduced by Uday Bondhugula, is not an end-to-end solution because it relies on hand-crafted MLIR code written in the Affine dialect 17. At the time of writing, there is no way to automatically translate C/C++ programs to MLIR. Thus, Bondhugula's hand-crafted input benefits from MLIR passes that are unreachable for a C/C++ end-to-end compilation path. These MLIR passes are also not available when compiling Eigen's and BLAS's code, our baselines. The experimental results in this section indicate that the macro-level algorithm and the generic-lowered micro kernel produce code that reaches performance comparable to Bondhugula's approach relative to BLAS 17. Moreover, the MMA lowering reaches up to 96% of BLAS's peak performance, while Bondhugula's approach reached only 91% of BLAS's performance on a Coffee Lake Intel machine with MLIR code that employs a target-specific vectorization pass 17.

Experimental Methodology
The methodology follows these steps: 1. Compile each benchmark, for each platform, with aggressive optimization flags (see Section 4.1.2) using the six strategies: Intrinsic, Tiling, Tiling+Packing, PLuTo, BLAS, and Eigen. 2. Create a list containing all of the executables. 3. Measure the execution time of each element of the list once. 4. Randomize the list of executables before the next set of measurements. 5. Repeat until there are twenty measurements for each executable. This methodology ensures that changes to the execution environment that may affect performance manifest as a higher variance between executions of the same executable rather than as a bias in the measurements for one version of the experiments. Section 4.3 and Section 4.4 show 95% confidence intervals.
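The randomized interleaving in steps 3–5 can be sketched as follows; the function and executable names are placeholders of our own, and a fixed seed is used only to make the sketch reproducible.

```cpp
#include <algorithm>
#include <map>
#include <random>
#include <string>
#include <vector>

// Sketch of the interleaved, randomized measurement loop: the list of
// executables is shuffled before every round so that environment drift
// shows up as variance across runs, not as bias toward any one version.
std::map<std::string, std::vector<double>>
measureAll(const std::vector<std::string> &executables,
           double (*runOnce)(const std::string &), int rounds = 20) {
  std::map<std::string, std::vector<double>> times;
  std::vector<std::string> order = executables;
  std::mt19937 rng(42); // fixed seed: reproducibility of this sketch only
  for (int r = 0; r < rounds; ++r) {
    std::shuffle(order.begin(), order.end(), rng); // new order per round
    for (const auto &exe : order)
      times[exe].push_back(runOnce(exe)); // one measurement per round
  }
  return times;
}
```

After twenty rounds every executable has exactly twenty measurements, from which the 95% confidence intervals are computed.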

Performance Comparison Against Other Compiler-Only Approaches
The results in Figure 4 indicate that for small SGEMM sizes the performance of Tiling is far superior to PLuTo across all platforms: Tiling is up to 22x faster than PLuTo for the smallest SGEMM size on Intel, almost 25x faster on AMD, and over 11x faster on POWER9. For small problem sizes Tiling performs best overall because the matrices fit well in the memory hierarchy. Given that small matrices fit in cache, Tiling+Packing only adds overhead, due to the extra memory movement for packing the input matrices, and thus performs worse than Tiling but still better than PLuTo. In addition, Tiling is aware of the vector-unit capacity in the target architecture and generates a micro kernel that fully utilizes the vector unit. PLuTo performs poorly because its auto-tiling mechanism generates inner loops with conservative tile sizes that do not saturate the vector-unit capacity. The graphs for medium and large SGEMMs do not include results for Intrinsic because the LLVM matrix-multiply intrinsic is designed for small kernels and completely unrolls the loops. Invoking that intrinsic with large dimensions leads to prohibitive compilation times.
Figure 5 shows speedup results for medium SGEMM sizes. Overall, Tiling+Packing performs best across all platforms. Medium-sized matrices still fit in the lower-level caches (L2 and L3) but not in L1. As matrices become larger, they span multiple memory pages. Therefore, the data reorganization performed by packing improves the utilization of the memory hierarchy by increasing temporal data locality and reducing both TLB misses and page faults. PLuTo does not employ packing and thus continues to perform poorly for medium-sized matrices. On AMD, Tiling is still competitive with Tiling+Packing due to the larger caches and lower cache-access latency. Nevertheless, as Figure 6 shows, with large SGEMM sizes that no longer fit in the lower-level caches (L2 and L3), the performance gap between Tiling and Tiling+Packing widens. On POWER9, Tiling does not degrade as much as on Intel and AMD for large problems due to the larger cache lines on PowerPC™. However, Tiling+Packing remains the best strategy for large SGEMMs due to its higher temporal data locality and significantly fewer TLB misses and page faults in contrast to Tiling.
LLVM has a polyhedral optimizer, Polly 18, that is not enabled by default. If Polly's optimizations are enabled 7, the code runs more than 4.8× slower than Tiling+Packing on all three architectures. Polly also has an option to parallelize loops 8. When this parallelization is run on 20 threads, the resulting code is 1.4× slower than Tiling+Packing on POWER9. Unfortunately, the SGEMM results computed with Polly-optimized code are incorrect 9. Thus, the performance results for Polly cannot be included in the comparison at this time. As reported by Carvalho et al. 16, we also failed to reproduce the performance results reported by Gareev et al.'s recent work 19. Enabling Polly's GEMM idiom recognition and optimization pass 10, the code runs 4.8× slower than Tiling+Packing. Therefore, to the best of our knowledge, this work is the first to present a compiler-only solution that produces performance comparable to high-performance libraries (Section 4.3) without relying on hand-written assembly micro kernels 17.
LLVM's default lowering of matrix-load intrinsics generates code that spills data from both input operands loaded for the matrix-multiply intrinsic. The maintainers of the matrix intrinsic passes provide an alternative lowering for the matrix-multiply intrinsic that bypasses matrix-load intrinsics and generates an unrolled sequence of load-load-multiply instructions with smaller load instructions. This alternative lowering was used as a workaround to eliminate the spill problem on Intel, AMD, and POWER9. However, this alternative lowering is currently not modularly integrated into the framework and thus could not be used for the micro-level algorithm. Therefore, the spill problem for Intrinsic, Tiling, and Tiling+Packing on POWER10 was resolved by applying a code transformation after the micro-kernel lowering to sink the load instructions that load the operands of matrix-multiply closer to their uses, the MMA outer-product instructions.

Performance Comparison Against High-Performance Libraries
The results in Figure 7 indicate that for the smallest SGEMM size the performance of Intrinsic is on par with both Eigen and BLAS, thus revealing that the backend alone generates efficient code for computing very small SGEMMs. The superior performance of Tiling is due to better utilization of the vector registers in the target architecture. Tiling is more than 85% faster than BLAS and over 2.6× faster than Eigen on Intel for the problem size M = N = K = 16. On AMD, Tiling is more than 2.3× faster than Eigen and 2.8× faster than BLAS for the same problem size. Intrinsic performs worse in comparison with the libraries as the problem size increases because, lacking tiling, it exhibits lower cache locality. In summary, Figure 7 indicates that the compiler-only macro-level approach can surpass high-performance libraries for small problem sizes on multiple platforms.
For medium problem sizes Tiling+Packing is the best code-generation strategy overall, and it is on par with Eigen across all platforms for the problem size M = N = K = 128. As the size of the matrices increases, the performance of Tiling degrades with respect to both BLAS and Eigen because LLVM's matrix-load intrinsic lowering results in additional spill code. However, Tiling+Packing remains competitive across all platforms by matching Eigen's performance and is less than 80% slower than BLAS for M = N = K = 512. As discussed in Section 4.2, Tiling and Tiling+Packing show similar performance on AMD due to its larger caches and lower cache-access latency.
For large matrices Tiling performs worst overall because it produces a data layout that leads to more cache misses. Tiling increases the cache utilization within a given dimension but makes cross-dimension accesses expensive because elements of a given row, in different columns, are located on different virtual-memory pages (e.g., 8 KB apart in a column-major f32 matrix with 2048 rows). The packing data reorganization creates a better element order that increases data locality in both dimensions. Therefore, Tiling+Packing performs significantly faster than Tiling on Intel and AMD. On POWER9, the gap between Tiling and Tiling+Packing is not as large as on other platforms because POWER9 has larger cache lines (128 bytes), twice as large as on the other platforms, which, coupled with prefetching, helps to hide memory latency. Still, Tiling performs significantly worse than Tiling+Packing on POWER9 as the problem size increases due to additional Translation Lookaside Buffer (TLB) misses and page faults. A similar pattern appears on both the Intel and AMD machines, where Tiling suffers memory penalties as the problem size increases, leaving Tiling+Packing as the best-performing code-generation strategy overall.
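The page-distance figure follows directly from the column-major stride; a short worked example, with a helper name of our own, makes the arithmetic explicit.

```cpp
#include <cstddef>

// Byte distance between element (i, j) and (i, j+1) of a column-major
// matrix with `rows` rows and element size `elem` bytes: one full column
// separates horizontally adjacent elements. This is the cross-dimension
// access cost that packing eliminates by making tiles contiguous.
constexpr std::size_t rowNeighborDistance(std::size_t rows, std::size_t elem) {
  return rows * elem;
}

static_assert(rowNeighborDistance(2048, sizeof(float)) == 8192,
              "2048-row f32 matrix: the next column's element is 8 KB away");
```

With a typical 4 KB page, every step along a row of such a matrix therefore touches a different page, which is why Tiling alone accumulates TLB misses and page faults as the problem size grows.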
Compiling the largest SGEMM with Clang -O3 11, the state of the art when using Clang prior to this work, can result in code that is more than 68× slower than BLAS. This work significantly reduces this performance gap. For the largest SGEMM size (M = N = K = 4096), Tiling+Packing is slower than BLAS: over 2.2× slower on Intel and 43% slower on AMD. In comparison with Eigen, Tiling+Packing is 1.75× slower on Intel and 34% slower on AMD. On POWER9, Tiling+Packing is 34% slower than BLAS and on par with Eigen. The performance gap between BLAS and Tiling+Packing is in great part due to a performance gap between the BLAS micro kernel and the generic-lowered micro kernel available upstream in LLVM. The generic-lowered micro kernel is 2.1× slower on Intel, 70% slower on AMD, and 30% slower on POWER9 than the micro kernel from BLAS.
Portability is a key design goal for the macro-level algorithm because it leads to the generation of a micro kernel, for any target architecture available in LLVM, with good performance, as indicated by the results above. Nevertheless, the generic-lowered micro kernel is not a match for the hand-crafted assembly micro kernels available to high-performance libraries. This performance gap can be reduced, as demonstrated for POWER10 MMA (see Section 3.2) in the following section, by a platform-specific micro-level lowering algorithm. This strategy of coupling a target-independent macro-level algorithm for tiling and packing with a target-specific micro-level lowering algorithm can be a roadmap for hardware vendors. The results in the following section lead to the prediction that, once micro-level lowering is available for Intel, AMD, and POWER9, the gap between the layered compiler-only solution and high-performance libraries can be bridged.

MMA intrinsic
The performance of the compiler-only macro kernel can be boosted by exploiting matrix engines. This study focuses on IBM POWER10's MMA and uses the second-silicon version of POWER10, which has hardware characteristics similar to those that will be featured in commercially available versions but slightly different firmware settings. Therefore, the reported speedups relative to BLAS are consistent with the results expected for the commercial version of POWER10. The micro kernels in both BLAS and Eigen were re-engineered by IBM experts to make efficient use of MMA. The parameters used for these experiments are mr = 16, nr = 8, and kr = 128.
Unlike on the other platforms, Figure 10a shows that for small kernels Tiling+Packing results in performance that is either better than or on par with the libraries: Tiling+Packing is over 50% faster than Eigen for the problem size M = N = K = 16 and over 83% faster for M = N = K = 32 and M = N = K = 64. Tiling+Packing achieves up to 10% more performance than BLAS for SGEMMs of size M = N = K = 32 and M = N = K = 64. With MMA, as GEMM is implemented with outer products instead of inner products, columns of matrix A are multiplied by rows of matrix B, producing an accumulator result that holds partially computed column elements of matrix C, one per VSR associated with the accumulator. Therefore, the best access layout for the MMA builtin is A and C in column-major and B in row-major order. However, all input matrices are stored, and thus accessed, in column-major order. Therefore, loaded tiles of matrix B need to be transposed prior to calling the MMA builtin. For Intrinsic and Tiling, this means additional vector shuffle and merge instructions in the micro-kernel code. For Tiling+Packing, the extra instructions are generated not as part of the micro level but as part of the macro-level algorithm that packs the matrices. With fewer instructions, the micro-kernel code generated with Tiling+Packing runs faster and exhibits better instruction-cache utilization. In addition, the packing loads act as prefetching loads and reduce the access latency for the operands of outer-product instructions. Moreover, calling a library incurs performance overheads that are more noticeable on smaller problem sizes. These results indicate that the proposed macro-level approach coupled with an MMA-specific lowering of llvm.matrix.multiply can provide better performance than two state-of-the-art libraries.
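The outer-product formulation can be illustrated with a scalar model — a sketch of the dataflow, not the actual MMA lowering: each step k performs a rank-1 update of C using column k of A and the matching row of B, which is why the natural layout has A and C in column-major order and rows of B readily accessible.

```cpp
#include <cstddef>

// Scalar emulation of the outer-product formulation used by matrix
// engines: C is accumulated as a sum of rank-1 updates, one per column of
// A and matching row of B, instead of per-element inner products.
// A (M x K), B (K x N), and C (M x N) are all column-major here, so row k
// of B is the strided sequence B[k + j*K], j = 0..N-1 -- the stride that
// packing removes by transposing tiles of B up front.
void gemmOuterProduct(const float *A, const float *B, float *C,
                      std::size_t M, std::size_t N, std::size_t K) {
  for (std::size_t k = 0; k < K; ++k)        // one rank-1 update per k
    for (std::size_t j = 0; j < N; ++j)
      for (std::size_t i = 0; i < M; ++i)
        C[i + j * M] += A[i + k * M] * B[k + j * K];
}
```

In the MMA lowering each such rank-1 (or rank-k, depending on the data type) update is a single outer-product instruction targeting a 4 × 4 accumulator.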
Figure 10b contrasts the performance of the generic-lowered code, which uses VSX instructions on POWER10, with the performance of the new MMA-specific lowered code presented in this paper (Section 3.2). The MMA solution is over 2.6× faster than the VSX solution for the largest SGEMM. The conclusion is that a target-specific lowering can outperform Eigen, when both use MMA instructions, for the SGEMM sizes M = N = K = 2048 and M = N = K = 4096. In fact, Tiling+Packing with VSX matches the performance of Eigen with MMA, indicating that the macro-level algorithm made better tiling and packing decisions with compile-time information than Eigen. Tiling+Packing provides over 2.6% better performance than Eigen for the largest SGEMM. Furthermore, the MMA-specific lowering algorithm generated code that closely matches BLAS performance (up to 96%), which itself achieves almost 50 flops/cycle (almost 80% of peak). In essence, a compiler-only layered approach built around a compiler intrinsic for matrix multiplication can achieve performance comparable to or better than state-of-the-art libraries. In addition, a target-specific lowering for the matrix-multiplication intrinsic is key to making the best use of the vector/matrix units, the higher cache locality, and the better utilization of the memory hierarchy delivered by the macro-level algorithm.

ADDITIONAL OPPORTUNITIES
This groundwork implementation of the layered approach in LLVM creates opportunities that will benefit other BLAS kernels and other specialized architectures.

Macro-level strategy for other BLAS kernels
Algorithm 1 describes general techniques for blocking, tiling, packing, and intrinsic invocation that can be used for other BLAS matrix operations. As an example, consider SYR2K, which computes either the lower or upper triangular half of C ← α × A × Bᵀ + α × B × Aᵀ + β × C. Matrix C is symmetric, while matrices A and B are general. High-performance implementations of SYR2K partition the C matrix into blocks and use a pair of GEMM operations to update each block. Both the normal (non-transposed) and transposed versions of matrices A and B are needed.
An equivalent to Algorithm 1 for SYR2K will use two calls for packing matrix A (line 3) and two calls for packing matrix B (line 5). In each case, one of the calls produces a packed version of a block of the matrix and the other call produces a packed version of the transpose of a block, with different blocks used for the normal and transposed versions. In the inner loops, the algorithm loads two tiles of A (line 10) and two tiles of B (line 11), one tile from the normal block and the other from the transposed block. Finally, the actual computation (line 12) requires two calls to the llvm.matrix.multiply intrinsic. This approach reuses the tiling and packing strategies and allows the use of blocks and tiles of different sizes for matrices A and B to achieve better cache utilization.
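A scalar sketch of the SYR2K update built from the two GEMM-style reductions described above — our own illustration of the mathematics, not the code Algorithm 1 would generate:

```cpp
#include <cstddef>

// SYR2K sketch: C <- alpha*A*B^T + alpha*B*A^T + beta*C, updating only the
// lower triangle of the symmetric n x n matrix C. A and B are n x k,
// column-major. The two products inside the p loop are the pair of GEMM
// reductions that the blocked algorithm computes with two intrinsic calls.
void syr2kLower(std::size_t n, std::size_t k, float alpha,
                const float *A, const float *B, float beta, float *C) {
  for (std::size_t j = 0; j < n; ++j)
    for (std::size_t i = j; i < n; ++i) {       // lower triangle only
      float acc = 0.0f;
      for (std::size_t p = 0; p < k; ++p)
        acc += A[i + p * n] * B[j + p * n]      // contribution of A*B^T
             + B[i + p * n] * A[j + p * n];     // contribution of B*A^T
      C[i + j * n] = alpha * acc + beta * C[i + j * n];
    }
}
```

In the blocked version, each (i, j) block of the lower triangle is produced by two GEMMs over packed normal and transposed panels of A and B.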
The acceleration of other idioms requires changes to our pattern-recognition component, KernelFaRer 16; for a detailed discussion, please refer to the KernelFaRer paper. Other idioms can be supported with our macro-level approach. However, new idioms require simple adjustments to the loop-nest generation, mainly to determine a performant loop-nesting order. As other works have shown, tiling and packing are easily automated 20,21. New compiler intrinsics are required, but adding new intrinsics is a straightforward task in compiler development.

Targeting other matrix engines
POWER10 MMA is but one of the emerging enhancements conceived to accelerate matrix operations in modern CPUs and GPUs. For CPUs, two other architectural extensions have been announced: Intel's AMX 8 and Arm's ME 9.
Each extension has its own unique characteristics, including the list of supported data types, the set of registers used to operate on matrices, and the basic computational operation. Table 3 shows a brief comparison of the three extensions. AMX appears to be targeted more toward image-processing and deep-learning applications, while both MMA and Arm's ME can also handle scientific applications.
MMA and Arm's ME are tightly integrated in the processor core, while AMX instructions are executed in a more loosely coupled accelerator (TMUL). An Intel host CPU communicates with an AMX unit via a queue of tiles and accelerator commands 8. Tile commands load/store tile-register data from/to memory addressed by host CPU registers. All pointer-arithmetic and loop-control instructions run in the host CPU. Coherence between accesses from the AMX unit and accesses from the host CPU is maintained at the main-memory level instead of at the cache level 8.
The approach described in this paper can be used with these other matrix engines. The macro-level algorithm described in Section 3.1 is general and can be lowered to multiple targets supported by LLVM's backend. The compiler designer must tailor a few lowering operations when moving to a new target. In particular, the load/store operations for packing (Algorithm 1, lines 3 and 5) and for loading and storing tiles (Algorithm 1, lines 10, 11, 15, and 21) must be lowered to architecture-dependent instructions by the LLVM backend. Lowering to Arm's ME and IBM's MMA is straightforward because both matrix engines are tightly coupled accelerators and rely on host-CPU vector registers to store input and output data. On AMX, however, the lowering of load and store operations uses the TILELOAD* and TILESTORE* AMX instructions. These cause data movements between AMX's tile registers and host memory. There are no instructions to move data between AMX's tile registers and host vector registers (e.g., AVX-512 registers). Therefore, for both GEMM and SYR2K, where partial products are scaled by α and β (Algorithm 1, lines 19 and 17), tile-register data would have to be stored back to memory and loaded into host vector registers for the scaling computations.

RELATED WORK
In a seminal work, Goto and van de Geijn detail a layered approach to improve cache and vector-register utilization on CPUs 4. Using this approach, modern linear-algebra libraries, such as Eigen and BLAS, achieve high performance on HPC workloads. Goto and van de Geijn show that modelling both the L2 cache and the TLB, and not only the L1 cache as considered earlier, is crucial for cache performance. That work is seminal because it publicly explained practical strategies for optimal cache and vector-register utilization on CPUs; these strategies were previously only available in proprietary libraries. The layered strategy features two stages: 1. blocking input matrices and packing tiles of these blocks in such a way that tiles lay in main memory in the order in which they will be accessed; and 2. computing a small GEMM at the register level. This paper is the first to create compiler-only code generation for the layered approach and adapts blocking, tiling, and packing to create a data layout that is suitable for computing with MMA and that also improves utilization of the L3 cache.
Gareev et al. 19 implement tiling and packing within Polly 18 without the need for an external library or automatic tuning. Their approach with a hand-optimized SSE kernel reports performance on par with BLIS on Intel Sandy Bridge. When not relying on an assembly kernel, their pass uses the default LLVM vectorizer, which delivers only a small speedup over naive code. The solution proposed in this paper implements both the memory optimization and the micro kernel in the compiler, not requiring any hand-optimized code.
Uday Bondhugula presents an implementation of the BLAS strategy within the emerging MLIR framework 17. He demonstrates that blocking, packing, register tiling, and unroll-and-jam yield code that is 34% slower than OpenBLAS on Intel's Coffee Lake 17. Bondhugula also implemented a custom vectorization pass, to replace the default LLVM vectorizer, achieving an additional 40% performance improvement and thus reaching 91% of the performance of OpenBLAS. Our experiments with Intel, AMD, and IBM POWER9 machines also pointed out the weakness of the default LLVM vectorizer.
Carvalho et al. introduce KernelFaRer, a robust pattern-recognition system that can identify matrix-multiplication patterns at the LLVM IR level and replace them with library calls 16. While this approach can lead to speedups on the order of thousands in comparison with non-optimized code, it has the drawback of requiring the integration of libraries into a computer system that may not have them. Moreover, their experimental results indicate that, for smaller matrices, the overhead of invoking functions in the libraries leads to performance degradation. The solution in this paper is orthogonal to Carvalho et al.'s: their pattern recognition can identify GEMM kernels at the intermediate-representation level and then invoke the compiler-only solution presented here.
When presenting the ILLIAC IV, one of the first SIMD machines, Barnes et al. advocated that data parallelism would be crucial for progress 22 , citing matrix operations as a critical target 23 .Nearly 50 years later, Barnes' direction culminated in the inclusion of vector extensions in all mainstream CPUs.Although fast vectorization is powerful, matrix-multiplication performance could be improved further with specialized hardware units.This possibility is now realized with the introduction of what Domke et al. have dubbed "matrix engines" 24 .
Robust performance benchmarking is critical for the evaluation of vector extensions.While there is extensive performance evaluation of matrix multiplication on vector extensions for Intel architectures 25,26,27 , to the best of our knowledge, similar studies do not exist for the PowerPC or Arm platforms.Moreover, the introduction of matrix engines is recent in all platforms and therefore only simulated or theorized performance estimates exist for AMX, SVE, or MMA 28,24 .Therefore, this work is among the first to present performance evaluation of a matrix engine on actual hardware.
The advent of the "general purpose" GPUs quickly saw study and performance analysis of matrix computations 29,30 .This evolved into implementations of matrix multiplications on GPUs: manually 31 , through libraries like BLAS 32 , and through frameworks such as DistME 33 .Matrix multiplication is also central to the design of hardware for tensor-operation acceleration such as Google's Tensor Processing Unit 34 , Nvidia's Tensor Core 35 , and Huawei's Cube Unit 36 .
Our work proposes a compiler-only approach and thus does not require profiling data. Nevertheless, works that automate the process of tuning high-performance implementations of BLAS routines (e.g., ATLAS 37) could be coupled with our work. One possibility is to employ the tuning automation offline to verify that the compiler-only approach is choosing the correct tiling and packing factors. However, as a compiler-only approach, this work is more akin to code versioning or code-generation heuristics, such as the one presented by Rohwedder et al. 21, than to offline profiling approaches.

CONCLUSION
This work presents a robust solution to the problem of generating efficient code, entirely within a compiler, targeting multiple architectures: it implements, in the widely used LLVM compilation framework, the layered approach broadly used in specialized libraries. A key insight was to create a parameterized algorithm for the tiling and packing layer that only requires the compiler to read the effective sizes of the caches from an existing LLVM pass to determine the appropriate sizes for blocks. In this approach, a target-specific compilation can use the size of the register file of the target architecture to decide on the most appropriate size for tiles. Similar to the layered approach used in libraries, the goal of packing is to lay out the tiles in memory in the order in which they will be accessed during the computation to increase locality. Another essential insight was to use the standard LLVM matrix-multiplication intrinsic as the interface between the macro kernel and the micro kernel. This way, for any target architecture, the specialized code generation at the micro-kernel level only needs to be done once, and its performance advantages will benefit any code-generation path that uses the same intrinsic. The performance evaluation, including machines with and without matrix engines, demonstrates the modularity of the design and reveals significant performance gains from the layered approach on multiple architectures. The experimental evaluation indicates that the macro-level algorithm, coupled with a generic intrinsic lowering, achieves more than 22× better performance than PLuTo, a widely used compiler-only polyhedral optimizer. The new compiler-only approach generates code that matches Eigen's performance and is only 34% slower than BLAS on POWER9. An MMA-specific implementation results in more than 2.6× the performance of the VSX micro kernel, is over 83% faster than Eigen, and achieves up to 96% of BLAS peak performance for large SGEMMs, even when these libraries are engineered to also benefit from MMA.
Figure 2 (c) shows the layout of the elements of each matrix.

Figure 2 Tiling and packing for llvm.matrix.multiply.

Figure 3 Division of CNewTile into MMA accumulators.

Figure 4 Speedup over PLuTo for small SGEMM in each platform.
Figure 5 Speedup over PLuTo for medium SGEMM in each platform.

Figure 6 Speedup over PLuTo for large SGEMM in each platform.

Figure 7 Execution time of small SGEMM in each platform.

Figure 10 (a) Speedup over BLAS of small SGEMM on POWER10 with MMA; (b) Contrasting VSX and MMA performance of SGEMM kernel on POWER10.

Table 1 MMA instruction summary.
For 16-bit data types, each VSR contains eight elements and the MMA instruction computes and accumulates two outer products into the 4 × 4-element accumulator, thus computing a rank-2 update. For 8-bit data types the instruction computes rank-4 updates, and for 4-bit data types it computes rank-8 updates. Table 1 shows the types supported by MMA. The computation size indicates the size of each operand and the rank of the update computed by an MMA instruction. For all types with up to 32 bits, the result in the accumulator is 16 32-bit values organized in a 4 × 4 grid. For f64 the accumulator contains 4 × 2 elements of the matrix and the instruction performs a rank-1 update.

Table 2 Machine configuration used in the evaluation.

Table 3 Matrix multiplication extensions comparison.