public class FloatMatrix extends Object

Optimized matrix transposition methods for the float type.
The matrix transposition algorithm isn't parallelized. While the matrix transposition algorithm could easily be parallelized, on an SMP machine it does not make any sense: if the matrix doesn't fit in any processor-specific cache, then the memory (or higher-level shared cache) bandwidth becomes the bottleneck of the algorithm. Matrix transposition is in principle a very simple algorithm; it does nothing other than move data from one place to another. If shared memory is the bottleneck, then the algorithm isn't any faster whether the data is moved around by one thread or by multiple threads in parallel.
If the data fits in a processor-specific cache, then the algorithm could theoretically be made faster with parallelization. To make the parallelization effective, however, the data would have to be laid out in some kind of NUMA-aware way. For example, each processor core would hold an equal section of the data in its processor cache. Then the algorithm could be made faster, as each processor core could quickly transpose blocks of data that are in its own cache, and then exchange blocks with other processor cores via the slower higher-level shared cache or main memory.
In practice, however, this approach doesn't work well, at least not in a Java program. The reason is that there are no guarantees about where the data resides when the algorithm starts (that is, in which processor cores' caches), and furthermore there are no guarantees of any processor affinity for the threads executing in parallel. Different processor cores could be executing the transposition of different sections of the data at any moment, depending on how the operating system (and the JVM) schedule thread execution. And more often than not, the operating system isn't smart enough to apply any such processor affinity to the threads.
An additional problem for any NUMA-based attempt is that the data array would have to be aligned on a cache-line boundary (e.g. 64 or 128 bytes) to prevent cache contention at the edges of each data section. But the JVM makes no such guarantees about memory alignment, and since pointers do not exist in Java, manually aligning memory addresses isn't possible.
Considering all of the above, the parallel algorithm doesn't in practice work any faster than the single-threaded algorithm, as the algorithm is bound by memory bandwidth (or shared cache bandwidth). In some cases parallelization can even make execution slower due to increased cache contention.
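The point that transposition is nothing but data movement can be seen from a minimal single-threaded sketch. This is an out-of-place transpose on a plain row-major float[], written for illustration only; it is not the apfloat implementation, which works in place through ArrayAccess:

```java
import java.util.Arrays;

public class NaiveTranspose {

    /**
     * Out-of-place transpose of an n1 x n2 row-major matrix.
     * Every element is read once and written once, so throughput
     * is bound by memory (or shared cache) bandwidth, not by computation.
     */
    static float[] transpose(float[] a, int n1, int n2) {
        float[] t = new float[n1 * n2];
        for (int r = 0; r < n1; r++) {
            for (int c = 0; c < n2; c++) {
                t[c * n1 + r] = a[r * n2 + c];  // pure data movement
            }
        }
        return t;
    }

    public static void main(String[] args) {
        float[] a = { 1, 2, 3,
                      4, 5, 6 };                // 2 x 3 matrix
        float[] t = transpose(a, 2, 3);         // 3 x 2 result
        System.out.println(Arrays.toString(t)); // [1.0, 4.0, 2.0, 5.0, 3.0, 6.0]
    }
}
```

Because the inner loop does no arithmetic beyond index computation, adding threads only adds more consumers of the same memory bus, which is exactly the bandwidth argument above.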
Modifier and Type    Method and Description

static void          transpose(ArrayAccess arrayAccess, int n1, int n2)
                     Transpose an n1 x n2 matrix.

static void          transposeSquare(ArrayAccess arrayAccess, int n1, int n2)
                     Transpose a square n1 x n1 block of an n1 x n2 matrix.
public static void transpose(ArrayAccess arrayAccess, int n1, int n2) throws ApfloatRuntimeException
Both n1 and n2 must be powers of two. Additionally, one of these must be true:

    n1 = n2
    n1 = 2*n2
    n2 = 2*n1

Parameters:
arrayAccess - Accessor to the matrix data. This data will be transposed.
n1 - Number of rows in the matrix.
n2 - Number of columns in the matrix.
Throws:
ApfloatRuntimeException
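The documented shape constraints can be expressed as a small predicate. This is a hypothetical helper for illustration, not part of the apfloat API:

```java
public class ShapeCheck {

    /**
     * Hypothetical check mirroring the documented constraints for transpose():
     * both dimensions are powers of two, and n1 = n2, n1 = 2*n2 or n2 = 2*n1.
     */
    static boolean isValidShape(int n1, int n2) {
        boolean powersOfTwo = n1 > 0 && n2 > 0
                && Integer.bitCount(n1) == 1   // a power of two has exactly one bit set
                && Integer.bitCount(n2) == 1;
        return powersOfTwo && (n1 == n2 || n1 == 2 * n2 || n2 == 2 * n1);
    }

    public static void main(String[] args) {
        System.out.println(isValidShape(8, 8));    // n1 = n2
        System.out.println(isValidShape(16, 8));   // n1 = 2*n2
        System.out.println(isValidShape(8, 16));   // n2 = 2*n1
        System.out.println(isValidShape(8, 32));   // factor of 4: not allowed
        System.out.println(isValidShape(6, 6));    // not powers of two
    }
}
```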
public static void transposeSquare(ArrayAccess arrayAccess, int n1, int n2) throws ApfloatRuntimeException
Both n1 and n2 must be powers of two, and n1 <= n2.

Parameters:
arrayAccess - Accessor to the matrix data. This data will be transposed.
n1 - Number of rows and columns in the block to be transposed.
n2 - Number of columns in the matrix.
Throws:
ApfloatRuntimeException
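Conceptually, transposing a square n1 x n1 block in place just swaps elements across the block's diagonal. A minimal sketch on a plain row-major float[] (an assumption for illustration, not the apfloat internals, which access the data through ArrayAccess):

```java
public class SquareBlockTranspose {

    /**
     * In-place transpose of the leading n1 x n1 block of an n1 x n2
     * row-major matrix: each element above the diagonal is swapped
     * with its mirror below the diagonal.
     */
    static void transposeSquare(float[] a, int n1, int n2) {
        for (int r = 0; r < n1; r++) {
            for (int c = r + 1; c < n1; c++) {
                float tmp = a[r * n2 + c];
                a[r * n2 + c] = a[c * n2 + r];
                a[c * n2 + r] = tmp;
            }
        }
    }

    public static void main(String[] args) {
        float[] a = { 1, 2, 3, 4,
                      5, 6, 7, 8 };     // 2 x 4 matrix; transpose the leading 2 x 2 block
        transposeSquare(a, 2, 4);
        System.out.println(java.util.Arrays.toString(a)); // [1.0, 5.0, 3.0, 4.0, 2.0, 6.0, 7.0, 8.0]
    }
}
```

Note that only the square block is touched; the remaining columns of the matrix are left in place, which is why a separate full transpose method exists for non-square shapes.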