The author of EJML benchmarks his library against several other libraries. ojAlgo is one of them.
What I did:
Hardware & JVM: Mac Pro, 2 x 2.26 GHz Quad-Core Intel Xeon, 12 GB 1066 MHz DDR3, OS X 10.9.4 (64-bit kernel), JVM 1.8.0u5 64-bit
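The timings themselves are produced by EJML's Java Matrix Benchmark, not by me by hand. Purely to illustrate what an "operation timing" means here, a minimal wall-clock timing sketch (this is illustrative only, not the benchmark's actual code; the naive multiply and the size 500 are made up):

```java
// Illustrative only: times one matrix operation with System.nanoTime().
// The real benchmark (EJML's Java Matrix Benchmark) is far more careful
// (warm-up, repeated runs, time limits); this just shows the basic idea.
public final class TimingSketch {

    // Wall-clock duration of one operation, in milliseconds.
    static long timeMillis(Runnable op) {
        long start = System.nanoTime();
        op.run();
        return (System.nanoTime() - start) / 1_000_000L;
    }

    public static void main(String[] args) {
        int n = 500; // hypothetical matrix size
        double[][] a = new double[n][n];
        double[][] b = new double[n][n];
        double[][] c = new double[n][n];

        long ms = timeMillis(() -> {
            // naive triple-loop multiply, just to have something to time
            for (int i = 0; i < n; i++)
                for (int k = 0; k < n; k++)
                    for (int j = 0; j < n; j++)
                        c[i][j] += a[i][k] * b[k][j];
        });

        System.out.println("multiply " + n + "x" + n + ": " + ms + " ms");
    }
}
```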
For each operation there are two charts: one, to the left, showing the absolute operation timings for each library and matrix size, and one, to the right, showing relative performance. The fastest library, for each operation at each matrix size, has relative performance 1.0. A library that performs a specific operation, for a specific matrix size, half as fast as the fastest library has relative performance 0.5.
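That relative-performance number is just the fastest time divided by each library's own time. A small sketch of the arithmetic (library names and timings below are invented for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of how the right-hand charts' relative-performance
// values follow from the left-hand charts' absolute timings.
public final class RelativePerformance {

    // relative performance = fastest time / this library's time,
    // so the fastest library scores 1.0 and a library taking
    // twice as long scores 0.5
    static double relative(double fastestMillis, double thisMillis) {
        return fastestMillis / thisMillis;
    }

    public static void main(String[] args) {
        // hypothetical timings for one operation at one matrix size
        Map<String, Double> millis = new LinkedHashMap<>();
        millis.put("LibA", 100.0);
        millis.put("LibB", 200.0);
        millis.put("LibC", 400.0);

        double fastest = millis.values().stream().min(Double::compare).get();
        millis.forEach((lib, t) ->
                System.out.println(lib + ": " + relative(fastest, t)));
        // LibA: 1.0, LibB: 0.5, LibC: 0.25
    }
}
```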
In the charts to the left, a "lower" curve is better, as it means less time. In the charts to the right it is better to be "above", as that means higher speed.
In either chart it is a bad sign if a library has no value for the largest matrices: it means the calculations failed, probably because the implementation is too slow and/or consumes too much memory. None of the tested libraries could perform a singular value decomposition of a 10 000 x 10 000 matrix within the benchmark's time limit.
Some questions you may have:
Q: Native code?
A: For heavy operations on large matrices, high-quality native code (optimised for the specific hardware) will improve performance. With simpler operations (regardless of matrix size) or small matrices (regardless of operation) there is nothing to gain by calling native code. The benchmark actually includes some libraries that use native code; I just didn't publish those results here. The full/raw benchmark results mentioned above do include them. When everything is to native code's advantage, it can easily be 10x faster.
Q: Other (lesser) hardware?
A: Different hardware and/or different virtual machines yield different results. ojAlgo performs well on higher (server) grade hardware and modern virtual machines.
Project and site sponsored by Optimatika
Copyright © 2000 - 2014