Benchmark

In terms of data shape validation, MLStyle can often be faster than carefully optimized handwritten code.
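To give a sense of what "data shape validation" means here, below is a minimal sketch using MLStyle's @match macro. The function name `describe` and the specific patterns are illustrative only; they are not part of the benchmark suite:

```julia
using MLStyle

# Dispatch on the "shape" of a value with declarative patterns:
# tuple patterns, array patterns with splats, type patterns, and a wildcard.
describe(x) = @match x begin
    (a, b)     => "2-tuple"
    [v, vs...] => "non-empty vector"
    ::Dict     => "dictionary"
    _          => "something else"
end

describe((1, 2))     # "2-tuple"
describe([1, 2, 3])  # "non-empty vector"
```

MLStyle compiles such patterns down to plain conditionals and destructuring at macro-expansion time, which is why it can compete with handwritten validation code.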

All of the scripts for the benchmarks are provided in the directory: Matrix-Benchmark.

To run these cross-implementation benchmarks, some extra Julia packages should be installed:

  • (v1.4) pkg> add Gadfly MacroTools Rematch Match BenchmarkTools StatsBase Statistics ArgParse DataFrames

  • (v1.4) pkg> add MLStyle, since a specific version of MLStyle.jl is required.

After installing the dependencies, you can run the benchmarks directly with julia matrix_benchmark.jl tuple array datatype misc structfields vs-match from the root directory.

The benchmarks presented here were produced with Julia v1.7 on Windows 11 (64-bit).

Benchmark results for other platforms and Julia versions are welcome as pull requests, especially if you can find a better way to organize the files and present them in this README.

(We omit benchmarks of memory use; they would be uninformative, as the allocation costs are always zero.)

On the x-axis, the number following each test-case name is the time (in ns) of the least time-consuming implementation.

The y-axis is the ratio of each implementation's time cost relative to that of the least time-consuming one.

The benchmark results in DataFrame format are available in this directory.

Arrays

matrix-benchmark/bench-array.jl

Tuples

matrix-benchmark/bench-tuple.jl

Data Types

matrix-benchmark/bench-datatype.jl

Extracting Struct Definitions

matrix-benchmark/bench-structfields.jl

Misc

matrix-benchmark/bench-misc.jl

An Example from Match.jl Documentation

matrix-benchmark/bench-versus-match.jl