About the M99 algorithm
The M99 data compression algorithm represents an entirely new class of
statistical compression, unrelated to other statistical encoding
techniques such as Huffman and arithmetic coders.
While Huffman coders offer outstanding encoding and decoding speeds,
their compression ratios often fall far short of the best available.
Arithmetic coders overcome this deficiency, but they are slower than
the Huffman method and thus trade speed for compression. Even the fastest
spin-offs of arithmetic coding, such as range coding, ELS, and the Q-coder,
are still slow by comparison to Huffman. To compound this shortcoming, the
decode speeds of arithmetic coders are slower still than their encode speeds
because decoding requires division operations.
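The division in question can be seen at the heart of a range-coder decode step. The sketch below is for illustration only; the names `decode_target`, `code`, `low`, `range_`, and `total` are our own and do not come from M99 or any of the coders named above:

```python
# Hypothetical sketch of the decode side of a range coder.
# Encoding narrows [low, low + range_) with multiplications only;
# decoding must DIVIDE to map the code value back onto the
# cumulative-frequency scale -- the cost the text refers to.
def decode_target(code: int, low: int, range_: int, total: int) -> int:
    """Recover the cumulative-frequency target of the next symbol."""
    return ((code - low) * total) // range_

# Example: with a frequency total of 100 and a current range of 1000,
# a code value 250 above `low` maps to cumulative frequency 25.
print(decode_target(250, 0, 1000, 100))
```

The decoder then searches its cumulative-frequency table for the symbol whose interval contains this target, which is why the division sits on the critical path of every decoded symbol.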
M99 encodes and decodes at speeds which rival the fastest Huffman coders and
achieves compression ratios which are very close to those of arithmetic coders.
The compression results and speeds of the M99 algorithm for the
Calgary corpus can be found here.
The M99 algorithm is naturally adaptive, unlike other statistical
compression methods, which require constant statistical updating and large
amounts of memory to achieve adaptive behavior. Because M99 does not
calculate statistics and probabilities to achieve its adaptive behavior,
the method has a far smaller memory requirement (about 40K).
Moreover, many of M99's tasks could, in hardware, be run in parallel.
This should allow a hardware M99 implementation to rival even a static Huffman implementation!
It is important to note that M99 is an encoding method and NOT a modeling
technique. All statistical coders require modeling of the target data stream
in order to achieve good compression. The beta M99 coder available here does
NOT include a modeling phase. The first full release of M99 will include a
modeling technique called blocksorting. (See the "What's new & Upcoming" section.)
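The modeling phase is only named here, but "blocksorting" in compression commonly refers to a Burrows-Wheeler-style transform, which groups symbols with similar contexts together so that a statistical coder like M99 can exploit the local skew. As a rough illustration of the general idea (not M99's actual modeling stage), a naive block sort can be sketched as:

```python
def block_sort(block: bytes) -> tuple[bytes, int]:
    """Naive Burrows-Wheeler-style block sort (illustrative, O(n^2 log n)).

    All rotations of the block are sorted; the transform output is the
    last column of the sorted rotations plus the index at which the
    original block appears. The transform is reversible, and the last
    column tends to contain long runs of identical symbols.
    """
    n = len(block)
    # Sort rotation start offsets by the rotation they begin.
    order = sorted(range(n), key=lambda i: block[i:] + block[:i])
    # Last column: the byte preceding each sorted rotation's start.
    last_column = bytes(block[(i - 1) % n] for i in order)
    return last_column, order.index(0)

# Example: the classic "banana" demonstration.
last, idx = block_sort(b"banana")
print(last, idx)  # b'nnbaaa' 3
```

A production block sorter would use a suffix-array construction rather than sorting whole rotations, but the output it feeds to the entropy coder is the same.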
Download the M99 beta (Windows version)