COMPRESSION PERFORMANCE

Like any other system, the performance metrics of a data compression algorithm are important criteria for its selection. The performance measures of data compression algorithms can be looked at from different perspectives depending on the application requirements: amount of compression achieved, objective and subjective quality of the reconstructed data, relative complexity of the algorithm, speed of execution, etc. We explain some of them below.

1.6.1 Compression Ratio and Bits per Sample

The most popular performance metric of a data compression algorithm is the compression ratio. It is defined as the ratio of the number of bits required to represent the original data to the number of bits required to represent the compressed data. Consider an image of size 256 × 256 requiring 65,536 bytes of storage if each pixel is represented by a single byte. If the compressed version of the image can be stored in 4096 bytes, the compression ratio achieved by the compression algorithm is 16:1.

A variation of the compression ratio is bits per sample. This metric indicates the average number of bits needed to represent a single sample of the data (e.g., bits per pixel for image coding). If the 65,536 pixels of an image are compressed to 4096 bytes, we can say that the compression algorithm achieved 0.5 bits per pixel on average. Hence the bits per sample can be obtained by dividing the number of bits in a single uncompressed sample by the compression ratio.
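
As a simple illustration, the following Python sketch computes both metrics for the example above. The function names are chosen for this illustration only.

    def compression_ratio(original_bits, compressed_bits):
        # Ratio of the bits needed for the original data to the bits
        # needed for the compressed data.
        return original_bits / compressed_bits

    def bits_per_sample(bits_per_uncompressed_sample, ratio):
        # Average bits per sample: uncompressed sample size divided
        # by the compression ratio.
        return bits_per_uncompressed_sample / ratio

    # 256 x 256 image, 1 byte per pixel, compressed to 4096 bytes.
    ratio = compression_ratio(256 * 256 * 8, 4096 * 8)  # 16.0, i.e., 16:1
    bpp = bits_per_sample(8, ratio)                     # 0.5 bits per pixel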

It should be remembered that the achievable compression ratio of a lossless compression scheme is totally dependent on the input data. If the same algorithm is applied to a number of distinct data files, it will yield a different compression ratio for each file. The maximum compression ratio, and hence the minimum bits per sample, that can be achieved losslessly is restricted by the entropy of the data file, according to the noiseless source coding theorem of Shannon. Sources with less redundancy have higher entropy and are therefore harder to compress. For example, it is very difficult to achieve any compression on a file consisting mainly of random data.
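
This entropy bound can be estimated empirically. The Python sketch below treats a file as a zero-order source of independent bytes, a simplifying assumption (it ignores inter-symbol dependence, so the true bound may be lower); the file name is hypothetical.

    import math
    from collections import Counter

    def zero_order_entropy(data: bytes) -> float:
        # H = -sum p * log2(p): the lower bound on average bits per byte
        # for any lossless code that treats bytes as independent symbols.
        counts = Counter(data)
        n = len(data)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    with open("image.raw", "rb") as f:  # hypothetical input file
        data = f.read()

    h = zero_order_entropy(data)
    print(f"entropy: {h:.3f} bits/byte")
    print(f"best zero-order lossless ratio: {8 / h:.2f}:1")  # assumes h > 0

For a file of truly random bytes, the entropy approaches 8 bits per byte and the achievable ratio approaches 1:1, in line with the observation above.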

1.6.2 Quality Metrics

This metric is not relevant for lossless compression algorithms. The quality or fidelity metric is particularly important for lossy compression algorithms for video, image, voice, etc., because the reconstructed data differ from the original and the human perceptual system is the ultimate judge of the reconstructed quality. For example, if there is no perceivable difference between the reconstructed data and the original, the compression algorithm can be claimed to achieve very high quality or high fidelity. The difference of the reconstructed data from the original is called the distortion; the lower the distortion, the higher the expected quality of the reconstructed data. Quality measures can be subjective, based on human perception, or objectively defined using mathematical or statistical evaluation.

Although there is no single universally accepted quality metric, different objective and subjective metrics are used in practice to evaluate the quality of compression algorithms.

1.6.2.1 Subjective Quality Metric Often the subjective quality metric is defined as the mean observers score (MOS). Sometimes it is also called the mean opinion score. There are different statistical ways to compute MOS. In one of the simplest, a statistically significant number of observers are randomly chosen to evaluate the visual quality of the reconstructed images. All the images are compressed and decompressed by the same algorithm. Each observer assigns a numeric score to each reconstructed image based on his or her perception of its quality, say within a range of 1 to 5, with 5 being the highest quality and 1 the worst.

The average of the scores assigned by all the observers to the reconstructed images is called the mean observer score (MOS), and it can be considered a viable subjective metric if all the observers evaluate the images under the same viewing conditions. There are different variations of this approach to calculating MOS: absolute comparison, paired comparison, blind evaluation, etc.

The techniques for measuring MOS may well differ for different kinds of perceptual data; the methodology for evaluating the subjective quality of a still image could be entirely different from that for video or voice data. In all cases, however, MOS is computed from the quality of the reconstructed data as perceived by a statistically significant number of human observers.
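
Under the simple averaging scheme described above, MOS reduces to a plain mean of observer scores. The Python sketch below illustrates this; the score table is invented for the example.

    # Scores in the range 1-5 assigned by five observers to each
    # reconstructed image (invented data for illustration).
    scores = {
        "image_a": [5, 4, 4, 5, 3],
        "image_b": [2, 3, 2, 2, 3],
    }

    def mean_observer_score(observer_scores):
        # MOS: the average of all observers' scores for one image.
        return sum(observer_scores) / len(observer_scores)

    for name, s in scores.items():
        print(f"{name}: MOS = {mean_observer_score(s):.2f}")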

1.6.2.2 Objective Quality Metric There is no universally accepted measure of the objective quality of data compression algorithms. The most widely used objective quality metrics are root-mean-squared error (RMSE), signal-to-noise ratio (SNR), and peak signal-to-noise ratio (PSNR). If I is an M × N image and Î is the corresponding reconstructed image after compression and decompression, RMSE is calculated by

    RMSE = \sqrt{ \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ I(i,j) - \hat{I}(i,j) \right]^{2} }    (1.3)

where i, j refer to the pixel position in the image. The SNR in decibel units (dB) is expressed as

    SNR = 20 \log_{10} \left( \frac{ \sqrt{ \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} I(i,j)^{2} } }{ RMSE } \right)    (1.4)

In the case of an 8-bit image, the corresponding PSNR in dB is computed as

    PSNR = 20 \log_{10} \left( \frac{255}{RMSE} \right)    (1.5)

where 255 is the maximum possible pixel value in 8 bits.
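
These formulas translate directly into code. The following Python sketch computes all three metrics with NumPy for 8-bit grayscale images stored as arrays of equal shape; it assumes the two images differ in at least one pixel, since RMSE appears in the denominators.

    import numpy as np

    def rmse(original: np.ndarray, reconstructed: np.ndarray) -> float:
        # Root-mean-squared error over all M x N pixels (Eq. 1.3).
        diff = original.astype(np.float64) - reconstructed.astype(np.float64)
        return float(np.sqrt(np.mean(diff ** 2)))

    def snr_db(original: np.ndarray, reconstructed: np.ndarray) -> float:
        # SNR in dB: RMS value of the original signal over the RMSE (Eq. 1.4).
        signal_rms = np.sqrt(np.mean(original.astype(np.float64) ** 2))
        return float(20.0 * np.log10(signal_rms / rmse(original, reconstructed)))

    def psnr_db(original: np.ndarray, reconstructed: np.ndarray) -> float:
        # PSNR in dB for an 8-bit image: peak value 255 over the RMSE (Eq. 1.5).
        return float(20.0 * np.log10(255.0 / rmse(original, reconstructed)))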

It should be noted that a lower RMSE (or, equivalently, a higher SNR or PSNR) does not necessarily indicate higher subjective quality.

These objective error metrics do not always correlate well with the subjective quality metrics. There are many cases where the PSNR of a reconstructed image is reasonably high but the subjective quality is poor when the image is viewed by human observers. Hence the choice of objective or subjective metrics to evaluate a compression and decompression algorithm often depends on the application criteria.

Similar objective quality metrics are used for audio and speech signals as well.

1.6.3 Coding Delay

Coding delay is another performance measure of compression algorithms, relevant wherever interactive encoding and decoding is required (e.g., interactive video teleconferencing, on-line image browsing, real-time voice communication, etc.). A complex compression algorithm may achieve a greater amount of compression, but the increased coding delay can make interactive real-time applications impossible. The constraint on coding delay often forces the compression system designer to use a less sophisticated algorithm for the compression system.
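
The trade-off between compression and coding delay can be observed with any general-purpose codec. The Python sketch below times zlib at its fastest and slowest settings on a half-random, half-compressible buffer; zlib here merely stands in for a codec, and a single timing run is only indicative.

    import os
    import time
    import zlib

    # Half incompressible (random bytes) and half highly compressible (zeros).
    data = os.urandom(1 << 16) + bytes(1 << 16)

    for level in (1, 9):  # 1 = fastest, 9 = best compression
        start = time.perf_counter()
        compressed = zlib.compress(data, level)
        delay = time.perf_counter() - start
        print(f"level {level}: ratio {len(data) / len(compressed):.2f}:1, "
              f"delay {delay * 1000:.2f} ms")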

1.6.4 Coding Complexity

The coding complexity of a compression algorithm is considered a performance measure where the computational requirement of implementing the codec is an important criterion. The computational requirements are usually measured in terms of the number of arithmetic operations and the memory requirement. Usually, the number of arithmetic operations is described by MOPS (millions of operations per second), but in the compression literature the term MIPS (millions of instructions per second) is often used to measure the compression performance on a specific computing engine's architecture. In particular, implementation of compression schemes using special-purpose DSP (digital signal processor) architectures is common in communication systems. In portable systems, coding complexity is an important criterion from the perspective of low-power hardware implementation.
