Second-generation image-coding techniques

Abstract
The digital representation of an image requires a very large number of bits. The goal of image coding is to reduce this number as much as possible and to reconstruct a faithful duplicate of the original picture. Early efforts in image coding, guided solely by information theory, led to a plethora of methods. The compression ratio, starting at 1 with the first digital pictures in the early 1960s, reached a saturation level of around 10:1 a couple of years ago. This certainly does not mean that the upper bound given by the entropy of the source has also been reached. First, this entropy is not known and depends heavily on the model used for the source, i.e., the digital image. Second, information theory does not take into account what the human eye sees and how it sees it. Recent progress in the study of the brain's mechanisms of vision has opened new vistas in picture coding. The directional sensitivity of neurones in the visual pathway, combined with the separate processing of contours and textures, has led to a new class of coding methods capable of achieving compression ratios as high as 70:1. Image quality, of course, remains an important problem to be investigated. This class of methods, which we call second generation, is the subject of this paper. The class can be divided into two groups: methods that use local operators and combine their outputs in a suitable way, and methods based on contour-texture descriptions. Four methods, two from each group, are described in detail. They are applied to the same set of original pictures to allow a fair comparison of the quality of the decoded pictures. If more effort is devoted to this subject, a compression ratio of 100:1 is within reach.
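
As a rough illustration of why a contour-texture description can reach such ratios, the sketch below estimates a bit budget for a toy picture: contours are taken as the points where the gradient magnitude exceeds a threshold, and texture is represented by a coarsely subsampled, coarsely quantized background. This is not the coding scheme of the paper; the function estimate_second_generation_ratio, its thresholds, and the per-point bit costs are hypothetical choices made only to keep the arithmetic concrete.

import numpy as np

def estimate_second_generation_ratio(image, grad_thresh=30.0, subsample=8, texture_bits=5):
    # Rough bit-budget estimate for a contour-texture style coder.
    # image: 2-D uint8 array, assumed to be 8 bits/pixel originally.
    # Returns (estimated_bits, compression_ratio). All costs are assumptions.
    img = image.astype(np.float64)
    rows, cols = img.shape

    # Contour component: keep the points whose gradient magnitude exceeds the threshold.
    gy, gx = np.gradient(img)
    contour_mask = np.hypot(gx, gy) > grad_thresh
    n_contour = int(contour_mask.sum())
    # Assumed cost: ~2 bits per contour point after chain coding of its position,
    # plus ~8 bits per point for the quantized amplitudes on either side of the edge.
    contour_bits = n_contour * (2 + 8)

    # Texture component: coarse subsampling plus coarse quantization of the background.
    n_texture_samples = (rows // subsample) * (cols // subsample)
    texture_bits_total = n_texture_samples * texture_bits

    total_bits = contour_bits + texture_bits_total
    original_bits = rows * cols * 8
    return total_bits, original_bits / total_bits

# Toy usage: a smooth ramp with a bright square (sharp contours) and a little noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 255, 512)
img = np.tile(x, (512, 1))
img[128:384, 128:384] += 60
img = np.clip(img + rng.normal(0, 2, img.shape), 0, 255).astype(np.uint8)

bits, ratio = estimate_second_generation_ratio(img)
print(f"estimated {bits} bits, compression ratio ~{ratio:.0f}:1")

On such a picture the sparse contour map and the heavily subsampled texture together cost a few tens of kilobits against roughly two megabits for the 8-bit original, which is how ratios of the order of 50:1 to 100:1 become plausible; real second-generation coders, of course, use far more careful contour and texture representations than this toy budget.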
