Sequential neural text compression
- 1 January 1996
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 7 (1), 142-146
- https://doi.org/10.1109/72.478398
Abstract
The purpose of this paper is to show that neural networks may be promising tools for data compression without loss of information. We combine predictive neural nets and statistical coding techniques to compress text files. We apply our methods to certain short newspaper articles and obtain compression ratios exceeding those of the widely used Lempel-Ziv algorithms (which form the basis of the UNIX functions "compress" and "gzip"). The main disadvantage of our methods is that they are about three orders of magnitude slower than standard methods.
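The core idea the abstract describes is to let a predictive model supply next-symbol probabilities to an entropy coder, so that well-predicted symbols cost few bits. The sketch below is only an illustration of that combination, not the authors' method: it substitutes a simple adaptive order-2 context model for the predictive neural net, and instead of implementing an arithmetic coder it tallies the ideal code length, -log2 P(symbol | context), that such a coder would approach. The function name and parameters are illustrative assumptions.

```python
import math
from collections import defaultdict

def ideal_code_length(text, order=2, alphabet_size=256):
    """Estimate the bits an arithmetic coder would need when driven by an
    adaptive order-`order` context model (a stand-in for the predictive net
    in the paper). Counts are updated after each symbol is coded, so an
    encoder and decoder would stay synchronized without sending the model."""
    counts = defaultdict(lambda: defaultdict(int))  # context -> symbol -> count
    totals = defaultdict(int)                       # context -> total count
    bits = 0.0
    for i, ch in enumerate(text):
        ctx = text[max(0, i - order):i]
        # Laplace-smoothed predictive probability of the symbol that actually occurs.
        p = (counts[ctx][ch] + 1) / (totals[ctx] + alphabet_size)
        bits += -math.log2(p)                       # ideal arithmetic-coding cost
        counts[ctx][ch] += 1                        # adaptive update, as in online prediction
        totals[ctx] += 1
    return bits

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog " * 40
    bits = ideal_code_length(sample)
    print(f"original size: {len(sample) * 8} bits, predicted code length: {bits:.0f} bits")
    print(f"estimated compression ratio: {len(sample) * 8 / bits:.2f}")
```

A stronger predictor (such as the neural nets used in the paper) lowers the per-symbol code length at the price of much slower encoding, which matches the speed penalty the abstract reports.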
This publication has 6 references indexed in Scilit:
- Discovering Predictable Classifications. Neural Computation, 1993
- Learning Factorial Codes by Predictability Minimization. Neural Computation, 1992
- Learning Complex, Extended Sequences Using the Principle of History Compression. Neural Computation, 1992
- Fixed data base version of the Lempel-Ziv data compression algorithm. IEEE Transactions on Information Theory, 1991
- Arithmetic coding for data compression. Communications of the ACM, 1987
- A universal algorithm for sequential data compression. IEEE Transactions on Information Theory, 1977