Diffstat (limited to 'lzo/doc')
-rw-r--r--  lzo/doc/LZO.FAQ      10
-rw-r--r--  lzo/doc/LZO.TXT      29
-rw-r--r--  lzo/doc/LZOAPI.TXT   30
-rw-r--r--  lzo/doc/LZOTEST.TXT  10
4 files changed, 40 insertions, 39 deletions
diff --git a/lzo/doc/LZO.FAQ b/lzo/doc/LZO.FAQ
index 604c98fa..cb1f38aa 100644
--- a/lzo/doc/LZO.FAQ
+++ b/lzo/doc/LZO.FAQ
@@ -47,7 +47,7 @@ Because of historical reasons - I want to support unlimited
backward compatibility.
Don't get misled by the size of the library - using one algorithm
-increases the size of your application by only a few kB.
+increases the size of your application by only a few KiB.
If you just want to add a little bit of data compression to your
application you may be looking for miniLZO.
@@ -73,7 +73,7 @@ What's the difference between the decompressors per algorithm ?
Once again let's use LZO1X for explanation:
- lzo1x_decompress
- The `standard' decompressor. Pretty fast - use this whenever possible.
+ The 'standard' decompressor. Pretty fast - use this whenever possible.
This decompressor expects valid compressed data.
If the compressed data gets corrupted somehow (e.g. transmission
@@ -81,7 +81,7 @@ Once again let's use LZO1X for explanation:
your application because absolutely no additional checks are done.
- lzo1x_decompress_safe
- The `safe' decompressor. Somewhat slower.
+ The 'safe' decompressor. Somewhat slower.
This decompressor will catch all compressed data violations and
return an error code in this case - it will never crash.
@@ -111,7 +111,7 @@ Once again let's use LZO1X for explanation:
Notes:
------
- When using a safe decompressor you must pass the number of
- bytes available in `dst' via the parameter `dst_len'.
+ bytes available in 'dst' via the parameter 'dst_len'.
- If you want to be sure that your data is not corrupted you must
use a checksum - just using the safe decompressor is not enough,
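
As a concrete sketch of the two points above - passing the capacity of
'dst' via 'dst_len' and verifying a separate checksum - a safe
decompression call might look like this (assuming the adler32 of the
original data was stored next to the compressed block; the
'expected_checksum' variable is hypothetical):

    lzo_uint dst_len = sizeof(dst_buf);  /* capacity of the output buffer */
    int r = lzo1x_decompress_safe(src, src_len, dst_buf, &dst_len, NULL);
    if (r != LZO_E_OK)
        return -1;  /* corrupted compressed data was caught safely */
    /* verify the decompressed bytes against the stored checksum */
    if (lzo_adler32(lzo_adler32(0, NULL, 0), dst_buf, dst_len) != expected_checksum)
        return -1;  /* corruption the compressed format cannot detect */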
@@ -153,7 +153,7 @@ So after verifying that everything works fine you can try if activating
the LZO_ALIGNED_OK_4 macro improves LZO1X and LZO1Y decompression
performance. Change the file config.h accordingly and recompile everything.
-On a i386 architecture you should evaluate the assembler versions.
+On an i386 architecture you should evaluate the assembler versions.
How can I reduce memory requirements when (de)compressing ?
diff --git a/lzo/doc/LZO.TXT b/lzo/doc/LZO.TXT
index addf4303..7426ab2b 100644
--- a/lzo/doc/LZO.TXT
+++ b/lzo/doc/LZO.TXT
@@ -6,8 +6,8 @@
Author : Markus Franz Xaver Johannes Oberhumer
<markus@oberhumer.com>
http://www.oberhumer.com/opensource/lzo/
- Version : 2.03
- Date : 30 Apr 2008
+ Version : 2.06
+ Date : 12 Aug 2011
Abstract
@@ -40,12 +40,12 @@
- Decompression is simple and *very* fast.
- Requires no memory for decompression.
- Compression is pretty fast.
- - Requires 64 kB of memory for compression.
+ - Requires 64 KiB of memory for compression.
- Allows you to dial up extra compression at a speed cost in the
compressor. The speed of the decompressor is not reduced.
- Includes compression levels for generating pre-compressed
data which achieve a quite competitive compression ratio.
- - There is also a compression level which needs only 8 kB for compression.
+ - There is also a compression level which needs only 8 KiB for compression.
- Algorithm is thread safe.
- Algorithm is lossless.
@@ -69,12 +69,12 @@
-----------
To keep you interested, here is an overview of the average results
when compressing the Calgary Corpus test suite with a blocksize
- of 256 kB, originally done on an ancient Intel Pentium 133.
+ of 256 KiB, originally done on an ancient Intel Pentium 133.
The naming convention of the various algorithms goes LZOxx-N, where N is
the compression level. Range 1-9 indicates the fast standard levels using
- 64 kB memory for compression. Level 99 offers better compression at the
- cost of more memory (256 kB), and is still reasonably fast.
+ 64 KiB memory for compression. Level 99 offers better compression at the
+ cost of more memory (256 KiB), and is still reasonably fast.
Level 999 achieves nearly optimal compression - but it is slow
and uses much memory, and is mainly intended for generating
pre-compressed data.
@@ -154,12 +154,12 @@
and long literal runs so that it produces good results on highly
redundant data and deals acceptably with non-compressible data.
- When dealing with uncompressible data, LZO expands the input
- block by a maximum of 16 bytes per 1024 bytes input.
+ When dealing with incompressible data, LZO expands the input
+ block by a maximum of 64 bytes per 1024 bytes input.
I have verified LZO using such tools as valgrind and other memory checkers.
And in addition to compressing gigabytes of files when tuning some parameters
- I have also consulted various `lint' programs to spot potential portability
+ I have also consulted various 'lint' programs to spot potential portability
problems. LZO is free of any known bugs.
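
The 64-bytes-per-1024 bound above gives a simple rule for sizing the
output buffer before compressing; LZO's example programs use a
conservative formula along these lines:

    /* worst-case compressed size for an input of 'in_len' bytes:
       the input, plus 1/16th of it (64 per 1024), plus a few bytes
       of constant overhead */
    lzo_uint out_len = in_len + in_len / 16 + 64 + 3;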
@@ -171,7 +171,7 @@
As the many object files are mostly independent of each other, the
size overhead for an executable statically linked with the LZO library
- is usually pretty low (just a few kB) because the linker will only add
+ is usually pretty low (just a few KiB) because the linker will only add
the modules that you are actually using.
I first published LZO1 and LZO1A in the Internet newsgroups
@@ -262,7 +262,7 @@
Some comments about the source code
-----------------------------------
- Be warned: the main source code in the `src' directory is a
+ Be warned: the main source code in the 'src' directory is a
real pain to understand as I've experimented with hundreds of slightly
different versions. It contains many #if and some gotos, and
is *completely optimized for speed* and not for readability.
@@ -277,8 +277,9 @@
Copyright
---------
- LZO is Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004,
- 2005, 2006, 2007, 2008 Markus Franz Xaver Johannes Oberhumer
+ LZO is Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002,
+ 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011
+ Markus Franz Xaver Oberhumer <markus@oberhumer.com>.
LZO is distributed under the terms of the GNU General Public License (GPL).
See the file COPYING.
diff --git a/lzo/doc/LZOAPI.TXT b/lzo/doc/LZOAPI.TXT
index 8d285845..5ae73532 100644
--- a/lzo/doc/LZOAPI.TXT
+++ b/lzo/doc/LZOAPI.TXT
@@ -37,7 +37,7 @@ Table of Contents
1.1 Preliminary notes
---------------------
-- `C90' is short for ISO 9899-1990, the ANSI/ISO standard for the C
+- 'C90' is short for ISO 9899-1990, the ANSI/ISO standard for the C
programming language
@@ -83,7 +83,7 @@ old Atari ST, which has 16 bit integers and a flat 32-bit memory model.
Using 'huge' 32-bit pointers under 16-bit DOS is a workaround for this.
While LZO also works with a strict 16-bit memory model, I don't officially
-support this because this limits the maximum block size to 64 kB - and this
+support this because this limits the maximum block size to 64 KiB - and this
makes the library incompatible with other platforms, i.e. you cannot
decompress larger blocks compressed on those platforms.
@@ -162,10 +162,10 @@ int lzo_init ( void );
3.2 Compression
---------------
-All compressors compress the memory block at `src' with the uncompressed
-length `src_len' to the address given by `dst'.
+All compressors compress the memory block at 'src' with the uncompressed
+length 'src_len' to the address given by 'dst'.
The length of the compressed block will be returned in the variable
-pointed by `dst_len'.
+pointed by 'dst_len'.
The two blocks may overlap under certain conditions (see examples/overlap.c),
thereby allowing "in-place" compression.
@@ -180,7 +180,7 @@ int lzo1x_1_compress ( const lzo_bytep src, lzo_uint src_len,
Algorithm: LZO1X
Compression level: LZO1X-1
- Memory requirements: LZO1X_1_MEM_COMPRESS (64 kB on 32-bit machines)
+ Memory requirements: LZO1X_1_MEM_COMPRESS (64 KiB on 32-bit machines)
This compressor is pretty fast.
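
A minimal usage sketch (the compress_block wrapper is hypothetical;
only lzo_init, LZO1X_1_MEM_COMPRESS and lzo1x_1_compress come from
the library):

    #include "lzo/lzo1x.h"

    /* work memory required by the LZO1X-1 compressor */
    static unsigned char wrkmem[LZO1X_1_MEM_COMPRESS];

    int compress_block(const lzo_bytep src, lzo_uint src_len,
                       lzo_bytep dst, lzo_uintp dst_len)
    {
        if (lzo_init() != LZO_E_OK)   /* initialize the library once */
            return LZO_E_ERROR;
        return lzo1x_1_compress(src, src_len, dst, dst_len, wrkmem);
    }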
@@ -196,7 +196,7 @@ int lzo1x_999_compress ( const lzo_bytep src, lzo_uint src_len,
Algorithm: LZO1X
Compression level: LZO1X-999
- Memory requirements: LZO1X_999_MEM_COMPRESS (448 kB on 32-bit machines)
+ Memory requirements: LZO1X_999_MEM_COMPRESS (448 KiB on 32-bit machines)
This compressor is quite slow but achieves a good compression
ratio. It is mainly intended for generating pre-compressed data.
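
The calling convention is identical to LZO1X-1; only the amount of
work memory differs, e.g.:

    static unsigned char wrkmem999[LZO1X_999_MEM_COMPRESS];
    r = lzo1x_999_compress(src, src_len, dst, &dst_len, wrkmem999);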
@@ -212,14 +212,14 @@ int lzo1x_999_compress ( const lzo_bytep src, lzo_uint src_len,
3.3 Decompression
-----------------
-All decompressors decompress the memory block at `src' with the compressed
-length `src_len' to the address given by `dst'.
+All decompressors decompress the memory block at 'src' with the compressed
+length 'src_len' to the address given by 'dst'.
The length of the decompressed block will be returned in the variable
-pointed by `dst_len' - on error the number of bytes that have
+pointed by 'dst_len' - on error the number of bytes that have
been decompressed so far will be returned.
The safe decompressors expect that the number of bytes available in
-the `dst' block is passed via the variable pointed by `dst_len'.
+the 'dst' block is passed via the variable pointed by 'dst_len'.
The two blocks may overlap under certain conditions (see examples/overlap.c),
thereby allowing "in-place" decompression.
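
To make the 'dst_len' convention concrete (a sketch; the buffers are
assumed to be declared elsewhere):

    /* standard decompressor: 'dst_len' is output-only; the caller
       must already know that 'dst' is large enough */
    lzo_uint dst_len;
    r = lzo1x_decompress(src, src_len, dst, &dst_len, NULL);

    /* safe decompressor: 'dst_len' must hold the capacity of 'dst'
       on entry and holds the decompressed length on return */
    lzo_uint out_len = sizeof(dst_buf);
    r = lzo1x_decompress_safe(src, src_len, dst_buf, &out_len, NULL);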
@@ -233,25 +233,25 @@ Description of return values:
LZO_E_INPUT_NOT_CONSUMED
The end of the compressed block has been detected before all
bytes in the compressed block have been used.
- This may actually not be an error (if `src_len' is too large).
+ This may actually not be an error (if 'src_len' is too large).
LZO_E_INPUT_OVERRUN
The decompressor requested more bytes from the compressed
block than available.
- Your data is corrupted (or `src_len' is too small).
+ Your data is corrupted (or 'src_len' is too small).
LZO_E_OUTPUT_OVERRUN
The decompressor requested to write more bytes to the uncompressed
block than available.
Either your data is corrupted, or you should increase the number of
- available bytes passed in the variable pointed by `dst_len'.
+ available bytes passed in the variable pointed by 'dst_len'.
LZO_E_LOOKBEHIND_OVERRUN
Your data is corrupted.
LZO_E_EOF_NOT_FOUND
No EOF code was found in the compressed block.
- Your data is corrupted (or `src_len' is too small).
+ Your data is corrupted (or 'src_len' is too small).
LZO_E_ERROR
Any other error (data corrupted).
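
In code, the one potentially benign case can be separated from genuine
corruption roughly like this (sketch):

    switch (lzo1x_decompress_safe(src, src_len, dst, &dst_len, NULL)) {
    case LZO_E_OK:
        break;                       /* dst_len bytes were decompressed */
    case LZO_E_INPUT_NOT_CONSUMED:
        break;                       /* possibly harmless: 'src_len' was
                                        an upper bound, not an exact size */
    default:
        return -1;                   /* treat everything else as corruption */
    }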
diff --git a/lzo/doc/LZOTEST.TXT b/lzo/doc/LZOTEST.TXT
index c5ec5052..93c86591 100644
--- a/lzo/doc/LZOTEST.TXT
+++ b/lzo/doc/LZOTEST.TXT
@@ -1,4 +1,4 @@
-The test driver `lzotest' has grown into a fairly powerful program
+The test driver 'lzotest' has grown into a fairly powerful program
of its own. Here is a short description of the various options.
[ to be written - this is only a very first draft ]
@@ -22,16 +22,16 @@ Basic options:
-A use assembler decompressor (if available)
-F use fast assembler decompressor (if available)
-O optimize compressed data (if available)
- -s DIR process Calgary Corpus test suite in directory `DIR'
+ -s DIR process Calgary Corpus test suite in directory 'DIR'
-@ read list of files to compress from stdin
-q be quiet
-L display software license
-More about `-m':
+More about '-m':
================
-Use `-m' to list all available methods.
+Use '-m' to list all available methods.
You can select methods by number:
-m71
@@ -54,7 +54,7 @@ You can specify multiple methods/groups separated by ',':
-m1,2,3,4
-m1,2,3,4,lzo1x-1,m99,81
-And finally you can use multiple `-m' options:
+And finally you can use multiple '-m' options:
-m962,972 -mm99,982,m1
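
For example, combining the documented options (illustrative
invocations; the file operands are assumptions, not part of the
option reference above):

  lzotest -m1,2,71,81 -s /path/to/calgary
  lzotest -mlzo1x-1,m99 somefile.dat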