ICSharpCode.SharpZipLib An example class to demonstrate compression and decompression of BZip2 streams. Decompress the input writing uncompressed data to the output stream The readable stream containing data to decompress. The output stream to receive the decompressed data. Both streams are closed on completion if true. Compress the input stream sending result data to output stream The readable stream to compress. The output stream to receive the compressed data. Both streams are closed on completion if true. Block size acts as compression level (1 to 9) with 1 giving the lowest compression and 9 the highest. Defines internal values for both compression and decompression Random numbers used to randomise repetitive blocks When multiplied by compression parameter (1-9) gives the block size for compression 9 gives the best compression but uses the most memory. Backend constant Backend constant Backend constant Backend constant Backend constant Backend constant Backend constant Backend constant Backend constant BZip2Exception represents exceptions specific to BZip2 classes and code. Initialise a new instance of . Initialise a new instance of with its message string. A that describes the error. Initialise a new instance of . A that describes the error. The that caused this exception. Initializes a new instance of the BZip2Exception class with serialized data. The System.Runtime.Serialization.SerializationInfo that holds the serialized object data about the exception being thrown. The System.Runtime.Serialization.StreamingContext that contains contextual information about the source or destination. An input stream that decompresses files in the BZip2 format Construct instance for reading from stream Data source Get/set flag indicating ownership of underlying stream. When the flag is true will close the underlying stream also. Gets a value indicating if the stream supports reading Gets a value indicating whether the current stream supports seeking. Gets a value indicating whether the current stream supports writing. This property always returns false Gets the length in bytes of the stream. Gets the current position of the stream. Setting the position is not supported and will throw a NotSupportedException. Any attempt to set the position. Flushes the stream. Set the stream's position. This operation is not supported and will throw a NotSupportedException A byte offset relative to the parameter. A value of type indicating the reference point used to obtain the new position. The new position of the stream. Any access Sets the length of this stream to the given value. This operation is not supported and will throw a NotSupportedException The new length for the stream. Any access Writes a block of bytes to this stream using data from a buffer. This operation is not supported and will throw a NotSupportedException The buffer to source data from. The offset to start obtaining data from. The number of bytes of data to write. Any access Writes a byte to the current position in the file stream. This operation is not supported and will throw a NotSupportedException The value to write. Any access Read a sequence of bytes and advance the read position by the number of bytes read. Array of bytes to store values in Offset in array to begin storing data The maximum number of bytes to read The total number of bytes read into the buffer. This might be less than the number of bytes requested if that many bytes are not currently available, or zero if the end of the stream is reached.
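As a hedged illustration of the BZip2 helper class described above, the following sketch assumes static BZip2.Compress and BZip2.Decompress methods with the parameters implied by the descriptions (input stream, output stream, isStreamOwner flag, and block size for compression); the file names are placeholders.

using System.IO;
using ICSharpCode.SharpZipLib.BZip2;

class BZip2Example
{
    public static void Main(string[] args)
    {
        // Compress with block size 9: best compression, most memory.
        using (FileStream source = File.OpenRead(args[0]))
        using (FileStream target = File.Create(args[0] + ".bz2"))
        {
            BZip2.Compress(source, target, true, 9);
        }

        // Decompress again; isStreamOwner = true closes both streams on completion.
        using (FileStream source = File.OpenRead(args[0] + ".bz2"))
        using (FileStream target = File.Create(args[0] + ".out"))
        {
            BZip2.Decompress(source, target, true);
        }
    }
}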
Closes the stream, releasing any associated resources. Read a byte from stream advancing position byte read or -1 on end of stream An output stream that compresses data into the BZip2 format, including the file header chars, and writes it to another stream. Construct a default output stream with maximum block size The stream to write BZip data onto. Initialise a new instance of the for the specified stream, using the given blocksize. The stream to write compressed data to. The block size to use. Valid block sizes are in the range 1..9, with 1 giving the lowest compression and 9 the highest. Ensures that resources are freed and other cleanup operations are performed when the garbage collector reclaims the BZip2OutputStream. Gets or sets a flag indicating ownership of underlying stream. When the flag is true will close the underlying stream also. The default value is true. Gets a value indicating whether the current stream supports reading Gets a value indicating whether the current stream supports seeking Gets a value indicating whether the current stream supports writing Gets the length in bytes of the stream Gets or sets the current position of this stream. Sets the current position of this stream to the given value. The point relative to the offset from which to begin seeking. The reference point from which to begin seeking. The new position in the stream. Sets the length of this stream to the given value. The new stream length. Read a byte from the stream advancing the position. The byte read cast to an int; -1 if end of stream. Read a block of bytes The buffer to read into. The offset in the buffer to start storing data at. The maximum number of bytes to read. The total number of bytes read. This might be less than the number of bytes requested if that many bytes are not currently available, or zero if the end of the stream is reached. Write a block of bytes to the stream The buffer containing data to write. The offset of the first byte to write. The number of bytes to write. Write a byte to the stream. The byte to write to the stream. Get the number of bytes written to output. Get the number of bytes written to the output. Releases the unmanaged resources used by the and optionally releases the managed resources. true to release both managed and unmanaged resources; false to release only unmanaged resources. Flush output buffers Computes Adler32 checksum for a stream of data. An Adler32 checksum is not as reliable as a CRC32 checksum, but a lot faster to compute. The specification for Adler32 may be found in RFC 1950. (ZLIB Compressed Data Format Specification version 3.3) From that document: "ADLER32 (Adler-32 checksum) This contains a checksum value of the uncompressed data (excluding any dictionary data) computed according to Adler-32 algorithm. This algorithm is a 32-bit extension and improvement of the Fletcher algorithm, used in the ITU-T X.224 / ISO 8073 standard. Adler-32 is composed of two sums accumulated per byte: s1 is the sum of all bytes, s2 is the sum of all s1 values. Both sums are done modulo 65521. s1 is initialized to 1, s2 to zero. The Adler-32 checksum is stored as s2*65536 + s1 in most-significant-byte first (network) order." "8.2. The Adler-32 algorithm The Adler-32 algorithm is much faster than the CRC32 algorithm yet still provides an extremely low probability of undetected errors. The modulo on unsigned long accumulators can be delayed for 5552 bytes, so the modulo operation time is negligible.
If the bytes are a, b, c, the second sum is 3a + 2b + c + 3, and so is position and order sensitive, unlike the first sum, which is just a checksum. That 65521 is prime is important to avoid a possible large class of two-byte errors that leave the check unchanged. (The Fletcher checksum uses 255, which is not prime and which also makes the Fletcher check insensitive to single byte changes 0 - 255.) The sum s1 is initialized to 1 instead of zero to make the length of the sequence part of s2, so that the length does not have to be checked separately. (Any sequence of zeroes has a Fletcher checksum of zero.)" largest prime smaller than 65536 The CRC data checksum so far. Initialise a default instance of Resets the Adler32 data checksum as if no update was ever called. Returns the Adler32 data checksum computed so far. Updates the checksum with the byte b. The data value to add. The high byte of the int is ignored. Updates the Adler32 data checksum with the bytes taken from a block of data. Contains the data to update the checksum with. Update Adler32 data checksum based on a portion of a block of data The chunk of data to add CRC-32 with unreversed data and reversed output Generate a table for a byte-wise 32-bit CRC calculation on the polynomial: x^32+x^26+x^23+x^22+x^16+x^12+x^11+x^10+x^8+x^7+x^5+x^4+x^2+x^1+x^0. Polynomials over GF(2) are represented in binary, one bit per coefficient, with the lowest powers in the most significant bit. Then adding polynomials is just exclusive-or, and multiplying a polynomial by x is a right shift by one. If we call the above polynomial p, and represent a byte as the polynomial q, also with the lowest power in the most significant bit (so the byte 0xb1 is the polynomial x^7+x^3+x+1), then the CRC is (q*x^32) mod p, where a mod b means the remainder after dividing a by b. This calculation is done using the shift-register method of multiplying and taking the remainder. The register is initialized to zero, and for each incoming bit, x^32 is added mod p to the register if the bit is a one (where x^32 mod p is p+x^32 = x^26+...+1), and the register is multiplied mod p by x (which is shifting right by one and adding x^32 mod p if the bit shifted out is a one). We start with the highest power (least significant bit) of q and repeat for all eight bits of q. This implementation uses sixteen lookup tables stored in one linear array to implement the slicing-by-16 algorithm, a variant of the slicing-by-8 algorithm described in this Intel white paper: https://web.archive.org/web/20120722193753/http://download.intel.com/technology/comms/perfnet/download/slicing-by-8.pdf The first lookup table is simply the CRC of all possible eight bit values. Each successive lookup table is derived from the original table generated by Sarwate's algorithm. Slicing a 16-bit input and XORing the outputs together will produce the same output as a byte-by-byte CRC loop with fewer arithmetic and bit manipulation operations, at the cost of increased memory consumed by the lookup tables. (Slicing-by-16 requires a 16KB table, which is still small enough to fit in most processors' L1 cache.) The CRC data checksum so far. Initialise a default instance of Resets the CRC data checksum as if no update was ever called. Returns the CRC data checksum computed so far. Reversed Out = true Updates the checksum with the int bval. the byte is taken as the lower 8 bits of bval Reversed Data = false Updates the CRC data checksum with the bytes taken from a block of data. 
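The Adler-32 definition quoted above can be checked with a short sketch that computes the checksum by hand and compares it with the library's Adler32 class, assuming only the Update and Value members listed here.

using System;
using System.Text;
using ICSharpCode.SharpZipLib.Checksum;

class Adler32Example
{
    public static void Main()
    {
        byte[] data = Encoding.ASCII.GetBytes("SharpZipLib");

        // s1 is the sum of all bytes (initialised to 1), s2 the sum of all s1 values,
        // both modulo 65521; the checksum is s2*65536 + s1.
        uint s1 = 1, s2 = 0;
        foreach (byte b in data)
        {
            s1 = (s1 + b) % 65521;
            s2 = (s2 + s1) % 65521;
        }
        uint manual = (s2 << 16) | s1;

        var adler = new Adler32();
        adler.Update(data);
        Console.WriteLine($"manual=0x{manual:X8} library=0x{adler.Value:X8}");
    }
}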
Contains the data to update the CRC with. Update CRC data checksum based on a portion of a block of data The chunk of data to add Internal helper function for updating a block of data using slicing. The array containing the data to add Range start for (inclusive) The number of bytes to checksum starting from A non-inlined function for updating data that doesn't fit in a 16-byte block. We don't expect to enter this function most of the time, and when we do we're not here for long, so disabling inlining here improves performance overall. The array containing the data to add Range start for (inclusive) Range end for (exclusive) CRC-32 with reversed data and unreversed output Generate a table for a byte-wise 32-bit CRC calculation on the polynomial: x^32+x^26+x^23+x^22+x^16+x^12+x^11+x^10+x^8+x^7+x^5+x^4+x^2+x^1+x^0. Polynomials over GF(2) are represented in binary, one bit per coefficient, with the lowest powers in the most significant bit. Then adding polynomials is just exclusive-or, and multiplying a polynomial by x is a right shift by one. If we call the above polynomial p, and represent a byte as the polynomial q, also with the lowest power in the most significant bit (so the byte 0xb1 is the polynomial x^7+x^3+x+1), then the CRC is (q*x^32) mod p, where a mod b means the remainder after dividing a by b. This calculation is done using the shift-register method of multiplying and taking the remainder. The register is initialized to zero, and for each incoming bit, x^32 is added mod p to the register if the bit is a one (where x^32 mod p is p+x^32 = x^26+...+1), and the register is multiplied mod p by x (which is shifting right by one and adding x^32 mod p if the bit shifted out is a one). We start with the highest power (least significant bit) of q and repeat for all eight bits of q. This implementation uses sixteen lookup tables stored in one linear array to implement the slicing-by-16 algorithm, a variant of the slicing-by-8 algorithm described in this Intel white paper: https://web.archive.org/web/20120722193753/http://download.intel.com/technology/comms/perfnet/download/slicing-by-8.pdf The first lookup table is simply the CRC of all possible eight bit values. Each successive lookup table is derived from the original table generated by Sarwate's algorithm. Slicing a 16-bit input and XORing the outputs together will produce the same output as a byte-by-byte CRC loop with fewer arithmetic and bit manipulation operations, at the cost of increased memory consumed by the lookup tables. (Slicing-by-16 requires a 16KB table, which is still small enough to fit in most processors' L1 cache.) The CRC data checksum so far. Initialise a default instance of Resets the CRC data checksum as if no update was ever called. Returns the CRC data checksum computed so far. Reversed Out = false Updates the checksum with the int bval. the byte is taken as the lower 8 bits of bval Reversed Data = true Updates the CRC data checksum with the bytes taken from a block of data. Contains the data to update the CRC with. Update CRC data checksum based on a portion of a block of data The chunk of data to add Internal helper function for updating a block of data using slicing. The array containing the data to add Range start for (inclusive) The number of bytes to checksum starting from A non-inlined function for updating data that doesn't fit in a 16-byte block. We don't expect to enter this function most of the time, and when we do we're not here for long, so disabling inlining here improves performance overall. 
The array containing the data to add Range start for (inclusive) Range end for (exclusive) The number of slicing lookup tables to generate. Generates multiple CRC lookup tables for a given polynomial, stored in a linear array of uints. The first block (i.e. the first 256 elements) is the same as the byte-by-byte CRC lookup table. The generating CRC polynomial Whether the polynomial is in reversed bit order A linear array of 256 * elements This table could also be generated as a rectangular array, but the JIT compiler generates slower code than if we use a linear array. Known issue, see: https://github.com/dotnet/runtime/issues/30275 Mixes the first four bytes of input with using normal ordering before calling . Array of data to checksum Offset to start reading from The table to use for slicing-by-16 lookup Checksum state before this update call A new unfinalized checksum value Assumes input[offset]..input[offset + 15] are valid array indexes. For performance reasons, this must be checked by the caller. Mixes the first four bytes of input with using reflected ordering before calling . Array of data to checksum Offset to start reading from The table to use for slicing-by-16 lookup Checksum state before this update call A new unfinalized checksum value Assumes input[offset]..input[offset + 15] are valid array indexes. For performance reasons, this must be checked by the caller. A shared method for updating an unfinalized CRC checksum using slicing-by-16. Array of data to checksum Offset to start reading from The table to use for slicing-by-16 lookup First byte of input after mixing with the old CRC Second byte of input after mixing with the old CRC Third byte of input after mixing with the old CRC Fourth byte of input after mixing with the old CRC A new unfinalized checksum value Even though the first four bytes of input are fed in as arguments, should be the same value passed to this function's caller (either or ). This method will get inlined into both functions, so using the same offset produces faster code. Because most processors running C# have some kind of instruction-level parallelism, the order of XOR operations can affect performance. This ordering assumes that the assembly code generated by the just-in-time compiler will emit a bunch of arithmetic operations for checking array bounds. Then it opportunistically XORs a1 and a2 to keep the processor busy while those other parts of the pipeline handle the range check calculations. Interface to compute a data checksum used by checked input/output streams. A data checksum can be updated by one byte or with a byte array. After each update the value of the current checksum can be returned by calling getValue. The complete checksum object can also be reset so it can be used again with new data. Resets the data checksum as if no update was ever called. Returns the data checksum computed so far. Adds one byte to the data checksum. The data value to add. The high byte of the int is ignored. Updates the data checksum with the bytes taken from the array. buffer an array of bytes Adds the byte array to the data checksum. The chunk of data to add Event arguments for scanning. Initialise a new instance of The file or directory name. The file or directory name for this event. Get/set a value indicating whether scanning should continue or not. Event arguments during processing of a single file or directory. Initialise a new instance of The file or directory name if known.
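Since Adler32, the two CRC-32 variants and the BZip2 checksum all implement the IChecksum interface described above, a small helper can drive any of them over a stream. This is a sketch only; it assumes the Reset, Update and Value members listed here, including the overload described as taking "the chunk of data to add" as an ArraySegment.

using System;
using System.IO;
using ICSharpCode.SharpZipLib.Checksum;

static class ChecksumHelper
{
    // Feed an entire stream through any IChecksum implementation and return its value.
    public static long Compute(IChecksum checksum, Stream source)
    {
        checksum.Reset();
        byte[] buffer = new byte[4096];
        int read;
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            // Only the filled portion of the buffer is added to the checksum.
            checksum.Update(new ArraySegment<byte>(buffer, 0, read));
        }
        return checksum.Value;
    }
}

A call such as ChecksumHelper.Compute(new Crc32(), stream) or ChecksumHelper.Compute(new Adler32(), stream) then yields the finished checksum.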
The number of bytes processed so far The total number of bytes to process, 0 if not known The name for this event if known. Get/set a value indicating whether scanning should continue or not. Get a percentage representing how much of the has been processed 0.0 to 100.0 percent; 0 if target is not known. The number of bytes processed so far The number of bytes to process. Target may be 0 or negative if the value isn't known. Event arguments for directories. Initialize an instance of . The name for this directory. Flag value indicating if any matching files are contained in this directory. Get a value indicating if the directory contains any matching files or not. Arguments passed when scan failures are detected. Initialise a new instance of The name to apply. The exception to use. The applicable name. The applicable exception. Get / set a value indicating whether scanning should continue. Delegate invoked before starting to process a file. The source of the event The event arguments. Delegate invoked during processing of a file or directory The source of the event The event arguments. Delegate invoked when a file has been completely processed. The source of the event The event arguments. Delegate invoked when a directory failure is detected. The source of the event The event arguments. Delegate invoked when a file failure is detected. The source of the event The event arguments. FileSystemScanner provides facilities for scanning files and directories. Initialise a new instance of The file filter to apply when scanning. Initialise a new instance of The file filter to apply. The directory filter to apply. Initialise a new instance of The file filter to apply. Initialise a new instance of The file filter to apply. The directory filter to apply. Delegate to invoke when a directory is processed. Delegate to invoke when a file is processed. Delegate to invoke when processing for a file has finished. Delegate to invoke when a directory failure is detected. Delegate to invoke when a file failure is detected. Raise the DirectoryFailure event. The directory name. The exception detected. Raise the FileFailure event. The file name. The exception detected. Raise the ProcessFile event. The file name. Raise the complete file event The file name Raise the ProcessDirectory event. The directory name. Flag indicating if the directory has matching files. Scan a directory. The base directory to scan. True to recurse subdirectories, false to scan a single directory. The file filter currently in use. The directory filter currently in use. Flag indicating if scanning should continue running. INameTransform defines how file system names are transformed for use with archives, or vice versa. Given a file name determine the transformed value. The name to transform. The transformed file name. Given a directory name determine the transformed value. The name to transform. The transformed directory name InvalidNameException is thrown for invalid names such as directory traversal paths and names with invalid characters Initializes a new instance of the InvalidNameException class with a default error message. Initializes a new instance of the InvalidNameException class with a specified error message. A message describing the exception. Initializes a new instance of the InvalidNameException class with a specified error message and a reference to the inner exception that is the cause of this exception. A message describing the exception.
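A minimal FileSystemScanner sketch, assuming the constructor, delegates and Scan method listed above; the directory path and filter are placeholders, and the filter syntax is that of NameFilter, described next.

using System;
using ICSharpCode.SharpZipLib.Core;

class ScannerExample
{
    public static void Main()
    {
        // Only report files ending in '.cs'.
        var scanner = new FileSystemScanner(@"+\.cs$");

        scanner.ProcessFile += (sender, e) =>
        {
            Console.WriteLine(e.Name);
            // Setting e.ContinueRunning = false here would stop the scan early.
        };
        scanner.DirectoryFailure += (sender, e) =>
            Console.WriteLine("Skipped {0}: {1}", e.Name, e.Exception.Message);

        scanner.Scan(@"C:\projects", true); // true = recurse into subdirectories
    }
}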
The inner exception Initializes a new instance of the InvalidNameException class with serialized data. The System.Runtime.Serialization.SerializationInfo that holds the serialized object data about the exception being thrown. The System.Runtime.Serialization.StreamingContext that contains contextual information about the source or destination. Scanning filters support filtering of names. Test a name to see if it 'matches' the filter. The name to test. Returns true if the name matches the filter, false if it does not match. NameFilter is a string matching class which allows for both positive and negative matching. A filter is a sequence of independent regular expressions separated by semi-colons ';'. To include a semi-colon it may be quoted as in \;. Each expression can be prefixed by a plus '+' sign or a minus '-' sign to denote the expression is intended to include or exclude names. If neither a plus nor a minus sign is found, include is the default. A given name is tested for inclusion before checking exclusions. Only names matching an include spec and not matching an exclude spec are deemed to match the filter. An empty filter matches any name. The following expression includes all names ending in '.dat' with the exception of 'dummy.dat': "+\.dat$;-^dummy\.dat$" Construct an instance based on the filter expression passed The filter expression. Test a string to see if it is a valid regular expression. The expression to test. True if the expression is a valid regular expression, false otherwise. Test an expression to see if it is valid as a filter. The filter expression to test. True if the expression is valid, false otherwise. Split a string into its component pieces The original string Returns an array of values containing the individual filter elements. Convert this filter to its string equivalent. The string equivalent for this filter. Test a value to see if it is included by the filter. The value to test. True if the value is included, false otherwise. Test a value to see if it is excluded by the filter. The value to test. True if the value is excluded, false otherwise. Test a value to see if it matches the filter. The value to test. True if the value matches, false otherwise. Compile this filter. PathFilter filters directories and files using a form of regular expressions by full path name. See NameFilter for more detail on filtering. Initialise a new instance of . The filter expression to apply. Test a name to see if it matches the filter. The name to test. True if the name matches, false otherwise. Path.GetFullPath is used to get the full path before matching. ExtendedPathFilter filters based on name, file size, and the last write time of the file. Provides an example of how to customise filtering. Initialise a new instance of ExtendedPathFilter. The filter to apply. The minimum file size to include. The maximum file size to include. Initialise a new instance of ExtendedPathFilter. The filter to apply. The minimum last write time to include. The maximum last write time to include. Initialise a new instance of ExtendedPathFilter. The filter to apply. The minimum file size to include. The maximum file size to include. The minimum last write time to include. The maximum last write time to include. Test a filename to see if it matches the filter. The filename to test. True if the filter matches, false otherwise. The file doesn't exist Get/set the minimum size/length for a file that will match this filter. The default value is zero. value is less than zero; greater than the maximum size Get/set the maximum size/length for a file that will match this filter.
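The include/exclude behaviour and the "+\.dat$;-^dummy\.dat$" example above can be exercised directly. A short sketch, using only the IsMatch and IsValidFilterExpression members listed here:

using System;
using ICSharpCode.SharpZipLib.Core;

class NameFilterExample
{
    public static void Main()
    {
        // Include names ending in '.dat', except 'dummy.dat'.
        var filter = new NameFilter(@"+\.dat$;-^dummy\.dat$");

        Console.WriteLine(filter.IsMatch("data.dat"));   // True  - matches the include spec
        Console.WriteLine(filter.IsMatch("dummy.dat"));  // False - explicitly excluded
        Console.WriteLine(filter.IsMatch("readme.txt")); // False - matches no include spec

        // Validate an expression before using it.
        Console.WriteLine(NameFilter.IsValidFilterExpression(@"+\.dat$"));
    }
}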
The default value is the largest possible size. value is less than zero or less than the minimum size Get/set the minimum value that will match for this filter. Files with a LastWrite time less than this value are excluded by the filter. Get/set the maximum value that will match for this filter. Files with a LastWrite time greater than this value are excluded by the filter. NameAndSizeFilter filters based on name and file size. A sample showing how filters might be extended. Initialise a new instance of NameAndSizeFilter. The filter to apply. The minimum file size to include. The maximum file size to include. Test a filename to see if it matches the filter. The filename to test. True if the filter matches, false otherwise. Get/set the minimum size for a file that will match this filter. Get/set the maximum size for a file that will match this filter. PathUtils provides simple utilities for handling paths. Remove any path root present in the path A string containing path information. The path with the root removed if it was present; path otherwise. Returns a random file name in the user's temporary directory, or in the directory of the original file if one is specified If specified, used as the base file name for the temporary file Returns a temporary file name Provides simple Stream utilities. Read from a stream ensuring all the required data is read. The stream to read. The buffer to fill. Read from a stream ensuring all the required data is read. The stream to read data from. The buffer to store data in. The offset at which to begin storing data. The number of bytes of data to store. Required parameter is null. The offset and/or count are invalid. End of stream is encountered before all the data has been read. Read as much data as possible from a stream, up to the requested number of bytes The stream to read data from. The buffer to store data in. The offset at which to begin storing data. The number of bytes of data to store. Required parameter is null. The offset and/or count are invalid. Copy the contents of one stream to another. The stream to source data from. The stream to write data to. The buffer to use during copying. Copy the contents of one stream to another. The stream to source data from. The stream to write data to. The buffer to use during copying. The progress handler delegate to use. The minimum time between progress updates. The source for this event. The name to use with the event. This form is specialised for use within #Zip to support events during archive operations. Copy the contents of one stream to another. The stream to source data from. The stream to write data to. The buffer to use during copying. The progress handler delegate to use. The minimum time between progress updates. The source for this event. The name to use with the event. A predetermined fixed target value to use with progress updates. If the value is negative the target is calculated by looking at the stream. This form is specialised for use within #Zip to support events during archive operations. Initialise an instance of StreamUtils. SharpZipBaseException is the base exception class for SharpZipLib. All library exceptions are derived from this. NOTE: Not all exceptions thrown will be derived from this class. A variety of other exceptions are also possible. Initializes a new instance of the SharpZipBaseException class. Initializes a new instance of the SharpZipBaseException class with a specified error message. A message describing the exception. Initializes a new instance of the SharpZipBaseException class with a specified error message and a reference to the inner exception that is the cause of this exception. A message describing the exception.
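A brief sketch of the StreamUtils helpers described above (Copy with a caller-supplied buffer, and ReadFully, which throws if the stream ends early); the file names are placeholders.

using System.IO;
using ICSharpCode.SharpZipLib.Core;

class StreamUtilsExample
{
    public static void Main()
    {
        using (Stream source = File.OpenRead("input.bin"))
        using (Stream destination = File.Create("copy.bin"))
        {
            // Copy the whole stream using a reusable 4 KB buffer.
            byte[] buffer = new byte[4096];
            StreamUtils.Copy(source, destination, buffer);
        }

        using (Stream source = File.OpenRead("copy.bin"))
        {
            // ReadFully keeps reading until the buffer is filled,
            // throwing if the end of the stream is reached first.
            byte[] header = new byte[16];
            StreamUtils.ReadFully(source, header);
        }
    }
}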
The inner exception Initializes a new instance of the SharpZipBaseException class with serialized data. The System.Runtime.Serialization.SerializationInfo that holds the serialized object data about the exception being thrown. The System.Runtime.Serialization.StreamingContext that contains contextual information about the source or destination. Indicates that an error occurred during decoding of an input stream due to corrupt data or (unintentional) library incompatibility. Initializes a new instance of the StreamDecodingException with a generic message Initializes a new instance of the StreamDecodingException class with a specified error message. A message describing the exception. Initializes a new instance of the StreamDecodingException class with a specified error message and a reference to the inner exception that is the cause of this exception. A message describing the exception. The inner exception Initializes a new instance of the StreamDecodingException class with serialized data. The System.Runtime.Serialization.SerializationInfo that holds the serialized object data about the exception being thrown. The System.Runtime.Serialization.StreamingContext that contains contextual information about the source or destination. Indicates that the input stream could not be decoded due to known library incompatibility or missing features Initializes a new instance of the StreamUnsupportedException with a generic message Initializes a new instance of the StreamUnsupportedException class with a specified error message. A message describing the exception. Initializes a new instance of the StreamUnsupportedException class with a specified error message and a reference to the inner exception that is the cause of this exception. A message describing the exception. The inner exception Initializes a new instance of the StreamUnsupportedException class with serialized data. The System.Runtime.Serialization.SerializationInfo that holds the serialized object data about the exception being thrown. The System.Runtime.Serialization.StreamingContext that contains contextual information about the source or destination. Indicates that the input stream could not be decoded due to the stream ending before enough data had been provided Initializes a new instance of the UnexpectedEndOfStreamException with a generic message Initializes a new instance of the UnexpectedEndOfStreamException class with a specified error message. A message describing the exception. Initializes a new instance of the UnexpectedEndOfStreamException class with a specified error message and a reference to the inner exception that is the cause of this exception. A message describing the exception. The inner exception Initializes a new instance of the UnexpectedEndOfStreamException class with serialized data. The System.Runtime.Serialization.SerializationInfo that holds the serialized object data about the exception being thrown. The System.Runtime.Serialization.StreamingContext that contains contextual information about the source or destination. Indicates that a value was outside of the expected range when decoding an input stream Initializes a new instance of the ValueOutOfRangeException class naming the causing variable Name of the variable, use: nameof() Initializes a new instance of the ValueOutOfRangeException class naming the causing variable, its current value and expected range.
Name of the variable, use: nameof() The invalid value Expected maximum value Expected minimum value Initializes a new instance of the ValueOutOfRangeException class naming the causing variable, its current value and expected range. Name of the variable, use: nameof() The invalid value Expected maximum value Expected minimum value Initializes a new instance of the ValueOutOfRangeException class with serialized data. The System.Runtime.Serialization.SerializationInfo that holds the serialized object data about the exception being thrown. The System.Runtime.Serialization.StreamingContext that contains contextual information about the source or destination. PkzipClassic embodies the classic or original encryption facilities used in Pkzip archives. While it has been superseded by more recent and more powerful algorithms, it's still in use and is viable for preventing casual snooping Generates new encryption keys based on given seed The seed value to initialise keys with. A new key value. PkzipClassicCryptoBase provides the low level facilities for encryption and decryption using the PkzipClassic algorithm. Transform a single byte The transformed value Set the key schedule for encryption/decryption. The data to set the keys from. Update encryption keys Reset the internal state. PkzipClassic CryptoTransform for encryption. Initialise a new instance of The key block to use. Transforms the specified region of the specified byte array. The input for which to compute the transform. The offset into the byte array from which to begin using data. The number of bytes in the byte array to use as data. The computed transform. Transforms the specified region of the input byte array and copies the resulting transform to the specified region of the output byte array. The input for which to compute the transform. The offset into the input byte array from which to begin using data. The number of bytes in the input byte array to use as data. The output to which to write the transform. The offset into the output byte array from which to begin writing data. The number of bytes written. Gets a value indicating whether the current transform can be reused. Gets the size of the input data blocks in bytes. Gets the size of the output data blocks in bytes. Gets a value indicating whether multiple blocks can be transformed. Cleanup internal state. PkzipClassic CryptoTransform for decryption. Initialise a new instance of . The key block to decrypt with. Transforms the specified region of the specified byte array. The input for which to compute the transform. The offset into the byte array from which to begin using data. The number of bytes in the byte array to use as data. The computed transform. Transforms the specified region of the input byte array and copies the resulting transform to the specified region of the output byte array. The input for which to compute the transform. The offset into the input byte array from which to begin using data. The number of bytes in the input byte array to use as data. The output to which to write the transform. The offset into the output byte array from which to begin writing data. The number of bytes written. Gets a value indicating whether the current transform can be reused. Gets the size of the input data blocks in bytes. Gets the size of the output data blocks in bytes. Gets a value indicating whether multiple blocks can be transformed. Cleanup internal state. Defines a wrapper object to access the Pkzip algorithm. This class cannot be inherited.
Get / set the applicable block size in bits. The only valid block size is 8. Get an array of legal key sizes. Generate an initial vector. Get an array of legal block sizes. Get / set the key value applicable. Generate a new random key. Create an encryptor. The key to use for this encryptor. Initialisation vector for the new encryptor. Returns a new PkzipClassic encryptor Create a decryptor. Keys to use for this new decryptor. Initialisation vector for the new decryptor. Returns a new decryptor. Encrypts and decrypts AES ZIP data Based on information from http://www.winzip.com/aes_info.htm and http://www.gladman.me.uk/cryptography_technology/fileencrypt/ Constructor The stream on which to perform the cryptographic transformation. Instance of ZipAESTransform Read or Write Reads a sequence of bytes from the current CryptoStream into buffer, and advances the position within the stream by the number of bytes read. Writes a sequence of bytes to the current stream and advances the current position within this stream by the number of bytes written. An array of bytes. This method copies count bytes from buffer to the current stream. The byte offset in buffer at which to begin copying bytes to the current stream. The number of bytes to be written to the current stream. Transforms stream using AES in CTR mode Constructor. Password string Random bytes, length depends on encryption strength. 128 bits = 8 bytes, 192 bits = 12 bytes, 256 bits = 16 bytes. The encryption strength, in bytes, e.g. 16 for 128 bits. True when creating a zip, false when reading. For the AuthCode. Implement the ICryptoTransform method. Returns the 2 byte password verifier Returns the 10 byte AUTH CODE to be checked or appended immediately following the AES data stream. Not implemented. Gets the size of the input data blocks in bytes. Gets the size of the output data blocks in bytes. Gets a value indicating whether multiple blocks can be transformed. Gets a value indicating whether the current transform can be reused. Cleanup internal state. An example class to demonstrate compression and decompression of GZip streams. Decompress the input writing uncompressed data to the output stream The readable stream containing data to decompress. The output stream to receive the decompressed data. Both streams are closed on completion if true. Input or output stream is null Compress the input stream sending result data to output stream The readable stream to compress. The output stream to receive the compressed data. Both streams are closed on completion if true. Deflate buffer size, minimum 512 Deflate compression level, 0-9 Input or output stream is null Buffer size is smaller than 512 Compression level outside 0-9 This class contains constants used for gzip. First GZip identification byte Second GZip identification byte Deflate compression method Get the GZip specified encoding (CP-1252 if supported, otherwise ASCII) GZip header flags Text flag hinting that the file is in ASCII CRC flag indicating that a CRC16 precedes the data Extra flag indicating that extra fields are present Filename flag indicating that the original filename is present Flag bit mask indicating that a comment is present GZipException represents exceptions specific to GZip classes and code. Initialise a new instance of . Initialise a new instance of with its message string. A that describes the error. Initialise a new instance of . A that describes the error. The that caused this exception. Initializes a new instance of the GZipException class with serialized data.
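The GZip helper class described above mirrors the BZip2 one. A hedged sketch, assuming static GZip.Compress and GZip.Decompress methods with the parameters listed (a deflate buffer of at least 512 bytes and a 0-9 compression level); the file names are placeholders.

using System.IO;
using ICSharpCode.SharpZipLib.GZip;

class GZipHelperExample
{
    public static void Main(string[] args)
    {
        // Compress with a 4 KB deflate buffer and compression level 6.
        using (FileStream source = File.OpenRead(args[0]))
        using (FileStream target = File.Create(args[0] + ".gz"))
        {
            GZip.Compress(source, target, true, 4096, 6);
        }

        // Decompress; isStreamOwner = true closes both streams on completion.
        using (FileStream source = File.OpenRead(args[0] + ".gz"))
        using (FileStream target = File.Create(args[0] + ".out"))
        {
            GZip.Decompress(source, target, true);
        }
    }
}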
The System.Runtime.Serialization.SerializationInfo that holds the serialized object data about the exception being thrown. The System.Runtime.Serialization.StreamingContext that contains contextual information about the source or destination. This filter stream is used to decompress a "GZIP" format stream. The "GZIP" format is described in RFC 1952. author of the original java version : John Leuner This sample shows how to unzip a gzipped file using System; using System.IO; using ICSharpCode.SharpZipLib.Core; using ICSharpCode.SharpZipLib.GZip; class MainClass { public static void Main(string[] args) { using (Stream inStream = new GZipInputStream(File.OpenRead(args[0]))) using (FileStream outStream = File.Create(Path.GetFileNameWithoutExtension(args[0]))) { byte[] buffer = new byte[4096]; StreamUtils.Copy(inStream, outStream, buffer); } } } CRC-32 value for uncompressed data Flag to indicate if we've read the GZIP header yet for the current member (block of compressed data). This is tracked per-block as the file is parsed. Flag to indicate if at least one block in a stream with concatenated blocks was read successfully. This allows us to exit gracefully if downstream data is not in gzip format. Creates a GZipInputStream with the default buffer size The stream to read compressed data from (in GZIP format) Creates a GZipInputStream with the specified buffer size The stream to read compressed data from (in GZIP format) Size of the buffer to use Reads uncompressed data into an array of bytes The buffer to read uncompressed data into The offset indicating where the data should be placed The number of uncompressed bytes to be read Returns the number of bytes actually read. Retrieves the filename header field for the block last read This filter stream is used to compress a stream into a "GZIP" stream. The "GZIP" format is described in RFC 1952. author of the original java version : John Leuner This sample shows how to gzip a file using System; using System.IO; using ICSharpCode.SharpZipLib.GZip; using ICSharpCode.SharpZipLib.Core; class MainClass { public static void Main(string[] args) { using (Stream s = new GZipOutputStream(File.Create(args[0] + ".gz"))) using (FileStream fs = File.OpenRead(args[0])) { byte[] writeData = new byte[4096]; StreamUtils.Copy(fs, s, writeData); } } } CRC-32 value for uncompressed data Creates a GZipOutputStream with the default buffer size The stream to write compressed data to Creates a GZipOutputStream with the specified buffer size The stream to write compressed data to Size of the buffer to use Sets the active compression level (0-9). The new level will be activated immediately. The compression level to set. Level specified is not supported. Get the current compression level. The current compression level. Original filename Write given buffer to output updating crc Buffer to write Offset of first byte in buf to write Number of bytes to write Writes remaining compressed output data to the output stream and closes it. Flushes the stream by ensuring the header is written, and then calling Flush on the deflater. Finish compression and write any footer information required to the stream This class contains constants used for LZW Magic number found at start of LZW header: 0x1f 0x9d Maximum number of bits per code Mask for 'number of compression bits' Indicates the presence of a fourth header byte Reserved bits Block compression: if table is full and compression rate is dropping, clear the dictionary.
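Returning to GZipOutputStream, the immediate effect of SetLevel noted above can be seen in a minimal sketch (the output file name is a placeholder):

using System.IO;
using System.Text;
using ICSharpCode.SharpZipLib.GZip;

class GZipLevelExample
{
    public static void Main()
    {
        using (var gz = new GZipOutputStream(File.Create("notes.txt.gz")))
        {
            gz.SetLevel(9); // takes effect immediately for subsequent writes
            byte[] data = Encoding.UTF8.GetBytes("some text to compress");
            gz.Write(data, 0, data.Length);
        } // disposing finishes compression and writes the gzip footer
    }
}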
LZW file header size (in bytes) Initial number of bits per code LzwException represents exceptions specific to LZW classes and code. Initialise a new instance of . Initialise a new instance of with its message string. A that describes the error. Initialise a new instance of . A that describes the error. The that caused this exception. Initializes a new instance of the LzwException class with serialized data. The System.Runtime.Serialization.SerializationInfo that holds the serialized object data about the exception being thrown. The System.Runtime.Serialization.StreamingContext that contains contextual information about the source or destination. This filter stream is used to decompress a LZW format stream. Specifically, a stream that uses the LZC compression method. This file format is usually associated with the .Z file extension. See http://en.wikipedia.org/wiki/Compress See http://wiki.wxwidgets.org/Development:_Z_File_Format The file header consists of 3 (or optionally 4) bytes. The first two bytes contain the magic marker "0x1f 0x9d", followed by a byte of flags. Based on Java code by Ronald Tschalar, which in turn was based on the unlzw.c code in the gzip package. This sample shows how to unzip a compressed file using System; using System.IO; using ICSharpCode.SharpZipLib.Core; using ICSharpCode.SharpZipLib.LZW; class MainClass { public static void Main(string[] args) { using (Stream inStream = new LzwInputStream(File.OpenRead(args[0]))) using (FileStream outStream = File.Create(Path.GetFileNameWithoutExtension(args[0]))) { byte[] buffer = new byte[4096]; StreamUtils.Copy(inStream, outStream, buffer); // OR inStream.Read(buffer, 0, buffer.Length); // now do something with the buffer } } } Gets or sets a flag indicating ownership of underlying stream. When the flag is true will close the underlying stream also. The default value is true. Creates a LzwInputStream The stream to read compressed data from (baseInputStream LZW format) See Reads decompressed data into the provided buffer byte array The array to read and decompress data into The offset indicating where the data should be placed The number of bytes to decompress The number of bytes read. Zero signals the end of stream Moves the unread data in the buffer to the beginning and resets the pointers. Gets a value indicating whether the current stream supports reading Gets a value of false indicating seeking is not supported for this stream. Gets a value of false indicating that this stream is not writeable. A value representing the length of the stream in bytes. The current position within the stream. Throws a NotSupportedException when attempting to set the position Attempting to set the position Flushes the baseInputStream Sets the position within the current stream Always throws a NotSupportedException The relative offset to seek to. The defining where to seek from. The new position in the stream. Any access Set the length of the current stream Always throws a NotSupportedException The new length value for the stream. Any access Writes a sequence of bytes to stream and advances the current position This method always throws a NotSupportedException The buffer containing data to write. The offset of the first byte to write. The number of bytes to write. Any access Writes one byte to the current stream and advances the current position Always throws a NotSupportedException The byte to write. Any access Closes the input stream. When is true the underlying stream is also closed. 
Flag indicating whether this instance has been closed or not. This exception is used to indicate that there is a problem with a TAR archive header. Initialise a new instance of the InvalidHeaderException class. Initialises a new instance of the InvalidHeaderException class with a specified message. Message describing the exception cause. Initialise a new instance of InvalidHeaderException Message describing the problem. The exception that is the cause of the current exception. Initializes a new instance of the InvalidHeaderException class with serialized data. The System.Runtime.Serialization.SerializationInfo that holds the serialized object data about the exception being thrown. The System.Runtime.Serialization.StreamingContext that contains contextual information about the source or destination. Used to advise clients of 'events' while processing archives The TarArchive class implements the concept of a 'Tape Archive'. A tar archive is a series of entries, each of which represents a file system object. Each entry in the archive consists of a header block followed by 0 or more data blocks. Directory entries consist only of the header block, and are followed by entries for the directory's contents. File entries consist of a header followed by the number of blocks needed to contain the file's contents. All entries are written on block boundaries. Blocks are 512 bytes long. TarArchives are instantiated in either read or write mode, based upon whether they are instantiated with an InputStream or an OutputStream. Once instantiated, a TarArchive's read/write mode cannot be changed. There is currently no support for random access to tar archives. However, it seems that by subclassing TarArchive and using the TarBuffer.CurrentRecord and TarBuffer.CurrentBlock properties, this would be rather trivial. Client hook allowing detailed information to be reported during processing Raises the ProgressMessage event The TarEntry for this event The message for this event; null if there is no message. Constructor for a default . Initialise a TarArchive for input. The to use for input. Initialise a TarArchive for output. The to use for output. The InputStream based constructors create a TarArchive for the purposes of extracting or listing a tar archive. Thus, use these constructors when you wish to extract files from or list the contents of an existing tar archive. The stream to retrieve archive data from. Returns a new suitable for reading from. The InputStream based constructors create a TarArchive for the purposes of extracting or listing a tar archive. Thus, use these constructors when you wish to extract files from or list the contents of an existing tar archive. The stream to retrieve archive data from. The used for the Name fields, or null for ASCII only Returns a new suitable for reading from. Create TarArchive for reading setting block factor A stream containing the tar archive contents The blocking factor to apply Returns a suitable for reading. Create TarArchive for reading setting block factor A stream containing the tar archive contents The blocking factor to apply The used for the Name fields, or null for ASCII only Returns a suitable for reading. Create a TarArchive for writing to, using the default blocking factor The to write to The used for the Name fields, or null for ASCII only Returns a suitable for writing. Create a TarArchive for writing to, using the default blocking factor The to write to Returns a suitable for writing. Create a tar archive for writing.
The stream to write to The blocking factor to use for buffering. Returns a suitable for writing. Create a tar archive for writing. The stream to write to The blocking factor to use for buffering. The used for the Name fields, or null for ASCII only Returns a suitable for writing. Set the flag that determines whether existing files are kept, or overwritten during extraction. If true, do not overwrite existing files. Get/set the ascii file translation flag. If ascii file translation is true, then the file is checked to see if it is a binary file or not. If the flag is true and the test indicates it is an ascii text file, it will be translated. The translation converts the local operating system's concept of line ends into the UNIX line end, '\n', which is the de facto standard for a TAR archive. This makes text files compatible with UNIX. Set the ascii file translation flag. If true, translate ascii text files. PathPrefix is added to entry names as they are written if the value is not null. A slash character is appended after PathPrefix RootPath is removed from entry names if it is found at the beginning of the name. Set user and group information that will be used to fill in the tar archive's entry headers. This information is based on that available for the Linux operating system, which is not always available on other operating systems. TarArchive allows the programmer to specify values to be used in their place. ApplyUserInfoOverrides is set to true by this call. The user id to use in the headers. The user name to use in the headers. The group id to use in the headers. The group name to use in the headers. Get or set a value indicating if overrides defined by SetUserInfo should be applied. If overrides are not applied then the values as set in each header will be used. Get the archive user id. See ApplyUserInfoOverrides for detail on how to allow setting values on a per entry basis. The current user id. Get the archive user name. See ApplyUserInfoOverrides for detail on how to allow setting values on a per entry basis. The current user name. Get the archive group id. See ApplyUserInfoOverrides for detail on how to allow setting values on a per entry basis. The current group id. Get the archive group name. See ApplyUserInfoOverrides for detail on how to allow setting values on a per entry basis. The current group name. Get the archive's record size. Tar archives are composed of a series of RECORDS each containing a number of BLOCKS. This allowed tar archives to match the IO characteristics of the physical device being used. Archives are expected to be properly "blocked". The record size this archive is using. Sets the IsStreamOwner property on the underlying stream. Set this to false to prevent the Close of the TarArchive from closing the stream. Close the archive. Perform the "list" command for the archive contents. NOTE: This method uses the progress event to actually list the contents. If the progress display event is not set, nothing will be listed! Perform the "extract" command and extract the contents of the archive. The destination directory into which to extract. Perform the "extract" command and extract the contents of the archive. The destination directory into which to extract. Allow parent directory traversal in file paths (e.g. ../file) Extract an entry from the archive. This method assumes that the tarIn stream has been properly set with a call to GetNextEntry(). The destination directory into which to extract. The TarEntry returned by tarIn.GetNextEntry().
Allow parent directory traversal in file paths (e.g. ../file) Write an entry to the archive. This method will call putNextEntry and then write the contents of the entry, and finally call closeEntry() for entries that are files. For directories, it will call putNextEntry(), and then, if the recurse flag is true, process each entry that is a child of the directory. The TarEntry representing the entry to write to the archive. If true, process the children of directory entries. Write an entry to the archive. This method will call putNextEntry and then write the contents of the entry, and finally call closeEntry() for entries that are files. For directories, it will call putNextEntry(), and then, if the recurse flag is true, process each entry that is a child of the directory. The TarEntry representing the entry to write to the archive. If true, process the children of directory entries. Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources. Releases the unmanaged resources used by the TarArchive and optionally releases the managed resources. true to release both managed and unmanaged resources; false to release only unmanaged resources. Closes the archive and releases any associated resources. Ensures that resources are freed and other cleanup operations are performed when the garbage collector reclaims the . The TarBuffer class implements the tar archive concept of a buffered input stream. This concept goes back to the days of blocked tape drives and special io devices. In the C# universe, the only real function that this class performs is to ensure that files have the correct "record" size, or other tars will complain.

You should never have a need to access this class directly. TarBuffers are created by Tar IO Streams.

The size of a block in a tar archive in bytes. This is 512 bytes. The number of blocks in a default record. The default value is 20 blocks per record. The size in bytes of a default record. The default size is 10KB. Get the record size for this buffer The record size in bytes. This is equal to the BlockFactor multiplied by the BlockSize. Get the TAR Buffer's record size. The record size in bytes. This is equal to the BlockFactor multiplied by the BlockSize. Get the Blocking factor for the buffer This is the number of blocks in each record. Get the TAR Buffer's block factor The block factor; the number of blocks per record. Construct a default TarBuffer Create TarBuffer for reading with default BlockFactor Stream to buffer A new suitable for input. Construct TarBuffer for reading inputStream setting BlockFactor Stream to buffer Blocking factor to apply A new suitable for input. Construct TarBuffer for writing with default BlockFactor output stream for buffer A new suitable for output. Construct TarBuffer for writing Tar output to streams. Output stream to write to. Blocking factor to apply A new suitable for output. Initialization common to all constructors. Determine if an archive block indicates End of Archive. End of archive is indicated by a block that consists entirely of null bytes. All remaining blocks for the record should also be nulls. However some older tars only do a couple of null blocks (Old GNU tar for one) and also partial records The data block to check. Returns true if the block is an EOF block; false otherwise. Determine if an archive block indicates the End of an Archive has been reached. End of archive is indicated by a block that consists entirely of null bytes. All remaining blocks for the record should also be nulls. However some older tars only do a couple of null blocks (Old GNU tar for one) and also partial records The data block to check. Returns true if the block is an EOF block; false otherwise. Skip over a block on the input stream. Read a block from the input stream. The block of data read. Read a record from data stream. false if End-Of-File, else true. Get the current block number, within the current record, zero based. Block numbers are zero based values Gets or sets a flag indicating ownership of underlying stream. When the flag is true will close the underlying stream also. The default value is true. Get the current block number, within the current record, zero based. The current zero based block number. The absolute block number = (record number * block factor) + block number. Get the current record number. The current zero based record number. Get the current record number. The current zero based record number. Write a block of data to the archive. The data to write to the archive. Write an archive record to the archive, where the record may be inside of a larger array buffer. The buffer must be "offset plus record size" long. The buffer containing the record data to write. The offset of the record data within buffer. Write a TarBuffer record to the archive. WriteFinalRecord writes the current record buffer to output if any unwritten data is present. Any trailing bytes are set to zero which is by definition correct behaviour for the end of a tar stream. Close the TarBuffer. If this is an output buffer, also flush the current block before closing. This class represents an entry in a Tar archive. It consists of the entry's header, as well as the entry's File. Entries can be instantiated in one of three ways, depending on how they are to be used.
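The read and write modes of TarArchive described above can be exercised with a short sketch; the file and directory names are placeholders, and a null Encoding keeps the Name fields ASCII only, as noted in the factory method descriptions.

using System.IO;
using ICSharpCode.SharpZipLib.Tar;

class TarArchiveExample
{
    public static void Main()
    {
        // Read mode: extract the contents of an existing archive.
        using (Stream input = File.OpenRead("backup.tar"))
        using (TarArchive archive = TarArchive.CreateInputTarArchive(input, null))
        {
            archive.ExtractContents("extracted");
        }

        // Write mode: create an archive containing a single file entry.
        using (Stream output = File.Create("new.tar"))
        using (TarArchive archive = TarArchive.CreateOutputTarArchive(output))
        {
            TarEntry entry = TarEntry.CreateEntryFromFile("notes.txt");
            archive.WriteEntry(entry, false);
        }
    }
}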

TarEntries that are created from the header bytes read from an archive are instantiated with the TarEntry(byte[]) constructor. These entries will be used when extracting from or listing the contents of an archive. These entries have their header filled in using the header bytes. They also set the File to null, since they reference an archive entry, not a file.

TarEntries that are created from files that are to be written into an archive are instantiated with the CreateEntryFromFile(string) pseudo constructor. These entries have their header filled in using the File's information. They also keep a reference to the File for convenience when writing entries.

Finally, TarEntries can be constructed from nothing but a name. This allows the programmer to construct the entry by hand, for instance when only an InputStream is available for writing to the archive, and the header information is constructed from other information. In this case the header fields are set to defaults and the File is set to null.
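A minimal sketch of the three construction routes just described. The file path and entry names are purely illustrative, and the name-only factory is assumed to be exposed as TarEntry.CreateTarEntry:

    using ICSharpCode.SharpZipLib.Tar;

    class TarEntryConstructionSketch
    {
        static void Demo(byte[] headerBlock)
        {
            // 1. From raw header bytes read out of an archive; File stays null.
            var fromHeader = new TarEntry(headerBlock);

            // 2. From an existing file; the header is filled from the file's details.
            var fromFile = TarEntry.CreateEntryFromFile("readme.txt");

            // 3. From nothing but a name; header fields take default values.
            var byName = TarEntry.CreateTarEntry("docs/manual.txt");

            System.Console.WriteLine($"{fromHeader.Name}, {fromFile.Size}, {byName.IsDirectory}");
        }
    }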

Initialise a default instance of . Construct an entry from an archive's header bytes. File is set to null. The header bytes from a tar archive entry. Construct an entry from an archive's header bytes. File is set to null. The header bytes from a tar archive entry. The used for the Name fields, or null for ASCII only Construct a TarEntry using the header provided Header details for entry Clone this tar entry. Returns a clone of this entry. Construct an entry with only a name. This allows the programmer to construct the entry's header "by hand". The name to use for the entry Returns the newly created Construct an entry for a file. File is set to file, and the header is constructed from information from the file. The file name that the entry represents. Returns the newly created Determine if the two entries are equal. Equality is determined by the header names being equal. The to compare with the current Object. True if the entries are equal; false if not. Derive a Hash value for the current A Hash code for the current Determine if the given entry is a descendant of this entry. Descendancy is determined by the name of the descendant starting with this entry's name. Entry to be checked as a descendant of this. True if entry is a descendant of this. Get this entry's header. This entry's TarHeader. Get/Set this entry's name. Get/set this entry's user id. Get/set this entry's group id. Get/set this entry's user name. Get/set this entry's group name. Convenience method to set this entry's group and user ids. This entry's new user id. This entry's new group id. Convenience method to set this entry's group and user names. This entry's new user name. This entry's new group name. Get/Set the modification time for this entry Get this entry's file. This entry's file. Get/set this entry's recorded file size. Return true if this entry represents a directory, false otherwise True if this entry is a directory. Fill in a TarHeader with information from a File. The TarHeader to fill in. The file from which to get the header information. Get entries for all files present in this entry's directory. If this entry doesn't represent a directory, zero entries are returned. An array of TarEntry's for this entry's children. Write an entry's header information to a header buffer. The tar entry header buffer to fill in. Write an entry's header information to a header buffer. The tar entry header buffer to fill in. The used for the Name fields, or null for ASCII only Convenience method that will modify an entry's name directly in place in an entry header buffer byte array. The buffer containing the entry header to modify. The new name to place into the header buffer. Convenience method that will modify an entry's name directly in place in an entry header buffer byte array. The buffer containing the entry header to modify. The new name to place into the header buffer. The used for the Name fields, or null for ASCII only Fill in a TarHeader given only the entry's name. The TarHeader to fill in. The tar entry name. The name of the file this entry represents or null if the entry is not based on a file. The entry's header information.

TarException represents exceptions specific to Tar classes and code. Initialise a new instance of . Initialise a new instance of with its message string. A that describes the error. Initialise a new instance of . A that describes the error. The that caused this exception. Initializes a new instance of the TarException class with serialized data.
The System.Runtime.Serialization.SerializationInfo that holds the serialized object data about the exception being thrown. The System.Runtime.Serialization.StreamingContext that contains contextual information about the source or destination.

Reads the extended header of a Tar stream Creates a new . Read bytes from Returns the parsed headers as key-value strings

This class encapsulates the Tar Entry Header used in Tar Archives. The class also holds a number of tar constants, used mostly in headers. The tar format and its POSIX successor PAX have a long history which makes for compatibility issues when creating and reading files. This is further complicated by a large number of programs with variations on formats. One common issue is the handling of names longer than 100 characters. GNU style long names are currently supported. This is the ustar (Posix 1003.1) header.

    struct header {
        char t_name[100];     //   0 Filename
        char t_mode[8];       // 100 Permissions
        char t_uid[8];        // 108 Numerical User ID
        char t_gid[8];        // 116 Numerical Group ID
        char t_size[12];      // 124 Filesize
        char t_mtime[12];     // 136 st_mtime
        char t_chksum[8];     // 148 Checksum
        char t_typeflag;      // 156 Type of File
        char t_linkname[100]; // 157 Target of Links
        char t_magic[6];      // 257 "ustar" or other...
        char t_version[2];    // 263 Version fixed to 00
        char t_uname[32];     // 265 User Name
        char t_gname[32];     // 297 Group Name
        char t_devmajor[8];   // 329 Major for devices
        char t_devminor[8];   // 337 Minor for devices
        char t_prefix[155];   // 345 Prefix for t_name
        char t_mfill[12];     // 500 Filler up to 512
    };

The length of the name field in a header buffer. The length of the mode field in a header buffer. The length of the user id field in a header buffer. The length of the group id field in a header buffer. The length of the checksum field in a header buffer. Offset of checksum in a header buffer. The length of the size field in a header buffer. The length of the magic field in a header buffer. The length of the version field in a header buffer. The length of the modification time field in a header buffer. The length of the user name field in a header buffer. The length of the group name field in a header buffer. The length of the devices field in a header buffer. The length of the name prefix field in a header buffer. The "old way" of indicating a normal file. Normal file type. Link file type. Symbolic link file type. Character device file type. Block device file type. Directory file type. FIFO (pipe) file type. Contiguous file type. Posix.1 2001 global extended header Posix.1 2001 extended header Solaris access control list file type GNU dir dump file type. This is a dir entry that contains the names of files that were in the dir at the time the dump was made Solaris Extended Attribute File Inode (metadata only) no file content Identifies the next file on the tape as having a long link name Identifies the next file on the tape as having a long name Continuation of a file that began on another volume For storing filenames that don't fit in the main header (old GNU) GNU Sparse file GNU Tape/volume header ignore on extraction The magic tag representing a POSIX tar archive. (would be written with a trailing NULL) The magic tag representing an old GNU tar archive where version is included in magic and overwrites it Initialise a default TarHeader instance Get/set the name for this tar entry. Thrown when attempting to set the property to null. Get the name of this entry. The entry's name. Get/set the entry's Unix style permission mode. The entry's user id.
This is only directly relevant to unix systems. The default is zero. Get/set the entry's group id. This is only directly relevant to linux/unix systems. The default value is zero. Get/set the entry's size. Thrown when setting the size to less than zero. Get/set the entry's modification time. The modification time is only accurate to within a second. Thrown when setting the date time to less than 1/1/1970. Get the entry's checksum. This is only valid/updated after writing or reading an entry. Get value of true if the header checksum is valid, false otherwise. Get/set the entry's type flag. The entry's link name. Thrown when attempting to set LinkName to null. Get/set the entry's magic tag. Thrown when attempting to set Magic to null. The entry's version. Thrown when attempting to set Version to null. The entry's user name. Get/set the entry's group name. This is only directly relevant to unix systems. Get/set the entry's major device number. Get/set the entry's minor device number. Create a new that is a copy of the current instance. A new that is a copy of the current instance. Parse TarHeader information from a header buffer. The tar entry header buffer to get information from. The used for the Name field, or null for ASCII only Parse TarHeader information from a header buffer. The tar entry header buffer to get information from. 'Write' header information to buffer provided, updating the check sum. output buffer for header information 'Write' header information to buffer provided, updating the check sum. output buffer for header information The used for the Name field, or null for ASCII only Get a hash code for the current object. A hash code for the current object. Determines if this instance is equal to the specified object. The object to compare with. true if the objects are equal, false otherwise. Set defaults for values used when constructing a TarHeader instance. Value to apply as a default for userId. Value to apply as a default for userName. Value to apply as a default for groupId. Value to apply as a default for groupName. Parse an octal string from a header buffer. The header buffer from which to parse. The offset into the buffer from which to parse. The number of header bytes to parse. The long equivalent of the octal string. Parse a name from a header buffer. The header buffer from which to parse. The offset into the buffer from which to parse. The number of header bytes to parse. The name parsed. Parse a name from a header buffer. The header buffer from which to parse. The offset into the buffer from which to parse. The number of header bytes to parse. name encoding, or null for ASCII only The name parsed. 
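As a small illustration of the octal parsing described above, the sketch below pulls the size field out of a raw 512-byte header block, using the t_size offset and length from the ustar layout shown earlier. It assumes the octal parser is exposed as the public static TarHeader.ParseOctal helper:

    using ICSharpCode.SharpZipLib.Tar;

    static class HeaderFieldSketch
    {
        public static long ReadEntrySize(byte[] headerBlock)
        {
            // t_size starts at offset 124 and is 12 bytes long, stored as an octal string.
            return TarHeader.ParseOctal(headerBlock, 124, 12);
        }
    }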
Add name to the buffer as a collection of bytes The name to add The offset of the first character The buffer to add to The index of the first byte to add The number of characters/bytes to add The next free index in the Add name to the buffer as a collection of bytes The name to add The offset of the first character The buffer to add to The index of the first byte to add The number of characters/bytes to add The next free index in the Add name to the buffer as a collection of bytes The name to add The offset of the first character The buffer to add to The index of the first byte to add The number of characters/bytes to add name encoding, or null for ASCII only The next free index in the Add an entry name to the buffer The name to add The buffer to add to The offset into the buffer from which to start adding The number of header bytes to add The index of the next free byte in the buffer TODO: what should be the default behavior? (omit upper byte or UTF8?) Add an entry name to the buffer The name to add The buffer to add to The offset into the buffer from which to start adding The number of header bytes to add The index of the next free byte in the buffer Add an entry name to the buffer The name to add The buffer to add to The offset into the buffer from which to start adding The number of header bytes to add The index of the next free byte in the buffer TODO: what should be the default behavior? (omit upper byte or UTF8?) Add an entry name to the buffer The name to add The buffer to add to The offset into the buffer from which to start adding The number of header bytes to add The index of the next free byte in the buffer Add a string to a buffer as a collection of ascii bytes. The string to add The offset of the first character to add. The buffer to add to. The offset to start adding at. The number of ascii characters to add. The next free index in the buffer. Add a string to a buffer as a collection of ascii bytes. The string to add The offset of the first character to add. The buffer to add to. The offset to start adding at. The number of ascii characters to add. String encoding, or null for ASCII only The next free index in the buffer. Put an octal representation of a value into a buffer the value to be converted to octal buffer to store the octal string The offset into the buffer where the value starts The length of the octal string to create The offset of the next byte after the octal string Put an octal or binary representation of a value into a buffer Value to be converted to octal The buffer to update The offset into the buffer to store the value The length of the octal string. Must be 12. Index of next byte Add the checksum integer to header buffer. The header buffer to set the checksum for The offset into the buffer for the checksum The number of header bytes to update. It's formatted differently from the other fields: it has 6 digits, a null, then a space -- rather than digits, a space, then a null. The final space is already there, from checksumming The modified buffer offset Compute the checksum for a tar entry header. The checksum field must be all spaces prior to this happening The tar entry's header buffer. The computed checksum. Make a checksum for a tar entry ignoring the checksum contents. The tar entry's header buffer. The checksum for the buffer

The TarInputStream reads a UNIX tar archive as an InputStream. Methods are provided to position at each successive entry in the archive, and then read each entry as a normal input stream using read().
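A minimal sketch of reading an archive with TarInputStream, using the GetNextEntry and CopyEntryContents members documented below; the archive path is illustrative only, and depending on the library version a name Encoding argument can also be supplied to the constructor:

    using System;
    using System.IO;
    using ICSharpCode.SharpZipLib.Tar;

    static class TarReadSketch
    {
        public static void ListArchive(string tarPath)
        {
            using (var fileStream = File.OpenRead(tarPath))
            using (var tarIn = new TarInputStream(fileStream))
            {
                TarEntry entry;
                while ((entry = tarIn.GetNextEntry()) != null)
                {
                    Console.WriteLine($"{entry.Name} ({entry.Size} bytes)");
                    if (!entry.IsDirectory)
                    {
                        // Copy the current entry's data somewhere; here it is simply discarded.
                        tarIn.CopyEntryContents(Stream.Null);
                    }
                }
            }
        }
    }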
Construct a TarInputStream with default block factor stream to source data from Construct a TarInputStream with default block factor stream to source data from The used for the Name fields, or null for ASCII only Construct a TarInputStream with user specified block factor stream to source data from block factor to apply to archive Construct a TarInputStream with user specified block factor stream to source data from block factor to apply to archive The used for the Name fields, or null for ASCII only Gets or sets a flag indicating ownership of underlying stream. When the flag is true will close the underlying stream also. The default value is true. Gets a value indicating whether the current stream supports reading Gets a value indicating whether the current stream supports seeking This property always returns false. Gets a value indicating if the stream supports writing. This property always returns false. The length in bytes of the stream Gets or sets the position within the stream. Setting the Position is not supported and throws a NotSupportedException. Any attempt to set position Flushes the baseInputStream Set the streams position. This operation is not supported and will throw a NotSupportedException The offset relative to the origin to seek to. The to start seeking from. The new position in the stream. Any access Sets the length of the stream This operation is not supported and will throw a NotSupportedException The new stream length. Any access Writes a block of bytes to this stream using data from a buffer. This operation is not supported and will throw a NotSupportedException The buffer containing bytes to write. The offset in the buffer of the first byte to write. The number of bytes to write. Any access Writes a byte to the current position in the file stream. This operation is not supported and will throw a NotSupportedException The byte value to write. Any access Reads a byte from the current tar archive entry. A byte cast to an int; -1 if at the end of the stream. Reads bytes from the current tar archive entry. This method is aware of the boundaries of the current entry in the archive and will deal with them appropriately The buffer into which to place bytes read. The offset at which to place bytes read. The number of bytes to read. The number of bytes read, or 0 at end of stream/EOF. Closes this stream. Calls the TarBuffer's close() method. The underlying stream is closed by the TarBuffer. Set the entry factory for this instance. The factory for creating new entries Get the record size being used by this stream's TarBuffer. Get the record size being used by this stream's TarBuffer. TarBuffer record size. Get the available data that can be read from the current entry in the archive. This does not indicate how much data is left in the entire archive, only in the current entry. This value is determined from the entry's size header field and the amount of data already read from the current entry. The number of available bytes for the current entry. Skip bytes in the input buffer. This skips bytes in the current entry's data, not the entire archive, and will stop at the end of the current entry's data if the number to skip extends beyond that point. The number of bytes to skip. Return a value of true if marking is supported; false otherwise. Currently marking is not supported, the return value is always false. Since we do not support marking just yet, we do nothing. The limit to mark. Since we do not support marking just yet, we do nothing.
Get the next entry in this tar archive. This will skip over any remaining data in the current entry, if there is one, and place the input stream at the header of the next entry, and read the header and instantiate a new TarEntry from the header bytes and return that entry. If there are no more entries in the archive, null will be returned to indicate that the end of the archive has been reached. The next TarEntry in the archive, or null. Copies the contents of the current tar archive entry directly into an output stream. The OutputStream into which to write the entry's data. This interface is provided, along with the method , to allow the programmer to have their own subclass instantiated for the entries returned from . Create an entry based on name alone Name of the new entry to create created TarEntry or descendant class Create an instance based on an actual file Name of file to represent in the entry Created TarEntry or descendant class Create a tar entry based on the header information passed Buffer containing header information to create an entry from. Created TarEntry or descendant class Standard entry factory class creating instances of the class TarEntry Construct standard entry factory class with ASCII name encoding Construct standard entry factory with name encoding The used for the Name fields, or null for ASCII only Create a based on named The name to use for the entry A new Create a tar entry with details obtained from file The name of the file to retrieve details from. A new Create an entry based on details in header The buffer containing entry details. A new Flag set when last block has been read Size of this entry as recorded in header Number of bytes read for this entry so far Buffer used with calls to Read() Working buffer Current entry being read Factory used to create TarEntry or descendant class instance Stream used as the source of input data.

The TarOutputStream writes a UNIX tar archive as an OutputStream. Methods are provided to put entries, and then write their contents by writing to this stream using write(). Construct TarOutputStream using default block factor stream to write to Construct TarOutputStream using default block factor stream to write to The used for the Name fields, or null for ASCII only Construct TarOutputStream with user specified block factor stream to write to blocking factor Construct TarOutputStream with user specified block factor stream to write to blocking factor The used for the Name fields, or null for ASCII only Gets or sets a flag indicating ownership of underlying stream. When the flag is true will close the underlying stream also. The default value is true. true if the stream supports reading; otherwise, false. true if the stream supports seeking; otherwise, false. true if stream supports writing; otherwise, false. length of stream in bytes gets or sets the position within the current stream. set the position within the current stream The offset relative to the to seek to The to seek from. The new position in the stream. Set the length of the current stream The new stream length. Read a byte from the stream and advance the position within the stream by one byte or returns -1 if at the end of the stream. The byte value or -1 if at end of stream read bytes from the current stream and advance the position within the stream by the number of bytes read. The buffer to store read bytes in. The index into the buffer to begin storing bytes at. The desired number of bytes to read.
The total number of bytes read, or zero if at the end of the stream. The number of bytes may be less than the count requested if data is not available. All buffered data is written to destination Ends the TAR archive without closing the underlying OutputStream. The result is that the EOF block of nulls is written. Ends the TAR archive and closes the underlying OutputStream. This means that Finish() is called followed by calling the TarBuffer's Close(). Get the record size being used by this stream's TarBuffer. Get the record size being used by this stream's TarBuffer. The TarBuffer record size. Get a value indicating whether an entry is open, requiring more data to be written. Put an entry on the output stream. This writes the entry's header and positions the output stream for writing the contents of the entry. Once this method is called, the stream is ready for calls to write() to write the entry's contents. Once the contents are written, closeEntry() MUST be called to ensure that all buffered data is completely written to the output stream. The TarEntry to be written to the archive. Close an entry. This method MUST be called for all file entries that contain data. The reason is that we must buffer data written to the stream in order to satisfy the buffer's block based writes. Thus, there may be data fragments still being assembled that must be written to the output stream before this entry is closed and the next entry written. Writes a byte to the current tar archive entry. This method simply calls Write(byte[], int, int). The byte to be written. Writes bytes to the current tar archive entry. This method is aware of the current entry and will throw an exception if you attempt to write bytes past the length specified for the current entry. The method is also (painfully) aware of the record buffering required by TarBuffer, and manages buffers that are not a multiple of recordsize in length, including assembling records from small buffers. The buffer to write to the archive. The offset in the buffer from which to get bytes. The number of bytes to write. Write an EOF (end of archive) block to the tar archive. The end of the archive is indicated by two blocks consisting entirely of zero bytes. bytes written for this entry so far current 'Assembly' buffer length Flag indicating whether this instance has been closed or not. Size for the current entry single block working buffer 'Assembly' buffer used to assemble data before writing TarBuffer used to provide correct blocking factor the destination stream for the archive contents name encoding This is the Deflater class. The deflater class compresses input with the deflate algorithm described in RFC 1951. It has several compression levels and three different strategies described below. This class is not thread safe. This is inherent in the API, due to the split of deflate and setInput. author of the original java version : Jochen Hoenicke The best and slowest compression level. This tries to find very long and distant string repetitions. The worst but fastest compression level. The default compression level. This level won't compress at all but output uncompressed blocks. The compression method. This is the only method supported so far. There is no need to use this constant at all. Compression Level as an enum for safer use The best and slowest compression level. This tries to find very long and distant string repetitions. The worst but fastest compression level. The default compression level. 
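Before continuing with the Deflater details, here is a minimal sketch of the PutNextEntry / Write / CloseEntry sequence described above for TarOutputStream. The archive path and entry name are illustrative, and the name-only entry factory is assumed to be TarEntry.CreateTarEntry:

    using System.IO;
    using System.Text;
    using ICSharpCode.SharpZipLib.Tar;

    static class TarWriteSketch
    {
        public static void WriteArchive(string tarPath)
        {
            byte[] data = Encoding.UTF8.GetBytes("hello tar");

            using (var fileStream = File.Create(tarPath))
            using (var tarOut = new TarOutputStream(fileStream))
            {
                // The entry's Size must match the bytes written between
                // PutNextEntry and CloseEntry.
                var entry = TarEntry.CreateTarEntry("hello.txt");
                entry.Size = data.Length;

                tarOut.PutNextEntry(entry);
                tarOut.Write(data, 0, data.Length);
                tarOut.CloseEntry();

                // Finish writes the end-of-archive blocks; Close/Dispose calls it as well.
                tarOut.Finish();
            }
        }
    }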
This level won't compress at all but output uncompressed blocks. The compression method. This is the only method supported so far. There is no need to use this constant at all. Creates a new deflater with default compression level. Creates a new deflater with given compression level. the compression level, a value between NO_COMPRESSION and BEST_COMPRESSION, or DEFAULT_COMPRESSION. if lvl is out of range. Creates a new deflater with given compression level. the compression level, a value between NO_COMPRESSION and BEST_COMPRESSION. true, if we should suppress the Zlib/RFC1950 header at the beginning and the adler checksum at the end of the output. This is useful for the GZIP/PKZIP formats. if lvl is out of range. Resets the deflater. The deflater acts afterwards as if it was just created with the same compression level and strategy as it had before. Gets the current adler checksum of the data that was processed so far. Gets the number of input bytes processed so far. Gets the number of output bytes so far. Flushes the current input block. Further calls to deflate() will produce enough output to inflate everything in the current input block. This is not part of Sun's JDK so I have made it package private. It is used by DeflaterOutputStream to implement flush(). Finishes the deflater with the current input block. It is an error to give more input after this method was called. This method must be called to force all bytes to be flushed. Returns true if the stream was finished and no more output bytes are available. Returns true, if the input buffer is empty. You should then call setInput(). NOTE: This method can also return true when the stream was finished. Sets the data which should be compressed next. This should be only called when needsInput indicates that more input is needed. If you call setInput when needsInput() returns false, the previous input that is still pending will be thrown away. The given byte array should not be changed, before needsInput() returns true again. This call is equivalent to setInput(input, 0, input.length). the buffer containing the input data. if the buffer was finished() or ended(). Sets the data which should be compressed next. This should be only called when needsInput indicates that more input is needed. The given byte array should not be changed, before needsInput() returns true again. the buffer containing the input data. the start of the data. the number of data bytes of input. if the buffer was Finish()ed or if previous input is still pending. Sets the compression level. There is no guarantee of the exact position of the change, but if you call this when needsInput is true the change of compression level will occur somewhere near before the end of the so far given input. the new compression level. Get current compression level Returns the current compression level Sets the compression strategy. Strategy is one of DEFAULT_STRATEGY, HUFFMAN_ONLY and FILTERED. For the exact position where the strategy is changed, the same as for SetLevel() applies. The new compression strategy. Deflates the current input block with to the given array. The buffer where compressed data is stored The number of compressed bytes added to the output, or 0 if either IsNeedingInput() or IsFinished returns true or length is zero. Deflates the current input block to the given array. Buffer to store the compressed data. Offset into the output array. The maximum number of bytes that may be stored. 
The number of compressed bytes added to the output, or 0 if either needsInput() or finished() returns true or length is zero. If Finish() was previously called. If offset or length don't match the array length. Sets the dictionary which should be used in the deflate process. This call is equivalent to setDictionary(dict, 0, dict.Length). the dictionary. if SetInput () or Deflate () were already called or another dictionary was already set. Sets the dictionary which should be used in the deflate process. The dictionary is a byte array containing strings that are likely to occur in the data which should be compressed. The dictionary is not stored in the compressed output, only a checksum. To decompress the output you need to supply the same dictionary again. The dictionary data The index where dictionary information commences. The number of bytes in the dictionary. If SetInput () or Deflate() were already called or another dictionary was already set. Compression level. If true no Zlib/RFC1950 headers or footers are generated The current state. The total bytes of output written. The pending output. The deflater engine. This class contains constants used for deflation. Set to true to enable debugging Written to Zip file to identify a stored block Identifies static tree in Zip file Identifies dynamic tree in Zip file Header flag indicating a preset dictionary for deflation Sets internal buffer sizes for Huffman encoding Internal compression engine constant Internal compression engine constant Internal compression engine constant Internal compression engine constant Internal compression engine constant Internal compression engine constant Internal compression engine constant Internal compression engine constant Internal compression engine constant Internal compression engine constant Internal compression engine constant Internal compression engine constant Internal compression engine constant Internal compression engine constant Internal compression engine constant Internal compression engine constant Internal compression engine constant Internal compression engine constant Internal compression engine constant Internal compression engine constant Internal compression engine constant Strategies for deflater The default strategy This strategy will only allow longer string repetitions. It is useful for random data with a small character set. This strategy will not look for string repetitions at all. It only encodes with Huffman trees (which means, that more common characters get a smaller encoding. Low level compression engine for deflate algorithm which uses a 32K sliding window with secondary compression from Huffman/Shannon-Fano codes. Construct instance with pending buffer Adler calculation will be performed Pending buffer to use Construct instance with pending buffer Pending buffer to use If no adler calculation should be performed Deflate drives actual compression of data True to flush input buffers Finish deflation with the current input. Returns true if progress has been made. Sets input data to be deflated. Should only be called when NeedsInput() returns true The buffer containing input data. The offset of the first byte of data. The number of bytes of data to use as input. Determines if more input is needed. Return true if input is needed via SetInput Set compression dictionary The buffer containing the dictionary data The offset in the buffer for the first byte of data The length of the dictionary data. 
Reset internal state Reset Adler checksum Get current value of Adler checksum Total data processed Get/set the deflate strategy Set the deflate level (0-9) The value to set the level to. Fill the window Inserts the current string in the head hash and returns the previous value for this hash. The previous hash value Find the best (longest) string in the window matching the string starting at strstart. Preconditions: strstart + DeflaterConstants.MAX_MATCH <= window.length. True if a match greater than the minimum length is found Hashtable, hashing three characters to an index for window, so that window[index]..window[index+2] have this hash code. Note that the array should really be unsigned short, so you need to and the values with 0xffff. prev[index & WMASK] points to the previous index that has the same hash code as the string starting at index. This way entries with the same hash code are in a linked list. Note that the array should really be unsigned short, so you need to and the values with 0xffff. Points to the current character in the window. lookahead is the number of characters starting at strstart in window that are valid. So window[strstart] until window[strstart+lookahead-1] are valid characters. This array contains the part of the uncompressed stream that is of relevance. The current character is indexed by strstart. The current compression function. The input data for compression. The total bytes of input read. The offset into inputBuf, where input data starts. The end offset of the input data. The adler checksum This is the DeflaterHuffman class. This class is not thread safe. This is inherent in the API, due to the split of Deflate and SetInput. author of the original java version : Jochen Hoenicke Resets the internal state of the tree Check that all frequencies are zero At least one frequency is non-zero Set static codes and length new codes length for new codes Build dynamic codes and lengths Get encoded length Encoded length, the sum of frequencies * lengths Scan a literal or distance tree to determine the frequencies of the codes in the bit length tree. Write tree values Tree to write Pending buffer to use Construct instance with pending buffer Pending buffer to use Reset internal state Write all trees to pending buffer The number/rank of treecodes to send. Compress current buffer writing data to pending buffer Flush block to output with no compression Data to write Index of first byte to write Count of bytes to write True if this is the last block Flush block to output with compression Data to flush Index of first byte to flush Count of bytes to flush True if this is the last block Get value indicating if internal buffer is full true if buffer is full Add literal to buffer Literal value to add to buffer. Value indicating internal buffer is full Add distance code and length to literal and distance trees Distance code Length Value indicating if internal buffer is full Reverse the bits of a 16 bit value. Value to reverse bits Value with bits reversed This class stores the pending output of the Deflater. author of the original java version : Jochen Hoenicke Construct instance with default buffer size Inflater is used to decompress data that has been compressed according to the "deflate" standard described in rfc1951. By default Zlib (rfc1950) headers and footers are expected in the input. You can use constructor public Inflater(bool noHeader) passing true if there is no Zlib header information The usage is as following. 
First you have to set some input with SetInput(), then Inflate() it. If inflate doesn't inflate any bytes there may be three reasons:
  • IsNeedingInput() returns true because the input buffer is empty. You have to provide more input with SetInput(). NOTE: IsNeedingInput() also returns true when the stream is finished.
  • IsNeedingDictionary() returns true, you have to provide a preset dictionary with SetDictionary().
  • IsFinished returns true, the inflater has finished.
Once the first output byte is produced, a dictionary will not be needed at a later stage. author of the original java version : John Leuner, Jochen Hoenicke
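To make the usage above concrete, here is a minimal sketch that compresses a byte array with Deflater and decompresses it again with Inflater; the buffer sizes are arbitrary and error handling is kept to the bare minimum:

    using System.IO;
    using ICSharpCode.SharpZipLib.Zip.Compression;

    static class DeflateRoundTripSketch
    {
        public static byte[] Compress(byte[] input)
        {
            var deflater = new Deflater(Deflater.BEST_COMPRESSION);
            deflater.SetInput(input);
            deflater.Finish();                      // no more input will follow

            using (var result = new MemoryStream())
            {
                var buffer = new byte[1024];
                while (!deflater.IsFinished)
                {
                    int produced = deflater.Deflate(buffer);
                    result.Write(buffer, 0, produced);
                }
                return result.ToArray();
            }
        }

        public static byte[] Decompress(byte[] compressed)
        {
            var inflater = new Inflater();          // expects zlib header and footer
            inflater.SetInput(compressed);

            using (var result = new MemoryStream())
            {
                var buffer = new byte[1024];
                while (!inflater.IsFinished)
                {
                    int produced = inflater.Inflate(buffer);
                    if (produced == 0 && (inflater.IsNeedingInput || inflater.IsNeedingDictionary))
                        break;                      // truncated input or missing dictionary
                    result.Write(buffer, 0, produced);
                }
                return result.ToArray();
            }
        }
    }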
Copy lengths for literal codes 257..285 Extra bits for literal codes 257..285 Copy offsets for distance codes 0..29 Extra bits for distance codes These are the possible states for an inflater This variable contains the current state. The adler checksum of the dictionary or of the decompressed stream, as it is written in the header resp. footer of the compressed stream. Only valid if mode is DECODE_DICT or DECODE_CHKSUM. The number of bits needed to complete the current state. This is valid, if mode is DECODE_DICT, DECODE_CHKSUM, DECODE_HUFFMAN_LENBITS or DECODE_HUFFMAN_DISTBITS. True, if the last block flag was set in the last block of the inflated stream. This means that the stream ends after the current block. The total number of inflated bytes. The total number of bytes set with setInput(). This is not the value returned by the TotalIn property, since this also includes the unprocessed input. This variable stores the noHeader flag that was given to the constructor. True means, that the inflated stream doesn't contain a Zlib header or footer. Creates a new inflater or RFC1951 decompressor RFC1950/Zlib headers and footers will be expected in the input data Creates a new inflater. True if no RFC1950/Zlib header and footer fields are expected in the input data This is used for GZIPed/Zipped input. For compatibility with Sun JDK you should provide one byte of input more than needed in this case. Resets the inflater so that a new stream can be decompressed. All pending input and output will be discarded. Decodes a zlib/RFC1950 header. False if more input is needed. The header is invalid. Decodes the dictionary checksum after the deflate header. False if more input is needed. Decodes the huffman encoded symbols in the input stream. false if more input is needed, true if output window is full or the current block ends. if deflated stream is invalid. Decodes the adler checksum after the deflate stream. false if more input is needed. If checksum doesn't match. Decodes the deflated stream. false if more input is needed, or if finished. if deflated stream is invalid. Sets the preset dictionary. This should only be called, if needsDictionary() returns true and it should set the same dictionary, that was used for deflating. The getAdler() function returns the checksum of the dictionary needed. The dictionary. Sets the preset dictionary. This should only be called, if needsDictionary() returns true and it should set the same dictionary, that was used for deflating. The getAdler() function returns the checksum of the dictionary needed. The dictionary. The index into buffer where the dictionary starts. The number of bytes in the dictionary. No dictionary is needed. The adler checksum for the buffer is invalid Sets the input. This should only be called, if needsInput() returns true. the input. Sets the input. This should only be called, if needsInput() returns true. The source of input data The index into buffer where the input starts. The number of bytes of input to use. No input is needed. The index and/or count are wrong. Inflates the compressed stream to the output buffer. If this returns 0, you should check, whether IsNeedingDictionary(), IsNeedingInput() or IsFinished() returns true, to determine why no further output is produced. the output buffer. The number of bytes written to the buffer, 0 if no further output can be produced. if buffer has length 0. if deflated stream is invalid. Inflates the compressed stream to the output buffer. 
If this returns 0, you should check, whether needsDictionary(), needsInput() or finished() returns true, to determine why no further output is produced. the output buffer. the offset in buffer where storing starts. the maximum number of bytes to output. the number of bytes written to the buffer, 0 if no further output can be produced. if count is less than 0. if the index and / or count are wrong. if deflated stream is invalid. Returns true, if the input buffer is empty. You should then call setInput(). NOTE: This method also returns true when the stream is finished. Returns true, if a preset dictionary is needed to inflate the input. Returns true, if the inflater has finished. This means, that no input is needed and no output can be produced. Gets the adler checksum. This is either the checksum of all uncompressed bytes returned by inflate(), or if needsDictionary() returns true (and thus no output was yet produced) this is the adler checksum of the expected dictionary. the adler checksum. Gets the total number of output bytes returned by Inflate(). the total number of output bytes. Gets the total number of processed compressed input bytes. The total number of bytes of processed input bytes. Gets the number of unprocessed input bytes. Useful, if the end of the stream is reached and you want to further process the bytes after the deflate stream. The number of bytes of the input which have not been processed. Continue decoding header from until more bits are needed or decoding has been completed Returns whether decoding could be completed Get literal/length huffman tree, must not be used before has returned true If header has not been successfully read by the state machine Get distance huffman tree, must not be used before has returned true If header has not been successfully read by the state machine Huffman tree used for inflation Literal length tree Distance tree Constructs a Huffman tree from the array of code lengths. the array of code lengths Reads the next symbol from input. The symbol is encoded using the huffman tree. input the input source. the next symbol, or -1 if not enough input is available.

This class is a general purpose class for writing data to a buffer. It allows you to write bits as well as bytes. Based on DeflaterPending.java author of the original java version : Jochen Hoenicke Internal work buffer construct instance using default buffer size of 4096 construct instance using specified buffer size size to use for internal buffer Clear internal state/buffers Write a byte to buffer The value to write Write a short value to buffer LSB first The value to write. write an integer LSB first The value to write. Write a block of data to buffer data to write offset of first byte to write number of bytes to write The number of bits written to the buffer Align internal buffer on a byte boundary Write bits to internal buffer source of bits number of bits to write Write a short value to internal buffer most significant byte first value to write Indicates if buffer has been flushed Flushes the pending buffer into the given output array. If the output array is too small, only a partial flush is done. The output array. The offset into output array. The maximum number of bytes to store. The number of bytes flushed. Convert internal buffer to byte array. Buffer is empty on completion The internal buffer contents converted to a byte array.

A special stream deflating or compressing the bytes that are written to it. It uses a Deflater to perform actual deflating.
Authors of the original java version : Tom Tromey, Jochen Hoenicke
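A minimal sketch of the stream-level equivalent: compressing text through a DeflaterOutputStream and reading it back through the InflaterInputStream described further below. IsStreamOwner is set to false here only so the MemoryStream stays usable after the compressing stream is disposed:

    using System.IO;
    using System.Text;
    using ICSharpCode.SharpZipLib.Zip.Compression.Streams;

    static class DeflateStreamSketch
    {
        public static byte[] CompressText(string text)
        {
            var compressed = new MemoryStream();
            using (var deflateOut = new DeflaterOutputStream(compressed))
            {
                deflateOut.IsStreamOwner = false;   // keep 'compressed' open after dispose
                byte[] raw = Encoding.UTF8.GetBytes(text);
                deflateOut.Write(raw, 0, raw.Length);
                // Disposing calls Finish(), flushing any remaining deflated output.
            }
            return compressed.ToArray();
        }

        public static string DecompressText(byte[] compressed)
        {
            using (var inflateIn = new InflaterInputStream(new MemoryStream(compressed)))
            using (var reader = new StreamReader(inflateIn, Encoding.UTF8))
            {
                return reader.ReadToEnd();
            }
        }
    }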
Creates a new DeflaterOutputStream with a default Deflater and default buffer size. the output stream where deflated output should be written. Creates a new DeflaterOutputStream with the given Deflater and default buffer size. the output stream where deflated output should be written. the underlying deflater. Creates a new DeflaterOutputStream with the given Deflater and buffer size. The output stream where deflated output is written. The underlying deflater to use The buffer size in bytes to use when deflating (minimum value 512) bufsize is less than or equal to zero. baseOutputStream does not support writing deflater instance is null Finishes the stream by calling finish() on the deflater. Not all input is deflated Gets or sets a flag indicating ownership of underlying stream. When the flag is true will close the underlying stream also. The default value is true. Allows client to determine if an entry can be patched after its added The CryptoTransform currently being used to encrypt the compressed data. Returns the 10 byte AUTH CODE to be appended immediately following the AES data stream. Encrypt a block of data Data to encrypt. NOTE the original contents of the buffer are lost Offset of first byte in buffer to encrypt Number of bytes in buffer to encrypt Deflates everything in the input buffers. This will call def.deflate() until all bytes from the input buffers are processed. Gets value indicating stream can be read from Gets a value indicating if seeking is supported for this stream This property always returns false Get value indicating if this stream supports writing Get current length of stream Gets the current position within the stream. Any attempt to set position Sets the current position of this stream to the given value. Not supported by this class! The offset relative to the to seek. The to seek from. The new position in the stream. Any access Sets the length of this stream to the given value. Not supported by this class! The new stream length. Any access Read a byte from stream advancing position by one The byte read cast to an int. THe value is -1 if at the end of the stream. Any access Read a block of bytes from stream The buffer to store read data in. The offset to start storing at. The maximum number of bytes to read. The actual number of bytes read. Zero if end of stream is detected. Any access Flushes the stream by calling Flush on the deflater and then on the underlying stream. This ensures that all bytes are flushed. Calls and closes the underlying stream when is true. Get the Auth code for AES encrypted entries Writes a single byte to the compressed output stream. The byte value. Writes bytes from an array to the compressed stream. The byte array The offset into the byte array where to start. The number of bytes to write. This buffer is used temporarily to retrieve the bytes from the deflater and write them to the underlying output stream. The deflater which is used to deflate the stream. Base stream the deflater depends on. An input buffer customised for use by The buffer supports decryption of incoming data. Initialise a new instance of with a default buffer size The stream to buffer. Initialise a new instance of The stream to buffer. The size to use for the buffer A minimum buffer size of 1KB is permitted. Lower sizes are treated as 1KB. Get the length of bytes in the Get the contents of the raw data buffer. This may contain encrypted data. Get the number of useable bytes in Get the contents of the clear text buffer. 
Get/set the number of bytes available Call passing the current clear text buffer contents. The inflater to set input for. Fill the buffer from the underlying input stream. Read a buffer directly from the input stream The buffer to fill Returns the number of bytes read. Read a buffer directly from the input stream The buffer to read into The offset to start reading data into. The number of bytes to read. Returns the number of bytes read. Read clear text data from the input stream. The buffer to add data to. The offset to start adding data at. The number of bytes to read. Returns the number of bytes actually read. Read a from the input stream. Returns the byte read. Read an in little endian byte order. The short value read case to an int. Read an in little endian byte order. The int value read. Read a in little endian byte order. The long value read. Get/set the to apply to any data. Set this value to null to have no transform applied. This filter stream is used to decompress data compressed using the "deflate" format. The "deflate" format is described in RFC 1951. This stream may form the basis for other decompression filters, such as the GZipInputStream. Author of the original java version : John Leuner. Create an InflaterInputStream with the default decompressor and a default buffer size of 4KB. The InputStream to read bytes from Create an InflaterInputStream with the specified decompressor and a default buffer size of 4KB. The source of input data The decompressor used to decompress data read from baseInputStream Create an InflaterInputStream with the specified decompressor and the specified buffer size. The InputStream to read bytes from The decompressor to use Size of the buffer to use Gets or sets a flag indicating ownership of underlying stream. When the flag is true will close the underlying stream also. The default value is true. Skip specified number of bytes of uncompressed data Number of bytes to skip The number of bytes skipped, zero if the end of stream has been reached The number of bytes to skip is less than or equal to zero. Clear any cryptographic state. Returns 0 once the end of the stream (EOF) has been reached. Otherwise returns 1. Fills the buffer with more data to decompress. Stream ends early Gets a value indicating whether the current stream supports reading Gets a value of false indicating seeking is not supported for this stream. Gets a value of false indicating that this stream is not writeable. A value representing the length of the stream in bytes. The current position within the stream. Throws a NotSupportedException when attempting to set the position Attempting to set the position Flushes the baseInputStream Sets the position within the current stream Always throws a NotSupportedException The relative offset to seek to. The defining where to seek from. The new position in the stream. Any access Set the length of the current stream Always throws a NotSupportedException The new length value for the stream. Any access Writes a sequence of bytes to stream and advances the current position This method always throws a NotSupportedException The buffer containing data to write. The offset of the first byte to write. The number of bytes to write. Any access Writes one byte to the current stream and advances the current position Always throws a NotSupportedException The byte to write. Any access Closes the input stream. When is true the underlying stream is also closed. 
Reads decompressed data into the provided buffer byte array The array to read and decompress data into The offset indicating where the data should be placed The number of bytes to decompress The number of bytes read. Zero signals the end of stream Inflater needs a dictionary Decompressor for this stream Input buffer for this stream. Base stream the inflater reads from. The compressed size Flag indicating whether this instance has been closed or not. Contains the output from the Inflation process. We need to have a window so that we can refer backwards into the output stream to repeat stuff.
Author of the original java version : John Leuner
Write a byte to this output window value to write if window is full Append a byte pattern already in the window itself length of pattern to copy distance from end of window pattern occurs If the repeated data overflows the window Copy from input manipulator to internal window source of data length of data to copy the number of bytes copied Copy dictionary to window source dictionary offset of start in source dictionary length of dictionary If window isn't empty Get remaining unfilled space in window Number of bytes left in window Get bytes available for output in window Number of bytes filled Copy contents of window to output buffer to copy to offset to start at number of bytes to count The number of bytes copied If a window underflow occurs Reset by clearing window so GetAvailable returns 0

This class allows us to retrieve a specified number of bits from the input buffer, as well as copy big byte blocks. It uses an int buffer to store up to 31 bits for direct manipulation. This guarantees that we can get at least 16 bits, but we only need at most 15, so this is all safe. There are some optimizations in this class, for example, you must never peek more than 8 bits more than needed, and you must first peek bits before you may drop them. This is not a general purpose class but optimized for the behaviour of the Inflater. authors of the original java version : John Leuner, Jochen Hoenicke Get the next sequence of bits but don't increase input pointer. bitCount must be less or equal 16 and if this call succeeds, you must drop at least n - 8 bits in the next call. The number of bits to peek. the value of the bits, or -1 if not enough bits available. Tries to grab the next bits from the input and sets to the value, adding . true if enough bits could be read, otherwise false Tries to grab the next bits from the input and sets of to the value. true if enough bits could be read, otherwise false Drops the next n bits from the input. You should have called PeekBits with a bigger or equal n before, to make sure that enough bits are in the bit buffer. The number of bits to drop. Gets the next n bits and increases input pointer. This is equivalent to followed by , except for correct error handling. The number of bits to retrieve. the value of the bits, or -1 if not enough bits available. Gets the number of bits available in the bit buffer. This must be only called when a previous PeekBits() returned -1. the number of bits available. Gets the number of bytes available. The number of bytes available. Skips to the next byte boundary. Returns true when SetInput can be called Copies bytes from input buffer to output buffer starting at output[offset]. You have to make sure, that the buffer is byte aligned. If not enough bytes are available, copies fewer bytes. The buffer to copy bytes to. The offset in the buffer at which copying starts The length to copy, 0 is allowed. The number of bytes copied, 0 if no bytes were available. Length is less than zero Bit buffer isn't byte aligned Resets state and empties internal buffers Add more input for consumption. Only call when IsNeedingInput returns true data to be input offset of first byte of input number of bytes of input to add.

FastZipEvents supports all events applicable to FastZip operations. Delegate to invoke when processing directories. Delegate to invoke when processing files. Delegate to invoke during processing of files. Delegate to invoke when processing for a file has been completed. Delegate to invoke when processing directory failures.
Delegate to invoke when processing file failures. Raise the directory failure event. The directory causing the failure. The exception for this event. A boolean indicating if execution should continue or not. Fires the file failure handler delegate. The file causing the failure. The exception for this failure. A boolean indicating if execution should continue or not. Fires the ProcessFile delegate. The file being processed. A boolean indicating if execution should continue or not. Fires the delegate The file whose processing has been completed. A boolean indicating if execution should continue or not. Fires the process directory delegate. The directory being processed. Flag indicating if the directory has matching files as determined by the current filter. A of true if the operation should continue; false otherwise. The minimum timespan between events. The minimum period of time between events. The default interval is three seconds.

FastZip provides facilities for creating and extracting zip files. Defines the desired handling when overwriting files during extraction. Prompt the user to confirm overwriting Never overwrite files. Always overwrite files. Initialise a default instance of . Initialise a new instance of using the specified The time setting to use when creating or extracting Zip entries. Using TimeSetting.LastAccessTime[Utc] when creating an archive will set the file time to the moment of reading. Initialise a new instance of using the specified The time to set all values for created or extracted Zip Entries. Initialise a new instance of The events to use during operations. Get/set a value indicating whether empty directories should be created. Get / set the password value. Get / set the method of encrypting entries. Only applies when is set. Defaults to ZipCrypto for backwards compatibility purposes. Get or set the active when creating Zip files. Get or set the active when creating Zip files. Gets or sets the setting for Zip64 handling when writing. The default value is dynamic which is not backwards compatible with old programs and can cause problems with XP's built in compression which can't read Zip64 archives. However it does avoid the situation where a large file is added and cannot be completed correctly. NOTE: Setting the size for entries before they are added is the best solution! By default the EntryFactory used by FastZip will set the file size. Get/set a value indicating whether file dates and times should be restored when extracting files from an archive. The default value is false. Get/set a value indicating whether file attributes should be restored during extract operations Get/set the Compression Level that will be used when creating the zip Delegate called when confirming overwriting of files. Create a zip file. The name of the zip file to create. The directory to source files from. True to recurse directories, false for no recursion. The file filter to apply. The directory filter to apply. Create a zip file/archive. The name of the zip file to create. The directory to obtain files and directories from. True to recurse directories, false for no recursion. The file filter to apply. Create a zip archive sending output to the passed. The stream to write archive data to. The directory to source files from. True to recurse directories, false for no recursion. The file filter to apply. The directory filter to apply. The is closed after creation. Create a zip archive sending output to the passed. The stream to write archive data to. The directory to source files from.
True to recurse directories, false for no recursion. The file filter to apply. The directory filter to apply. true to leave open after the zip has been created, false to dispose it. Create a zip file. The name of the zip file to create. The directory to source files from. True to recurse directories, false for no recursion. The file filter to apply. The directory filter to apply. Create a zip archive sending output to the passed. The stream to write archive data to. The directory to source files from. True to recurse directories, false for no recursion. The file filter to apply. The directory filter to apply. true to leave open after the zip has been created, false to dispose it. Create a zip archive sending output to the passed. The stream to write archive data to. The directory to source files from. True to recurse directories, false for no recursion. For performing the actual file system scan true to leave open after the zip has been created, false to dispose it. The is closed after creation. Extract the contents of a zip file. The zip file to extract from. The directory to save extracted information in. A filter to apply to files. Extract the contents of a zip file. The zip file to extract from. The directory to save extracted information in. The style of overwriting to apply. A delegate to invoke when confirming overwriting. A filter to apply to files. A filter to apply to directories. Flag indicating whether to restore the date and time for extracted files. Allow parent directory traversal in file paths (e.g. ../file) Extract the contents of a zip file held in a stream. The seekable input stream containing the zip to extract from. The directory to save extracted information in. The style of overwriting to apply. A delegate to invoke when confirming overwriting. A filter to apply to files. A filter to apply to directories. Flag indicating whether to restore the date and time for extracted files. Flag indicating whether the inputStream will be closed by this method. Allow parent directory traversal in file paths (e.g. ../file) Defines factory methods for creating new values. Create a for a file given its name The name of the file to create an entry for. Returns a file entry based on the passed. Create a for a file given its name The name of the file to create an entry for. If true get details from the file system if the file exists. Returns a file entry based on the passed. Create a for a file given its actual name and optional override name The name of the file to create an entry for. An alternative name to be used for the new entry. Null if not applicable. If true get details from the file system if the file exists. Returns a file entry based on the passed. Create a for a directory given its name The name of the directory to create an entry for. Returns a directory entry based on the passed. Create a for a directory given its name The name of the directory to create an entry for. If true get details from the file system for this directory if it exists. Returns a directory entry based on the passed. Get/set the applicable. Get the in use. Get the value to use when is set to , or if not specified, the value of when the class was initialized. WindowsNameTransform transforms names to windows compatible ones. The maximum windows path name permitted. This may not be valid for all windows systems - CE?, etc. but I can't find the equivalent in the CLR. In this case we need Windows' invalid path characters. Path.GetInvalidPathChars() only returns a subset invalid on all platforms. 
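To tie together the FastZip members described above, here is a minimal sketch of creating and then extracting an archive. The paths are illustrative assumptions, and a null file filter is assumed to mean "match everything".

using ICSharpCode.SharpZipLib.Zip;

class MainClass
{
    public static void Main(string[] args)
    {
        FastZip fastZip = new FastZip();
        fastZip.CreateEmptyDirectories = true;

        // Recursively zip a directory tree; the null file filter matches every file.
        fastZip.CreateZip(@"C:\temp\backup.zip", @"C:\data", true, null);

        // Extract the archive somewhere else, again with no file filter.
        fastZip.ExtractZip(@"C:\temp\backup.zip", @"C:\restore", null);
    }
}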
Initialises a new instance of Allow parent directory traversal in file paths (e.g. ../file) Initialise a default instance of Gets or sets a value containing the target directory to prefix values with. Allow parent directory traversal in file paths (e.g. ../file) Gets or sets a value indicating whether paths on incoming values should be removed. Transform a Zip directory name to a windows directory name. The directory name to transform. The transformed name. Transform a Zip format file name to a windows style one. The file name to transform. The transformed name. Test a name to see if it is a valid name for a windows filename as extracted from a Zip archive. The name to test. Returns true if the name is a valid zip name; false otherwise. The filename isnt a true windows path in some fundamental ways like no absolute paths, no rooted paths etc. Force a name to be valid by replacing invalid characters with a fixed value The name to make valid The replacement character to use for any invalid characters. Returns a valid name Gets or set the character to replace invalid characters during transformations. Determines how entries are tested to see if they should use Zip64 extensions or not. Zip64 will not be forced on entries during processing. An entry can have this overridden if required Zip64 should always be used. #ZipLib will determine use based on entry values when added to archive. The kind of compression used for an entry in an archive A direct copy of the file contents is held in the archive Common Zip compression method using a sliding dictionary of up to 32KB and secondary compression from Huffman/Shannon-Fano trees An extension to deflate with a 64KB window. Not supported by #Zip currently BZip2 compression. Not supported by #Zip. LZMA compression. Not supported by #Zip. PPMd compression. Not supported by #Zip. WinZip special for AES encryption, Now supported by #Zip. Identifies the encryption algorithm used for an entry No encryption has been used. Encrypted using PKZIP 2.0 or 'classic' encryption. DES encryption has been used. RC2 encryption has been used for encryption. Triple DES encryption with 168 bit keys has been used for this entry. Triple DES with 112 bit keys has been used for this entry. AES 128 has been used for encryption. AES 192 has been used for encryption. AES 256 has been used for encryption. RC2 corrected has been used for encryption. Blowfish has been used for encryption. Twofish has been used for encryption. RC4 has been used for encryption. An unknown algorithm has been used for encryption. Defines the contents of the general bit flags field for an archive entry. Bit 0 if set indicates that the file is encrypted Bits 1 and 2 - Two bits defining the compression method (only for Method 6 Imploding and 8,9 Deflating) Bit 3 if set indicates a trailing data descriptor is appended to the entry data Bit 4 is reserved for use with method 8 for enhanced deflation Bit 5 if set indicates the file contains Pkzip compressed patched data. Requires version 2.7 or greater. Bit 6 if set indicates strong encryption has been used for this entry. Bit 7 is currently unused Bit 8 is currently unused Bit 9 is currently unused Bit 10 is currently unused Bit 11 if set indicates the filename and comment fields for this file must be encoded using UTF-8. Bit 12 is documented as being reserved by PKware for enhanced compression. Bit 13 if set indicates that values in the local header are masked to hide their actual values, and the central directory is encrypted. 
Used when encrypting the central directory contents. Bit 14 is documented as being reserved for use by PKware Bit 15 is documented as being reserved for use by PKware This class contains constants used for Zip format files The version made by field for entries in the central header when created by this library This is also the Zip version for the library when comparing against the version required to extract for an entry. See . The version made by field for entries in the central header when created by this library This is also the Zip version for the library when comparing against the version required to extract for an entry. See ZipInputStream.CanDecompressEntry. The minimum version required to support strong encryption The minimum version required to support strong encryption Version indicating AES encryption The version required for Zip64 extensions (4.5 or higher) The version required for BZip2 compression (4.6 or higher) Size of local entry header (excluding variable length fields at end) Size of local entry header (excluding variable length fields at end) Size of Zip64 data descriptor Size of data descriptor Size of data descriptor Size of central header entry (excluding variable fields) Size of central header entry Size of end of central record (excluding variable fields) Size of end of central record (excluding variable fields) Size of 'classic' cryptographic header stored before any entry data Size of cryptographic header stored before entry data The size of the Zip64 central directory locator. Signature for local entry header Signature for local entry header Signature for spanning entry Signature for spanning entry Signature for temporary spanning entry Signature for temporary spanning entry Signature for data descriptor This is only used where the length, Crc, or compressed size isn't known when the entry is created and the output stream doesn't support seeking. The local entry cannot be 'patched' with the correct values in this case so the values are recorded after the data prefixed by this header, as well as in the central directory. Signature for data descriptor This is only used where the length, Crc, or compressed size isn't known when the entry is created and the output stream doesn't support seeking. The local entry cannot be 'patched' with the correct values in this case so the values are recorded after the data prefixed by this header, as well as in the central directory. Signature for central header Signature for central header Signature for Zip64 central file header Signature for Zip64 central file header Signature for Zip64 central directory locator Signature for archive extra data signature (where headers are encrypted). Central header digital signature Central header digital signature End of central directory record signature End of central directory record signature Default encoding used for string conversion. 0 gives the default system OEM code page. Using the default code page isn't necessarily the full solution; there are many variable factors. Codepage 850 is often a good choice for European users, however be careful about compatibility. Deprecated wrapper for Deprecated wrapper for Deprecated wrapper for Deprecated wrapper for Deprecated wrapper for Deprecated wrapper for The method of encrypting entries when creating zip archives. No encryption will be used. Encrypt entries with ZipCrypto. Encrypt entries with AES 128. Encrypt entries with AES 256. Defines known values for the property. 
Host system = MSDOS Host system = Amiga Host system = Open VMS Host system = Unix Host system = VMCms Host system = Atari ST Host system = OS2 Host system = Macintosh Host system = ZSystem Host system = Cpm Host system = Windows NT Host system = MVS Host system = VSE Host system = Acorn RISC Host system = VFAT Host system = Alternate MVS Host system = BEOS Host system = Tandem Host system = OS400 Host system = OSX Host system = WinZIP AES This class represents an entry in a zip archive. This can be a file or a directory ZipFile and ZipInputStream will give you instances of this class as information about the members in an archive. ZipOutputStream uses an instance of this class when creating an entry in a Zip file.

Author of the original java version : Jochen Hoenicke
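A minimal, illustrative sketch of constructing an entry by hand using the constructors and properties documented in this section. The file path is an assumption; CleanName, DateTime and Size are the members described below, and setting the size up front helps writers avoid unnecessary data descriptors or Zip64 records.

using System;
using System.IO;
using ICSharpCode.SharpZipLib.Zip;

class MainClass
{
    public static void Main(string[] args)
    {
        string file = @"C:\data\readme.txt";            // illustrative path
        FileInfo fileInfo = new FileInfo(file);

        // CleanName converts the windows path into a relative, '/'-separated zip name.
        ZipEntry entry = new ZipEntry(ZipEntry.CleanName(file))
        {
            DateTime = fileInfo.LastWriteTime,          // stored as the entry's modification time
            Size = fileInfo.Length                      // a known size avoids Zip64/descriptor surprises
        };

        // The entry would normally be passed to ZipOutputStream.PutNextEntry or ZipFile.Add.
        Console.WriteLine(entry.Name);
    }
}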
Creates a zip entry with the given name. The name for this entry. Can include directory components. The convention for names is 'unix' style paths with relative names only; there are no device names and path elements are separated by '/' characters. The name passed is null Creates a zip entry with the given name and version required to extract The name for this entry. Can include directory components. The convention for names is 'unix' style paths with no device names and path elements separated by '/' characters. This is not enforced; see CleanName on how to ensure names are valid if this is desired. The minimum 'feature version' required to extract this entry The name passed is null Initializes an entry with the given name and made by information Name for this entry Version and HostSystem Information Minimum required zip feature version required to extract this entry Compression method for this entry. The name passed is null versionRequiredToExtract should be 0 (auto-calculate) or > 10 This constructor is used by the ZipFile class when reading from the central header It is not generally useful; use the constructor specifying the name only. Creates a deep copy of the given zip entry. The entry to copy. Get a value indicating whether the entry has a CRC value available. Get/Set flag indicating if entry is encrypted. A simple helper routine to aid interpretation of flags. This is an assistant that interprets the flags property. Get / set a flag indicating whether entry name and comment text are encoded in unicode UTF8. This is an assistant that interprets the flags property. Value used during password checking for PKZIP 2.0 / 'classic' encryption. Get/Set general purpose bit flag for entry General purpose bit flag

Bit 0: If set, indicates the file is encrypted
Bit 1-2 Only used for compression type 6 Imploding, and 8, 9 deflating
Imploding:
Bit 1 if set indicates an 8K sliding dictionary was used. If clear a 4k dictionary was used
Bit 2 if set indicates 3 Shannon-Fano trees were used to encode the sliding dictionary, 2 otherwise

Deflating:
Bit 2 Bit 1
0 0 Normal compression was used
0 1 Maximum compression was used
1 0 Fast compression was used
1 1 Super fast compression was used

Bit 3: If set, the fields crc-32, compressed size and uncompressed size could not be written during zip file creation. The correct values are held in a data descriptor immediately following the compressed data.
Bit 4: Reserved for use by PKZIP for enhanced deflating
Bit 5: If set indicates the file contains compressed patch data
Bit 6: If set indicates strong encryption was used.
Bit 7-10: Unused or reserved
Bit 11: If set the name and comments for this entry are in unicode.
Bit 12-15: Unused or reserved
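A small illustrative helper for testing the layout above via the Flags property; the class and method names are assumptions, only the bit positions and masks come from the list.

using ICSharpCode.SharpZipLib.Zip;

static class FlagChecks
{
    // Bit 3: sizes and crc are stored in a trailing data descriptor.
    public static bool HasDataDescriptor(ZipEntry entry) => (entry.Flags & 0x0008) != 0;

    // Bit 11: name and comment are encoded in UTF-8.
    public static bool UsesUnicodeText(ZipEntry entry) => (entry.Flags & 0x0800) != 0;
}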
Get/Set index of this entry in Zip file This is only valid when the entry is part of a Get/set offset for use in central header Get/Set external file attributes as an integer. The values of this are operating system dependent see HostSystem for details Get the version made by for this entry or zero if unknown. The value / 10 indicates the major version number, and the value mod 10 is the minor version number Get a value indicating this entry is for a DOS/Windows system. Test the external attributes for this to see if the external attributes are Dos based (including WINNT and variants) and match the values The attributes to test. Returns true if the external attributes are known to be DOS/Windows based and have the same attributes set as the value passed. Gets the compatibility information for the external file attribute If the external file attributes are compatible with MS-DOS and can be read by PKZIP for DOS version 2.04g then this value will be zero. Otherwise the value will be non-zero and identify the host system on which the attributes are compatible. The values for this as defined in the Zip File format and by others are shown below. The values are somewhat misleading in some cases as they are not all used as shown. You should consult the relevant documentation to obtain up to date and correct information. The modified appnote by the infozip group is particularly helpful as it documents a lot of peculiarities. The document is however a little dated. 0 - MS-DOS and OS/2 (FAT / VFAT / FAT32 file systems) 1 - Amiga 2 - OpenVMS 3 - Unix 4 - VM/CMS 5 - Atari ST 6 - OS/2 HPFS 7 - Macintosh 8 - Z-System 9 - CP/M 10 - Windows NTFS 11 - MVS (OS/390 - Z/OS) 12 - VSE 13 - Acorn Risc 14 - VFAT 15 - Alternate MVS 16 - BeOS 17 - Tandem 18 - OS/400 19 - OS/X (Darwin) 99 - WinZip AES remainder - unused Get minimum Zip feature version required to extract this entry Minimum features are defined as:
1.0 - Default value
1.1 - File is a volume label
2.0 - File is a folder/directory
2.0 - File is compressed using Deflate compression
2.0 - File is encrypted using traditional encryption
2.1 - File is compressed using Deflate64
2.5 - File is compressed using PKWARE DCL Implode
2.7 - File is a patch data set
4.5 - File uses Zip64 format extensions
4.6 - File is compressed using BZIP2 compression
5.0 - File is encrypted using DES
5.0 - File is encrypted using 3DES
5.0 - File is encrypted using original RC2 encryption
5.0 - File is encrypted using RC4 encryption
5.1 - File is encrypted using AES encryption
5.1 - File is encrypted using corrected RC2 encryption
5.1 - File is encrypted using corrected RC2-64 encryption
6.1 - File is encrypted using non-OAEP key wrapping
6.2 - Central directory encryption (not confirmed yet)
6.3 - File is compressed using LZMA
6.3 - File is compressed using PPMD+
6.3 - File is encrypted using Blowfish
6.3 - File is encrypted using Twofish
Get a value indicating whether this entry can be decompressed by the library. This is based on the and whether the compression method is supported. Force this entry to be recorded using Zip64 extensions. Get a value indicating whether Zip64 extensions were forced. A value of true if Zip64 extensions have been forced on; false if not. Gets a value indicating if the entry requires Zip64 extensions to store the full entry values. A value of true if a local header requires Zip64 extensions; false if not. Get a value indicating whether the central directory entry requires Zip64 extensions to be stored. Get/Set DosTime value. The MS-DOS date format can only represent dates between 1/1/1980 and 12/31/2107. Gets/Sets the time of last modification of the entry. The property is updated to match this as far as possible. Returns the entry name. The unix naming convention is followed. Path components in the entry should always be separated by forward slashes ('/'). Dos device names like C: should also be removed. See the class, or Gets/Sets the size of the uncompressed data. The size or -1 if unknown. Setting the size before adding an entry to an archive can help avoid compatibility problems with some archivers which don't understand Zip64 extensions. Gets/Sets the size of the compressed data. The compressed entry size or -1 if unknown. Gets/Sets the crc of the uncompressed data. Crc is not in the range 0..0xffffffffL The crc value or -1 if unknown. Gets/Sets the compression method. The compression method for this entry Gets the compression method for outputting to the local or central header. Returns same value as CompressionMethod except when AES encrypting, which places 99 in the method and places the real method in the extra data. Gets/Sets the extra data. Extra data is longer than 64KB (0xffff) bytes. Extra data or null if not set. For AES encrypted files returns or sets the number of bits of encryption (128, 192 or 256). When setting, only 0 (off), 128 or 256 is supported. AES Encryption strength for storage in extra data in entry header. 1 is 128 bit, 2 is 192 bit, 3 is 256 bit. Returns the length of the salt, in bytes Key size -> Salt length: 128 bits = 8 bytes, 192 bits = 12 bytes, 256 bits = 16 bytes. Number of extra bytes required to hold the AES Header fields (Salt, Pwd verify, AuthCode) File format: Bytes | Content ---------+--------------------------- Variable | Salt value 2 | Password verification value Variable | Encrypted file data 10 | Authentication code Number of extra bytes required to hold the encryption header fields. Process extra data fields updating the entry based on the contents. True if the extra data fields should be handled for a local header, rather than for a central header. Gets/Sets the entry comment. If comment is longer than 0xffff. The comment or null if not set. A comment is only available for entries when read via the class. The class doesn't have the comment data available. Gets a value indicating if the entry is a directory. A directory is determined by an entry name with a trailing slash '/'. The external file attributes can also indicate an entry is for a directory. Currently only dos/windows attributes are tested in this manner. The trailing slash convention should always be followed. Get a value of true if the entry appears to be a file; false otherwise. This only takes account of DOS/Windows attributes. Other operating systems are ignored. For Linux and others the result may be incorrect. Test entry to see if data can be extracted. 
Returns true if data can be extracted for this entry; false otherwise. Creates a copy of this zip entry. An that is a copy of the current instance. Gets a string representation of this ZipEntry. A readable textual representation of this Test a compression method to see if this library supports extracting data compressed with that method The compression method to test. Returns true if the compression method is supported; false otherwise Cleans a name making it conform to Zip file conventions. Device names ('c:\') and UNC share names ('\\server\share') are removed and back slashes ('\') are converted to forward slashes ('/'). Names are made relative by trimming leading slashes which is compatible with the ZIP naming convention. The name to clean The 'cleaned' name. The Zip name transform class is more flexible. General ZipEntry helper extensions Efficiently check if a flag is set without enum un-/boxing Returns whether the flag was set Efficiently set a flag without enum un-/boxing Whether the passed flag should be set (1) or cleared (0) Basic implementation of Defines the possible values to be used for the . Use the recorded LastWriteTime value for the file. Use the recorded LastWriteTimeUtc value for the file Use the recorded CreateTime value for the file. Use the recorded CreateTimeUtc value for the file. Use the recorded LastAccessTime value for the file. Use the recorded LastAccessTimeUtc value for the file. Use a fixed value. The actual value used can be specified via the constructor or using the with the setting set to which will use the when this class was constructed. The property can also be used to set this value. Initialise a new instance of the class. A default , and the LastWriteTime for files is used. Initialise a new instance of using the specified The time setting to use when creating Zip entries. Initialise a new instance of using the specified The time to set all values to. Get / set the to be used when creating new values. Setting this property to null will cause a default name transform to be used. Get / set the in use. Get / set the value to use when is set to A bitmask defining the attributes to be retrieved from the actual file. The default is to get all possible attributes from the actual file. A bitmask defining which attributes are to be set on. By default no attributes are set on. Get/set a value indicating whether unicode text should be set on. Make a new for a file. The name of the file to create a new entry for. Returns a new based on the . Make a new for a file. The name of the file to create a new entry for. If true entry detail is retrieved from the file system if the file exists. Returns a new based on the . Make a new from a name. The name of the file to create a new entry for. An alternative name to be used for the new entry. Null if not applicable. If true entry detail is retrieved from the file system if the file exists. Returns a new based on the . Make a new for a directory. The raw untransformed name for the new directory Returns a new representing a directory. Make a new for a directory. The raw untransformed name for the new directory If true entry detail is retrieved from the file system if the file exists. Returns a new representing a directory. ZipException represents exceptions specific to Zip classes and code. Initialise a new instance of . Initialise a new instance of with its message string. A that describes the error. Initialise a new instance of . A that describes the error. The that caused this exception. 
Initializes a new instance of the ZipException class with serialized data. The System.Runtime.Serialization.SerializationInfo that holds the serialized object data about the exception being thrown. The System.Runtime.Serialization.StreamingContext that contains contextual information about the source or destination. ExtraData tagged value interface. Get the ID for this tagged data value. Set the contents of this instance from the data passed. The data to extract contents from. The offset to begin extracting data from. The number of bytes to extract. Get the data representing this instance. Returns the data for this instance. A raw binary tagged value Initialise a new instance. The tag ID. Get the ID for this tagged data value. Set the data from the raw values provided. The raw data to extract values from. The index to start extracting values from. The number of bytes available. Get the binary data representing this instance. The raw binary data representing this instance. Get /set the binary data representing this instance. The raw binary data representing this instance. The tag ID for this instance. Class representing extended unix date time values. Flags indicate which values are included in this instance. The modification time is included The access time is included The create time is included. Get the ID Set the data from the raw values provided. The raw data to extract values from. The index to start extracting values from. The number of bytes available. Get the binary data representing this instance. The raw binary data representing this instance. Test a value to see if it is valid and can be represented here. The value to test. Returns true if the value is valid and can be represented; false if not. The standard Unix time is a signed integer data type, directly encoding the Unix time number, which is the number of seconds since 1970-01-01. Being 32 bits means the values here cover a range of about 136 years. The minimum representable time is 1901-12-13 20:45:52, and the maximum representable time is 2038-01-19 03:14:07. Get /set the Modification Time Get / set the Access Time Get / Set the Create Time Get/set the values to include. Class handling NT date time values. Get the ID for this tagged data value. Set the data from the raw values provided. The raw data to extract values from. The index to start extracting values from. The number of bytes available. Get the binary data representing this instance. The raw binary data representing this instance. Test a value to see if it is valid and can be represented here. The value to test. Returns true if the value is valid and can be represented; false if not. NTFS filetimes are 64-bit unsigned integers, stored in Intel (least significant byte first) byte order. They determine the number of 1.0E-07 seconds (1/10th microseconds!) past WinNT "epoch", which is "01-Jan-1601 00:00:00 UTC". 28 May 60056 is the upper limit Get/set the last modification time. Get /set the create time Get /set the last access time. A factory that creates tagged data instances. Get data for a specific tag value. The tag ID to find. The data to search. The offset to begin extracting data from. The number of bytes to extract. The located value found, or null if not found. A class to handle the extra data field for Zip entries Extra data contains 0 or more values each prefixed by a header tag and length. They contain zero or more bytes of actual data. The data is held internally using a copy on write strategy. 
This is more efficient but means that, for extra data created by passing in data, the values can be modified by the caller in some circumstances. Initialise a default instance. Initialise with known extra data. The extra data. Get the raw extra data value Returns the raw byte[] extra data this instance represents. Clear the stored data. Gets the current extra data length. Get a read-only for the associated tag. The tag to locate data for. Returns a containing tag data or null if no tag was found. Get the tagged data for a tag. The tag to search for. Returns a tagged value or null if none found. Get the length of the last value found by This is only valid if has previously returned true. Get the index for the current read value. This is only valid if has previously returned true. Initially the result will be the index of the first byte of actual data. The value is updated after calls to , and . Get the number of bytes remaining to be read for the current value. Find an extra data value The identifier for the value to find. Returns true if the value was found; false otherwise. Add a new entry to extra data. The value to add. Add a new entry to extra data The ID for this entry. The data to add. If the ID already exists its contents are replaced. Start adding a new entry. Add data using , , , or . The new entry is completed and actually added by calling Add entry data added since using the ID passed. The identifier to use for this entry. Add a byte of data to the pending new entry. The byte to add. Add data to a pending new entry. The data to add. Add a short value in little endian order to the pending new entry. The data to add. Add an integer value in little endian order to the pending new entry. The data to add. Add a long value in little endian order to the pending new entry. The data to add. Delete an extra data field. The identifier of the field to delete. Returns true if the field was found and deleted. Read a long in little endian form from the last found data value Returns the long value read. Read an integer in little endian form from the last found data value. Returns the integer read. Read a short value in little endian form from the last found data value. Returns the short value read. Read a byte from an extra data The byte value read or -1 if the end of data has been reached. Skip data during reading. The number of bytes to skip. Internal form of that reads data at any location. Returns the short value read. Dispose of this instance. Arguments used with KeysRequiredEvent Initialise a new instance of The name of the file for which keys are required. Initialise a new instance of The name of the file for which keys are required. The current key value. Gets the name of the file for which keys are required. Gets or sets the key value The strategy to apply to testing. Find the first error only. Find all possible errors. The operation in progress reported by a during testing. TestArchive Setting up testing. Testing an individual entry's header Testing an individual entry's data Testing an individual entry has completed. Running miscellaneous tests Testing is complete Status returned by during testing. TestArchive Initialise a new instance of The this status applies to. Get the current in progress. Get the this status is applicable to. Get the current/last entry tested. Get the number of errors detected so far. Get the number of bytes tested so far for the current entry. Get a value indicating whether the last entry test was valid. 
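A hedged sketch of reading and extending an entry's extra data with the ZipExtraData members described above. The member names used (Find, ReadLong, StartNewEntry, AddLeShort, AddNewEntry, GetEntryData) follow these descriptions but should be checked against the actual API; header ID 0x0001 is the Zip64 extended information tag from the zip specification, and 0x4154 is an arbitrary, made-up tag.

using ICSharpCode.SharpZipLib.Zip;

class MainClass
{
    public static void Main(string[] args)
    {
        // 'entry' is assumed to have been returned by ZipFile.GetEntry or ZipInputStream.GetNextEntry;
        // a freshly constructed entry is used here only to keep the sketch self-contained.
        ZipEntry entry = new ZipEntry("example.txt");
        ZipExtraData extraData = new ZipExtraData(entry.ExtraData ?? new byte[0]);

        // The Zip64 extended information field (0x0001), when present, stores the
        // 8-byte uncompressed and compressed sizes in little endian order.
        if (extraData.Find(0x0001))
        {
            long uncompressedSize = extraData.ReadLong();
            long compressedSize = extraData.ReadLong();
            System.Console.WriteLine("{0} / {1}", uncompressedSize, compressedSize);
        }

        // Writing: start a pending entry, add little endian values, then complete it with an ID.
        extraData.StartNewEntry();
        extraData.AddLeShort(42);
        extraData.AddNewEntry(0x4154);
        entry.ExtraData = extraData.GetEntryData();
    }
}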
Delegate invoked during testing if supplied, indicating current progress and status. If the message is non-null an error has occurred. If the message is null the operation as found in status has started. The possible ways of applying updates to an archive. Perform all updates on temporary files ensuring that the original file is saved. Update the archive directly, which is faster but less safe. This class represents a Zip archive. You can ask for the contained entries, or get an input stream for a file entry. The entry is automatically decompressed. You can also update the archive adding or deleting entries. This class is thread-safe for input: you can open input streams for arbitrary entries in different threads.

Author of the original java version : Jochen Hoenicke
using System;
using System.Text;
using System.Collections;
using System.IO;
using ICSharpCode.SharpZipLib.Zip;

class MainClass
{
    static public void Main(string[] args)
    {
        using (ZipFile zFile = new ZipFile(args[0])) {
            Console.WriteLine("Listing of : " + zFile.Name);
            Console.WriteLine("");
            Console.WriteLine("Raw Size  Size      Date      Time    Name");
            Console.WriteLine("--------  --------  --------  ------  ---------");
            foreach (ZipEntry e in zFile) {
                if (e.IsFile) {
                    DateTime d = e.DateTime;
                    Console.WriteLine("{0, -10}{1, -10}{2} {3}   {4}",
                        e.Size, e.CompressedSize,
                        d.ToString("dd-MM-yy"), d.ToString("HH:mm"), e.Name);
                }
            }
        }
    }
}
Delegate for handling keys/password setting during compression/decompression. Event handler for handling encryption keys. Handles getting of encryption keys when required. The file for which encryption keys are required. Get/set the encryption key value. Password to be used for encrypting/decrypting files. Set to null if no password is required. Get a value indicating whether encryption keys are currently available. Opens a Zip file with the given name for reading. The name of the file to open. The argument supplied is null. An i/o error occurs The file doesn't contain a valid zip archive. Opens a Zip file reading the given . The to read archive data from. The supplied argument is null. An i/o error occurs. The file doesn't contain a valid zip archive. Opens a Zip file reading the given . The to read archive data from. true to leave the file open when the ZipFile is disposed, false to dispose of it The supplied argument is null. An i/o error occurs. The file doesn't contain a valid zip archive. Opens a Zip file reading the given . The to read archive data from. An i/o error occurs The stream doesn't contain a valid zip archive.
The stream doesn't support seeking. The stream argument is null.
Opens a Zip file reading the given . The to read archive data from. true to leave the stream open when the ZipFile is disposed, false to dispose of it An i/o error occurs The stream doesn't contain a valid zip archive.
The stream doesn't support seeking. The stream argument is null.
Initialises a default instance with no entries and no file storage. Finalize this instance. Closes the ZipFile. If the stream is owned then this also closes the underlying input stream. Once closed, no further instance methods should be called. An i/o error occurs. Create a new whose data will be stored in a file. The name of the archive to create. Returns the newly created is null Create a new whose data will be stored on a stream. The stream providing data storage. Returns the newly created is null doesnt support writing. Get/set a flag indicating if the underlying stream is owned by the ZipFile instance. If the flag is true then the stream will be closed when Close is called. The default value is true in all cases. Get a value indicating whether this archive is embedded in another file or not. Get a value indicating that this archive is a new one. Gets the comment for the zip file. Gets the name of this zip file. Gets the number of entries in this zip file. The Zip file has been closed. Get the number of entries contained in this . Indexer property for ZipEntries Gets an enumerator for the Zip entries in this Zip file. Returns an for this archive. The Zip file has been closed. Return the index of the entry with a matching name Entry name to find If true the comparison is case insensitive The index position of the matching entry or -1 if not found The Zip file has been closed. Searches for a zip entry in this archive with the given name. String comparisons are case insensitive The name to find. May contain directory components separated by slashes ('/'). A clone of the zip entry, or null if no entry with that name exists. The Zip file has been closed. Gets an input stream for reading the given zip entry data in an uncompressed form. Normally the should be an entry returned by GetEntry(). The to obtain a data for An input containing data for this The ZipFile has already been closed The compression method for the entry is unknown The entry is not found in the ZipFile Creates an input stream reading a zip entry The index of the entry to obtain an input stream for. An input containing data for this The ZipFile has already been closed The compression method for the entry is unknown The entry is not found in the ZipFile Test an archive for integrity/validity Perform low level data Crc check true if all tests pass, false otherwise Testing will terminate on the first error found. Test an archive for integrity/validity Perform low level data Crc check The to apply. The handler to call during testing. true if all tests pass, false otherwise The object has already been closed. Test a local header against that provided from the central directory The entry to test against The type of tests to carry out. The offset of the entries data in the file The kind of update to apply. Get / set the to apply to names when updating. Get/set the used to generate values during updates. Get /set the buffer size to be used when updating this zip file. Get a value indicating an update has been started. Get / set a value indicating how Zip64 Extension usage is determined when adding entries. Begin updating this archive. The archive storage for use during the update. The data source to utilise during updating. ZipFile has been closed. One of the arguments provided is null ZipFile has been closed. Begin updating to this archive. The storage to use during the update. Begin updating this archive. Commit current updates, updating this archive. ZipFile has been closed. Abort updating leaving the archive unchanged. 
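The update members above are normally used as a batch: begin, add or delete entries, then commit. The sketch below is illustrative only; the archive path, the entry names and the small IStaticDataSource implementation are assumptions rather than part of the library.

using System;
using System.IO;
using System.Text;
using ICSharpCode.SharpZipLib.Zip;

// A tiny IStaticDataSource supplying in-memory data for ZipFile.Add (illustrative only).
class StringDataSource : IStaticDataSource
{
    private readonly byte[] data_;
    public StringDataSource(string text) { data_ = Encoding.UTF8.GetBytes(text); }
    public Stream GetSource() { return new MemoryStream(data_); }
}

class MainClass
{
    public static void Main(string[] args)
    {
        using (ZipFile zipFile = new ZipFile(@"C:\temp\archive.zip")) {
            // Read an existing entry in uncompressed form.
            ZipEntry existing = zipFile.GetEntry("readme.txt");
            if (existing != null) {
                using (StreamReader reader = new StreamReader(zipFile.GetInputStream(existing))) {
                    Console.WriteLine(reader.ReadToEnd());
                }
            }

            // Apply updates as a single batch.
            zipFile.BeginUpdate();
            zipFile.Add(@"C:\data\notes.txt");                        // add a file from disk
            zipFile.Add(new StringDataSource("hello"), "hello.txt");  // add from a custom data source
            zipFile.Delete("obsolete.txt");                           // remove an existing entry by name
            zipFile.SetComment("Updated archive");
            zipFile.CommitUpdate();
        }
    }
}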
Set the file comment to be recorded when the current update is commited. The comment to record. ZipFile has been closed. Add a new entry to the archive. The name of the file to add. The compression method to use. Ensure Unicode text is used for name and comment for this entry. Argument supplied is null. ZipFile has been closed. Compression method is not supported for creating entries. Add a new entry to the archive. The name of the file to add. The compression method to use. ZipFile has been closed. Compression method is not supported for creating entries. Add a file to the archive. The name of the file to add. Argument supplied is null. Add a file to the archive. The name of the file to add. The name to use for the on the Zip file created. Argument supplied is null. Add a file entry with data. The source of the data for this entry. The name to give to the entry. Add a file entry with data. The source of the data for this entry. The name to give to the entry. The compression method to use. Compression method is not supported for creating entries. Add a file entry with data. The source of the data for this entry. The name to give to the entry. The compression method to use. Ensure Unicode text is used for name and comments for this entry. Compression method is not supported for creating entries. Add a that contains no data. The entry to add. This can be used to add directories, volume labels, or empty file entries. Add a with data. The source of the data for this entry. The entry to add. This can be used to add file entries with a custom data source. The encryption method specified in is unsupported. Compression method is not supported for creating entries. Add a directory entry to the archive. The directory to add. Check if the specified compression method is supported for adding a new entry. The compression method for the new entry. Delete an entry by name The filename to delete True if the entry was found and deleted; false otherwise. Delete a from the archive. The entry to delete. Write an unsigned short in little endian byte order. Write an int in little endian byte order. Write an unsigned int in little endian byte order. Write a long in little endian byte order. Get a raw memory buffer. Returns a raw memory buffer. Get the size of the source descriptor for a . The update to get the size for. Whether to include the signature size The descriptor size, zero if there isn't one. Get an output stream for the specified The entry to get an output stream for. The output stream obtained for the entry. Class used to sort updates. Compares two objects and returns a value indicating whether one is less than, equal to or greater than the other. First object to compare Second object to compare. Compare result. Represents a pending update to a Zip file. Copy an existing entry. The existing entry to copy. Get the for this update. This is the source or original entry. Get the that will be written to the updated/new file. Get the command for this update. Get the filename if any for this update. Null if none exists. Get/set the location of the size patch for this update. Get /set the location of the crc patch for this update. Get/set the size calculated by offset. Specifically, the difference between this and next entry's starting offset. Releases the unmanaged resources used by the this instance and optionally releases the managed resources. true to release both managed and unmanaged resources; false to release only unmanaged resources. Read an unsigned short in little endian byte order. 
Returns the value read. The stream ends prematurely Read a uint in little endian byte order. Returns the value read. An i/o error occurs. The file ends prematurely Search for and read the central directory of a zip file filling the entries array. An i/o error occurs. The central directory is malformed or cannot be found Locate the data for a given entry. The start offset of the data. The stream ends prematurely The local header signature is invalid, the entry and central header file name lengths are different or the local and entry compression methods dont match Represents a string from a which is stored as an array of bytes. Initialise a with a string. The textual string form. Initialise a using a string in its binary 'raw' form. Get a value indicating the original source of data for this instance. True if the source was a string; false if the source was binary data. Get the length of the comment when represented as raw bytes. Get the comment in its 'raw' form as plain bytes. Reset the comment to its initial state. Implicit conversion of comment to a string. The to convert to a string. The textual equivalent for the input value. An enumerator for Zip entries An is a stream that you can write uncompressed data to and flush, but cannot read, seek or do anything else to. Gets a value indicating whether the current stream supports reading. Write any buffered data to underlying storage. Gets a value indicating whether the current stream supports writing. Gets a value indicating whether the current stream supports seeking. Get the length in bytes of the stream. Gets or sets the position within the current stream. Reads a sequence of bytes from the current stream and advances the position within the stream by the number of bytes read. An array of bytes. When this method returns, the buffer contains the specified byte array with the values between offset and (offset + count - 1) replaced by the bytes read from the current source. The zero-based byte offset in buffer at which to begin storing the data read from the current stream. The maximum number of bytes to be read from the current stream. The total number of bytes read into the buffer. This can be less than the number of bytes requested if that many bytes are not currently available, or zero (0) if the end of the stream has been reached. The sum of offset and count is larger than the buffer length. Methods were called after the stream was closed. The stream does not support reading. buffer is null. An I/O error occurs. offset or count is negative. Sets the position within the current stream. A byte offset relative to the origin parameter. A value of type indicating the reference point used to obtain the new position. The new position within the current stream. An I/O error occurs. The stream does not support seeking, such as if the stream is constructed from a pipe or console output. Methods were called after the stream was closed. Sets the length of the current stream. The desired length of the current stream in bytes. The stream does not support both writing and seeking, such as if the stream is constructed from a pipe or console output. An I/O error occurs. Methods were called after the stream was closed. Writes a sequence of bytes to the current stream and advances the current position within this stream by the number of bytes written. An array of bytes. This method copies count bytes from buffer to the current stream. The zero-based byte offset in buffer at which to begin copying bytes to the current stream. 
The number of bytes to be written to the current stream. An I/O error occurs. The stream does not support writing. Methods were called after the stream was closed. buffer is null. The sum of offset and count is greater than the buffer length. offset or count is negative. A is an whose data is only a part or subsection of a file. Initialise a new instance of the class. The containing the underlying stream to use for IO. The start of the partial data. The length of the partial data. Read a byte from this stream. Returns the byte read or -1 on end of stream. Reads a sequence of bytes from the current stream and advances the position within the stream by the number of bytes read. An array of bytes. When this method returns, the buffer contains the specified byte array with the values between offset and (offset + count - 1) replaced by the bytes read from the current source. The zero-based byte offset in buffer at which to begin storing the data read from the current stream. The maximum number of bytes to be read from the current stream. The total number of bytes read into the buffer. This can be less than the number of bytes requested if that many bytes are not currently available, or zero (0) if the end of the stream has been reached. The sum of offset and count is larger than the buffer length. Methods were called after the stream was closed. The stream does not support reading. buffer is null. An I/O error occurs. offset or count is negative. Writes a sequence of bytes to the current stream and advances the current position within this stream by the number of bytes written. An array of bytes. This method copies count bytes from buffer to the current stream. The zero-based byte offset in buffer at which to begin copying bytes to the current stream. The number of bytes to be written to the current stream. An I/O error occurs. The stream does not support writing. Methods were called after the stream was closed. buffer is null. The sum of offset and count is greater than the buffer length. offset or count is negative. When overridden in a derived class, sets the length of the current stream. The desired length of the current stream in bytes. The stream does not support both writing and seeking, such as if the stream is constructed from a pipe or console output. An I/O error occurs. Methods were called after the stream was closed. When overridden in a derived class, sets the position within the current stream. A byte offset relative to the origin parameter. A value of type indicating the reference point used to obtain the new position. The new position within the current stream. An I/O error occurs. The stream does not support seeking, such as if the stream is constructed from a pipe or console output. Methods were called after the stream was closed. Clears all buffers for this stream and causes any buffered data to be written to the underlying device. An I/O error occurs. Gets or sets the position within the current stream. The current position within the stream. An I/O error occurs. The stream does not support seeking. Methods were called after the stream was closed. Gets the length in bytes of the stream. A long value representing the length of the stream in bytes. A class derived from Stream does not support seeking. Methods were called after the stream was closed. Gets a value indicating whether the current stream supports writing. false true if the stream supports writing; otherwise, false. Gets a value indicating whether the current stream supports seeking. 
true true if the stream supports seeking; otherwise, false. Gets a value indicating whether the current stream supports reading. true. true if the stream supports reading; otherwise, false. Gets a value that determines whether the current stream can time out. A value that determines whether the current stream can time out. Provides a static way to obtain a source of data for an entry. Get a source of data by creating a new stream. Returns a to use for compression input. Ideally a new stream is created and opened to achieve this, to avoid locking problems. Represents a source of data that can dynamically provide multiple data sources based on the parameters passed. Get a data source. The to get a source for. The name for data if known. Returns a to use for compression input. Ideally a new stream is created and opened to achieve this, to avoid locking problems. Default implementation of a for use with files stored on disk. Initialise a new instance of The name of the file to obtain data from. Get a providing data. Returns a providing data. Default implementation of for files stored on disk. Get a providing data for an entry. The entry to provide data for. The file name for data if known. Returns a stream providing data; or null if not available Defines facilities for data storage when updating Zip Archives. Get the to apply during updates. Get an empty that can be used for temporary output. Returns a temporary output Convert a temporary output stream to a final stream. The resulting final Make a temporary copy of the original stream. The to copy. Returns a temporary output that is a copy of the input. Return a stream suitable for performing direct updates on the original source. The current stream. Returns a stream suitable for direct updating. This may be the current stream passed. Dispose of this instance. An abstract suitable for extension by inheritance. Initializes a new instance of the class. The update mode. Gets a temporary output Returns the temporary output stream. Converts the temporary to its final form. Returns a that can be used to read the final storage for the archive. Make a temporary copy of a . The to make a copy of. Returns a temporary output that is a copy of the input. Return a stream suitable for performing direct updates on the original source. The to open for direct update. Returns a stream suitable for direct updating. Disposes this instance. Gets the update mode applicable. The update mode. An implementation suitable for hard disks. Initializes a new instance of the class. The file. The update mode. Initializes a new instance of the class. The file. Gets a temporary output for performing updates on. Returns the temporary output stream. Converts a temporary to its final form. Returns a that can be used to read the final storage for the archive. Make a temporary copy of a stream. The to copy. Returns a temporary output that is a copy of the input. Return a stream suitable for performing direct updates on the original source. The current stream. Returns a stream suitable for direct updating. If the is not null this is used as is. Disposes this instance. An implementation suitable for in memory streams. Initializes a new instance of the class. Initializes a new instance of the class. The to use This constructor is for testing as memory streams dont really require safe mode. Get the stream returned by if this was in fact called. Gets the temporary output Returns the temporary output stream. Converts the temporary to its final form. 
Returns a that can be used to read the final storage for the archive. Make a temporary copy of the original stream. The to copy. Returns a temporary output that is a copy of the input. Return a stream suitable for performing direct updates on the original source. The original source stream Returns a stream suitable for direct updating. If the passed is not null this is used; otherwise a new is returned. Disposes this instance. Holds data pertinent to a data descriptor. Get /set the compressed size of data. Get / set the uncompressed size of data Get /set the crc value. This class assists with writing/reading from Zip files. Initialise an instance of this class. The name of the file to open. Initialise a new instance of . The stream to use. Get / set a value indicating whether the underlying stream is owned or not. If the stream is owned it is closed when this instance is closed. Close the stream. The underlying stream is closed only if is true. Locates a block with the desired . The signature to find. Location, marking the end of block. Minimum size of the block. The maximum variable data. Returns the offset of the first byte after the signature; -1 if not found Write Zip64 end of central directory records (File header and locator). The number of entries in the central directory. The size of entries in the central directory. The offset of the central directory. Write the required records to end the central directory. The number of entries in the directory. The size of the entries in the directory. The start of the central directory. The archive comment. (This can be null). Read an unsigned short in little endian byte order. Returns the value read. An i/o error occurs. The file ends prematurely Read an int in little endian byte order. Returns the value read. An i/o error occurs. The file ends prematurely Read a long in little endian byte order. The value read. Write an unsigned short in little endian byte order. The value to write. Write a ushort in little endian byte order. The value to write. Write an int in little endian byte order. The value to write. Write a uint in little endian byte order. The value to write. Write a long in little endian byte order. The value to write. Write a ulong in little endian byte order. The value to write. Write a data descriptor. The entry to write a descriptor for. Returns the number of descriptor bytes written. Read data descriptor at the end of compressed data. if set to true [zip64]. The data to fill in. Returns the number of bytes read in the descriptor. This is an InflaterInputStream that reads the files in a zip archive from its base input stream, one after another. It has a special method to get the zip entry of the next file. The zip entry contains information about the file name, size, compressed size, Crc, etc. It includes support for Stored and Deflated entries.

Author of the original java version : Jochen Hoenicke
This sample shows how to read a zip file.

using System;
using System.Text;
using System.IO;
using ICSharpCode.SharpZipLib.Zip;

class MainClass
{
    public static void Main(string[] args)
    {
        using (ZipInputStream s = new ZipInputStream(File.OpenRead(args[0]))) {
            ZipEntry theEntry;
            byte[] data = new byte[2048];
            while ((theEntry = s.GetNextEntry()) != null) {
                if (theEntry.IsFile) {
                    Console.Write("Show contents (y/n) ?");
                    if (Console.ReadLine() == "y") {
                        int size = s.Read(data, 0, data.Length);
                        while (size > 0) {
                            Console.Write(new ASCIIEncoding().GetString(data, 0, size));
                            size = s.Read(data, 0, data.Length);
                        }
                    }
                }
            }
        }
    }
}
Delegate for reading bytes from a stream. The current reader this instance. Creates a new Zip input stream, for reading a zip archive. The underlying providing data. Creates a new Zip input stream, for reading a zip archive. The underlying providing data. Size of the buffer. Optional password used for encryption when non-null A password for all encrypted entries in this Gets a value indicating if there is a current entry and it can be decompressed The entry can only be decompressed if the library supports the zip features required to extract it. See the ZipEntry Version property for more details. Since uses the local headers for extraction, entries with no compression combined with the flag set, cannot be extracted as the end of the entry data cannot be deduced. Is the compression method for the specified entry supported? Uses entry.CompressionMethodForHeader so that entries of type WinZipAES will be rejected. the entry to check. true if the compression method is supported, false if not. Advances to the next entry in the archive The next entry in the archive or null if there are no more entries. If the previous entry is still open CloseEntry is called. Input stream is closed Password is not set, password is invalid, compression method is invalid, version required to extract is not supported Read data descriptor at the end of compressed data. Complete cleanup as the final part of closing. True if the crc value should be tested Closes the current zip entry and moves to the next one. The stream is closed The Zip stream ends early Returns 1 if there is an entry available Otherwise returns 0. Returns the current size that can be read from the current entry if available Thrown if the entry size is not known. Thrown if no entry is currently available. Reads a byte from the current zip entry. The byte or -1 if end of stream is reached. Handle attempts to read by throwing an . The destination array to store data in. The offset at which data read should be stored. The maximum number of bytes to read. Returns the number of bytes actually read. Handle attempts to read from this entry by throwing an exception Handle attempts to read from this entry by throwing an exception Perform the initial read on an entry which may include reading encryption headers and setting up inflation. The destination to fill with data read. The offset to start reading at. The maximum number of bytes to read. The actual number of bytes read. Read a block of bytes from the stream. The destination for the bytes. The index to start storing data. The number of bytes to attempt to read. Returns the number of bytes read. Zero bytes read means end of stream. Reads a block of bytes from the current zip entry. The number of bytes read (this may be less than the length requested, even before the end of stream), or 0 on end of stream. An i/o error occurred. The deflated stream is corrupted. The stream is not open. Closes the zip input stream ZipNameTransform transforms names as per the Zip file naming convention. The use of absolute names is supported although its use is not valid according to Zip naming conventions, and should not be used if maximum compatability is desired. Initialize a new instance of Initialize a new instance of The string to trim from the front of paths if found. Static constructor. Transform a windows directory name according to the Zip file naming conventions. The directory name to transform. The transformed name. Transform a windows file name according to the Zip file naming conventions. 
The file name to transform. The transformed name. Get/set the path prefix to be trimmed from paths if present. The prefix is trimmed before any conversion from a Windows path is done. Force a name to be valid by replacing invalid characters with a fixed value. The name to force valid. The replacement character to use. Returns a valid name. Test a name to see if it is a valid name for a zip entry. The name to test. If true, checking is relaxed about Windows file names and absolute paths. Returns true if the name is a valid zip name; false otherwise. Zip path names are actually in Unix format and should only contain relative paths. This means that any path stored should not contain a drive or device letter, or a leading slash. All slashes should be forward slashes ('/'). An empty name is valid for a file where the input comes from standard input. A null name is not considered valid. Test a name to see if it is a valid name for a zip entry. The name to test. Returns true if the name is a valid zip name; false otherwise. Zip path names are actually in Unix format and should only contain relative paths if a path is present. This means that the path stored should not contain a drive or device letter, or a leading slash. All slashes should be forward slashes ('/'). An empty name is valid where the input comes from standard input. A null name is not considered valid. An implementation of INameTransform that transforms entry paths as per the Zip file naming convention. Strips path roots and puts directory separators in the correct format ('/'). Initialize a new instance of . Transform a Windows directory name according to the Zip file naming conventions. The directory name to transform. The transformed name. Transform a Windows file name according to the Zip file naming conventions. The file name to transform. The transformed name. This is a DeflaterOutputStream that writes the files into a zip archive one after another. It has a special method to start a new zip entry. The zip entries contain information about the file name, size, compressed size, CRC, etc. It includes support for Stored and Deflated entries. This class is not thread safe.
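As a rough illustration of the naming rules described above, the sketch below (not taken from the original documentation) shows how a ZipNameTransform configured with a trim prefix might convert a Windows path, and how IsValidName can test a candidate entry name. The exact output and available overloads should be checked against the installed library version.

using System;
using ICSharpCode.SharpZipLib.Zip;

class NameTransformExample
{
    public static void Main()
    {
        // Trim the given prefix and convert the remainder to zip (Unix style) form.
        ZipNameTransform transform = new ZipNameTransform(@"C:\data");
        string entryName = transform.TransformFile(@"C:\data\docs\readme.txt");

        Console.WriteLine(entryName);                                         // expected: "docs/readme.txt"
        Console.WriteLine(ZipNameTransform.IsValidName(entryName));           // true: relative path, forward slashes
        Console.WriteLine(ZipNameTransform.IsValidName(@"C:\docs\file.txt")); // false: rooted Windows path
    }
}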

Author of the original Java version: Jochen Hoenicke
This sample shows how to create a zip file.

using System;
using System.IO;
using ICSharpCode.SharpZipLib.Core;
using ICSharpCode.SharpZipLib.Zip;

class MainClass
{
    public static void Main(string[] args)
    {
        string[] filenames = Directory.GetFiles(args[0]);
        byte[] buffer = new byte[4096];

        using (ZipOutputStream s = new ZipOutputStream(File.Create(args[1])))
        {
            s.SetLevel(9); // 0 = store only, 9 = best compression

            foreach (string file in filenames)
            {
                ZipEntry entry = new ZipEntry(file);
                s.PutNextEntry(entry);

                using (FileStream fs = File.OpenRead(file))
                {
                    StreamUtils.Copy(fs, s, buffer);
                }
            }
        }
    }
}
Creates a new Zip output stream, writing a zip archive. The output stream to which the archive contents are written. Creates a new Zip output stream, writing a zip archive. The output stream to which the archive contents are written. Size of the buffer to use. Gets a flag value of true if the central header has been added for this archive; false if it has not been added. No further entries can be added once this has been done. Set the zip file comment. The comment text for the entire archive. The converted comment is longer than 0xffff bytes. Sets the compression level. The new level will be activated immediately. The new compression level (1 to 9). Level specified is not supported. Get the current deflater compression level. The current compression level. Get/set a value indicating how Zip64 extension usage is determined when adding entries. Older archivers may not understand Zip64 extensions, so if backwards compatibility is an issue, be careful when adding entries to an archive. Setting this property to Off is workable but less desirable, as in those circumstances adding a file larger than 4GB will fail (see the usage sketch after the exception list below). Used for transforming the names of entries added by . Defaults to , set to null to disable transforms and use names as supplied. Get/set the password used for encryption. When set to null or if the password is empty, no encryption is performed. Write an unsigned short in little endian byte order. Write an int in little endian byte order. Write an int in little endian byte order. Starts a new Zip entry. It automatically closes the previous entry if present. All entry elements bar name are optional, but must be correct if present. If the compression method is stored and the output is not patchable, the compression for that entry is automatically changed to deflate level 0. The entry. If the entry passed is null. If an I/O error occurred. If the stream was finished. Too many entries in the Zip file
Entry name is too long
Finish has already been called
The compression method specified for the entry is unsupported.
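As referenced above, here is a minimal sketch (an illustration built on assumptions, not an official sample) combining SetLevel, SetComment, Password and UseZip64 when writing an archive. Property and enum names follow the descriptions above; behaviour with older archivers should be tested separately.

using System;
using System.IO;
using ICSharpCode.SharpZipLib.Core;
using ICSharpCode.SharpZipLib.Zip;

class CreateWithOptionsExample
{
    public static void Main(string[] args)
    {
        byte[] buffer = new byte[4096];
        using (ZipOutputStream s = new ZipOutputStream(File.Create(args[1])))
        {
            s.UseZip64 = UseZip64.Off;   // maximise compatibility with older archivers; entries must stay under 4GB
            s.Password = "secret";       // encrypt the entries that follow; null or empty disables encryption
            s.SetLevel(6);               // mid-range deflate compression
            s.SetComment("Archive created with SharpZipLib");

            foreach (string file in Directory.GetFiles(args[0]))
            {
                ZipEntry entry = new ZipEntry(Path.GetFileName(file))
                {
                    DateTime = File.GetLastWriteTime(file),
                    Size = new FileInfo(file).Length
                };
                s.PutNextEntry(entry);

                using (FileStream fs = File.OpenRead(file))
                {
                    StreamUtils.Copy(fs, s, buffer);
                }
                s.CloseEntry();
            }
            // Finish() writes the central directory; it is also called automatically when the stream is closed.
            s.Finish();
        }
    }
}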
Closes the current entry, updating header and footer information as required. Invalid entry field values. An I/O error occurs. No entry is active. Initializes encryption keys based on given . The password. Initializes encryption keys based on given password. Writes the given buffer to the current entry. The buffer containing data to write. The offset of the first byte to write. The number of bytes to write. Archive size is invalid. No entry is active. Finishes the stream. This will write the central directory at the end of the zip file and flush the stream. This is automatically called when the stream is closed. An I/O error occurs. Comment exceeds the maximum length
Entry name exceeds the maximum length
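To illustrate the Write, CloseEntry and Finish members described above, the following hypothetical sketch builds a small archive entirely in memory. It assumes the IsStreamOwner property inherited from the deflater output stream is available to keep the backing MemoryStream open after the zip stream is disposed.

using System;
using System.IO;
using System.Text;
using ICSharpCode.SharpZipLib.Zip;

class InMemoryZipExample
{
    public static byte[] Build()
    {
        MemoryStream ms = new MemoryStream();
        using (ZipOutputStream s = new ZipOutputStream(ms))
        {
            s.IsStreamOwner = false;       // keep the MemoryStream usable after the zip stream is disposed
            s.PutNextEntry(new ZipEntry("hello.txt"));

            byte[] text = Encoding.UTF8.GetBytes("Hello, zip!");
            s.Write(text, 0, text.Length); // Write() adds data to the current entry
            s.CloseEntry();

            s.Finish();                    // writes the central directory
        }
        return ms.ToArray();
    }

    public static void Main()
    {
        Console.WriteLine("Archive size: {0} bytes", Build().Length);
    }
}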
Flushes the stream by calling Flush on the deflater stream unless the current compression method is . Then it flushes the underlying output stream. The entries for the archive. Used to track the CRC of data added to entries. The current entry being added. Used to track the size of data for an entry during writing. Offset to be recorded for each entry in the central header. Comment for the entire archive recorded in the central header. Flag indicating that header patching is required for the current entry. Position to patch CRC. Position to patch size. The password to use when encrypting archive entries. This static class contains functions for encoding and decoding zip file strings. Code page backing field. The original Zip specification (https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT) states that file names should only be encoded with IBM Code Page 437 or UTF-8. In practice, most zip apps use OEM or system encoding (typically cp437 on Windows). Let's be good citizens and default to UTF-8 (http://utf8everywhere.org/). Automatically select the codepage while opening an archive; see https://github.com/icsharpcode/SharpZipLib/pull/280#issuecomment-433608324. Encoding used for string conversion. Setting this to 65001 (UTF-8) will also set the Language encoding flag to indicate UTF-8 encoded file names. Attempt to get the operating system default codepage, or, failing that, the fallback code page IBM 437. Get whether the default codepage is set to UTF-8. Setting this property to false will set the to . Get the OEM codepage from NetFX, which parses the NLP file with the culture info table, etc. But sometimes it yields the special value of 1, which is nicknamed CodePageNoOEM in sources (it might also mean CP_OEMCP, but Encoding treats it so). This was observed on Ukrainian and Hindu systems. Given this value, throws an . So replace it with (IBM 437, which is the default code page in a default Windows installation's console). Convert a portion of a byte array to a string using . Data to convert to string. Number of bytes to convert, starting from index 0. data[0]..data[count - 1] converted to a string. Convert a byte array to a string using . Byte array to convert. data converted to a string. Convert a byte array to a string using . The applicable general purpose bits flags. Byte array to convert. The number of bytes to convert. data converted to a string. Convert a byte array to a string using . Byte array to convert. The applicable general purpose bits flags. data converted to a string. Convert a string to a byte array using . String to convert to an array. Converted array. Convert a string to a byte array using . The applicable general purpose bits flags. String to convert to an array. Converted array.
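The sketch below is a rough, hedged example of the string conversion helpers described above. The member names (CodePage, ConvertToArray, ConvertToString) follow the descriptions in this section, but newer releases may deprecate or relocate them, so verify them against the installed library version before relying on this code.

using System;
using ICSharpCode.SharpZipLib.Zip;

class ZipStringsExample
{
    public static void Main()
    {
        // Selecting 65001 requests UTF-8 and, per the notes above, also causes the
        // language encoding flag to be set for UTF-8 encoded file names.
        ZipStrings.CodePage = 65001;

        byte[] raw = ZipStrings.ConvertToArray("zürich/notes.txt");
        string name = ZipStrings.ConvertToString(raw, raw.Length);

        Console.WriteLine(name);   // expected: "zürich/notes.txt" while UTF-8 is in effect
    }
}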