org.apache.hadoop.io.compress.DecompressorStream Java Examples

The following examples show how to use org.apache.hadoop.io.compress.DecompressorStream.
Example #1
Source File: LobFile.java    From aliyun-maxcompute-data-collectors with Apache License 2.0
/** {@inheritDoc} */
@Override
public InputStream readBlobRecord() throws IOException {
  if (!isRecordAvailable()) {
    // we're not currently aligned on a record-start.
    // Try to get the next one.
    if (!next()) {
      // No more records available.
      throw new EOFException("End of file reached.");
    }
  }

  // Ensure any previously-open user record stream is closed.
  closeUserStream();

  // Mark this record as consumed.
  this.isAligned = false;

  // The length of the stream we can return to the user is
  // the indexRecordLen minus the length of any per-record headers.
  // That includes the RecordStartMark, the entryId, and the claimedLen.
  long streamLen = this.indexRecordLen - RecordStartMark.START_MARK_LENGTH
      - WritableUtils.getVIntSize(this.curEntryId)
      - WritableUtils.getVIntSize(this.claimedRecordLen);
  LOG.debug("Yielding stream to user with length " + streamLen);
  this.userInputStream = new FixedLengthInputStream(this.dataIn, streamLen);
  if (this.codec != null) {
    // The user needs to decompress the data; wrap the InputStream.
    decompressor.reset();
    this.userInputStream = new DecompressorStream(
        this.userInputStream, decompressor);
  }
  return this.userInputStream;
}
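The Hadoop types above (DecompressorStream, CompressionCodec) need a Hadoop classpath, but the reset-then-wrap pattern the example uses is the same one the JDK exposes through java.util.zip. The sketch below is an analogy, not Sqoop's code: InflaterInputStream stands in for DecompressorStream, and a reused Inflater is reset before each record just as the example calls decompressor.reset() before wrapping the stream. The class and method names are illustrative.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.Inflater;
import java.util.zip.InflaterInputStream;

public class DecompressExample {

    // A single Inflater reused across records, like the pooled
    // decompressor in LobFile.readBlobRecord().
    private static final Inflater INFLATER = new Inflater();

    /** Compress the input with DEFLATE, then read it back through a
     *  decompressing stream wrapper. */
    static byte[] roundTrip(byte[] original) throws IOException {
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (DeflaterOutputStream out = new DeflaterOutputStream(compressed)) {
            out.write(original);
        }

        // Reset the shared decompressor before wrapping, mirroring
        // decompressor.reset() in the example above.
        INFLATER.reset();
        ByteArrayOutputStream result = new ByteArrayOutputStream();
        try (InflaterInputStream in = new InflaterInputStream(
                new ByteArrayInputStream(compressed.toByteArray()), INFLATER)) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                result.write(buf, 0, n);
            }
        }
        return result.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "hello decompressor".getBytes(StandardCharsets.UTF_8);
        byte[] restored = roundTrip(data);
        System.out.println(new String(restored, StandardCharsets.UTF_8));
    }
}
```

In the Hadoop version, the codec (not the caller) chooses the concrete Decompressor implementation, which is why the example only wraps the stream when `this.codec != null`.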