Java Code Examples for org.apache.hadoop.mapreduce.lib.input.SplitLineReader

The following examples show how to use org.apache.hadoop.mapreduce.lib.input.SplitLineReader. They are extracted from open source projects.
Example 1
Source Project: hadoop   Source File: LineRecordReader.java   License: Apache License 2.0
public LineRecordReader(InputStream in, long offset, long endOffset,
    int maxLineLength, byte[] recordDelimiter) {
  this.maxLineLength = maxLineLength;
  // SplitLineReader splits the stream into records on the custom
  // delimiter bytes instead of the default newline.
  this.in = new SplitLineReader(in, recordDelimiter);
  this.start = offset;
  this.pos = offset;
  this.end = endOffset;
  filePosition = null;
}
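The constructor above delegates all delimiter handling to SplitLineReader. To illustrate the underlying idea, here is a minimal, dependency-free sketch of reading records split on arbitrary delimiter bytes; this is a naive matcher for illustration only, not the actual Hadoop class (which also handles split boundaries and compressed input):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

public class DelimitedRecordSketch {
  // Read all records from the stream, treating the delimiter byte
  // sequence as the record separator (simplified: byte-at-a-time,
  // no buffering, no split-boundary logic).
  static List<String> readRecords(InputStream in, byte[] delimiter)
      throws IOException {
    List<String> records = new ArrayList<>();
    ByteArrayOutputStream current = new ByteArrayOutputStream();
    int matched = 0; // how many delimiter bytes matched so far
    int b;
    while ((b = in.read()) != -1) {
      if ((byte) b == delimiter[matched]) {
        matched++;
        if (matched == delimiter.length) { // full delimiter seen
          records.add(current.toString("UTF-8"));
          current.reset();
          matched = 0;
        }
      } else {
        // flush any partial delimiter match back into the record
        current.write(delimiter, 0, matched);
        matched = 0;
        if ((byte) b == delimiter[0]) {
          matched = 1;
        } else {
          current.write(b);
        }
      }
    }
    current.write(delimiter, 0, matched); // trailing partial match
    if (current.size() > 0) {
      records.add(current.toString("UTF-8"));
    }
    return records;
  }

  public static void main(String[] args) throws IOException {
    byte[] data = "a##b##c".getBytes("UTF-8");
    List<String> recs =
        readRecords(new ByteArrayInputStream(data), "##".getBytes("UTF-8"));
    System.out.println(recs); // [a, b, c]
  }
}
```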
 
Example 2
Source Project: hadoop   Source File: LineRecordReader.java   License: Apache License 2.0
public LineRecordReader(InputStream in, long offset, long endOffset,
                        Configuration job, byte[] recordDelimiter)
    throws IOException {
  // Maximum record length comes from the job configuration,
  // defaulting to Integer.MAX_VALUE when unset.
  this.maxLineLength = job.getInt(org.apache.hadoop.mapreduce.lib.input.
    LineRecordReader.MAX_LINE_LENGTH, Integer.MAX_VALUE);
  this.in = new SplitLineReader(in, job, recordDelimiter);
  this.start = offset;
  this.pos = offset;
  this.end = endOffset;
  filePosition = null;
}
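Unlike Example 1, this constructor pulls the maximum line length from the job Configuration rather than taking it as a parameter, keyed by LineRecordReader.MAX_LINE_LENGTH and defaulting to Integer.MAX_VALUE. A rough sketch of that lookup, using java.util.Properties as a stand-in for Hadoop's Configuration (the key string matches Hadoop's constant; the helper is hypothetical):

```java
import java.util.Properties;

public class MaxLineLengthSketch {
  // Key defined by org.apache.hadoop.mapreduce.lib.input.LineRecordReader.
  static final String MAX_LINE_LENGTH =
      "mapreduce.input.linerecordreader.line.maxlength";

  // Mirrors Configuration.getInt(name, defaultValue): return the parsed
  // property when present, otherwise the supplied default.
  static int getInt(Properties conf, String name, int defaultValue) {
    String v = conf.getProperty(name);
    return (v == null) ? defaultValue : Integer.parseInt(v.trim());
  }

  public static void main(String[] args) {
    Properties job = new Properties();
    // Unset: falls back to the default.
    System.out.println(getInt(job, MAX_LINE_LENGTH, Integer.MAX_VALUE)); // 2147483647
    job.setProperty(MAX_LINE_LENGTH, "4096");
    System.out.println(getInt(job, MAX_LINE_LENGTH, Integer.MAX_VALUE)); // 4096
  }
}
```

Capping the record length this way lets the reader skip pathologically long lines instead of buffering them whole.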
 