Java Code Examples for org.apache.hadoop.fs.FSDataOutputStream#writeByte()

The following examples show how to use org.apache.hadoop.fs.FSDataOutputStream#writeByte(). They are drawn from open source projects; the project and source file are noted above each example.
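Before the project examples, a minimal self-contained sketch of the method's contract may help: writeByte(int) is inherited from java.io.DataOutputStream (the DataOutput interface) and writes only the low-order eight bits of its argument, so each call grows the file by exactly one byte. The class name, target path, and loop count below are illustrative placeholders, not taken from any of the projects listed here.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteByteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Hypothetical target path; resolves against the local file system or HDFS
    // depending on fs.defaultFS in the Configuration.
    Path path = new Path("/tmp/writebyte-demo");

    FSDataOutputStream out = fs.create(path, true);
    try {
      for (int i = 0; i < 16; i++) {
        // Only the low-order byte of the int argument is written.
        out.writeByte('a');
      }
    } finally {
      out.close();
    }
  }
}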
Example 1
Source File: TestAbandonBlock.java    From hadoop with Apache License 2.0
@Test
/** Make sure that the quota is decremented correctly when a block is abandoned */
public void testQuotaUpdatedWhenBlockAbandoned() throws IOException {
  // Setting diskspace quota to 3MB
  fs.setQuota(new Path("/"), HdfsConstants.QUOTA_DONT_SET, 3 * 1024 * 1024);

  // Start writing a file with 2 replicas to ensure each datanode has one.
  // Block Size is 1MB.
  String src = FILE_NAME_PREFIX + "test_quota1";
  FSDataOutputStream fout = fs.create(new Path(src), true, 4096, (short)2, 1024 * 1024);
  for (int i = 0; i < 1024; i++) {
    fout.writeByte(123);
  }

  // Shut down one datanode, causing the block to be abandoned.
  cluster.getDataNodes().get(0).shutdown();

  // Close the file; a new block will be allocated with 2MB pending size.
  try {
    fout.close();
  } catch (QuotaExceededException e) {
    fail("Unexpected quota exception when closing fout");
  }
}
 
Example 2
Source File: TestRenameWithSnapshots.java    From hadoop with Apache License 2.0
/**
 * Similar to testRenameUCFileInSnapshot, but renames first and then
 * appends to the file without closing it. Unit test for HDFS-5425.
 */
@Test
public void testAppendFileAfterRenameInSnapshot() throws Exception {
  final Path test = new Path("/test");
  final Path foo = new Path(test, "foo");
  final Path bar = new Path(foo, "bar");
  DFSTestUtil.createFile(hdfs, bar, BLOCKSIZE, REPL, SEED);
  SnapshotTestHelper.createSnapshot(hdfs, test, "s0");
  // rename bar --> bar2
  final Path bar2 = new Path(foo, "bar2");
  hdfs.rename(bar, bar2);
  // append to the file and keep it under construction.
  FSDataOutputStream out = hdfs.append(bar2);
  out.writeByte(0);
  ((DFSOutputStream) out.getWrappedStream()).hsync(
      EnumSet.of(SyncFlag.UPDATE_LENGTH));

  // save namespace and restart
  restartClusterAndCheckImage(true);
}
 
Example 3
Source File: TestAbandonBlock.java    From big-c with Apache License 2.0
@Test
/** Make sure that the quota is decremented correctly when a block is abandoned */
public void testQuotaUpdatedWhenBlockAbandoned() throws IOException {
  // Setting diskspace quota to 3MB
  fs.setQuota(new Path("/"), HdfsConstants.QUOTA_DONT_SET, 3 * 1024 * 1024);

  // Start writing a file with 2 replicas to ensure each datanode has one.
  // Block Size is 1MB.
  String src = FILE_NAME_PREFIX + "test_quota1";
  FSDataOutputStream fout = fs.create(new Path(src), true, 4096, (short)2, 1024 * 1024);
  for (int i = 0; i < 1024; i++) {
    fout.writeByte(123);
  }

  // Shut down one datanode, causing the block to be abandoned.
  cluster.getDataNodes().get(0).shutdown();

  // Close the file; a new block will be allocated with 2MB pending size.
  try {
    fout.close();
  } catch (QuotaExceededException e) {
    fail("Unexpected quota exception when closing fout");
  }
}
 
Example 4
Source File: TestRenameWithSnapshots.java    From big-c with Apache License 2.0
/**
 * Similar to testRenameUCFileInSnapshot, but renames first and then
 * appends to the file without closing it. Unit test for HDFS-5425.
 */
@Test
public void testAppendFileAfterRenameInSnapshot() throws Exception {
  final Path test = new Path("/test");
  final Path foo = new Path(test, "foo");
  final Path bar = new Path(foo, "bar");
  DFSTestUtil.createFile(hdfs, bar, BLOCKSIZE, REPL, SEED);
  SnapshotTestHelper.createSnapshot(hdfs, test, "s0");
  // rename bar --> bar2
  final Path bar2 = new Path(foo, "bar2");
  hdfs.rename(bar, bar2);
  // append to the file and keep it under construction.
  FSDataOutputStream out = hdfs.append(bar2);
  out.writeByte(0);
  ((DFSOutputStream) out.getWrappedStream()).hsync(
      EnumSet.of(SyncFlag.UPDATE_LENGTH));

  // save namespace and restart
  restartClusterAndCheckImage(true);
}
 
Example 5
Source File: LoadGenerator.java    From RDFS with Apache License 2.0
/** Create a file with a length of <code>fileSize</code>.
 * The file is filled with 'a'.
 */
private void genFile(Path file, long fileSize) throws IOException {
  long startTime = System.currentTimeMillis();
  FSDataOutputStream out = fs.create(file, true, 
      getConf().getInt("io.file.buffer.size", 4096),
      (short)getConf().getInt("dfs.replication", 3),
      fs.getDefaultBlockSize());
  executionTime[CREATE] += (System.currentTimeMillis()-startTime);
  totalNumOfOps[CREATE]++;

  for (long i=0; i<fileSize; i++) {
    out.writeByte('a');
  }
  startTime = System.currentTimeMillis();
  out.close();
  executionTime[WRITE_CLOSE] += (System.currentTimeMillis()-startTime);
  totalNumOfOps[WRITE_CLOSE]++;
}
 
Example 6
Source File: LoadGenerator.java    From RDFS with Apache License 2.0
/** Create a file with a length of <code>fileSize</code>.
 * The file is filled with 'a'.
 */
private void genFile(Path file, long fileSize) throws IOException {
  long startTime = System.currentTimeMillis();
  FSDataOutputStream out = fs.create(file, true,
      getConf().getInt("io.file.buffer.size", 4096),
      (short)getConf().getInt("dfs.replication", 3),
      fs.getDefaultBlockSize());
  executionTime[CREATE] += (System.currentTimeMillis()-startTime);
  totalNumOfOps[CREATE]++;

  for (long i=0; i<fileSize; i++) {
    out.writeByte('a');
  }
  startTime = System.currentTimeMillis();
  out.close();
  executionTime[WRITE_CLOSE] += (System.currentTimeMillis()-startTime);
  totalNumOfOps[WRITE_CLOSE]++;
}
 
Example 7
Source File: LoadGenerator.java    From hadoop-gpu with Apache License 2.0
/** Create a file with a length of <code>fileSize</code>.
 * The file is filled with 'a'.
 */
private void genFile(Path file, long fileSize) throws IOException {
  long startTime = System.currentTimeMillis();
  FSDataOutputStream out = fs.create(file, true, 
      getConf().getInt("io.file.buffer.size", 4096),
      (short)getConf().getInt("dfs.replication", 3),
      fs.getDefaultBlockSize());
  executionTime[CREATE] += (System.currentTimeMillis()-startTime);
  totalNumOfOps[CREATE]++;

  for (long i=0; i<fileSize; i++) {
    out.writeByte('a');
  }
  startTime = System.currentTimeMillis();
  out.close();
  executionTime[WRITE_CLOSE] += (System.currentTimeMillis()-startTime);
  totalNumOfOps[WRITE_CLOSE]++;
}
 
Example 8
Source File: DataGenerator.java    From hadoop with Apache License 2.0
/** Create a file with the name <code>file</code> and 
 * a length of <code>fileSize</code>. The file is filled with character 'a'.
 */
private void genFile(Path file, long fileSize) throws IOException {
  FSDataOutputStream out = fc.create(file,
      EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE),
      CreateOpts.createParent(), CreateOpts.bufferSize(4096),
      CreateOpts.repFac((short) 3));
  for(long i=0; i<fileSize; i++) {
    out.writeByte('a');
  }
  out.close();
}
 
Example 9
Source File: DataGenerator.java    From big-c with Apache License 2.0
/** Create a file with the name <code>file</code> and 
 * a length of <code>fileSize</code>. The file is filled with character 'a'.
 */
private void genFile(Path file, long fileSize) throws IOException {
  FSDataOutputStream out = fc.create(file,
      EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE),
      CreateOpts.createParent(), CreateOpts.bufferSize(4096),
      CreateOpts.repFac((short) 3));
  for(long i=0; i<fileSize; i++) {
    out.writeByte('a');
  }
  out.close();
}
 
Example 10
Source File: TestUtils.java    From succinct with Apache License 2.0
public static FSDataInputStream getStream(ByteBuffer buf) throws IOException {
  // Copy the buffer's contents into a temporary file, one byte at a time.
  File tmpDir = Files.createTempDir();
  Path filePath = new Path(tmpDir.getAbsolutePath() + "/testOut");
  FileSystem fs = FileSystem.get(filePath.toUri(), new Configuration());
  FSDataOutputStream fOut = fs.create(filePath);
  buf.rewind();
  while (buf.hasRemaining()) {
    // writeByte(int) stores the low-order byte of each value read from the buffer.
    fOut.writeByte(buf.get());
  }
  fOut.close();
  // Rewind the buffer for the caller and reopen the file for reading.
  buf.rewind();
  return fs.open(filePath);
}
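Example 10 writes a ByteBuffer to a file one byte at a time and reopens it for reading. As a companion, here is a hypothetical helper (the name readBack and its use of a fresh Configuration are placeholders, not part of the original TestUtils class) sketching the read side: FSDataInputStream.readByte() returns the signed low-order byte that writeByte(int) stored.

private static byte[] readBack(Path filePath, int length) throws IOException {
  FileSystem fs = FileSystem.get(filePath.toUri(), new Configuration());
  FSDataInputStream in = fs.open(filePath);
  byte[] result = new byte[length];
  try {
    for (int i = 0; i < length; i++) {
      // readByte() returns the next byte as a signed value; it matches the
      // low-order byte passed to writeByte(int) when the file was written.
      result[i] = in.readByte();
    }
  } finally {
    in.close();
  }
  return result;
}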
 
Example 11
Source File: DataGenerator.java    From RDFS with Apache License 2.0
/** Create a file with the name <code>file</code> and 
 * a length of <code>fileSize</code>. The file is filled with character 'a'.
 */
private void genFile(Path file, long fileSize) throws IOException {
  FSDataOutputStream out = fs.create(file, true, 
      getConf().getInt("io.file.buffer.size", 4096),
      (short)getConf().getInt("dfs.replication", 3),
      fs.getDefaultBlockSize());
  for(long i=0; i<fileSize; i++) {
    out.writeByte('a');
  }
  out.close();
}
 
Example 12
Source File: DataGenerator.java    From RDFS with Apache License 2.0
/**
 * Create a file with the name <code>file</code> and a length of
 * <code>fileSize</code>. The file is filled with character 'a'.
 */
@SuppressWarnings("unused")
private void genFile(Path file, long fileSize) throws IOException {
  FSDataOutputStream out = fs.create(file, true,
      getConf().getInt("io.file.buffer.size", 4096),
      (short) getConf().getInt("dfs.replication", 3),
      fs.getDefaultBlockSize());
  for (long i = 0; i < fileSize; i++) {
    out.writeByte('a');
  }
  out.close();
}
 
Example 13
Source File: DataGenerator.java    From hadoop-gpu with Apache License 2.0
/** Create a file with the name <code>file</code> and 
 * a length of <code>fileSize</code>. The file is filled with character 'a'.
 */
private void genFile(Path file, long fileSize) throws IOException {
  FSDataOutputStream out = fs.create(file, true, 
      getConf().getInt("io.file.buffer.size", 4096),
      (short)getConf().getInt("dfs.replication", 3),
      fs.getDefaultBlockSize());
  for(long i=0; i<fileSize; i++) {
    out.writeByte('a');
  }
  out.close();
}