org.apache.hadoop.hdfs.util.DataTransferThrottler Java Examples

The following examples show how to use org.apache.hadoop.hdfs.util.DataTransferThrottler. Each example is drawn from an open-source project; the source file, project, and license are noted above each listing.
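At its core, DataTransferThrottler caps the rate of a byte-oriented transfer: construct it with a bandwidth limit in bytes per second, then call throttle(numOfBytes) after each chunk you move, and the call blocks just long enough to keep the overall rate at or below the limit. Before the project examples, here is a minimal self-contained sketch of that pattern; the 1 MB/s limit, buffer size, and class/method names are illustrative, not taken from any project below.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.hadoop.hdfs.util.DataTransferThrottler;

public class ThrottledCopy {
  /** Copies in to out while keeping the transfer rate at or below ~1 MB/s. */
  static void copyThrottled(InputStream in, OutputStream out)
      throws IOException {
    DataTransferThrottler throttler =
        new DataTransferThrottler(1024 * 1024L); // bandwidth in bytes/sec
    byte[] buf = new byte[4096];
    int num;
    while ((num = in.read(buf)) > 0) {
      out.write(buf, 0, num);
      throttler.throttle(num); // may sleep to enforce the configured rate
    }
  }
}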
Example #1
Source File: KeyValueContainerCheck.java    From hadoop-ozone with Apache License 2.0
/**
 * Full checks comprise scanning all metadata inside the container,
 * including the KV database. These checks are intrusive, consume more
 * resources than fast checks, and should only be done on Closed or
 * Quasi-closed containers, where concurrency is limited to delete
 * workflows.
 * <p>
 * fullCheck is a superset of fastCheck.
 *
 * @return true if the integrity checks pass, false otherwise.
 */
public boolean fullCheck(DataTransferThrottler throttler, Canceler canceler) {
  boolean valid;

  try {
    valid = fastCheck();
    if (valid) {
      scanData(throttler, canceler);
    }
  } catch (IOException e) {
    handleCorruption(e);
    valid = false;
  }

  return valid;
}
 
Example #2
Source File: BlockXCodingMerger.java    From RDFS with Apache License 2.0
public BlockXCodingMerger(Block block, int namespaceId,
		DataInputStream[] childInputStreams, long offsetInBlock,
		long length, String[] childAddrs, String myAddr,
		DataTransferThrottler throttler,
		int mergerLevel) throws IOException {
	super();
	this.block = block;
	this.namespaceId = namespaceId;
	this.childInputStreams = childInputStreams;
	this.offsetInBlock = offsetInBlock;
	this.length = length;
	this.childAddrs = childAddrs;
	this.myAddr = myAddr;
	this.throttler = throttler;
	this.mergerLevel = mergerLevel;
	Configuration conf = new Configuration();
	this.packetSize = conf.getInt("raid.blockreconstruct.packetsize", 4096);
	this.bytesPerChecksum = conf.getInt("io.bytes.per.checksum", 512);
	this.checksum = DataChecksum.newDataChecksum(DataChecksum.CHECKSUM_CRC32,
			bytesPerChecksum, new PureJavaCrc32());
	this.checksumSize = checksum.getChecksumSize();
}
 
Example #3
Source File: TestBlockReplacement.java    From RDFS with Apache License 2.0
public void testThrottler() throws IOException {
  Configuration conf = new Configuration();
  FileSystem.setDefaultUri(conf, "hdfs://localhost:0");
  long bandwidthPerSec = 1024*1024L;
  final long TOTAL_BYTES = 6 * bandwidthPerSec;
  long bytesToSend = TOTAL_BYTES;
  long start = Util.now();
  DataTransferThrottler throttler = new DataTransferThrottler(bandwidthPerSec);
  long bytesSent = 1024*512L; // 0.5MB
  throttler.throttle(bytesSent);
  bytesToSend -= bytesSent;
  bytesSent = 1024*768L; // 0.75MB
  throttler.throttle(bytesSent);
  bytesToSend -= bytesSent;
  try {
    Thread.sleep(1000);
  } catch (InterruptedException ignored) {}
  throttler.throttle(bytesToSend);
  long end = Util.now();
  // All TOTAL_BYTES have passed through the throttler, so the effective
  // rate must not exceed the configured bandwidth.
  assertTrue(TOTAL_BYTES * 1000 / (end - start) <= bandwidthPerSec);
}
 
Example #4
Source File: TestBlockReplacement.java    From big-c with Apache License 2.0
@Test
public void testThrottler() throws IOException {
  Configuration conf = new HdfsConfiguration();
  FileSystem.setDefaultUri(conf, "hdfs://localhost:0");
  long bandwidthPerSec = 1024*1024L;
  final long TOTAL_BYTES = 6 * bandwidthPerSec;
  long bytesToSend = TOTAL_BYTES;
  long start = Time.monotonicNow();
  DataTransferThrottler throttler = new DataTransferThrottler(bandwidthPerSec);
  long bytesSent = 1024*512L; // 0.5MB
  throttler.throttle(bytesSent);
  bytesToSend -= bytesSent;
  bytesSent = 1024*768L; // 0.75MB
  throttler.throttle(bytesSent);
  bytesToSend -= bytesSent;
  try {
    Thread.sleep(1000);
  } catch (InterruptedException ignored) {}
  throttler.throttle(bytesToSend);
  long end = Time.monotonicNow();
  // All TOTAL_BYTES have passed through the throttler, so the effective
  // rate must not exceed the configured bandwidth.
  assertTrue(TOTAL_BYTES * 1000 / (end - start) <= bandwidthPerSec);
}
 
Example #5
Source File: TransferFsImage.java    From big-c with Apache License 2.0
static MD5Hash handleUploadImageRequest(HttpServletRequest request,
    long imageTxId, Storage dstStorage, InputStream stream,
    long advertisedSize, DataTransferThrottler throttler) throws IOException {

  String fileName = NNStorage.getCheckpointImageFileName(imageTxId);

  List<File> dstFiles = dstStorage.getFiles(NameNodeDirType.IMAGE, fileName);
  if (dstFiles.isEmpty()) {
    throw new IOException("No targets in destination storage!");
  }

  MD5Hash advertisedDigest = parseMD5Header(request);
  MD5Hash hash = receiveFile(fileName, dstFiles, dstStorage, true,
      advertisedSize, advertisedDigest, fileName, stream, throttler);
  LOG.info("Downloaded file " + dstFiles.get(0).getName() + " size "
      + dstFiles.get(0).length() + " bytes.");
  return hash;
}
 
Example #6
Source File: TestBlockReplacement.java    From hadoop with Apache License 2.0
@Test
public void testThrottler() throws IOException {
  Configuration conf = new HdfsConfiguration();
  FileSystem.setDefaultUri(conf, "hdfs://localhost:0");
  long bandwidthPerSec = 1024*1024L;
  final long TOTAL_BYTES = 6 * bandwidthPerSec;
  long bytesToSend = TOTAL_BYTES;
  long start = Time.monotonicNow();
  DataTransferThrottler throttler = new DataTransferThrottler(bandwidthPerSec);
  long bytesSent = 1024*512L; // 0.5MB
  throttler.throttle(bytesSent);
  bytesToSend -= bytesSent;
  bytesSent = 1024*768L; // 0.75MB
  throttler.throttle(bytesSent);
  bytesToSend -= bytesSent;
  try {
    Thread.sleep(1000);
  } catch (InterruptedException ignored) {}
  throttler.throttle(bytesToSend);
  long end = Time.monotonicNow();
  // All TOTAL_BYTES have passed through the throttler, so the effective
  // rate must not exceed the configured bandwidth.
  assertTrue(TOTAL_BYTES * 1000 / (end - start) <= bandwidthPerSec);
}
 
Example #7
Source File: TransferFsImage.java    From hadoop with Apache License 2.0
static MD5Hash handleUploadImageRequest(HttpServletRequest request,
    long imageTxId, Storage dstStorage, InputStream stream,
    long advertisedSize, DataTransferThrottler throttler) throws IOException {

  String fileName = NNStorage.getCheckpointImageFileName(imageTxId);

  List<File> dstFiles = dstStorage.getFiles(NameNodeDirType.IMAGE, fileName);
  if (dstFiles.isEmpty()) {
    throw new IOException("No targets in destination storage!");
  }

  MD5Hash advertisedDigest = parseMD5Header(request);
  MD5Hash hash = receiveFile(fileName, dstFiles, dstStorage, true,
      advertisedSize, advertisedDigest, fileName, stream, throttler);
  LOG.info("Downloaded file " + dstFiles.get(0).getName() + " size "
      + dstFiles.get(0).length() + " bytes.");
  return hash;
}
 
Example #8
Source File: OMDBCheckpointServlet.java    From hadoop-ozone with Apache License 2.0
@Override
public void init() throws ServletException {

  om = (OzoneManager) getServletContext()
      .getAttribute(OzoneConsts.OM_CONTEXT_ATTRIBUTE);

  if (om == null) {
    LOG.error("Unable to initialize OMDBCheckpointServlet. OM is null");
    return;
  }

  omDbStore = om.getMetadataManager().getStore();
  omMetrics = om.getMetrics();

  OzoneConfiguration configuration = om.getConfiguration();
  long transferBandwidth = configuration.getLongBytes(
      OMConfigKeys.OZONE_DB_CHECKPOINT_TRANSFER_RATE_KEY,
      OMConfigKeys.OZONE_DB_CHECKPOINT_TRANSFER_RATE_DEFAULT);

  if (transferBandwidth > 0) {
    throttler = new DataTransferThrottler(transferBandwidth);
  }
}
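
Note the pattern: when the configured bandwidth is zero or negative, the throttler field is simply left null, and the transfer code null-checks it before throttling (FSImage in Example #14 and the copy loops in Examples #18, #22, and #25 below use the same guard). A hypothetical sketch of the serving side, with the method, stream, and buffer names assumed rather than taken from the Ozone source:

// Hypothetical sketch -- not the actual OMDBCheckpointServlet code.
// 'throttler' is the field initialized in init() above; it stays null
// when OZONE_DB_CHECKPOINT_TRANSFER_RATE_KEY is unset or <= 0.
private void streamCheckpoint(InputStream checkpoint, OutputStream out)
    throws IOException {
  byte[] buf = new byte[64 * 1024];
  int num;
  while ((num = checkpoint.read(buf)) > 0) {
    out.write(buf, 0, num);
    if (throttler != null) {
      throttler.throttle(num); // skipped entirely when throttling is disabled
    }
  }
}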
 
Example #9
Source File: TestKeyValueContainerCheck.java    From hadoop-ozone with Apache License 2.0
/**
 * Sanity test when no corruption is induced.
 */
@Test
public void testKeyValueContainerCheckNoCorruption() throws Exception {
  long containerID = 101;
  int deletedBlocks = 1;
  int normalBlocks = 3;
  int chunksPerBlock = 4;
  ContainerScrubberConfiguration c = conf.getObject(
      ContainerScrubberConfiguration.class);

  // test Closed Container
  createContainerWithBlocks(containerID, normalBlocks, deletedBlocks,
      chunksPerBlock);

  KeyValueContainerCheck kvCheck =
      new KeyValueContainerCheck(containerData.getMetadataPath(), conf,
          containerID);

  // first run checks on an Open Container
  boolean valid = kvCheck.fastCheck();
  assertTrue(valid);

  container.close();

  // next run checks on a Closed Container
  valid = kvCheck.fullCheck(new DataTransferThrottler(
      c.getBandwidthPerVolume()), null);
  assertTrue(valid);
}
 
Example #10
Source File: BlockXCodingMerger.java    From RDFS with Apache License 2.0
public BufferBlockXCodingMerger(Block block, int namespaceId,
		DataInputStream[] childInputStreams, long offsetInBlock,
		long length, String[] childAddrs, String myAddr,
		DataTransferThrottler throttler,int mergerLevel,
		byte[] buffer, int offsetInBuffer) throws IOException {
	super(block, namespaceId, childInputStreams, offsetInBlock, length,
			childAddrs, myAddr, throttler, mergerLevel);
	this.buffer = buffer;
	this.offsetInBuffer = offsetInBuffer;
	this.currentOffsetInBlock = offsetInBlock;
}
 
Example #11
Source File: BlockXCodingMerger.java    From RDFS with Apache License 2.0
public InternalBlockXCodingMerger(Block block, int namespaceId,
		DataInputStream[] childInputStreams, long offsetInBlock,
		long length, String[] childAddrs, String myAddr,
		DataTransferThrottler throttler,
		int mergerLevel, String parentAddr,
		DataOutputStream parentOut) throws IOException {
	super(block, namespaceId, childInputStreams, offsetInBlock, length,
			childAddrs, myAddr, throttler,
			mergerLevel);
	this.parentAddr = parentAddr;
	this.parentOut = parentOut;
}
 
Example #12
Source File: KeyValueContainer.java    From hadoop-ozone with Apache License 2.0
public boolean scanData(DataTransferThrottler throttler, Canceler canceler) {
  if (!shouldScanData()) {
    throw new IllegalStateException("The checksum verification can not be" +
        " done for container in state "
        + containerData.getState());
  }

  long containerId = containerData.getContainerID();
  KeyValueContainerCheck checker =
      new KeyValueContainerCheck(containerData.getMetadataPath(), config,
          containerId);

  return checker.fullCheck(throttler, canceler);
}
 
Example #13
Source File: TestContainerScrubberMetrics.java    From hadoop-ozone with Apache License 2.0
private void setupMockContainer(
    Container<ContainerData> c, boolean shouldScanData,
    boolean scanMetaDataSuccess, boolean scanDataSuccess) {
  ContainerData data = mock(ContainerData.class);
  when(data.getContainerID()).thenReturn(containerIdSeq.getAndIncrement());
  when(c.getContainerData()).thenReturn(data);
  when(c.shouldScanData()).thenReturn(shouldScanData);
  when(c.scanMetaData()).thenReturn(scanMetaDataSuccess);
  when(c.scanData(any(DataTransferThrottler.class), any(Canceler.class)))
      .thenReturn(scanDataSuccess);
}
 
Example #14
Source File: FSImage.java    From RDFS with Apache License 2.0
/**
 * Constructor
 * @param conf Configuration
 */
FSImage(Configuration conf) throws IOException {
  this();
  setCheckpointDirectories(FSImage.getCheckpointDirs(conf, null),
      FSImage.getCheckpointEditsDirs(conf, null));
  long transferBandwidth = conf.getLong(
      HdfsConstants.DFS_IMAGE_TRANSFER_RATE_KEY,
      HdfsConstants.DFS_IMAGE_TRANSFER_RATE_DEFAULT);

  if (transferBandwidth > 0) {
    this.imageTransferThrottler = new DataTransferThrottler(transferBandwidth);
  }
}
 
Example #15
Source File: BlockSender.java    From big-c with Apache License 2.0
private long doSendBlock(DataOutputStream out, OutputStream baseStream,
      DataTransferThrottler throttler) throws IOException {
  if (out == null) {
    throw new IOException( "out stream is null" );
  }
  initialOffset = offset;
  long totalRead = 0;
  OutputStream streamForSendChunks = out;
  
  lastCacheDropOffset = initialOffset;

  if (isLongRead() && blockInFd != null) {
    // Advise that this file descriptor will be accessed sequentially.
    NativeIO.POSIX.getCacheManipulator().posixFadviseIfPossible(
        block.getBlockName(), blockInFd, 0, 0,
        NativeIO.POSIX.POSIX_FADV_SEQUENTIAL);
  }
  
  // Trigger readahead of beginning of file if configured.
  manageOsCache();

  final long startTime = ClientTraceLog.isDebugEnabled() ? System.nanoTime() : 0;
  try {
    int maxChunksPerPacket;
    int pktBufSize = PacketHeader.PKT_MAX_HEADER_LEN;
    boolean transferTo = transferToAllowed && !verifyChecksum
        && baseStream instanceof SocketOutputStream
        && blockIn instanceof FileInputStream;
    if (transferTo) {
      FileChannel fileChannel = ((FileInputStream)blockIn).getChannel();
      blockInPosition = fileChannel.position();
      streamForSendChunks = baseStream;
      maxChunksPerPacket = numberOfChunks(TRANSFERTO_BUFFER_SIZE);
      
      // Smaller packet size to only hold checksum when doing transferTo
      pktBufSize += checksumSize * maxChunksPerPacket;
    } else {
      maxChunksPerPacket = Math.max(1,
          numberOfChunks(HdfsConstants.IO_FILE_BUFFER_SIZE));
      // Packet size includes both checksum and data
      pktBufSize += (chunkSize + checksumSize) * maxChunksPerPacket;
    }

    ByteBuffer pktBuf = ByteBuffer.allocate(pktBufSize);

    while (endOffset > offset && !Thread.currentThread().isInterrupted()) {
      manageOsCache();
      long len = sendPacket(pktBuf, maxChunksPerPacket, streamForSendChunks,
          transferTo, throttler);
      offset += len;
      totalRead += len + (numberOfChunks(len) * checksumSize);
      seqno++;
    }
    // If this thread was interrupted, then it did not send the full block.
    if (!Thread.currentThread().isInterrupted()) {
      try {
        // send an empty packet to mark the end of the block
        sendPacket(pktBuf, maxChunksPerPacket, streamForSendChunks, transferTo,
            throttler);
        out.flush();
      } catch (IOException e) { //socket error
        throw ioeToSocketException(e);
      }

      sentEntireByteRange = true;
    }
  } finally {
    if ((clientTraceFmt != null) && ClientTraceLog.isDebugEnabled()) {
      final long endTime = System.nanoTime();
      ClientTraceLog.debug(String.format(clientTraceFmt, totalRead,
          initialOffset, endTime - startTime));
    }
    close();
  }
  return totalRead;
}
 
Example #16
Source File: BlockXCodingMerger.java    From RDFS with Apache License 2.0
public RootBlockXCodingMerger(Block block, int namespaceId,
		DataInputStream[] childInputStreams, long offsetInBlock,
		long length, boolean isRecovery, String[] childAddrs, String myAddr,
		DataTransferThrottler throttler, DataNode datanode, int mergerLevel) 
				throws IOException {
	super(block, namespaceId, childInputStreams, offsetInBlock, length,
			childAddrs, myAddr, throttler, mergerLevel);
	this.datanode = datanode;
	this.isRecovery = isRecovery;
	try {
		// Open local disk out
		streams = datanode.data.writeToBlock(namespaceId, this.block,
				this.isRecovery, false);
		replicaBeingWritten = datanode.data.getReplicaBeingWritten(
				namespaceId, this.block);
		this.finalized = false;
		if (streams != null) {
			this.out = streams.dataOut;
			this.cout = streams.checksumOut;
			this.checksumOut = new DataOutputStream(
					new BufferedOutputStream(streams.checksumOut,
							SMALL_BUFFER_SIZE));
			// If this block is for appends, then remove it from
			// periodic validation.
			if (datanode.blockScanner != null && isRecovery) {
				datanode.blockScanner.deleteBlock(namespaceId, block);
			}
		}
	} catch (BlockAlreadyExistsException bae) {
		throw bae;
	} catch (IOException ioe) {
		IOUtils.closeStream(this);
		cleanupBlock();

		// check if there is a disk error
		IOException cause = FSDataset.getCauseIfDiskError(ioe);
		LOG.warn("NTar:IOException in RootBlockXCodingMerger constructor. "
				+ "Cause is " + cause);
		
		if (cause != null) { // possible disk error
			ioe = cause;
			datanode.checkDiskError(ioe); // may throw an exception here
		}
		throw ioe;
	}
}
 
Example #17
Source File: DataBlockScanner.java    From RDFS with Apache License 2.0
void init() throws IOException {
  // get the list of blocks and arrange them in random order
  Block arr[] = dataset.getBlockReport(namespaceId);
  Collections.shuffle(Arrays.asList(arr));
  
  blockInfoSet = new LightWeightLinkedSet<BlockScanInfo>();
  blockMap = new HashMap<Block, BlockScanInfo>();
  
  long scanTime = -1;
  for (Block block : arr) {
    BlockScanInfo info = new BlockScanInfo( block );
    info.lastScanTime = scanTime--; 
    // still keep 'info.lastScanType' as NONE.
    addBlockInfo(info);
  }

  /* Pick the first directory that has an existing scanner log;
   * otherwise, pick the first directory.
   */
  File dir = null;
  FSDataset.FSVolume[] volumes = dataset.volumes.getVolumes();
  for(FSDataset.FSVolume vol : volumes) { 
    File nsDir = vol.getNamespaceSlice(namespaceId).getDirectory();
    if (LogFileHandler.isFilePresent(nsDir, verificationLogFile)) {
      dir = nsDir;
      break;
    }
  }
  if (dir == null) {
    dir = volumes[0].getNamespaceSlice(namespaceId).getDirectory();
  }
  
  try {
    // max lines will be updated later during initialization.
    verificationLog = new LogFileHandler(dir, verificationLogFile, 100);
  } catch (IOException e) {
    LOG.warn("Could not open verfication log. " +
             "Verification times are not stored.");
  }
  
  synchronized (this) {
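    // Two-argument form: (periodMs, bytesPerSec) -- the bandwidth budget
    // is enforced over 200 ms windows here; the one-argument constructor
    // used in the other examples leaves the period at its default.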
    throttler = new DataTransferThrottler(200, MAX_SCAN_RATE);
  }
}
 
Example #18
Source File: TransferFsImage.java    From RDFS with Apache License 2.0
/**
 * A server-side method to respond to a getfile HTTP request.
 * Copies the contents of the local file into the output stream.
 */
static void getFileServer(OutputStream outstream, File localfile, DataTransferThrottler throttler) 
  throws IOException {
  byte buf[] = new byte[BUFFER_SIZE];
  FileInputStream infile = null;
  long totalReads = 0, totalSends = 0;
  try {
    infile = new FileInputStream(localfile);
    if (ErrorSimulator.getErrorSimulation(2)
        && localfile.getAbsolutePath().contains("secondary")) {
      // throw exception only when the secondary sends its image
      throw new IOException("If this exception is not caught by the " +
          "name-node fs image will be truncated.");
    }
    
    if (ErrorSimulator.getErrorSimulation(3)
        && localfile.getAbsolutePath().contains("fsimage")) {
        // Test sending image shorter than localfile
        long len = localfile.length();
        buf = new byte[(int)Math.min(len/2, BUFFER_SIZE)];
        // This will read at most half of the image
        // and the rest of the image will be sent over the wire
        infile.read(buf);
    }
    int num = 1;
    while (num > 0) {
      long startRead = System.currentTimeMillis();
      num = infile.read(buf);
      if (num <= 0) {
        break;
      }
      outstream.write(buf, 0, num);
      if (throttler != null) {
        throttler.throttle(num);
      }
    }
  } finally {
    if (infile != null) {
      infile.close();
    }
  }
}
 
Example #19
Source File: KeyValueContainerCheck.java    From hadoop-ozone with Apache License 2.0
private void scanData(DataTransferThrottler throttler, Canceler canceler)
    throws IOException {
  /*
   * Check the integrity of the DB inside each container.
   * 1. iterate over each key (Block) and locate the chunks for the block
   * 2. garbage detection (TBD): chunks which exist in the filesystem,
   *    but not in the DB. This function will be implemented in HDDS-1202
   * 3. chunk checksum verification.
   */
  Preconditions.checkState(onDiskContainerData != null,
      "invoke loadContainerData prior to calling this function");

  File metaDir = new File(metadataPath);
  File dbFile = KeyValueContainerLocationUtil
      .getContainerDBFile(metaDir, containerID);

  if (!dbFile.exists() || !dbFile.canRead()) {
    String dbFileErrorMsg = "Unable to access DB File [" + dbFile.toString()
        + "] for Container [" + containerID + "] metadata path ["
        + metadataPath + "]";
    throw new IOException(dbFileErrorMsg);
  }

  onDiskContainerData.setDbFile(dbFile);

  ChunkLayOutVersion layout = onDiskContainerData.getLayOutVersion();

  try(ReferenceCountedDB db =
          BlockUtils.getDB(onDiskContainerData, checkConfig);
      KeyValueBlockIterator kvIter = new KeyValueBlockIterator(containerID,
          new File(onDiskContainerData.getContainerPath()))) {

    while(kvIter.hasNext()) {
      BlockData block = kvIter.nextBlock();
      for(ContainerProtos.ChunkInfo chunk : block.getChunks()) {
        File chunkFile = layout.getChunkFile(onDiskContainerData,
            block.getBlockID(), ChunkInfo.getFromProtoBuf(chunk));

        if (!chunkFile.exists()) {
          // concurrent mutation in Block DB? lookup the block again.
          byte[] bdata = db.getStore().get(
              Longs.toByteArray(block.getBlockID().getLocalID()));
          if (bdata != null) {
            throw new IOException("Missing chunk file "
                + chunkFile.getAbsolutePath());
          }
        } else if (chunk.getChecksumData().getType()
            != ContainerProtos.ChecksumType.NONE) {
          verifyChecksum(block, chunk, chunkFile, layout, throttler,
              canceler);
        }
      }
    }
  }
}
 
Example #20
Source File: KeyValueContainerCheck.java    From hadoop-ozone with Apache License 2.0
private static void verifyChecksum(BlockData block,
    ContainerProtos.ChunkInfo chunk, File chunkFile,
    ChunkLayOutVersion layout,
    DataTransferThrottler throttler, Canceler canceler) throws IOException {
  ChecksumData checksumData =
      ChecksumData.getFromProtoBuf(chunk.getChecksumData());
  int checksumCount = checksumData.getChecksums().size();
  int bytesPerChecksum = checksumData.getBytesPerChecksum();
  Checksum cal = new Checksum(checksumData.getChecksumType(),
      bytesPerChecksum);
  ByteBuffer buffer = ByteBuffer.allocate(bytesPerChecksum);
  long bytesRead = 0;
  try (FileChannel channel = FileChannel.open(chunkFile.toPath(),
      ChunkUtils.READ_OPTIONS, ChunkUtils.NO_ATTRIBUTES)) {
    if (layout == ChunkLayOutVersion.FILE_PER_BLOCK) {
      channel.position(chunk.getOffset());
    }
    for (int i = 0; i < checksumCount; i++) {
      // limit last read for FILE_PER_BLOCK, to avoid reading next chunk
      if (layout == ChunkLayOutVersion.FILE_PER_BLOCK &&
          i == checksumCount - 1 &&
          chunk.getLen() % bytesPerChecksum != 0) {
        buffer.limit((int) (chunk.getLen() % bytesPerChecksum));
      }

      int v = channel.read(buffer);
      if (v == -1) {
        break;
      }
      bytesRead += v;
      buffer.flip();

      throttler.throttle(v, canceler);

      ByteString expected = checksumData.getChecksums().get(i);
      ByteString actual = cal.computeChecksum(buffer)
          .getChecksums().get(0);
      if (!expected.equals(actual)) {
        throw new OzoneChecksumException(String
            .format("Inconsistent read for chunk=%s" +
                " checksum item %d" +
                " expected checksum %s" +
                " actual checksum %s" +
                " for block %s",
                ChunkInfo.getFromProtoBuf(chunk),
                i,
                Arrays.toString(expected.toByteArray()),
                Arrays.toString(actual.toByteArray()),
                block.getBlockID()));
      }

    }
    if (bytesRead != chunk.getLen()) {
      throw new OzoneChecksumException(String
          .format("Inconsistent read for chunk=%s expected length=%d"
                  + " actual length=%d for block %s",
              chunk.getChunkName(),
              chunk.getLen(), bytesRead, block.getBlockID()));
    }
  }
}
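
The throttle(long, Canceler) overload used in verifyChecksum() lets a long-running scan be interrupted even while it is blocked waiting for bandwidth: once the Canceler is cancelled, the throttler stops sleeping. A small sketch of the cancelling side, assuming the org.apache.hadoop.hdfs.util.Canceler API (cancel(String) and isCancelled(), the latter visible in Examples #22 and #25); the surrounding structure is illustrative:

// Sketch: a shared Canceler lets a shutdown path abort a throttled scan.
Canceler canceler = new Canceler();
DataTransferThrottler throttler = new DataTransferThrottler(1024 * 1024L);

// In the scan thread, as in verifyChecksum() above:
//   throttler.throttle(bytesRead, canceler);

// From a shutdown hook or admin command:
canceler.cancel("datanode is shutting down");
// Pending and subsequent throttle() calls observe the cancellation
// instead of continuing to sleep.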
 
Example #21
Source File: TestKeyValueContainerCheck.java    From hadoop-ozone with Apache License 2.0
/**
 * Sanity test when corruption is induced.
 */
@Test
public void testKeyValueContainerCheckCorruption() throws Exception {
  long containerID = 102;
  int deletedBlocks = 1;
  int normalBlocks = 3;
  int chunksPerBlock = 4;
  ContainerScrubberConfiguration sc = conf.getObject(
      ContainerScrubberConfiguration.class);

  // test Closed Container
  createContainerWithBlocks(containerID, normalBlocks, deletedBlocks,
      chunksPerBlock);

  container.close();

  KeyValueContainerCheck kvCheck =
      new KeyValueContainerCheck(containerData.getMetadataPath(), conf,
          containerID);

  File metaDir = new File(containerData.getMetadataPath());
  File dbFile = KeyValueContainerLocationUtil
      .getContainerDBFile(metaDir, containerID);
  containerData.setDbFile(dbFile);
  try (ReferenceCountedDB ignored =
          BlockUtils.getDB(containerData, conf);
      KeyValueBlockIterator kvIter = new KeyValueBlockIterator(containerID,
          new File(containerData.getContainerPath()))) {
    BlockData block = kvIter.nextBlock();
    assertFalse(block.getChunks().isEmpty());
    ContainerProtos.ChunkInfo c = block.getChunks().get(0);
    BlockID blockID = block.getBlockID();
    ChunkInfo chunkInfo = ChunkInfo.getFromProtoBuf(c);
    File chunkFile = chunkManagerTestInfo.getLayout()
        .getChunkFile(containerData, blockID, chunkInfo);
    long length = chunkFile.length();
    assertTrue(length > 0);
    // forcefully truncate the file to induce failure.
    try (RandomAccessFile file = new RandomAccessFile(chunkFile, "rws")) {
      file.setLength(length / 2);
    }
    assertEquals(length/2, chunkFile.length());
  }

  // metadata check should pass.
  boolean valid = kvCheck.fastCheck();
  assertTrue(valid);

  // checksum validation should fail.
  valid = kvCheck.fullCheck(new DataTransferThrottler(
          sc.getBandwidthPerVolume()), null);
  assertFalse(valid);
}
 
Example #22
Source File: TransferFsImage.java    From big-c with Apache License 2.0
private static void copyFileToStream(OutputStream out, File localfile,
    FileInputStream infile, DataTransferThrottler throttler,
    Canceler canceler) throws IOException {
  byte buf[] = new byte[HdfsConstants.IO_FILE_BUFFER_SIZE];
  try {
    CheckpointFaultInjector.getInstance()
        .aboutToSendFile(localfile);

    if (CheckpointFaultInjector.getInstance().
          shouldSendShortFile(localfile)) {
        // Test sending image shorter than localfile
        long len = localfile.length();
        buf = new byte[(int)Math.min(len/2, HdfsConstants.IO_FILE_BUFFER_SIZE)];
        // This will read at most half of the image
        // and the rest of the image will be sent over the wire
        infile.read(buf);
    }
    int num = 1;
    while (num > 0) {
      if (canceler != null && canceler.isCancelled()) {
        throw new SaveNamespaceCancelledException(
          canceler.getCancellationReason());
      }
      num = infile.read(buf);
      if (num <= 0) {
        break;
      }
      if (CheckpointFaultInjector.getInstance()
            .shouldCorruptAByte(localfile)) {
        // Simulate a corrupted byte on the wire
        LOG.warn("SIMULATING A CORRUPT BYTE IN IMAGE TRANSFER!");
        buf[0]++;
      }
      
      out.write(buf, 0, num);
      if (throttler != null) {
        throttler.throttle(num, canceler);
      }
    }
  } catch (EofException e) {
    LOG.info("Connection closed by client");
    out = null; // so we don't close in the finally
  } finally {
    if (out != null) {
      out.close();
    }
  }
}
 
Example #23
Source File: TransferFsImage.java    From big-c with Apache License 2.0
/**
 * A server-side method to respond to a getfile HTTP request.
 * Copies the contents of the local file into the output stream.
 */
public static void copyFileToStream(OutputStream out, File localfile,
    FileInputStream infile, DataTransferThrottler throttler)
  throws IOException {
  copyFileToStream(out, localfile, infile, throttler, null);
}
 
Example #24
Source File: BlockSender.java    From hadoop with Apache License 2.0
private long doSendBlock(DataOutputStream out, OutputStream baseStream,
      DataTransferThrottler throttler) throws IOException {
  if (out == null) {
    throw new IOException( "out stream is null" );
  }
  initialOffset = offset;
  long totalRead = 0;
  OutputStream streamForSendChunks = out;
  
  lastCacheDropOffset = initialOffset;

  if (isLongRead() && blockInFd != null) {
    // Advise that this file descriptor will be accessed sequentially.
    NativeIO.POSIX.getCacheManipulator().posixFadviseIfPossible(
        block.getBlockName(), blockInFd, 0, 0,
        NativeIO.POSIX.POSIX_FADV_SEQUENTIAL);
  }
  
  // Trigger readahead of beginning of file if configured.
  manageOsCache();

  final long startTime = ClientTraceLog.isDebugEnabled() ? System.nanoTime() : 0;
  try {
    int maxChunksPerPacket;
    int pktBufSize = PacketHeader.PKT_MAX_HEADER_LEN;
    boolean transferTo = transferToAllowed && !verifyChecksum
        && baseStream instanceof SocketOutputStream
        && blockIn instanceof FileInputStream;
    if (transferTo) {
      FileChannel fileChannel = ((FileInputStream)blockIn).getChannel();
      blockInPosition = fileChannel.position();
      streamForSendChunks = baseStream;
      maxChunksPerPacket = numberOfChunks(TRANSFERTO_BUFFER_SIZE);
      
      // Smaller packet size to only hold checksum when doing transferTo
      pktBufSize += checksumSize * maxChunksPerPacket;
    } else {
      maxChunksPerPacket = Math.max(1,
          numberOfChunks(HdfsConstants.IO_FILE_BUFFER_SIZE));
      // Packet size includes both checksum and data
      pktBufSize += (chunkSize + checksumSize) * maxChunksPerPacket;
    }

    ByteBuffer pktBuf = ByteBuffer.allocate(pktBufSize);

    while (endOffset > offset && !Thread.currentThread().isInterrupted()) {
      manageOsCache();
      long len = sendPacket(pktBuf, maxChunksPerPacket, streamForSendChunks,
          transferTo, throttler);
      offset += len;
      totalRead += len + (numberOfChunks(len) * checksumSize);
      seqno++;
    }
    // If this thread was interrupted, then it did not send the full block.
    if (!Thread.currentThread().isInterrupted()) {
      try {
        // send an empty packet to mark the end of the block
        sendPacket(pktBuf, maxChunksPerPacket, streamForSendChunks, transferTo,
            throttler);
        out.flush();
      } catch (IOException e) { //socket error
        throw ioeToSocketException(e);
      }

      sentEntireByteRange = true;
    }
  } finally {
    if ((clientTraceFmt != null) && ClientTraceLog.isDebugEnabled()) {
      final long endTime = System.nanoTime();
      ClientTraceLog.debug(String.format(clientTraceFmt, totalRead,
          initialOffset, endTime - startTime));
    }
    close();
  }
  return totalRead;
}
 
Example #25
Source File: TransferFsImage.java    From hadoop with Apache License 2.0
private static void copyFileToStream(OutputStream out, File localfile,
    FileInputStream infile, DataTransferThrottler throttler,
    Canceler canceler) throws IOException {
  byte buf[] = new byte[HdfsConstants.IO_FILE_BUFFER_SIZE];
  try {
    CheckpointFaultInjector.getInstance()
        .aboutToSendFile(localfile);

    if (CheckpointFaultInjector.getInstance().
          shouldSendShortFile(localfile)) {
        // Test sending image shorter than localfile
        long len = localfile.length();
        buf = new byte[(int)Math.min(len/2, HdfsConstants.IO_FILE_BUFFER_SIZE)];
        // This will read at most half of the image
        // and the rest of the image will be sent over the wire
        infile.read(buf);
    }
    int num = 1;
    while (num > 0) {
      if (canceler != null && canceler.isCancelled()) {
        throw new SaveNamespaceCancelledException(
          canceler.getCancellationReason());
      }
      num = infile.read(buf);
      if (num <= 0) {
        break;
      }
      if (CheckpointFaultInjector.getInstance()
            .shouldCorruptAByte(localfile)) {
        // Simulate a corrupted byte on the wire
        LOG.warn("SIMULATING A CORRUPT BYTE IN IMAGE TRANSFER!");
        buf[0]++;
      }
      
      out.write(buf, 0, num);
      if (throttler != null) {
        throttler.throttle(num, canceler);
      }
    }
  } catch (EofException e) {
    LOG.info("Connection closed by client");
    out = null; // so we don't close in the finally
  } finally {
    if (out != null) {
      out.close();
    }
  }
}
 
Example #26
Source File: TransferFsImage.java    From hadoop with Apache License 2.0
/**
 * A server-side method to respond to a getfile HTTP request.
 * Copies the contents of the local file into the output stream.
 */
public static void copyFileToStream(OutputStream out, File localfile,
    FileInputStream infile, DataTransferThrottler throttler)
  throws IOException {
  copyFileToStream(out, localfile, infile, throttler, null);
}
 
Example #27
Source File: BlockSender.java    From big-c with Apache License 2.0
/**
 * sendBlock() is used to read a block and its metadata and stream the data
 * to either a client or another datanode.
 *
 * @param out  stream to which the block is written
 * @param baseStream optional; if non-null, <code>out</code> is assumed to
 *        be a wrapper over this stream. This enables optimizations for
 *        sending the data, e.g.
 *        {@link SocketOutputStream#transferToFully(FileChannel,
 *        long, int)}.
 * @param throttler throttler for sending data.
 * @return total bytes read, including checksum data.
 */
long sendBlock(DataOutputStream out, OutputStream baseStream, 
               DataTransferThrottler throttler) throws IOException {
  TraceScope scope =
      Trace.startSpan("sendBlock_" + block.getBlockId(), Sampler.NEVER);
  try {
    return doSendBlock(out, baseStream, throttler);
  } finally {
    scope.close();
  }
}
 
Example #28
Source File: BlockSender.java    From hadoop with Apache License 2.0
/**
 * sendBlock() is used to read a block and its metadata and stream the data
 * to either a client or another datanode.
 *
 * @param out  stream to which the block is written
 * @param baseStream optional; if non-null, <code>out</code> is assumed to
 *        be a wrapper over this stream. This enables optimizations for
 *        sending the data, e.g.
 *        {@link SocketOutputStream#transferToFully(FileChannel,
 *        long, int)}.
 * @param throttler throttler for sending data.
 * @return total bytes read, including checksum data.
 */
long sendBlock(DataOutputStream out, OutputStream baseStream, 
               DataTransferThrottler throttler) throws IOException {
  TraceScope scope =
      Trace.startSpan("sendBlock_" + block.getBlockId(), Sampler.NEVER);
  try {
    return doSendBlock(out, baseStream, throttler);
  } finally {
    scope.close();
  }
}
 
Example #29
Source File: Container.java    From hadoop-ozone with Apache License 2.0
/**
 * Perform checksum verification for the container data.
 *
 * @param throttler A reference of {@link DataTransferThrottler} used to
 *                  perform I/O bandwidth throttling
 * @param canceler  A reference of {@link Canceler} used to cancel the
 *                  I/O bandwidth throttling (e.g. for shutdown purpose).
 * @return true if the checksum verification succeeds
 *         false otherwise
 */
boolean scanData(DataTransferThrottler throttler, Canceler canceler);
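
Tying the interface back to Examples #12 and #13: a scanner typically holds one throttler per volume and walks its containers, skipping any whose state disallows a data scan. A rough sketch under those assumptions (the loop and the unhealthy-container handling are illustrative, not the actual Ozone scrubber; 'scrubConf' stands in for the ContainerScrubberConfiguration seen in Example #9):

// Illustrative scrubber loop -- not the actual Ozone ContainerScrubber.
DataTransferThrottler throttler =
    new DataTransferThrottler(scrubConf.getBandwidthPerVolume());
Canceler canceler = new Canceler();
for (Container<?> container : containers) {
  if (!container.shouldScanData()) {
    continue; // e.g. the container is still open
  }
  if (!container.scanData(throttler, canceler)) {
    // checksum verification failed; mark the container unhealthy
  }
}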
 
Example #30
Source File: BlockXCodingSender.java    From RDFS with Apache License 2.0
/**
 * sendBlock() is used to read a block and its metadata and stream the data to
 * either a client or to another datanode.
 * 
 * @param out
 *            stream to which the block is written to
 * @param baseStream
 *            optional. if non-null, <code>out</code> is assumed to be a
 *            wrapper over this stream. This enables optimizations for
 *            sending the data, e.g.
 *            {@link SocketOutputStream#transferToFully(FileChannel, long, int)}
 *            .
 * @param throttler
 *            for sending data.
 * @return total bytes read, including CRC.
 */
public long sendBlock(DataOutputStream out, OutputStream baseStream,
		DataTransferThrottler throttler) throws IOException {
	return sendBlock(out, baseStream, throttler, null);
}