Java Code Examples for org.apache.htrace.Trace#startSpan()

The following examples show how to use org.apache.htrace.Trace#startSpan(). They are drawn from several open-source projects; the source file, originating project, and license are noted above each example.
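
Before the project examples, here is a minimal, self-contained sketch of the pattern they all share: start a TraceScope, perform the work to be traced, and close the scope in a finally block so the span is always reported. The class name TraceStartSpanSketch and the span name "myOperation" are illustrative only and are not taken from any of the projects below.

import org.apache.htrace.Sampler;
import org.apache.htrace.Trace;
import org.apache.htrace.TraceScope;

public class TraceStartSpanSketch {

  void tracedOperation() {
    // Start a span; Sampler.ALWAYS forces sampling. Another overload used in the
    // examples below takes a parent Span (Example 1) instead of a Sampler.
    TraceScope scope = Trace.startSpan("myOperation", Sampler.ALWAYS);
    try {
      // ... the work being traced goes here ...

      // getSpan() can return null when the span was not sampled,
      // so guard before annotating (compare Example 14).
      if (scope.getSpan() != null) {
        scope.getSpan().addTimelineAnnotation("work finished");
      }
    } finally {
      // Close the scope so the span is ended and reported exactly once.
      scope.close();
    }
  }
}

TraceScope is also Closeable, so the same pattern can be written with try-with-resources, as in Example 14 below.
 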
Example 1
Source File: BlockStorageLocationUtil.java    From hadoop with Apache License 2.0
@Override
public HdfsBlocksMetadata call() throws Exception {
  HdfsBlocksMetadata metadata = null;
  // Create the RPC proxy and make the RPC
  ClientDatanodeProtocol cdp = null;
  TraceScope scope =
      Trace.startSpan("getHdfsBlocksMetadata", parentSpan);
  try {
    cdp = DFSUtil.createClientDatanodeProtocolProxy(datanode, configuration,
        timeout, connectToDnViaHostname);
    metadata = cdp.getHdfsBlocksMetadata(poolId, blockIds, dnTokens);
  } catch (IOException e) {
    // Bubble this up to the caller, handle with the Future
    throw e;
  } finally {
    scope.close();
    if (cdp != null) {
      RPC.stopProxy(cdp);
    }
  }
  return metadata;
}
 
Example 2
Source File: TracingExample.java    From accumulo-examples with Apache License 2.0
private void createEntries(Opts opts) throws TableNotFoundException, AccumuloException {

    // Trace the write operation. Note that unless you flush the BatchWriter, you will not
    // capture the write operation, as it occurs asynchronously. You can optionally create
    // additional Spans within a given Trace, as seen below around the flush.
    TraceScope scope = Trace.startSpan("Client Write", Sampler.ALWAYS);

    System.out.println("TraceID: " + Long.toHexString(scope.getSpan().getTraceId()));
    try (BatchWriter batchWriter = client.createBatchWriter(opts.getTableName())) {
      Mutation m = new Mutation("row");
      m.put("cf", "cq", "value");

      batchWriter.addMutation(m);
      // You can add timeline annotations to Spans, which can then be viewed in the Monitor
      scope.getSpan().addTimelineAnnotation("Initiating Flush");
      batchWriter.flush();
    }
    scope.close();
  }
 
Example 3
Source File: DFSInotifyEventInputStream.java    From hadoop with Apache License 2.0
/**
 * Returns the next batch of events in the stream, waiting indefinitely if
 * a new batch is not immediately available.
 *
 * @throws IOException see {@link DFSInotifyEventInputStream#poll()}
 * @throws MissingEventsException see
 * {@link DFSInotifyEventInputStream#poll()}
 * @throws InterruptedException if the calling thread is interrupted
 */
public EventBatch take() throws IOException, InterruptedException,
    MissingEventsException {
  TraceScope scope = Trace.startSpan("inotifyTake", traceSampler);
  EventBatch next = null;
  try {
    int nextWaitMin = INITIAL_WAIT_MS;
    while ((next = poll()) == null) {
      // sleep for a random period between nextWaitMin and nextWaitMin * 2
      // to avoid stampedes at the NN if there are multiple clients
      int sleepTime = nextWaitMin + rng.nextInt(nextWaitMin);
      LOG.debug("take(): poll() returned null, sleeping for {} ms", sleepTime);
      Thread.sleep(sleepTime);
      // the maximum sleep is 2 minutes
      nextWaitMin = Math.min(60000, nextWaitMin * 2);
    }
  } finally {
    scope.close();
  }

  return next;
}
 
Example 4
Source File: DFSClient.java    From hadoop with Apache License 2.0
/**
 * Enable/disable restore of failed storage.
 * 
 * @see ClientProtocol#restoreFailedStorage(String arg)
 */
boolean restoreFailedStorage(String arg)
    throws AccessControlException, IOException {
  TraceScope scope = Trace.startSpan("restoreFailedStorage", traceSampler);
  try {
    return namenode.restoreFailedStorage(arg);
  } finally {
    scope.close();
  }
}
 
Example 5
Source File: DFSClient.java    From big-c with Apache License 2.0
/**
 * Disallow snapshot on a directory.
 * 
 * @see ClientProtocol#disallowSnapshot(String snapshotRoot)
 */
public void disallowSnapshot(String snapshotRoot) throws IOException {
  checkOpen();
  TraceScope scope = Trace.startSpan("disallowSnapshot", traceSampler);
  try {
    namenode.disallowSnapshot(snapshotRoot);
  } catch (RemoteException re) {
    throw re.unwrapRemoteException();
  } finally {
    scope.close();
  }
}
 
Example 6
Source File: DFSClient.java    From big-c with Apache License 2.0
/**
 * @see ClientProtocol#finalizeUpgrade()
 */
public void finalizeUpgrade() throws IOException {
  TraceScope scope = Trace.startSpan("finalizeUpgrade", traceSampler);
  try {
    namenode.finalizeUpgrade();
  } finally {
    scope.close();
  }
}
 
Example 7
Source File: DFSClient.java    From hadoop with Apache License 2.0
/**
 * Rolls the edit log on the active NameNode.
 * @return the txid of the new log segment 
 *
 * @see ClientProtocol#rollEdits()
 */
long rollEdits() throws AccessControlException, IOException {
  TraceScope scope = Trace.startSpan("rollEdits", traceSampler);
  try {
    return namenode.rollEdits();
  } catch(RemoteException re) {
    throw re.unwrapRemoteException(AccessControlException.class);
  } finally {
    scope.close();
  }
}
 
Example 8
Source File: DFSClient.java    From big-c with Apache License 2.0
/**
 * Get the difference between two snapshots, or between a snapshot and the
 * current tree of a directory.
 * @see ClientProtocol#getSnapshotDiffReport(String, String, String)
 */
public SnapshotDiffReport getSnapshotDiffReport(String snapshotDir,
    String fromSnapshot, String toSnapshot) throws IOException {
  checkOpen();
  TraceScope scope = Trace.startSpan("getSnapshotDiffReport", traceSampler);
  try {
    return namenode.getSnapshotDiffReport(snapshotDir,
        fromSnapshot, toSnapshot);
  } catch(RemoteException re) {
    throw re.unwrapRemoteException();
  } finally {
    scope.close();
  }
}
 
Example 9
Source File: DFSInotifyEventInputStream.java    From hadoop with Apache License 2.0
/**
 * Returns the next event batch in the stream, waiting up to the specified
 * amount of time for a new batch. Returns null if one is not available at the
 * end of the specified amount of time. The time before the method returns may
 * exceed the specified amount of time by up to the time required for an RPC
 * to the NameNode.
 *
 * @param time number of units of the given TimeUnit to wait
 * @param tu the desired TimeUnit
 * @throws IOException see {@link DFSInotifyEventInputStream#poll()}
 * @throws MissingEventsException
 * see {@link DFSInotifyEventInputStream#poll()}
 * @throws InterruptedException if the calling thread is interrupted
 */
public EventBatch poll(long time, TimeUnit tu) throws IOException,
    InterruptedException, MissingEventsException {
  TraceScope scope = Trace.startSpan("inotifyPollWithTimeout", traceSampler);
  EventBatch next = null;
  try {
    long initialTime = Time.monotonicNow();
    long totalWait = TimeUnit.MILLISECONDS.convert(time, tu);
    long nextWait = INITIAL_WAIT_MS;
    while ((next = poll()) == null) {
      long timeLeft = totalWait - (Time.monotonicNow() - initialTime);
      if (timeLeft <= 0) {
        LOG.debug("timed poll(): timed out");
        break;
      } else if (timeLeft < nextWait * 2) {
        nextWait = timeLeft;
      } else {
        nextWait *= 2;
      }
      LOG.debug("timed poll(): poll() returned null, sleeping for {} ms",
          nextWait);
      Thread.sleep(nextWait);
    }
  } finally {
    scope.close();
  }
  return next;
}
 
Example 10
Source File: DFSClient.java    From big-c with Apache License 2.0
/**
 * Requests the namenode to tell all datanodes to use a new, non-persistent
 * bandwidth value for dfs.balance.bandwidthPerSec.
 * See {@link ClientProtocol#setBalancerBandwidth(long)} 
 * for more details.
 * 
 * @see ClientProtocol#setBalancerBandwidth(long)
 */
public void setBalancerBandwidth(long bandwidth) throws IOException {
  TraceScope scope = Trace.startSpan("setBalancerBandwidth", traceSampler);
  try {
    namenode.setBalancerBandwidth(bandwidth);
  } finally {
    scope.close();
  }
}
 
Example 11
Source File: DFSClient.java    From big-c with Apache License 2.0
/**
 * Create one snapshot.
 * 
 * @param snapshotRoot The directory where the snapshot is to be taken
 * @param snapshotName Name of the snapshot
 * @return the snapshot path.
 * @see ClientProtocol#createSnapshot(String, String)
 */
public String createSnapshot(String snapshotRoot, String snapshotName)
    throws IOException {
  checkOpen();
  TraceScope scope = Trace.startSpan("createSnapshot", traceSampler);
  try {
    return namenode.createSnapshot(snapshotRoot, snapshotName);
  } catch(RemoteException re) {
    throw re.unwrapRemoteException();
  } finally {
    scope.close();
  }
}
 
Example 12
Source File: DFSClient.java    From big-c with Apache License 2.0
/**
 * Rolls the edit log on the active NameNode.
 * @return the txid of the new log segment 
 *
 * @see ClientProtocol#rollEdits()
 */
long rollEdits() throws AccessControlException, IOException {
  TraceScope scope = Trace.startSpan("rollEdits", traceSampler);
  try {
    return namenode.rollEdits();
  } catch(RemoteException re) {
    throw re.unwrapRemoteException(AccessControlException.class);
  } finally {
    scope.close();
  }
}
 
Example 13
Source File: DFSClient.java    From hadoop with Apache License 2.0
/**
 * Save namespace image.
 * 
 * @see ClientProtocol#saveNamespace()
 */
void saveNamespace() throws AccessControlException, IOException {
  TraceScope scope = Trace.startSpan("saveNamespace", traceSampler);
  try {
    namenode.saveNamespace();
  } catch(RemoteException re) {
    throw re.unwrapRemoteException(AccessControlException.class);
  } finally {
    scope.close();
  }
}
 
Example 14
Source File: PhoenixTransactionalIndexer.java    From phoenix with Apache License 2.0
@Override
public void postBatchMutateIndispensably(ObserverContext<RegionCoprocessorEnvironment> c,
    MiniBatchOperationInProgress<Mutation> miniBatchOp, final boolean success) throws IOException {
    BatchMutateContext context = getBatchMutateContext(c);
    if (context == null || context.indexUpdates == null) {
        return;
    }
    // get the current span, or just use a null-span to avoid a bunch of if statements
    try (TraceScope scope = Trace.startSpan("Starting to write index updates")) {
        Span current = scope.getSpan();
        if (current == null) {
            current = NullSpan.INSTANCE;
        }

        if (success) { // if miniBatchOp was successfully written, write index updates
            if (!context.indexUpdates.isEmpty()) {
                this.writer.write(context.indexUpdates, false, context.clientVersion);
            }
            current.addTimelineAnnotation("Wrote index updates");
        }
    } catch (Throwable t) {
        String msg = "Failed to write index updates:" + context.indexUpdates;
        LOGGER.error(msg, t);
        ServerUtil.throwIOException(msg, t);
    } finally {
        removeBatchMutateContext(c);
    }
}
 
Example 15
Source File: DFSClient.java    From hadoop with Apache License 2.0
/**
 * Get the difference between two snapshots, or between a snapshot and the
 * current tree of a directory.
 * @see ClientProtocol#getSnapshotDiffReport(String, String, String)
 */
public SnapshotDiffReport getSnapshotDiffReport(String snapshotDir,
    String fromSnapshot, String toSnapshot) throws IOException {
  checkOpen();
  TraceScope scope = Trace.startSpan("getSnapshotDiffReport", traceSampler);
  try {
    return namenode.getSnapshotDiffReport(snapshotDir,
        fromSnapshot, toSnapshot);
  } catch(RemoteException re) {
    throw re.unwrapRemoteException();
  } finally {
    scope.close();
  }
}
 
Example 16
Source File: DFSClient.java    From big-c with Apache License 2.0
/**
 * Dumps DFS data structures into specified file.
 * 
 * @see ClientProtocol#metaSave(String)
 */
public void metaSave(String pathname) throws IOException {
  TraceScope scope = Trace.startSpan("metaSave", traceSampler);
  try {
    namenode.metaSave(pathname);
  } finally {
    scope.close();
  }
}
 
Example 17
Source File: DFSClient.java    From big-c with Apache License 2.0
/**
 * Get block location information about a list of {@link HdfsBlockLocation}.
 * Used by {@link DistributedFileSystem#getFileBlockStorageLocations(List)} to
 * get {@link BlockStorageLocation}s for blocks returned by
 * {@link DistributedFileSystem#getFileBlockLocations(org.apache.hadoop.fs.FileStatus, long, long)}.
 * 
 * This is done by making a round of RPCs to the associated datanodes, asking
 * for the volume of each block replica. The returned array of
 * {@link BlockStorageLocation} exposes this information as a
 * {@link VolumeId}.
 * 
 * @param blockLocations
 *          target blocks on which to query volume location information
 * @return volumeBlockLocations original block array augmented with additional
 *         volume location information for each replica.
 */
public BlockStorageLocation[] getBlockStorageLocations(
    List<BlockLocation> blockLocations) throws IOException,
    UnsupportedOperationException, InvalidBlockTokenException {
  if (!getConf().getHdfsBlocksMetadataEnabled) {
    throw new UnsupportedOperationException("Datanode-side support for " +
        "getVolumeBlockLocations() must also be enabled in the client " +
        "configuration.");
  }
  // Downcast blockLocations and fetch out required LocatedBlock(s)
  List<LocatedBlock> blocks = new ArrayList<LocatedBlock>();
  for (BlockLocation loc : blockLocations) {
    if (!(loc instanceof HdfsBlockLocation)) {
      throw new ClassCastException("DFSClient#getVolumeBlockLocations " +
          "expected to be passed HdfsBlockLocations");
    }
    HdfsBlockLocation hdfsLoc = (HdfsBlockLocation) loc;
    blocks.add(hdfsLoc.getLocatedBlock());
  }
  
  // Re-group the LocatedBlocks to be grouped by datanodes, with the values
  // a list of the LocatedBlocks on the datanode.
  Map<DatanodeInfo, List<LocatedBlock>> datanodeBlocks = 
      new LinkedHashMap<DatanodeInfo, List<LocatedBlock>>();
  for (LocatedBlock b : blocks) {
    for (DatanodeInfo info : b.getLocations()) {
      if (!datanodeBlocks.containsKey(info)) {
        datanodeBlocks.put(info, new ArrayList<LocatedBlock>());
      }
      List<LocatedBlock> l = datanodeBlocks.get(info);
      l.add(b);
    }
  }
      
  // Make RPCs to the datanodes to get volume locations for its replicas
  TraceScope scope =
    Trace.startSpan("getBlockStorageLocations", traceSampler);
  Map<DatanodeInfo, HdfsBlocksMetadata> metadatas;
  try {
    metadatas = BlockStorageLocationUtil.
        queryDatanodesForHdfsBlocksMetadata(conf, datanodeBlocks,
            getConf().getFileBlockStorageLocationsNumThreads,
            getConf().getFileBlockStorageLocationsTimeoutMs,
            getConf().connectToDnViaHostname);
    if (LOG.isTraceEnabled()) {
      LOG.trace("metadata returned: "
          + Joiner.on("\n").withKeyValueSeparator("=").join(metadatas));
    }
  } finally {
    scope.close();
  }
  
  // Regroup the returned VolumeId metadata to again be grouped by
  // LocatedBlock rather than by datanode
  Map<LocatedBlock, List<VolumeId>> blockVolumeIds = BlockStorageLocationUtil
      .associateVolumeIdsWithBlocks(blocks, metadatas);
  
  // Combine original BlockLocations with new VolumeId information
  BlockStorageLocation[] volumeBlockLocations = BlockStorageLocationUtil
      .convertToVolumeBlockLocations(blocks, blockVolumeIds);

  return volumeBlockLocations;
}
 
Example 18
Source File: DFSClient.java    From hadoop with Apache License 2.0
/**
 * Get block location information about a list of {@link HdfsBlockLocation}.
 * Used by {@link DistributedFileSystem#getFileBlockStorageLocations(List)} to
 * get {@link BlockStorageLocation}s for blocks returned by
 * {@link DistributedFileSystem#getFileBlockLocations(org.apache.hadoop.fs.FileStatus, long, long)}.
 * 
 * This is done by making a round of RPCs to the associated datanodes, asking
 * for the volume of each block replica. The returned array of
 * {@link BlockStorageLocation} exposes this information as a
 * {@link VolumeId}.
 * 
 * @param blockLocations
 *          target blocks on which to query volume location information
 * @return volumeBlockLocations original block array augmented with additional
 *         volume location information for each replica.
 */
public BlockStorageLocation[] getBlockStorageLocations(
    List<BlockLocation> blockLocations) throws IOException,
    UnsupportedOperationException, InvalidBlockTokenException {
  if (!getConf().getHdfsBlocksMetadataEnabled) {
    throw new UnsupportedOperationException("Datanode-side support for " +
        "getVolumeBlockLocations() must also be enabled in the client " +
        "configuration.");
  }
  // Downcast blockLocations and fetch out required LocatedBlock(s)
  List<LocatedBlock> blocks = new ArrayList<LocatedBlock>();
  for (BlockLocation loc : blockLocations) {
    if (!(loc instanceof HdfsBlockLocation)) {
      throw new ClassCastException("DFSClient#getVolumeBlockLocations " +
          "expected to be passed HdfsBlockLocations");
    }
    HdfsBlockLocation hdfsLoc = (HdfsBlockLocation) loc;
    blocks.add(hdfsLoc.getLocatedBlock());
  }
  
  // Re-group the LocatedBlocks to be grouped by datanodes, with the values
  // a list of the LocatedBlocks on the datanode.
  Map<DatanodeInfo, List<LocatedBlock>> datanodeBlocks = 
      new LinkedHashMap<DatanodeInfo, List<LocatedBlock>>();
  for (LocatedBlock b : blocks) {
    for (DatanodeInfo info : b.getLocations()) {
      if (!datanodeBlocks.containsKey(info)) {
        datanodeBlocks.put(info, new ArrayList<LocatedBlock>());
      }
      List<LocatedBlock> l = datanodeBlocks.get(info);
      l.add(b);
    }
  }
      
  // Make RPCs to the datanodes to get volume locations for its replicas
  TraceScope scope =
    Trace.startSpan("getBlockStorageLocations", traceSampler);
  Map<DatanodeInfo, HdfsBlocksMetadata> metadatas;
  try {
    metadatas = BlockStorageLocationUtil.
        queryDatanodesForHdfsBlocksMetadata(conf, datanodeBlocks,
            getConf().getFileBlockStorageLocationsNumThreads,
            getConf().getFileBlockStorageLocationsTimeoutMs,
            getConf().connectToDnViaHostname);
    if (LOG.isTraceEnabled()) {
      LOG.trace("metadata returned: "
          + Joiner.on("\n").withKeyValueSeparator("=").join(metadatas));
    }
  } finally {
    scope.close();
  }
  
  // Regroup the returned VolumeId metadata to again be grouped by
  // LocatedBlock rather than by datanode
  Map<LocatedBlock, List<VolumeId>> blockVolumeIds = BlockStorageLocationUtil
      .associateVolumeIdsWithBlocks(blocks, metadatas);
  
  // Combine original BlockLocations with new VolumeId information
  BlockStorageLocation[] volumeBlockLocations = BlockStorageLocationUtil
      .convertToVolumeBlockLocations(blocks, blockVolumeIds);

  return volumeBlockLocations;
}
 
Example 19
Source File: LockManager.java    From phoenix with Apache License 2.0
/**
 * Lock the row or throw otherwise.
 * @param rowKey the row key
 * @param waitDuration maximum time to wait for the lock, in milliseconds
 * @return RowLock used to eventually release the lock
 * @throws IOException a TimeoutIOException if the lock could not be acquired
 * within the allowed waitDuration, or an InterruptedIOException if the thread
 * is interrupted while waiting to acquire the lock
 */
public RowLock lockRow(ImmutableBytesPtr rowKey, int waitDuration) throws IOException {
    RowLockContext rowLockContext = null;
    RowLockImpl result = null;
    TraceScope traceScope = null;

    // If we're tracing start a span to show how long this took.
    if (Trace.isTracing()) {
        traceScope = Trace.startSpan("LockManager.getRowLock");
        traceScope.getSpan().addTimelineAnnotation("Getting a lock");
    }

    boolean success = false;
    try {
        // Keep trying until we have a lock or error out.
        // TODO: do we need to add a time component here?
        while (result == null) {

            // Try adding a RowLockContext to the lockedRows.
            // If we can add it then there's no other transactions currently running.
            rowLockContext = new RowLockContext(rowKey);
            RowLockContext existingContext = lockedRows.putIfAbsent(rowKey, rowLockContext);

            // if there was a running transaction then there's already a context.
            if (existingContext != null) {
                rowLockContext = existingContext;
            }

            result = rowLockContext.newRowLock();
        }
        if (!result.getLock().tryLock(waitDuration, TimeUnit.MILLISECONDS)) {
            if (traceScope != null) {
                traceScope.getSpan().addTimelineAnnotation("Failed to get row lock");
            }
            throw new TimeoutIOException("Timed out waiting for lock for row: " + rowKey);
        }
        rowLockContext.setThreadName(Thread.currentThread().getName());
        success = true;
        return result;
    } catch (InterruptedException ie) {
        LOGGER.warn("Thread interrupted waiting for lock on row: " + rowKey);
        InterruptedIOException iie = new InterruptedIOException();
        iie.initCause(ie);
        if (traceScope != null) {
            traceScope.getSpan().addTimelineAnnotation("Interrupted exception getting row lock");
        }
        Thread.currentThread().interrupt();
        throw iie;
    } finally {
        // On failure, clean up the counts just in case this was the thing keeping the context alive.
        if (!success && rowLockContext != null) rowLockContext.cleanUp();
        if (traceScope != null) {
            traceScope.close();
        }
    }
}
 
Example 20
Source File: BlockSender.java    From big-c with Apache License 2.0
/**
 * sendBlock() is used to read the block and its metadata and stream the data
 * to either a client or to another datanode.
 * 
 * @param out  stream to which the block is written
 * @param baseStream optional. if non-null, <code>out</code> is assumed to 
 *        be a wrapper over this stream. This enables optimizations for
 *        sending the data, e.g. 
 *        {@link SocketOutputStream#transferToFully(FileChannel, 
 *        long, int)}.
 * @param throttler throttler used to limit the rate at which the data is sent.
 * @return total bytes read, including checksum data.
 */
long sendBlock(DataOutputStream out, OutputStream baseStream, 
               DataTransferThrottler throttler) throws IOException {
  TraceScope scope =
      Trace.startSpan("sendBlock_" + block.getBlockId(), Sampler.NEVER);
  try {
    return doSendBlock(out, baseStream, throttler);
  } finally {
    scope.close();
  }
}