Java Code Examples for org.apache.hadoop.hbase.client.Result#EMPTY_RESULT

The following examples show how to use org.apache.hadoop.hbase.client.Result#EMPTY_RESULT. Each example notes the project it comes from, its source file, and its license.
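Before the project examples, here is a minimal, self-contained sketch of the common pattern: return Result.EMPTY_RESULT instead of null when there is nothing to hand back, and test for emptiness with Result#isEmpty(). The class, method, and column names below are made up for illustration.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class EmptyResultUsage {

  // Returns the row if it exists, or Result.EMPTY_RESULT instead of null,
  // so callers can treat the return value uniformly.
  static Result fetchRow(Table table, byte[] row) throws IOException {
    Result result = table.get(new Get(row));
    return result.isEmpty() ? Result.EMPTY_RESULT : result;
  }

  static void printValue(Result result, byte[] family, byte[] qualifier) {
    if (result.isEmpty()) {
      System.out.println("row not found");
      return;
    }
    System.out.println(Bytes.toString(result.getValue(family, qualifier)));
  }
}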
Example 1
Source File: ResultSerialization.java    From hbase with Apache License 2.0
@Override
public Result deserialize(Result mutation) throws IOException {
  // Total length of the serialized cell data; zero means an empty Result.
  int totalBuffer = in.readInt();
  if (totalBuffer == 0) {
    return Result.EMPTY_RESULT;
  }
  byte[] buf = new byte[totalBuffer];
  readChunked(in, buf, 0, totalBuffer);
  // Each cell is encoded as a 4-byte length followed by the KeyValue bytes.
  List<Cell> kvs = new ArrayList<>();
  int offset = 0;
  while (offset < totalBuffer) {
    int keyLength = Bytes.toInt(buf, offset);
    offset += Bytes.SIZEOF_INT;
    kvs.add(new KeyValue(buf, offset, keyLength));
    offset += keyLength;
  }
  return Result.create(kvs);
}
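This deserializer is the read side of HBase's MapReduce Result serialization. As a hedged follow-up (the class name and exact job setup below are illustrative and may differ from your configuration), the sketch shows the usual way such a serialization is made visible to Hadoop's serialization factory by appending it to the io.serializations property.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.mapreduce.ResultSerialization;

public class RegisterResultSerialization {

  // Appends ResultSerialization to Hadoop's "io.serializations" property so that
  // Result values (including Result.EMPTY_RESULT) can be serialized between tasks.
  static void register(Configuration conf) {
    conf.setStrings("io.serializations",
        conf.get("io.serializations"),
        ResultSerialization.class.getName());
  }
}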
 
Example 2
Source File: WriteHeavyIncrementObserver.java    From hbase with Apache License 2.0
@Override
public Result preIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment)
    throws IOException {
  byte[] row = increment.getRow();
  // Translate the Increment into a Put: each increment becomes its own cell,
  // stamped with a unique per-row timestamp, instead of mutating a counter in place.
  Put put = new Put(row);
  long ts = getUniqueTimestamp(row);
  for (Map.Entry<byte[], List<Cell>> entry : increment.getFamilyCellMap().entrySet()) {
    for (Cell cell : entry.getValue()) {
      put.add(CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY).setRow(row)
          .setFamily(cell.getFamilyArray(), cell.getFamilyOffset(), cell.getFamilyLength())
          .setQualifier(cell.getQualifierArray(), cell.getQualifierOffset(),
            cell.getQualifierLength())
          .setValue(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength())
          .setType(Cell.Type.Put).setTimestamp(ts).build());
    }
  }
  c.getEnvironment().getRegion().put(put);
  // Skip the built-in increment; the observer has already written the cells above.
  c.bypass();
  return Result.EMPTY_RESULT;
}
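For context, here is a hedged client-side sketch (table, family, and qualifier names are invented) of what calling increment against a table with this observer looks like: the server rewrites the Increment into a Put, bypasses the built-in increment, and the client receives the empty Result returned above.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class IncrementClientSketch {

  static void bumpCounter(Connection connection) throws IOException {
    try (Table table = connection.getTable(TableName.valueOf("counters"))) {
      Increment increment = new Increment(Bytes.toBytes("row-1"))
          .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("hits"), 1L);
      // With the observer installed, the hook above bypasses the normal path
      // and the client gets back an empty Result.
      Result result = table.increment(increment);
      assert result.isEmpty();
    }
  }
}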
 
Example 3
Source File: TestHBObjectMapper.java    From hbase-orm with Apache License 2.0
@Test
@SuppressWarnings("ConstantConditions")
public void testEmptyResults() {
    Result nullResult = null, blankResult = new Result(), emptyResult = Result.EMPTY_RESULT;
    Citizen nullCitizen = hbMapper.readValue(nullResult, Citizen.class);
    assertNull(nullCitizen, "Null Result object should return null");
    Citizen emptyCitizen = hbMapper.readValue(blankResult, Citizen.class);
    assertNull(emptyCitizen, "Empty Result object should return null");
    assertNull(hbMapper.readValue(emptyResult, Citizen.class));
}
 
Example 4
Source File: IndexRegionObserver.java    From phoenix with Apache License 2.0
/**
 * We use an Increment to serialize the ON DUPLICATE KEY clause so that the HBase plumbing
 * sets up the necessary locks and MVCC to allow an atomic update. The Increment is not a
 * real increment, though; it's really more of a Put. We translate the Increment into a
 * list of mutations, at most a single Put and a single Delete, representing the changes
 * produced by executing the list of ON DUPLICATE KEY clauses for this row.
 */
@Override
public Result preIncrementAfterRowLock(final ObserverContext<RegionCoprocessorEnvironment> e,
        final Increment inc) throws IOException {
    long start = EnvironmentEdgeManager.currentTimeMillis();
    try {
        List<Mutation> mutations = this.builder.executeAtomicOp(inc);
        if (mutations == null) {
            return null;
        }

        // Causes the Increment to be ignored as we're committing the mutations
        // ourselves below.
        e.bypass();
        // ON DUPLICATE KEY IGNORE will return empty list if row already exists
        // as no action is required in that case.
        if (!mutations.isEmpty()) {
            Region region = e.getEnvironment().getRegion();
            // Otherwise, submit the mutations directly here
            region.batchMutate(mutations.toArray(new Mutation[0]));
        }
        return Result.EMPTY_RESULT;
    } catch (Throwable t) {
        throw ServerUtil.createIOException(
                "Unable to process ON DUPLICATE IGNORE for " + 
                e.getEnvironment().getRegion().getRegionInfo().getTable().getNameAsString() + 
                "(" + Bytes.toStringBinary(inc.getRow()) + ")", t);
    } finally {
        long duration = EnvironmentEdgeManager.currentTimeMillis() - start;
        if (duration >= slowIndexPrepareThreshold) {
            if (LOG.isDebugEnabled()) {
                LOG.debug(getCallTooSlowMessage("preIncrementAfterRowLock", duration, slowPreIncrementThreshold));
            }
            metricSource.incrementSlowDuplicateKeyCheckCalls();
        }
        metricSource.updateDuplicateKeyCheckTime(duration);
    }
}
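For orientation, the sketch below shows the kind of statement that reaches this hook, using plain JDBC against Phoenix (the connection URL, table, and column names are made up). Phoenix serializes the ON DUPLICATE KEY clause into an Increment so the update runs atomically under the row lock, and the hook above translates it back into ordinary mutations.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class OnDuplicateKeySketch {

  static void upsertWithOnDuplicateKey() throws SQLException {
    try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
         Statement stmt = conn.createStatement()) {
      // The ON DUPLICATE KEY clause is what ends up serialized as the Increment
      // handled by preIncrementAfterRowLock above.
      stmt.executeUpdate(
          "UPSERT INTO COUNTERS (ID, HITS) VALUES ('row-1', 1) "
              + "ON DUPLICATE KEY UPDATE HITS = HITS + 1");
      conn.commit();
    }
  }
}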
 
Example 5
Source File: Indexer.java    From phoenix with Apache License 2.0
/**
 * We use an Increment to serialize the ON DUPLICATE KEY clause so that the HBase plumbing
 * sets up the necessary locks and MVCC to allow an atomic update. The Increment is not a
 * real increment, though; it's really more of a Put. We translate the Increment into a
 * list of mutations, at most a single Put and a single Delete, representing the changes
 * produced by executing the list of ON DUPLICATE KEY clauses for this row.
 */
@Override
public Result preIncrementAfterRowLock(final ObserverContext<RegionCoprocessorEnvironment> e,
        final Increment inc) throws IOException {
    long start = EnvironmentEdgeManager.currentTimeMillis();
    try {
        List<Mutation> mutations = this.builder.executeAtomicOp(inc);
        if (mutations == null) {
            return null;
        }

        // Causes the Increment to be ignored as we're committing the mutations
        // ourselves below.
        e.bypass();
        // ON DUPLICATE KEY IGNORE will return empty list if row already exists
        // as no action is required in that case.
        if (!mutations.isEmpty()) {
            Region region = e.getEnvironment().getRegion();
            // Otherwise, submit the mutations directly here
            region.batchMutate(mutations.toArray(new Mutation[0]));
        }
        return Result.EMPTY_RESULT;
    } catch (Throwable t) {
        throw ServerUtil.createIOException(
                "Unable to process ON DUPLICATE IGNORE for " + 
                e.getEnvironment().getRegion().getRegionInfo().getTable().getNameAsString() + 
                "(" + Bytes.toStringBinary(inc.getRow()) + ")", t);
    } finally {
        long duration = EnvironmentEdgeManager.currentTimeMillis() - start;
        if (duration >= slowIndexPrepareThreshold) {
            if (LOGGER.isDebugEnabled()) {
                LOGGER.debug(getCallTooSlowMessage("preIncrementAfterRowLock",
                        duration, slowPreIncrementThreshold));
            }
            metricSource.incrementSlowDuplicateKeyCheckCalls();
        }
        metricSource.updateDuplicateKeyCheckTime(duration);
    }
}
 
Example 6
Source File: HBaseResultCoderTest.java    From beam with Apache License 2.0
@Test
public void testResultEncoding() throws Exception {
  Result value = Result.EMPTY_RESULT;
  CoderProperties.structuralValueDecodeEncodeEqual(CODER, value);
}
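Written out by hand, the round trip that the coder property above verifies looks roughly like the sketch below; Coder<Result> stands in for whichever CODER the Beam HBase IO test uses, and the class and method names are made up.

import org.apache.beam.sdk.coders.Coder;
import org.apache.beam.sdk.util.CoderUtils;
import org.apache.hadoop.hbase.client.Result;

public class EmptyResultRoundTrip {

  // Encodes Result.EMPTY_RESULT with the given coder and decodes it back,
  // a hand-written counterpart to the automated coder check in the test.
  static Result roundTrip(Coder<Result> coder) throws Exception {
    byte[] encoded = CoderUtils.encodeToByteArray(coder, Result.EMPTY_RESULT);
    Result decoded = CoderUtils.decodeFromByteArray(coder, encoded);
    assert decoded.isEmpty();
    return decoded;
  }
}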