Java Code Examples for org.apache.hadoop.hbase.TableName#getQualifierAsString()

The following examples show how to use org.apache.hadoop.hbase.TableName#getQualifierAsString(), which returns the qualifier portion of a table name (the part after the namespace) as a String. Each example is taken from an open-source project; the source file and license are noted above it.
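Before the examples, here is a minimal sketch of what the method returns; the namespace "ns1" and table "t1" are hypothetical, chosen only for illustration:

import org.apache.hadoop.hbase.TableName;

public class TableNameDemo {
  public static void main(String[] args) {
    // A fully qualified table name: namespace "ns1", qualifier "t1".
    TableName qualified = TableName.valueOf("ns1:t1");
    System.out.println(qualified.getNameAsString());      // ns1:t1
    System.out.println(qualified.getNamespaceAsString()); // ns1
    System.out.println(qualified.getQualifierAsString()); // t1

    // With no explicit namespace, HBase assumes the "default" namespace.
    TableName plain = TableName.valueOf("t1");
    System.out.println(plain.getNamespaceAsString());     // default
    System.out.println(plain.getQualifierAsString());     // t1
  }
}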
Example 1
Source File: ServerSideOperationsObserver.java    From geowave with Apache License 2.0
public <T extends InternalScanner> T wrapScannerWithOps(
    final TableName tableName,
    final T scanner,
    final Scan scan,
    final ServerOpScope scope,
    final ScannerWrapperFactory<T> factory) {
  if (!tableName.isSystemTable()) {
    final String namespace = tableName.getNamespaceAsString();
    final String qualifier = tableName.getQualifierAsString();
    final Collection<HBaseServerOp> orderedServerOps =
        opStore.getOperations(namespace, qualifier, scope);
    if (!orderedServerOps.isEmpty()) {
      return factory.createScannerWrapper(orderedServerOps, scanner, scan);
    }
  }
  return scanner;
}
 
Example 2
Source File: ServerSideOperationsObserver.java    From geowave with Apache License 2.0
@Override
public RegionScanner preScannerOpen(
    final ObserverContext<RegionCoprocessorEnvironment> e,
    final Scan scan,
    final RegionScanner s) throws IOException {
  if (opStore != null) {
    final TableName tableName = e.getEnvironment().getRegionInfo().getTable();
    if (!tableName.isSystemTable()) {
      final String namespace = tableName.getNamespaceAsString();
      final String qualifier = tableName.getQualifierAsString();
      final Collection<HBaseServerOp> serverOps =
          opStore.getOperations(namespace, qualifier, ServerOpScope.SCAN);
      for (final HBaseServerOp op : serverOps) {
        op.preScannerOpen(scan);
      }
    }
  }
  return super.preScannerOpen(e, scan, s);
}
 
Example 3
Source File: EnvUtils.java    From spliceengine with GNU Affero General Public License v3.0
public static boolean isMetaOrNamespaceTable(TableName tableName) {
    // Note: getQualifierAsString() strips the namespace, so "hbase:meta" parses to the
    // qualifier "meta"; the fully qualified comparisons below are defensive and will not
    // match a TableName produced by TableName.valueOf().
    String qualifier = tableName.getQualifierAsString();
    return "hbase:meta".equals(qualifier)
            || "meta".equals(qualifier)
            || "hbase:namespace".equals(qualifier)
            || "namespace".equals(qualifier)
            || ".META.".equals(qualifier);
}
 
Example 4
Source File: CommonFSUtils.java    From hbase with Apache License 2.0
/**
 * For backward compatibility with HBASE-20734, where recovered edits were stored in the wrong
 * directory, without BASE_NAMESPACE_DIR. See HBASE-22617 for more details.
 * @deprecated For compatibility, will be removed in 4.0.0.
 */
@Deprecated
public static Path getWrongWALRegionDir(final Configuration conf, final TableName tableName,
    final String encodedRegionName) throws IOException {
  Path wrongTableDir = new Path(new Path(getWALRootDir(conf), tableName.getNamespaceAsString()),
    tableName.getQualifierAsString());
  return new Path(wrongTableDir, encodedRegionName);
}
 
Example 5
Source File: RestoreTool.java    From hbase with Apache License 2.0
/**
 * Returns the path to the table archive, e.g.:
 * ".../user/biadmin/backup1/default/t1_dn/backup_1396650096738/archive/data/default/t1_dn"
 * @param tableName table name
 * @return path to the table archive, or null if the table has no archive
 * @throws IOException if the filesystem cannot be queried
 */
Path getTableArchivePath(TableName tableName) throws IOException {
  Path baseDir =
      new Path(HBackupFileSystem.getTableBackupPath(tableName, backupRootPath, backupId),
          HConstants.HFILE_ARCHIVE_DIRECTORY);
  Path dataDir = new Path(baseDir, HConstants.BASE_NAMESPACE_DIR);
  Path archivePath = new Path(dataDir, tableName.getNamespaceAsString());
  Path tableArchivePath = new Path(archivePath, tableName.getQualifierAsString());
  if (!fs.exists(tableArchivePath) || !fs.getFileStatus(tableArchivePath).isDirectory()) {
    LOG.debug("Folder tableArchivePath: " + tableArchivePath.toString() + " does not exists");
    tableArchivePath = null; // empty table has no archive
  }
  return tableArchivePath;
}
 
Example 6
Source File: SnapshotScannerHDFSAclHelper.java    From hbase with Apache License 2.0
Path getMobTableDir(TableName tableName) {
  return new Path(getMobDataNsDir(tableName.getNamespaceAsString()),
      tableName.getQualifierAsString());
}
 
Example 7
Source File: RegionModeStrategy.java    From hbase with Apache License 2.0
private Record createRecord(ServerMetrics serverMetrics, RegionMetrics regionMetrics,
  long lastReportTimestamp) {

  Record.Builder builder = Record.builder();

  String regionName = regionMetrics.getNameAsString();
  builder.put(Field.REGION_NAME, regionName);

  String namespaceName = "";
  String tableName = "";
  String region = "";
  String startKey = "";
  String startCode = "";
  String replicaId = "";
  try {
    byte[][] elements = RegionInfo.parseRegionName(regionMetrics.getRegionName());
    TableName tn = TableName.valueOf(elements[0]);
    namespaceName = tn.getNamespaceAsString();
    tableName = tn.getQualifierAsString();
    startKey = Bytes.toStringBinary(elements[1]);
    startCode = Bytes.toString(elements[2]);
    replicaId = elements.length == 4 ?
      Integer.valueOf(Bytes.toString(elements[3])).toString() : "";
    region = RegionInfo.encodeRegionName(regionMetrics.getRegionName());
  } catch (IOException ignored) {
  }

  builder.put(Field.NAMESPACE, namespaceName);
  builder.put(Field.TABLE, tableName);
  builder.put(Field.START_CODE, startCode);
  builder.put(Field.REPLICA_ID, replicaId);
  builder.put(Field.REGION, region);
  builder.put(Field.START_KEY, startKey);
  builder.put(Field.REGION_SERVER, serverMetrics.getServerName().toShortString());
  builder.put(Field.LONG_REGION_SERVER, serverMetrics.getServerName().getServerName());

  RequestCountPerSecond requestCountPerSecond = requestCountPerSecondMap.get(regionName);
  if (requestCountPerSecond == null) {
    requestCountPerSecond = new RequestCountPerSecond();
    requestCountPerSecondMap.put(regionName, requestCountPerSecond);
  }
  requestCountPerSecond.refresh(lastReportTimestamp, regionMetrics.getReadRequestCount(),
    regionMetrics.getFilteredReadRequestCount(), regionMetrics.getWriteRequestCount());

  builder.put(Field.READ_REQUEST_COUNT_PER_SECOND,
    requestCountPerSecond.getReadRequestCountPerSecond());
  builder.put(Field.FILTERED_READ_REQUEST_COUNT_PER_SECOND,
      requestCountPerSecond.getFilteredReadRequestCountPerSecond());
  builder.put(Field.WRITE_REQUEST_COUNT_PER_SECOND,
    requestCountPerSecond.getWriteRequestCountPerSecond());
  builder.put(Field.REQUEST_COUNT_PER_SECOND,
    requestCountPerSecond.getRequestCountPerSecond());

  builder.put(Field.STORE_FILE_SIZE, regionMetrics.getStoreFileSize());
  builder.put(Field.UNCOMPRESSED_STORE_FILE_SIZE, regionMetrics.getUncompressedStoreFileSize());
  builder.put(Field.NUM_STORE_FILES, regionMetrics.getStoreFileCount());
  builder.put(Field.MEM_STORE_SIZE, regionMetrics.getMemStoreSize());
  builder.put(Field.LOCALITY, regionMetrics.getDataLocality());

  long compactingCellCount = regionMetrics.getCompactingCellCount();
  long compactedCellCount = regionMetrics.getCompactedCellCount();
  float compactionProgress = 0;
  if (compactedCellCount > 0) {
    compactionProgress = 100 * ((float) compactedCellCount / compactingCellCount);
  }

  builder.put(Field.COMPACTING_CELL_COUNT, compactingCellCount);
  builder.put(Field.COMPACTED_CELL_COUNT, compactedCellCount);
  builder.put(Field.COMPACTION_PROGRESS, compactionProgress);

  FastDateFormat df = FastDateFormat.getInstance("yyyy-MM-dd HH:mm:ss");
  long lastMajorCompactionTimestamp = regionMetrics.getLastMajorCompactionTimestamp();

  builder.put(Field.LAST_MAJOR_COMPACTION_TIME,
    lastMajorCompactionTimestamp == 0 ? "" : df.format(lastMajorCompactionTimestamp));

  return builder.build();
}
 
Example 8
Source File: SnapshotScannerHDFSAclHelper.java    From hbase with Apache License 2.0
Path getTmpTableDir(TableName tableName) {
  return new Path(getTmpNsDir(tableName.getNamespaceAsString()),
      tableName.getQualifierAsString());
}
 
Example 9
Source File: SnapshotScannerHDFSAclHelper.java    From hbase with Apache License 2.0
Path getArchiveTableDir(TableName tableName) {
  return new Path(getArchiveNsDir(tableName.getNamespaceAsString()),
      tableName.getQualifierAsString());
}
 
Example 10
Source File: SnapshotScannerHDFSAclHelper.java    From hbase with Apache License 2.0
Path getDataTableDir(TableName tableName) {
  return new Path(getDataNsDir(tableName.getNamespaceAsString()),
      tableName.getQualifierAsString());
}
 
Example 11
Source File: TestBackupBase.java    From hbase with Apache License 2.0
@Override
public void execute() throws IOException {
  // Get the stage ID to fail on
  try (Admin admin = conn.getAdmin()) {
    // Begin BACKUP
    beginBackup(backupManager, backupInfo);
    failStageIf(Stage.stage_0);
    String savedStartCode;
    boolean firstBackup;
    // do snapshot for full table backup
    savedStartCode = backupManager.readBackupStartCode();
    firstBackup = savedStartCode == null || Long.parseLong(savedStartCode) == 0L;
    if (firstBackup) {
      // This is our first backup. Let's put a marker in the system table so that we can
      // hold the logs while we do the backup.
      backupManager.writeBackupStartCode(0L);
    }
    failStageIf(Stage.stage_1);
    // We roll the log here before we take the snapshot. It is possible that the log contains
    // duplicate data already present in the snapshot, but if we rolled after the snapshot we
    // could lose data.
    // A better approach would be to roll the log on each RS in the same global procedure as
    // the snapshot.
    LOG.info("Execute roll log procedure for full backup ...");

    Map<String, String> props = new HashMap<>();
    props.put("backupRoot", backupInfo.getBackupRootDir());
    admin.execProcedure(LogRollMasterProcedureManager.ROLLLOG_PROCEDURE_SIGNATURE,
      LogRollMasterProcedureManager.ROLLLOG_PROCEDURE_NAME, props);
    failStageIf(Stage.stage_2);
    newTimestamps = backupManager.readRegionServerLastLogRollResult();
    if (firstBackup) {
      // Updates registered log files
      // We record ALL old WAL files as registered, because
      // this is the first full backup in the system and these
      // files are not needed for the next incremental backup
      List<String> logFiles = BackupUtils.getWALFilesOlderThan(conf, newTimestamps);
      backupManager.recordWALFiles(logFiles);
    }

    // SNAPSHOT_TABLES:
    backupInfo.setPhase(BackupPhase.SNAPSHOT);
    for (TableName tableName : tableList) {
      String snapshotName =
          "snapshot_" + Long.toString(EnvironmentEdgeManager.currentTime()) + "_"
              + tableName.getNamespaceAsString() + "_" + tableName.getQualifierAsString();

      snapshotTable(admin, tableName, snapshotName);
      backupInfo.setSnapshotName(tableName, snapshotName);
    }
    failStageIf(Stage.stage_3);
    // SNAPSHOT_COPY:
    // do snapshot copy
    LOG.debug("snapshot copy for " + backupId);
    snapshotCopy(backupInfo);
    // Updates incremental backup table set
    backupManager.addIncrementalBackupTableSet(backupInfo.getTables());

    // BACKUP_COMPLETE:
    // Set the overall backup status to complete; here we make sure the backup is finished.
    // After this checkpoint, even if the cancel process starts, the backup will still complete.
    backupInfo.setState(BackupState.COMPLETE);
    // The table list in backupInfo is good for both full backup and incremental backup.
    // For incremental backup, it contains the incremental backup table set.
    backupManager.writeRegionServerLogTimestamp(backupInfo.getTables(), newTimestamps);

    HashMap<TableName, HashMap<String, Long>> newTableSetTimestampMap =
        backupManager.readLogTimestampMap();

    Long newStartCode =
        BackupUtils.getMinValue(BackupUtils
            .getRSLogTimestampMins(newTableSetTimestampMap));
    backupManager.writeBackupStartCode(newStartCode);
    failStageIf(Stage.stage_4);
    // backup complete
    completeBackup(conn, backupInfo, backupManager, BackupType.FULL, conf);

  } catch (Exception e) {

    if (autoRestoreOnFailure) {
      failBackup(conn, backupInfo, backupManager, e, "Unexpected BackupException : ",
        BackupType.FULL, conf);
    }
    throw new IOException(e);
  }
}
 
Example 12
Source File: IncrementalTableBackupClient.java    From hbase with Apache License 2.0
@SuppressWarnings("unchecked")
protected Map<byte[], List<Path>>[] handleBulkLoad(List<TableName> sTableList)
        throws IOException {
  Map<byte[], List<Path>>[] mapForSrc = new Map[sTableList.size()];
  List<String> activeFiles = new ArrayList<>();
  List<String> archiveFiles = new ArrayList<>();
  Pair<Map<TableName, Map<String, Map<String, List<Pair<String, Boolean>>>>>, List<byte[]>> pair =
          backupManager.readBulkloadRows(sTableList);
  Map<TableName, Map<String, Map<String, List<Pair<String, Boolean>>>>> map = pair.getFirst();
  FileSystem tgtFs;
  try {
    tgtFs = FileSystem.get(new URI(backupInfo.getBackupRootDir()), conf);
  } catch (URISyntaxException use) {
    throw new IOException("Unable to get FileSystem", use);
  }
  Path rootdir = CommonFSUtils.getRootDir(conf);
  Path tgtRoot = new Path(new Path(backupInfo.getBackupRootDir()), backupId);

  for (Map.Entry<TableName, Map<String, Map<String, List<Pair<String, Boolean>>>>> tblEntry :
    map.entrySet()) {
    TableName srcTable = tblEntry.getKey();

    int srcIdx = getIndex(srcTable, sTableList);
    if (srcIdx < 0) {
      LOG.warn("Couldn't find " + srcTable + " in source table List");
      continue;
    }
    if (mapForSrc[srcIdx] == null) {
      mapForSrc[srcIdx] = new TreeMap<>(Bytes.BYTES_COMPARATOR);
    }
    Path tblDir = CommonFSUtils.getTableDir(rootdir, srcTable);
    Path tgtTable = new Path(new Path(tgtRoot, srcTable.getNamespaceAsString()),
        srcTable.getQualifierAsString());
    for (Map.Entry<String,Map<String,List<Pair<String, Boolean>>>> regionEntry :
      tblEntry.getValue().entrySet()){
      String regionName = regionEntry.getKey();
      Path regionDir = new Path(tblDir, regionName);
      // map from family to List of hfiles
      for (Map.Entry<String,List<Pair<String, Boolean>>> famEntry :
        regionEntry.getValue().entrySet()) {
        String fam = famEntry.getKey();
        Path famDir = new Path(regionDir, fam);
        List<Path> files;
        if (!mapForSrc[srcIdx].containsKey(Bytes.toBytes(fam))) {
          files = new ArrayList<>();
          mapForSrc[srcIdx].put(Bytes.toBytes(fam), files);
        } else {
          files = mapForSrc[srcIdx].get(Bytes.toBytes(fam));
        }
        Path archiveDir = HFileArchiveUtil.getStoreArchivePath(conf, srcTable, regionName, fam);
        String tblName = srcTable.getQualifierAsString();
        Path tgtFam = new Path(new Path(tgtTable, regionName), fam);
        if (!tgtFs.mkdirs(tgtFam)) {
          throw new IOException("couldn't create " + tgtFam);
        }
        for (Pair<String, Boolean> fileWithState : famEntry.getValue()) {
          String file = fileWithState.getFirst();
          int idx = file.lastIndexOf("/");
          String filename = file;
          if (idx > 0) {
            filename = file.substring(idx+1);
          }
          Path p = new Path(famDir, filename);
          Path tgt = new Path(tgtFam, filename);
          Path archive = new Path(archiveDir, filename);
          if (fs.exists(p)) {
            if (LOG.isTraceEnabled()) {
              LOG.trace("found bulk hfile " + file + " in " + famDir + " for " + tblName);
              LOG.trace("copying " + p + " to " + tgt);
            }
            activeFiles.add(p.toString());
          } else if (fs.exists(archive)){
            LOG.debug("copying archive " + archive + " to " + tgt);
            archiveFiles.add(archive.toString());
          }
          files.add(tgt);
        }
      }
    }
  }

  copyBulkLoadedFiles(activeFiles, archiveFiles);
  backupManager.deleteBulkLoadedRows(pair.getSecond());
  return mapForSrc;
}
 
Example 13
Source File: FullTableBackupClient.java    From hbase with Apache License 2.0
/**
 * Backup request execution.
 *
 * @throws IOException if the execution of the backup fails
 */
@Override
public void execute() throws IOException {
  try (Admin admin = conn.getAdmin()) {
    // Begin BACKUP
    beginBackup(backupManager, backupInfo);
    String savedStartCode;
    boolean firstBackup;
    // do snapshot for full table backup

    savedStartCode = backupManager.readBackupStartCode();
    firstBackup = savedStartCode == null || Long.parseLong(savedStartCode) == 0L;
    if (firstBackup) {
      // This is our first backup. Let's put a marker in the system table so that we can
      // hold the logs while we do the backup.
      backupManager.writeBackupStartCode(0L);
    }
    // We roll the log here before we take the snapshot. It is possible that the log contains
    // duplicate data already present in the snapshot, but if we rolled after the snapshot we
    // could lose data.
    // A better approach would be to roll the log on each RS in the same global procedure as
    // the snapshot.
    LOG.info("Execute roll log procedure for full backup ...");

    Map<String, String> props = new HashMap<>();
    props.put("backupRoot", backupInfo.getBackupRootDir());
    admin.execProcedure(LogRollMasterProcedureManager.ROLLLOG_PROCEDURE_SIGNATURE,
      LogRollMasterProcedureManager.ROLLLOG_PROCEDURE_NAME, props);

    newTimestamps = backupManager.readRegionServerLastLogRollResult();
    if (firstBackup) {
      // Updates registered log files
      // We record ALL old WAL files as registered, because
      // this is the first full backup in the system and these
      // files are not needed for the next incremental backup
      List<String> logFiles = BackupUtils.getWALFilesOlderThan(conf, newTimestamps);
      backupManager.recordWALFiles(logFiles);
    }

    // SNAPSHOT_TABLES:
    backupInfo.setPhase(BackupPhase.SNAPSHOT);
    for (TableName tableName : tableList) {
      String snapshotName =
          "snapshot_" + Long.toString(EnvironmentEdgeManager.currentTime()) + "_"
              + tableName.getNamespaceAsString() + "_" + tableName.getQualifierAsString();

      snapshotTable(admin, tableName, snapshotName);
      backupInfo.setSnapshotName(tableName, snapshotName);
    }

    // SNAPSHOT_COPY:
    // do snapshot copy
    LOG.debug("snapshot copy for " + backupId);
    snapshotCopy(backupInfo);
    // Updates incremental backup table set
    backupManager.addIncrementalBackupTableSet(backupInfo.getTables());

    // BACKUP_COMPLETE:
    // Set the overall backup status to complete; here we make sure the backup is finished.
    // After this checkpoint, even if the cancel process starts, the backup will still complete.
    backupInfo.setState(BackupState.COMPLETE);
    // The table list in backupInfo is good for both full backup and incremental backup.
    // For incremental backup, it contains the incremental backup table set.
    backupManager.writeRegionServerLogTimestamp(backupInfo.getTables(), newTimestamps);

    HashMap<TableName, HashMap<String, Long>> newTableSetTimestampMap =
        backupManager.readLogTimestampMap();

    Long newStartCode =
        BackupUtils.getMinValue(BackupUtils
            .getRSLogTimestampMins(newTableSetTimestampMap));
    backupManager.writeBackupStartCode(newStartCode);

    // backup complete
    completeBackup(conn, backupInfo, backupManager, BackupType.FULL, conf);
  } catch (Exception e) {
    failBackup(conn, backupInfo, backupManager, e, "Unexpected BackupException : ",
      BackupType.FULL, conf);
    throw new IOException(e);
  }
}
 
Example 14
Source File: BackupUtils.java    From hbase with Apache License 2.0
public static String getFileNameCompatibleString(TableName table) {
  return table.getNamespaceAsString() + "-" + table.getQualifierAsString();
}
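For a hypothetical table ns1:t1 this yields "ns1-t1"; a '-' is used as the separator because ':' is not allowed in HDFS file names:

String name = BackupUtils.getFileNameCompatibleString(TableName.valueOf("ns1:t1"));
// name -> "ns1-t1"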
 
Example 15
Source File: HBaseStorageService.java    From antsdb with GNU Lesser General Public License v3.0
private void init() throws Exception {
    // create antsdb namespaces and tables if they are missing
    
    setup();
    
    // load checkpoint
    
    this.cp = new CheckPoint(TableName.valueOf(this.sysns, TABLE_SYNC_PARAM), this.isMutable);
    this.cp.readFromHBase(getConnection());
    
    // load system tables
    
    Admin admin = this.hbaseConnection.getAdmin();
    TableName[] tables = admin.listTableNamesByNamespace(this.sysns);
    for (TableName i : tables) {
        String name = i.getQualifierAsString();
        if (!name.startsWith("x")) {
            continue;
        }
        int id = Integer.parseInt(name.substring(1), 16);
        SysMetaRow meta = new SysMetaRow(id);
        meta.setNamespace(Orca.SYSNS);
        meta.setTableName(name);
        meta.setType(TableType.DATA);
        HBaseTable table = new HBaseTable(this, meta);
        this.tableById.put(id, table);
    }
    
    // validations
    
    if (this.cp.serverId != this.humpback.getServerId()) {
        throw new OrcaHBaseException("hbase is currently linked to a different antsdb instance {}", cp.serverId);
    }
    if (this.cp.getCurrentSp() > this.humpback.getSpaceManager().getAllocationPointer()) {
        throw new OrcaHBaseException("hbase synchronization pointer is ahead of local log");
    }
    
    // update checkpoint
    
    if (this.isMutable) {
        this.cp.setActive(true);
        this.cp.updateHBase(getConnection());
    }
    
    // misc
    
    this.replicationHandler = new HBaseReplicationHandler(this.humpback, this);
}
 
Example 16
Source File: CommonFSUtils.java    From hbase with Apache License 2.0
/**
 * Returns the Table directory under the WALRootDir for the specified table name
 * @param conf configuration used to get the WALRootDir
 * @param tableName Table to get the directory for
 * @return a path to the WAL table directory for the specified table
 * @throws IOException if there is an exception determining the WALRootDir
 */
public static Path getWALTableDir(final Configuration conf, final TableName tableName)
    throws IOException {
  Path baseDir = new Path(getWALRootDir(conf), HConstants.BASE_NAMESPACE_DIR);
  return new Path(new Path(baseDir, tableName.getNamespaceAsString()),
    tableName.getQualifierAsString());
}
 
Example 17
Source File: BackupUtils.java    From hbase with Apache License 2.0
/**
 * Given the backup root dir, backup id and the table name, return the backup image location,
 * which is also where the backup manifest file is located. The return value looks like:
 * "hdfs://backup.hbase.org:9000/user/biadmin/backup1/backup_1396650096738/default/t1_dn/"
 * @param backupRootDir backup root directory
 * @param backupId backup id
 * @param tableName table name
 * @return backupPath String for the particular table
 */
public static String getTableBackupDir(String backupRootDir, String backupId,
        TableName tableName) {
  return backupRootDir + Path.SEPARATOR + backupId + Path.SEPARATOR
      + tableName.getNamespaceAsString() + Path.SEPARATOR + tableName.getQualifierAsString()
      + Path.SEPARATOR;
}
 
Example 18
Source File: HBackupFileSystem.java    From hbase with Apache License 2.0
/**
 * Given the backup root dir, backup id and the table name, return the backup image location,
 * which is also where the backup manifest file is located. The return value looks like:
 * "hdfs://backup.hbase.org:9000/user/biadmin/backup/backup_1396650096738/default/t1_dn/", where
 * "hdfs://backup.hbase.org:9000/user/biadmin/backup" is a backup root directory
 * @param backupRootDir backup root directory
 * @param backupId backup id
 * @param tableName table name
 * @return backupPath String for the particular table
 */
public static String
    getTableBackupDir(String backupRootDir, String backupId, TableName tableName) {
  return backupRootDir + Path.SEPARATOR + backupId + Path.SEPARATOR
      + tableName.getNamespaceAsString() + Path.SEPARATOR + tableName.getQualifierAsString()
      + Path.SEPARATOR;
}
 
Example 19
Source File: HBCKFsUtils.java    From hbase-operator-tools with Apache License 2.0
/**
 * Returns the {@link org.apache.hadoop.fs.Path} object representing the table directory under
 * path rootdir
 *
 * COPIED from CommonFSUtils.getTableDir
 *
 * @param rootdir qualified path of HBase root directory
 * @param tableName name of table
 * @return {@link org.apache.hadoop.fs.Path} for table
 */
public static Path getTableDir(Path rootdir, final TableName tableName) {
  return new Path(getNamespaceDir(rootdir, tableName.getNamespaceAsString()),
    tableName.getQualifierAsString());
}
 
Example 20
Source File: CommonFSUtils.java    From hbase with Apache License 2.0
/**
 * Returns the {@link org.apache.hadoop.fs.Path} object representing the table directory under
 * path rootdir
 *
 * @param rootdir qualified path of HBase root directory
 * @param tableName name of table
 * @return {@link org.apache.hadoop.fs.Path} for table
 */
public static Path getTableDir(Path rootdir, final TableName tableName) {
  return new Path(getNamespaceDir(rootdir, tableName.getNamespaceAsString()),
      tableName.getQualifierAsString());
}