Java Code Examples for org.apache.hadoop.hbase.util.FSUtils#setVersion()

The following examples show how to use org.apache.hadoop.hbase.util.FSUtils#setVersion(). Each example notes its original project and source file.
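FSUtils#setVersion() writes the hbase.version file into the HBase root directory; the master's startup checks compare that file against the expected filesystem version. The examples below share one pattern: record the root dir in the configuration, create the directory, then write the version file. Here is a minimal sketch of that pattern (the class and method names are illustrative assumptions; note that setRootDir moved to CommonFSUtils in newer HBase releases, as Example 4 shows):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.util.FSUtils;

public class SetVersionSketch {
  // Bootstraps an HBase root directory: records it in the configuration,
  // creates it on the filesystem, then writes the hbase.version file into it.
  public static Path bootstrapRootDir(Configuration conf, Path rootDir)
      throws IOException {
    FileSystem fs = FileSystem.get(conf);
    FSUtils.setRootDir(conf, rootDir);  // CommonFSUtils.setRootDir on newer HBase
    fs.mkdirs(rootDir);                 // the directory must exist before setVersion
    FSUtils.setVersion(fs, rootDir);    // write the hbase.version file
    return rootDir;
  }
}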
Example 1
Source File: HBaseTestClusterUtil.java    From tajo with Apache License 2.0
/**
 * Creates an hbase rootdir in user home directory.  Also creates hbase
 * version file.  Normally you won't make use of this method.  The root hbasedir
 * is created for you as part of mini cluster startup.  You'd only use this
 * method if you were doing manual operations.
 * @return Fully qualified path to hbase root dir
 * @throws java.io.IOException
 */
public Path createRootDir() throws IOException {
  FileSystem fs = FileSystem.get(this.conf);
  Path hbaseRootdir = getDefaultRootDirPath();
  FSUtils.setRootDir(this.conf, hbaseRootdir);
  fs.mkdirs(hbaseRootdir);
  FSUtils.setVersion(fs, hbaseRootdir);
  return hbaseRootdir;
}
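To verify what was written, the version file can be read back with FSUtils.getVersion(), which returns the version string stored in hbase.version (a fragment, assuming fs and hbaseRootdir from the example above):

// getVersion throws DeserializationException if the version file is corrupt.
String version = FSUtils.getVersion(fs, hbaseRootdir);
// For a freshly bootstrapped root dir this matches HConstants.FILE_SYSTEM_VERSION.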
 
Example 2
Source File: MasterFileSystem.java    From hbase with Apache License 2.0
/**
 * Check the rootdir. Make sure it exists and is wholesome before returning,
 * bootstrapping it if needed (populating the directory with the necessary
 * bootup files, including the version file).
 */
private void checkRootDir(final Path rd, final Configuration c, final FileSystem fs)
  throws IOException {
  int threadWakeFrequency = c.getInt(HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000);
  // If FS is in safe mode wait till out of it.
  FSUtils.waitOnSafeMode(c, threadWakeFrequency);

  // Filesystem is good. Go ahead and check for hbase.rootdir.
  FileStatus status;
  try {
    status = fs.getFileStatus(rd);
  } catch (FileNotFoundException e) {
    status = null;
  }
  int versionFileWriteAttempts = c.getInt(HConstants.VERSION_FILE_WRITE_ATTEMPTS,
    HConstants.DEFAULT_VERSION_FILE_WRITE_ATTEMPTS);
  try {
    if (status == null) {
      if (!fs.mkdirs(rd)) {
        throw new IOException("Can not create configured '" + HConstants.HBASE_DIR + "' " + rd);
      }
      // DFS leaves safe mode with 0 DNs when there are 0 blocks.
      // We used to handle this by checking the current DN count and waiting until
      // it is nonzero. With security, the check for datanode count doesn't work --
      // it is a privileged op. So instead we adopt the strategy of the jobtracker
      // and simply retry file creation during bootstrap indefinitely. As soon as
      // there is one datanode it will succeed. Permission problems should have
      // already been caught by mkdirs above.
      FSUtils.setVersion(fs, rd, threadWakeFrequency, versionFileWriteAttempts);
    } else {
      if (!status.isDirectory()) {
        throw new IllegalArgumentException(
          "Configured '" + HConstants.HBASE_DIR + "' " + rd + " is not a directory.");
      }
      // as above
      FSUtils.checkVersion(fs, rd, true, threadWakeFrequency, versionFileWriteAttempts);
    }
  } catch (DeserializationException de) {
    LOG.error(HBaseMarkers.FATAL, "Please fix invalid configuration for '{}' {}",
      HConstants.HBASE_DIR, rd, de);
    throw new IOException(de);
  } catch (IllegalArgumentException iae) {
    LOG.error(HBaseMarkers.FATAL, "Please fix invalid configuration for '{}' {}",
      HConstants.HBASE_DIR, rd, iae);
    throw iae;
  }
  // Make sure cluster ID exists
  if (!FSUtils.checkClusterIdExists(fs, rd, threadWakeFrequency)) {
    FSUtils.setClusterId(fs, rd, new ClusterId(), threadWakeFrequency);
  }
  clusterId = FSUtils.getClusterId(fs, rd);
}
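The four-argument overload used above is what makes bootstrap robust: it retries the version-file write, sleeping between attempts, so startup tolerates a DFS that briefly reports zero datanodes. The same call in isolation (a fragment, assuming conf, fs, and rootDir are in scope):

// Retry version-file creation, sleeping waitMs between attempts.
int waitMs = conf.getInt(HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000);
int attempts = conf.getInt(HConstants.VERSION_FILE_WRITE_ATTEMPTS,
    HConstants.DEFAULT_VERSION_FILE_WRITE_ATTEMPTS);
FSUtils.setVersion(fs, rootDir, waitMs, attempts);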
 
Example 3
Source File: HBaseService.java    From kite with Apache License 2.0
/**
 * Configure the HBase cluster before launching it
 * 
 * @param config
 *          already created Hadoop configuration that we'll further configure
 *          for HBase
 * @param zkClientPort
 *          The client port ZooKeeper is listening on
 * @param hdfsFs
 *          The HDFS FileSystem this HBase cluster will run on top of
 * @param bindIP
 *          The IP address to force-bind all sockets to. If null, defaults
 *          are used
 * @param masterPort
 *          The port the master listens on
 * @param regionserverPort
 *          The port the regionserver listens on
 * @return The updated Configuration object.
 * @throws IOException
 */
private static Configuration configureHBaseCluster(Configuration config,
    int zkClientPort, FileSystem hdfsFs, String bindIP, int masterPort,
    int regionserverPort) throws IOException {
  // Configure the zookeeper port
  config
      .set(HConstants.ZOOKEEPER_CLIENT_PORT, Integer.toString(zkClientPort));
  // Initialize HDFS path configurations required by HBase
  Path hbaseDir = new Path(hdfsFs.makeQualified(hdfsFs.getHomeDirectory()),
      "hbase");
  FSUtils.setRootDir(config, hbaseDir);
  hdfsFs.mkdirs(hbaseDir);
  config.set("fs.defaultFS", hdfsFs.getUri().toString());
  config.set("fs.default.name", hdfsFs.getUri().toString());
  FSUtils.setVersion(hdfsFs, hbaseDir);

  // Configure the bind addresses and ports. If running in OpenShift, we only
  // have permission to bind to the private IP address, accessible through an
  // environment variable.
  logger.info("HBase force binding to ip: " + bindIP);
  config.set("hbase.master.ipc.address", bindIP);
  config.set(HConstants.MASTER_PORT, Integer.toString(masterPort));
  config.set("hbase.regionserver.ipc.address", bindIP);
  config
      .set(HConstants.REGIONSERVER_PORT, Integer.toString(regionserverPort));
  config.set(HConstants.ZOOKEEPER_QUORUM, bindIP);

  // By default, the HBase master and regionservers report to zookeeper the
  // hostname determined by reverse DNS lookup, not the bind address we set.
  // This means that when the two differ, the daemons won't actually be able
  // to connect to each other. Here, we do something that's illegal in 48
  // states: use reflection to override a private static final field in the
  // DNS class, cachedHostname, forcing the hostname that reverse DNS finds.
  // This may not be compatible with newer versions of Hadoop.
  try {
    Field cachedHostname = DNS.class.getDeclaredField("cachedHostname");
    cachedHostname.setAccessible(true);
    Field modifiersField = Field.class.getDeclaredField("modifiers");
    modifiersField.setAccessible(true);
    modifiersField.setInt(cachedHostname, cachedHostname.getModifiers()
        & ~Modifier.FINAL);
    cachedHostname.set(null, bindIP);
  } catch (Exception e) {
    // Reflection can throw so many checked exceptions. Let's wrap in an
    // IOException.
    throw new IOException(e);
  }

  // By setting the info ports to -1, we won't launch the master or
  // regionserver info web interfaces
  config.set(HConstants.MASTER_INFO_PORT, "-1");
  config.set(HConstants.REGIONSERVER_INFO_PORT, "-1");
  return config;
}
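As the comment warns, the reflection trick is fragile: on Java 12 and later, java.lang.reflect.Field no longer exposes its modifiers field through reflection, so the getDeclaredField("modifiers") lookup fails and is rethrown here as an IOException. On recent HBase releases a supported alternative is to set the advertised hostname directly; whether these keys exist depends on your HBase version, so treat them as an assumption to verify:

// Supported alternative to the DNS reflection hack on newer HBase versions.
config.set("hbase.master.hostname", bindIP);
config.set("hbase.regionserver.hostname", bindIP);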
 
Example 4
Source File: HBaseTestingUtility.java    From hbase with Apache License 2.0
/**
 * Creates an hbase rootdir in user home directory.  Also creates hbase
 * version file.  Normally you won't make use of this method.  The root hbasedir
 * is created for you as part of mini cluster startup.  You'd only use this
 * method if you were doing manual operations.
 * @param create Whether to fetch a new root or data directory path, even if
 * one has already been fetched. Note: the directory is created regardless of
 * whether the path has been fetched before; if it already exists, it is
 * overwritten.
 * @return Fully qualified path to hbase root dir
 * @throws IOException
 */
public Path createRootDir(boolean create) throws IOException {
  FileSystem fs = FileSystem.get(this.conf);
  Path hbaseRootdir = getDefaultRootDirPath(create);
  CommonFSUtils.setRootDir(this.conf, hbaseRootdir);
  fs.mkdirs(hbaseRootdir);
  FSUtils.setVersion(fs, hbaseRootdir);
  return hbaseRootdir;
}
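A hypothetical caller, for completeness (HBaseTestingUtility's no-argument constructor is real; the surrounding usage is illustrative only):

HBaseTestingUtility util = new HBaseTestingUtility();
Path rootDir = util.createRootDir(true);  // true: fetch a fresh root dir path
// rootDir now exists and contains the hbase.version file.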