Java Code Examples for org.apache.hadoop.hdfs.server.common.Storage#STORAGE_DIR_CURRENT

The following examples show how to use org.apache.hadoop.hdfs.server.common.Storage#STORAGE_DIR_CURRENT. Each example comes from an open-source project; the source file, project, and license are noted above each snippet.
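Before the examples: `STORAGE_DIR_CURRENT` names the `current` subdirectory that every HDFS storage directory (NameNode name dir or DataNode data dir) keeps its live files under. The minimal sketch below uses a stand-in constant with the assumed value `"current"` rather than the real Hadoop class, just to show how the path is formed:

```java
import java.io.File;

public class StorageDirDemo {
    // Stand-in for org.apache.hadoop.hdfs.server.common.Storage#STORAGE_DIR_CURRENT;
    // the value "current" is assumed here for illustration.
    public static final String STORAGE_DIR_CURRENT = "current";

    // Resolve the "current" directory under a storage root.
    public static File currentDir(File storageRoot) {
        return new File(storageRoot, STORAGE_DIR_CURRENT);
    }

    public static void main(String[] args) {
        System.out.println(currentDir(new File("/data/dfs/name1")));
    }
}
```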
Example 1
Source File: TestDFSStorageStateRecovery.java    From hadoop with Apache License 2.0
/**
 * For block pool, verify that the current and/or previous exist as indicated
 * by the method parameters.  If previous exists, verify that
 * it hasn't been modified, by comparing the checksums of all the
 * files it contains against their original checksums.  It is assumed
 * that the server has recovered.
 * @param baseDirs directories pointing to block pool storage
 * @param currentShouldExist whether the current directory should exist under storage
 * @param previousShouldExist whether the previous directory should exist under storage
 */
void checkResultBlockPool(String[] baseDirs, boolean currentShouldExist,
    boolean previousShouldExist) throws IOException
{
  if (currentShouldExist) {
    for (int i = 0; i < baseDirs.length; i++) {
      File bpCurDir = new File(baseDirs[i], Storage.STORAGE_DIR_CURRENT);
      assertEquals(UpgradeUtilities.checksumContents(DATA_NODE, bpCurDir,
              false), UpgradeUtilities.checksumMasterBlockPoolContents());
    }
  }
  if (previousShouldExist) {
    for (int i = 0; i < baseDirs.length; i++) {
      File bpPrevDir = new File(baseDirs[i], Storage.STORAGE_DIR_PREVIOUS);
      assertTrue(bpPrevDir.isDirectory());
      assertEquals(
                   UpgradeUtilities.checksumContents(DATA_NODE, bpPrevDir,
                   false), UpgradeUtilities.checksumMasterBlockPoolContents());
    }
  }
}
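`UpgradeUtilities.checksumContents` is a test helper whose implementation is not shown here. As a sketch of the idea — a deterministic checksum over a directory's files, so two directories with identical contents compare equal — a CRC32-based stand-in might look like this (this is an illustration, not Hadoop's actual code):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Arrays;
import java.util.zip.CRC32;

public class DirChecksum {
    /**
     * Checksum the contents of all regular files directly under dir,
     * visiting them in sorted name order so the result is deterministic.
     */
    public static long checksumContents(File dir) throws IOException {
        CRC32 crc = new CRC32();
        File[] files = dir.listFiles(File::isFile);
        if (files == null) {
            throw new IOException("not a directory: " + dir);
        }
        Arrays.sort(files); // fixed order => reproducible checksum
        for (File f : files) {
            crc.update(Files.readAllBytes(f.toPath()));
        }
        return crc.getValue();
    }
}
```

Two directories holding byte-identical files then yield equal checksums, which is all the `assertEquals` in the test above relies on.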
 
Example 2
Source File: TestBlockPoolSliceStorage.java    From big-c with Apache License 2.0
public void getRestoreDirectoryForBlockFile(String fileName, int nestingLevel) {
  BlockPoolSliceStorage storage = makeBlockPoolStorage();
  final String blockFileSubdir = makeRandomBlockFileSubdir(nestingLevel);
  final String blockFileName = fileName;

  String deletedFilePath =
      storage.getSingularStorageDir().getRoot() + File.separator +
      BlockPoolSliceStorage.TRASH_ROOT_DIR +
      blockFileSubdir + blockFileName;

  String expectedRestorePath =
      storage.getSingularStorageDir().getRoot() + File.separator +
          Storage.STORAGE_DIR_CURRENT +
          blockFileSubdir.substring(0, blockFileSubdir.length() - 1);

  LOG.info("Generated deleted file path " + deletedFilePath);
  assertThat(storage.getRestoreDirectory(new File(deletedFilePath)),
             is(expectedRestorePath));

}
 
Example 3
Source File: TestBlockPoolSliceStorage.java    From big-c with Apache License 2.0
/**
 * Test conversion from a block file path to its target trash
 * directory.
 */
public void getTrashDirectoryForBlockFile(String fileName, int nestingLevel) {
  final String blockFileSubdir = makeRandomBlockFileSubdir(nestingLevel);
  final String blockFileName = fileName;

  String testFilePath =
      storage.getSingularStorageDir().getRoot() + File.separator +
          Storage.STORAGE_DIR_CURRENT +
          blockFileSubdir + blockFileName;

  String expectedTrashPath =
      storage.getSingularStorageDir().getRoot() + File.separator +
          BlockPoolSliceStorage.TRASH_ROOT_DIR +
          blockFileSubdir.substring(0, blockFileSubdir.length() - 1);

  LOG.info("Got subdir " + blockFileSubdir);
  LOG.info("Generated file path " + testFilePath);
  assertThat(storage.getTrashDirectory(new File(testFilePath)), is(expectedTrashPath));
}
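The two tests above exercise a pair of inverse mappings: a block file under `.../current/<subdirs>/<file>` maps to a trash directory `.../trash/<subdirs>`, and a deleted file under `trash/` maps back to a restore directory under `current/`. A self-contained sketch of that string manipulation (the constant values `"current"` and `"trash"` are assumed, and this is an illustrative reimplementation, not `BlockPoolSliceStorage`'s actual code):

```java
public class TrashPathMapper {
    public static final String CURRENT = "current"; // assumed STORAGE_DIR_CURRENT value
    public static final String TRASH = "trash";     // assumed TRASH_ROOT_DIR value

    /** Map a block file path under .../current/... to its trash directory. */
    public static String trashDirectory(String blockFilePath) {
        return swapRoot(blockFilePath, CURRENT, TRASH);
    }

    /** Inverse mapping: a deleted file under .../trash/... restores under current/. */
    public static String restoreDirectory(String trashFilePath) {
        return swapRoot(trashFilePath, TRASH, CURRENT);
    }

    // Replace the "from" path component with "to", keep the subdir chain,
    // and drop the trailing file name.
    private static String swapRoot(String path, String from, String to) {
        int idx = path.indexOf("/" + from + "/");
        if (idx < 0) {
            throw new IllegalArgumentException("no /" + from + "/ in " + path);
        }
        String root = path.substring(0, idx);
        String rest = path.substring(idx + from.length() + 2);
        int slash = rest.lastIndexOf('/');
        String subdirs = slash < 0 ? "" : "/" + rest.substring(0, slash);
        return root + "/" + to + subdirs;
    }
}
```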
 
Example 4
Source File: TestStartup.java    From big-c with Apache License 2.0
/**
 * Corrupts the MD5 sum of the fsimage.
 * 
 * @param corruptAll
 *          whether to corrupt one or all of the MD5 sums in the configured
 *          namedirs
 * @throws IOException
 */
private void corruptFSImageMD5(boolean corruptAll) throws IOException {
  List<URI> nameDirs = (List<URI>)FSNamesystem.getNamespaceDirs(config);
  // Corrupt the md5 files in all the namedirs
  for (URI uri: nameDirs) {
    // Directory layout looks like:
    // test/data/dfs/nameN/current/{fsimage,edits,...}
    File nameDir = new File(uri.getPath());
    File dfsDir = nameDir.getParentFile();
    assertEquals("dfs", dfsDir.getName()); // make sure we got the right dir
    // Set the md5 file to all zeros
    File imageFile = new File(nameDir,
        Storage.STORAGE_DIR_CURRENT + "/"
        + NNStorage.getImageFileName(0));
    MD5FileUtils.saveMD5File(imageFile, new MD5Hash(new byte[16]));
    // Only need to corrupt one if !corruptAll
    if (!corruptAll) {
      break;
    }
  }
}
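`MD5FileUtils.saveMD5File` above overwrites the fsimage's MD5 sidecar with a digest of sixteen zero bytes. A sketch of writing such a sidecar in plain Java — assuming an md5sum-style `<hex> *<name>` line format, which is a guess at the layout rather than the documented `MD5FileUtils` format:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class Md5Corruptor {
    /** Render a digest as lowercase hex; 16 zero bytes become 32 '0' chars. */
    public static String hex(byte[] digest) {
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    /** Write an all-zero MD5 sidecar next to the image file. */
    public static void saveZeroMd5(Path imageFile) throws IOException {
        Path md5File = Path.of(imageFile + ".md5");
        String line = hex(new byte[16]) + " *" + imageFile.getFileName() + "\n";
        Files.writeString(md5File, line, StandardCharsets.UTF_8);
    }
}
```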
 
Example 5
Source File: TestDFSStorageStateRecovery.java    From big-c with Apache License 2.0
/**
 * For block pool, verify that the current and/or previous exist as indicated
 * by the method parameters.  If previous exists, verify that
 * it hasn't been modified, by comparing the checksums of all the
 * files it contains against their original checksums.  It is assumed
 * that the server has recovered.
 * @param baseDirs directories pointing to block pool storage
 * @param currentShouldExist whether the current directory should exist under storage
 * @param previousShouldExist whether the previous directory should exist under storage
 */
void checkResultBlockPool(String[] baseDirs, boolean currentShouldExist,
    boolean previousShouldExist) throws IOException
{
  if (currentShouldExist) {
    for (int i = 0; i < baseDirs.length; i++) {
      File bpCurDir = new File(baseDirs[i], Storage.STORAGE_DIR_CURRENT);
      assertEquals(UpgradeUtilities.checksumContents(DATA_NODE, bpCurDir,
              false), UpgradeUtilities.checksumMasterBlockPoolContents());
    }
  }
  if (previousShouldExist) {
    for (int i = 0; i < baseDirs.length; i++) {
      File bpPrevDir = new File(baseDirs[i], Storage.STORAGE_DIR_PREVIOUS);
      assertTrue(bpPrevDir.isDirectory());
      assertEquals(
                   UpgradeUtilities.checksumContents(DATA_NODE, bpPrevDir,
                   false), UpgradeUtilities.checksumMasterBlockPoolContents());
    }
  }
}
 
Example 6
Source File: TestBlockPoolSliceStorage.java    From hadoop with Apache License 2.0
public void getRestoreDirectoryForBlockFile(String fileName, int nestingLevel) {
  BlockPoolSliceStorage storage = makeBlockPoolStorage();
  final String blockFileSubdir = makeRandomBlockFileSubdir(nestingLevel);
  final String blockFileName = fileName;

  String deletedFilePath =
      storage.getSingularStorageDir().getRoot() + File.separator +
      BlockPoolSliceStorage.TRASH_ROOT_DIR +
      blockFileSubdir + blockFileName;

  String expectedRestorePath =
      storage.getSingularStorageDir().getRoot() + File.separator +
          Storage.STORAGE_DIR_CURRENT +
          blockFileSubdir.substring(0, blockFileSubdir.length() - 1);

  LOG.info("Generated deleted file path " + deletedFilePath);
  assertThat(storage.getRestoreDirectory(new File(deletedFilePath)),
             is(expectedRestorePath));

}
 
Example 7
Source File: TestBlockPoolSliceStorage.java    From hadoop with Apache License 2.0
/**
 * Test conversion from a block file path to its target trash
 * directory.
 */
public void getTrashDirectoryForBlockFile(String fileName, int nestingLevel) {
  final String blockFileSubdir = makeRandomBlockFileSubdir(nestingLevel);
  final String blockFileName = fileName;

  String testFilePath =
      storage.getSingularStorageDir().getRoot() + File.separator +
          Storage.STORAGE_DIR_CURRENT +
          blockFileSubdir + blockFileName;

  String expectedTrashPath =
      storage.getSingularStorageDir().getRoot() + File.separator +
          BlockPoolSliceStorage.TRASH_ROOT_DIR +
          blockFileSubdir.substring(0, blockFileSubdir.length() - 1);

  LOG.info("Got subdir " + blockFileSubdir);
  LOG.info("Generated file path " + testFilePath);
  assertThat(storage.getTrashDirectory(new File(testFilePath)), is(expectedTrashPath));
}
 
Example 8
Source File: TestStartup.java    From hadoop with Apache License 2.0
/**
 * Corrupts the MD5 sum of the fsimage.
 * 
 * @param corruptAll
 *          whether to corrupt one or all of the MD5 sums in the configured
 *          namedirs
 * @throws IOException
 */
private void corruptFSImageMD5(boolean corruptAll) throws IOException {
  List<URI> nameDirs = (List<URI>)FSNamesystem.getNamespaceDirs(config);
  // Corrupt the md5 files in all the namedirs
  for (URI uri: nameDirs) {
    // Directory layout looks like:
    // test/data/dfs/nameN/current/{fsimage,edits,...}
    File nameDir = new File(uri.getPath());
    File dfsDir = nameDir.getParentFile();
    assertEquals("dfs", dfsDir.getName()); // make sure we got the right dir
    // Set the md5 file to all zeros
    File imageFile = new File(nameDir,
        Storage.STORAGE_DIR_CURRENT + "/"
        + NNStorage.getImageFileName(0));
    MD5FileUtils.saveMD5File(imageFile, new MD5Hash(new byte[16]));
    // Only need to corrupt one if !corruptAll
    if (!corruptAll) {
      break;
    }
  }
}
 
Example 9
Source File: UpgradeUtilities.java    From hadoop with Apache License 2.0
public static void createBlockPoolVersionFile(File bpDir,
    StorageInfo version, String bpid) throws IOException {
  // Create block pool version files
  if (DataNodeLayoutVersion.supports(
      LayoutVersion.Feature.FEDERATION, version.layoutVersion)) {
    File bpCurDir = new File(bpDir, Storage.STORAGE_DIR_CURRENT);
    BlockPoolSliceStorage bpStorage = new BlockPoolSliceStorage(version,
        bpid);
    File versionFile = new File(bpCurDir, "VERSION");
    StorageDirectory sd = new StorageDirectory(bpDir);
    bpStorage.writeProperties(versionFile, sd);
  }
}
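The `VERSION` file written above is a Java properties file holding the storage metadata for the block pool. A sketch of rendering that kind of file with `java.util.Properties` — the key names below are illustrative, not the authoritative set Hadoop writes:

```java
import java.io.IOException;
import java.io.StringWriter;
import java.util.Properties;

public class VersionFileDemo {
    /**
     * Serialize the kind of key/value pairs a block-pool VERSION file holds.
     * Keys shown here are examples, not Hadoop's exact property set.
     */
    public static String render(String bpid, int layoutVersion, long cTime)
            throws IOException {
        Properties props = new Properties();
        props.setProperty("blockpoolID", bpid);
        props.setProperty("layoutVersion", Integer.toString(layoutVersion));
        props.setProperty("cTime", Long.toString(cTime));
        StringWriter out = new StringWriter();
        props.store(out, "block pool VERSION (sketch)");
        return out.toString();
    }
}
```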
 
Example 10
Source File: UpgradeUtilities.java    From big-c with Apache License 2.0
public static void createBlockPoolVersionFile(File bpDir,
    StorageInfo version, String bpid) throws IOException {
  // Create block pool version files
  if (DataNodeLayoutVersion.supports(
      LayoutVersion.Feature.FEDERATION, version.layoutVersion)) {
    File bpCurDir = new File(bpDir, Storage.STORAGE_DIR_CURRENT);
    BlockPoolSliceStorage bpStorage = new BlockPoolSliceStorage(version,
        bpid);
    File versionFile = new File(bpCurDir, "VERSION");
    StorageDirectory sd = new StorageDirectory(bpDir);
    bpStorage.writeProperties(versionFile, sd);
  }
}
 
Example 11
Source File: TestStorageRestore.java    From RDFS with Apache License 2.0
/**
 *  check if files exist/not exist
 */
public void checkFiles(boolean valid) {
  //look at the valid storage
  File fsImg1 = new File(path1, Storage.STORAGE_DIR_CURRENT + "/" + NameNodeFile.IMAGE.getName());
  File fsImg2 = new File(path2, Storage.STORAGE_DIR_CURRENT + "/" + NameNodeFile.IMAGE.getName());
  File fsImg3 = new File(path3, Storage.STORAGE_DIR_CURRENT + "/" + NameNodeFile.IMAGE.getName());

  File fsEdits1 = new File(path1, Storage.STORAGE_DIR_CURRENT + "/" + NameNodeFile.EDITS.getName());
  File fsEdits2 = new File(path2, Storage.STORAGE_DIR_CURRENT + "/" + NameNodeFile.EDITS.getName());
  File fsEdits3 = new File(path3, Storage.STORAGE_DIR_CURRENT + "/" + NameNodeFile.EDITS.getName());

  this.printStorages(cluster.getNameNode().getFSImage());
  
  LOG.info("++++ image files = "+fsImg1.getAbsolutePath() + "," + fsImg2.getAbsolutePath() + ","+ fsImg3.getAbsolutePath());
  LOG.info("++++ edits files = "+fsEdits1.getAbsolutePath() + "," + fsEdits2.getAbsolutePath() + ","+ fsEdits3.getAbsolutePath());
  LOG.info("checkFiles compares lengths: img1=" + fsImg1.length()  + ",img2=" + fsImg2.length()  + ",img3=" + fsImg3.length());
  LOG.info("checkFiles compares lengths: edits1=" + fsEdits1.length()  + ",edits2=" + fsEdits2.length()  + ",edits3=" + fsEdits3.length());
  
  if(valid) {
    assertTrue(fsImg1.exists());
    assertTrue(fsImg2.exists());
    assertFalse(fsImg3.exists());
    assertTrue(fsEdits1.exists());
    assertTrue(fsEdits2.exists());
    assertTrue(fsEdits3.exists());
    
    // should be the same
    assertEquals(fsImg1.length(), fsImg2.length());
    assertEquals(fsEdits1.length(), fsEdits2.length());
    assertEquals(fsEdits1.length(), fsEdits3.length());
  } else {
    // should be different
    assertTrue(fsEdits2.length() != fsEdits1.length());
    assertTrue(fsEdits2.length() != fsEdits3.length());
  }
}
 
Example 12
Source File: CreateEditsLog.java    From big-c with Apache License 2.0
/**
 * @param args arguments
 * @throws IOException 
 */
public static void main(String[] args)  throws IOException {
  long startingBlockId = 1;
  int numFiles = 0;
  short replication = 1;
  int numBlocksPerFile = 0;
  long blockSize = 10;

  if (args.length == 0) {
    printUsageExit();
  }

  for (int i = 0; i < args.length; i++) { // parse command line
    if (args[i].equals("-h"))
      printUsageExit();
    if (args[i].equals("-f")) {
     if (i + 3 >= args.length || args[i+1].startsWith("-") || 
         args[i+2].startsWith("-") || args[i+3].startsWith("-")) {
       printUsageExit(
           "Missing num files, starting block and/or number of blocks");
     }
     numFiles = Integer.parseInt(args[++i]);
     startingBlockId = Integer.parseInt(args[++i]);
     numBlocksPerFile = Integer.parseInt(args[++i]);
     if (numFiles <=0 || numBlocksPerFile <= 0) {
       printUsageExit("numFiles and numBlocksPerFile must be greater than 0");
     }
    } else if (args[i].equals("-l")) {
      if (i + 1 >= args.length) {
        printUsageExit(
            "Missing block length");
      }
      blockSize = Long.parseLong(args[++i]);
    } else if (args[i].equals("-r")) {
      if (i + 1 >= args.length) {
        printUsageExit(
            "Missing replication factor");
      }
      replication = Short.parseShort(args[++i]);
    } else if (args[i].equals("-d")) {
      if (i + 1 >= args.length || args[i+1].startsWith("-")) {
        printUsageExit("Missing edits logs directory");
      }
      edits_dir = args[++i];
    } else {
      printUsageExit();
    }
  }
  

  File editsLogDir = new File(edits_dir);
  File subStructureDir = new File(edits_dir + "/" + 
      Storage.STORAGE_DIR_CURRENT);
  if ( !editsLogDir.exists() ) {
    if ( !editsLogDir.mkdir()) {
      System.out.println("cannot create " + edits_dir);
      System.exit(-1);
    }
  }
  if ( !subStructureDir.exists() ) {
    if ( !subStructureDir.mkdir()) {
      System.out.println("cannot create subdirs of " + edits_dir);
      System.exit(-1);
    }
  }
  

  FileNameGenerator nameGenerator = new FileNameGenerator(BASE_PATH, 100);
  FSEditLog editLog = FSImageTestUtil.createStandaloneEditLog(editsLogDir);
  editLog.openForWrite();
  addFiles(editLog, numFiles, replication, numBlocksPerFile, startingBlockId,
           blockSize, nameGenerator);
  editLog.logSync();
  editLog.close();
}
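One detail worth noting in the directory setup above: `File.mkdir()` creates only the last path component and fails if any parent is missing, which is why the code has to create `editsLogDir` before `current`. `File.mkdirs()` creates the whole chain in one call. A sketch of that variant (the value `"current"` for `STORAGE_DIR_CURRENT` is an assumption):

```java
import java.io.File;
import java.io.IOException;

public class EditsDirSetup {
    /**
     * Create <editsDir>/current in one step; mkdirs creates any missing
     * parents and returns false only if the directory could not be made.
     */
    public static File ensureCurrentDir(String editsDir) throws IOException {
        File current = new File(editsDir, "current"); // assumed STORAGE_DIR_CURRENT value
        if (!current.isDirectory() && !current.mkdirs()) {
            throw new IOException("cannot create " + current);
        }
        return current;
    }
}
```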
 
Example 13
Source File: CreateEditsLog.java    From hadoop with Apache License 2.0
/**
 * @param args arguments
 * @throws IOException 
 */
public static void main(String[] args)  throws IOException {
  long startingBlockId = 1;
  int numFiles = 0;
  short replication = 1;
  int numBlocksPerFile = 0;
  long blockSize = 10;

  if (args.length == 0) {
    printUsageExit();
  }

  for (int i = 0; i < args.length; i++) { // parse command line
    if (args[i].equals("-h"))
      printUsageExit();
    if (args[i].equals("-f")) {
     if (i + 3 >= args.length || args[i+1].startsWith("-") || 
         args[i+2].startsWith("-") || args[i+3].startsWith("-")) {
       printUsageExit(
           "Missing num files, starting block and/or number of blocks");
     }
     numFiles = Integer.parseInt(args[++i]);
     startingBlockId = Integer.parseInt(args[++i]);
     numBlocksPerFile = Integer.parseInt(args[++i]);
     if (numFiles <=0 || numBlocksPerFile <= 0) {
       printUsageExit("numFiles and numBlocksPerFile must be greater than 0");
     }
    } else if (args[i].equals("-l")) {
      if (i + 1 >= args.length) {
        printUsageExit(
            "Missing block length");
      }
      blockSize = Long.parseLong(args[++i]);
    } else if (args[i].equals("-r")) {
      if (i + 1 >= args.length) {
        printUsageExit(
            "Missing replication factor");
      }
      replication = Short.parseShort(args[++i]);
    } else if (args[i].equals("-d")) {
      if (i + 1 >= args.length || args[i+1].startsWith("-")) {
        printUsageExit("Missing edits logs directory");
      }
      edits_dir = args[++i];
    } else {
      printUsageExit();
    }
  }
  

  File editsLogDir = new File(edits_dir);
  File subStructureDir = new File(edits_dir + "/" + 
      Storage.STORAGE_DIR_CURRENT);
  if ( !editsLogDir.exists() ) {
    if ( !editsLogDir.mkdir()) {
      System.out.println("cannot create " + edits_dir);
      System.exit(-1);
    }
  }
  if ( !subStructureDir.exists() ) {
    if ( !subStructureDir.mkdir()) {
      System.out.println("cannot create subdirs of " + edits_dir);
      System.exit(-1);
    }
  }
  

  FileNameGenerator nameGenerator = new FileNameGenerator(BASE_PATH, 100);
  FSEditLog editLog = FSImageTestUtil.createStandaloneEditLog(editsLogDir);
  editLog.openForWrite();
  addFiles(editLog, numFiles, replication, numBlocksPerFile, startingBlockId,
           blockSize, nameGenerator);
  editLog.logSync();
  editLog.close();
}
 
Example 14
Source File: CreateEditsLog.java    From RDFS with Apache License 2.0
/**
 * @param args
 * @throws IOException 
 */
public static void main(String[] args) throws IOException {



  long startingBlockId = 1;
  int numFiles = 0;
  short replication = 1;
  int numBlocksPerFile = 0;

  if (args.length == 0) {
    printUsageExit();
  }

  for (int i = 0; i < args.length; i++) { // parse command line
    if (args[i].equals("-h"))
      printUsageExit();
    if (args[i].equals("-f")) {
     if (i + 3 >= args.length || args[i+1].startsWith("-") || 
         args[i+2].startsWith("-") || args[i+3].startsWith("-")) {
       printUsageExit(
           "Missing num files, starting block and/or number of blocks");
     }
     numFiles = Integer.parseInt(args[++i]);
     startingBlockId = Integer.parseInt(args[++i]);
     numBlocksPerFile = Integer.parseInt(args[++i]);
     if (numFiles <=0 || numBlocksPerFile <= 0) {
       printUsageExit("numFiles and numBlocksPerFile must be greater than 0");
     }
    } else if (args[i].equals("-r")) {
      if (i + 1 >= args.length || args[i+1].startsWith("-")) {
        printUsageExit("Missing replication factor");
      }
      replication = Short.parseShort(args[++i]);
    } else if (args[i].equals("-d")) {
      if (i + 1 >= args.length || args[i+1].startsWith("-")) {
        printUsageExit("Missing edits logs directory");
      }
      edits_dir = args[++i];
    } else {
      printUsageExit();
    }
  }
  

  File editsLogDir = new File(edits_dir);
  File subStructureDir = new File(edits_dir + "/" + 
      Storage.STORAGE_DIR_CURRENT);
  if ( !editsLogDir.exists() ) {
    if ( !editsLogDir.mkdir()) {
      System.out.println("cannot create " + edits_dir);
      System.exit(-1);
    }
  }
  if ( !subStructureDir.exists() ) {
    if ( !subStructureDir.mkdir()) {
      System.out.println("cannot create subdirs of " + edits_dir);
      System.exit(-1);
    }
  }

  FSImage fsImage = new FSImage(new File(edits_dir));
  FileNameGenerator nameGenerator = new FileNameGenerator(BASE_PATH, 100);


  FSEditLog editLog = fsImage.getEditLog();
  editLog.createEditLogFile(fsImage.getFsEditName());
  editLog.open();
  addFiles(editLog, numFiles, replication, numBlocksPerFile, startingBlockId,
           nameGenerator);
  editLog.logSync();
  editLog.close();
}
 
Example 15
Source File: CreateEditsLog.java    From hadoop-gpu with Apache License 2.0
/**
 * @param args
 * @throws IOException 
 */
public static void main(String[] args) throws IOException {



  long startingBlockId = 1;
  int numFiles = 0;
  short replication = 1;
  int numBlocksPerFile = 0;

  if (args.length == 0) {
    printUsageExit();
  }

  for (int i = 0; i < args.length; i++) { // parse command line
    if (args[i].equals("-h"))
      printUsageExit();
    if (args[i].equals("-f")) {
     if (i + 3 >= args.length || args[i+1].startsWith("-") || 
         args[i+2].startsWith("-") || args[i+3].startsWith("-")) {
       printUsageExit(
           "Missing num files, starting block and/or number of blocks");
     }
     numFiles = Integer.parseInt(args[++i]);
     startingBlockId = Integer.parseInt(args[++i]);
     numBlocksPerFile = Integer.parseInt(args[++i]);
     if (numFiles <=0 || numBlocksPerFile <= 0) {
       printUsageExit("numFiles and numBlocksPerFile must be greater than 0");
     }
    } else if (args[i].equals("-r")) {
      if (i + 1 >= args.length || args[i+1].startsWith("-")) {
        printUsageExit("Missing replication factor");
      }
      replication = Short.parseShort(args[++i]);
    } else if (args[i].equals("-d")) {
      if (i + 1 >= args.length || args[i+1].startsWith("-")) {
        printUsageExit("Missing edits logs directory");
      }
      edits_dir = args[++i];
    } else {
      printUsageExit();
    }
  }
  

  File editsLogDir = new File(edits_dir);
  File subStructureDir = new File(edits_dir + "/" + 
      Storage.STORAGE_DIR_CURRENT);
  if ( !editsLogDir.exists() ) {
    if ( !editsLogDir.mkdir()) {
      System.out.println("cannot create " + edits_dir);
      System.exit(-1);
    }
  }
  if ( !subStructureDir.exists() ) {
    if ( !subStructureDir.mkdir()) {
      System.out.println("cannot create subdirs of " + edits_dir);
      System.exit(-1);
    }
  }

  FSImage fsImage = new FSImage(new File(edits_dir));
  FileNameGenerator nameGenerator = new FileNameGenerator(BASE_PATH, 100);


  FSEditLog editLog = fsImage.getEditLog();
  editLog.createEditLogFile(fsImage.getFsEditName());
  editLog.open();
  addFiles(editLog, numFiles, replication, numBlocksPerFile, startingBlockId,
           nameGenerator);
  editLog.logSync();
  editLog.close();
}
 
Example 16
Source File: MiniDFSCluster.java    From big-c with Apache License 2.0
/**
 * Get current directory corresponding to the datanode as defined in
 * {@link Storage#STORAGE_DIR_CURRENT}.
 * @param storageDir the storage directory of a datanode.
 * @return the datanode current directory
 */
public static String getDNCurrentDir(File storageDir) {
  return storageDir + "/" + Storage.STORAGE_DIR_CURRENT + "/";
}
 
Example 17
Source File: MiniDFSCluster.java    From hadoop with Apache License 2.0
/**
 * Get current directory corresponding to the datanode as defined in
 * {@link Storage#STORAGE_DIR_CURRENT}.
 * @param storageDir the storage directory of a datanode.
 * @return the datanode current directory
 */
public static String getDNCurrentDir(File storageDir) {
  return storageDir + "/" + Storage.STORAGE_DIR_CURRENT + "/";
}