Java Code Examples for org.apache.hadoop.fs.LocalFileSystem#copyToLocalFile()

The following examples show how to use org.apache.hadoop.fs.LocalFileSystem#copyToLocalFile(). The originating project, source file, and license are noted above each example.
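Before the project examples, here is a minimal, self-contained sketch of the call itself. The scratch-file paths are hypothetical, and hadoop-common is assumed to be on the classpath:

```java
import java.io.File;
import java.io.FileWriter;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocalFileSystem;
import org.apache.hadoop.fs.Path;

public class CopyToLocalExample {
  public static void main(String[] args) throws Exception {
    // Create a small source file to copy (hypothetical scratch location).
    File src = new File(System.getProperty("java.io.tmpdir"), "copy-src.txt");
    try (FileWriter w = new FileWriter(src)) {
      w.write("hello");
    }
    File dst = new File(System.getProperty("java.io.tmpdir"), "copy-dst.txt");

    // LocalFileSystem extends ChecksumFileSystem, which supplies the
    // three-argument overload used throughout the examples below:
    // the trailing boolean is copyCrc -- false skips the .crc sidecar file.
    LocalFileSystem localFS = FileSystem.getLocal(new Configuration());
    localFS.copyToLocalFile(new Path(src.toString()),
                            new Path(dst.toString()),
                            false);
  }
}
```

This same three-argument form, with `copyCrc` set to `false`, is what every snippet on this page uses.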
Example 1
Source File: UpgradeUtilities.java    From hadoop with Apache License 2.0
/**
 * Simulate the {@link DFSConfigKeys#DFS_DATANODE_DATA_DIR_KEY} of a 
 * populated DFS filesystem.
 * For each parent directory, this method populates
 * <code>parent/dirName</code> with the contents of a block pool storage
 * directory copied from a singleton datanode master (which contains version
 * and block files). If the destination directory does not exist, it will be
 * created. If the directory already exists, it will first be deleted.
 * 
 * @param parents parent directory where {@code dirName} is created
 * @param dirName directory under which storage directory is created
 * @param bpid block pool id for which the storage directory is created.
 * @return the array of created directories
 */
public static File[] createBlockPoolStorageDirs(String[] parents,
    String dirName, String bpid) throws Exception {
  File[] retVal = new File[parents.length];
  Path bpCurDir = new Path(MiniDFSCluster.getBPDir(datanodeStorage,
      bpid, Storage.STORAGE_DIR_CURRENT));
  for (int i = 0; i < parents.length; i++) {
    File newDir = new File(parents[i] + "/current/" + bpid, dirName);
    createEmptyDirs(new String[] {newDir.toString()});
    LocalFileSystem localFS = FileSystem.getLocal(new HdfsConfiguration());
    localFS.copyToLocalFile(bpCurDir,
                            new Path(newDir.toString()),
                            false);
    retVal[i] = newDir;
  }
  return retVal;
}
 
Example 2
Source File: UpgradeUtilities.java    From big-c with Apache License 2.0
/**
 * Simulate the {@link DFSConfigKeys#DFS_DATANODE_DATA_DIR_KEY} of a 
 * populated DFS filesystem.
 * For each parent directory, this method populates
 * <code>parent/dirName</code> with the contents of a block pool storage
 * directory copied from a singleton datanode master (which contains version
 * and block files). If the destination directory does not exist, it will be
 * created. If the directory already exists, it will first be deleted.
 * 
 * @param parents parent directory where {@code dirName} is created
 * @param dirName directory under which storage directory is created
 * @param bpid block pool id for which the storage directory is created.
 * @return the array of created directories
 */
public static File[] createBlockPoolStorageDirs(String[] parents,
    String dirName, String bpid) throws Exception {
  File[] retVal = new File[parents.length];
  Path bpCurDir = new Path(MiniDFSCluster.getBPDir(datanodeStorage,
      bpid, Storage.STORAGE_DIR_CURRENT));
  for (int i = 0; i < parents.length; i++) {
    File newDir = new File(parents[i] + "/current/" + bpid, dirName);
    createEmptyDirs(new String[] {newDir.toString()});
    LocalFileSystem localFS = FileSystem.getLocal(new HdfsConfiguration());
    localFS.copyToLocalFile(bpCurDir,
                            new Path(newDir.toString()),
                            false);
    retVal[i] = newDir;
  }
  return retVal;
}
 
Example 3
Source File: UpgradeUtilities.java    From RDFS with Apache License 2.0
/**
 * Populates, for each parent directory, a federated datanode namespace
 * directory <code>parent/current/NS-&lt;namespaceId&gt;/dirName</code> with
 * the matching namespace content from the singleton datanode master storage.
 *
 * @return the array of <code>parent/current</code> directories
 */
public static File[] createFederatedDatanodeDirs(String[] parents,
    String dirName, int namespaceId) throws IOException {
  File[] retVal = new File[parents.length];
  for (int i = 0; i < parents.length; i++) {
    File nsDir = new File(new File(parents[i], "current"), "NS-"
        + namespaceId);
    File newDir = new File(nsDir, dirName);
    File srcDir = new File(new File(datanodeStorage, "current"), "NS-"
        + namespaceId);

    LocalFileSystem localFS = FileSystem.getLocal(new Configuration());
    localFS.copyToLocalFile(new Path(srcDir.toString(), "current"), new Path(
        newDir.toString()), false);
    retVal[i] = new File(parents[i], "current");
  }
  return retVal;
}
 
Example 4
Source File: UpgradeUtilities.java    From hadoop with Apache License 2.0
/**
 * Simulate the {@link DFSConfigKeys#DFS_NAMENODE_NAME_DIR_KEY} of a populated 
 * DFS filesystem.
 * For each parent directory, this method populates
 * <code>parent/dirName</code> with the contents of a namenode storage
 * directory copied from a singleton namenode master (which contains edits,
 * fsimage, version and time files). If the destination directory does not
 * exist, it will be created. If the directory already exists, it will first
 * be deleted.
 *
 * @param parents parent directory where {@code dirName} is created
 * @param dirName directory under which storage directory is created
 * @return the array of created directories
 */
public static File[] createNameNodeStorageDirs(String[] parents,
    String dirName) throws Exception {
  File[] retVal = new File[parents.length];
  for (int i = 0; i < parents.length; i++) {
    File newDir = new File(parents[i], dirName);
    createEmptyDirs(new String[] {newDir.toString()});
    LocalFileSystem localFS = FileSystem.getLocal(new HdfsConfiguration());
    localFS.copyToLocalFile(new Path(namenodeStorage.toString(), "current"),
                            new Path(newDir.toString()),
                            false);
    retVal[i] = newDir;
  }
  return retVal;
}
 
Example 5
Source File: UpgradeUtilities.java    From hadoop with Apache License 2.0
/**
 * Simulate the {@link DFSConfigKeys#DFS_DATANODE_DATA_DIR_KEY} of a 
 * populated DFS filesystem.
 * For each parent directory, this method populates
 * <code>parent/dirName</code> with the contents of a datanode storage
 * directory copied from a singleton datanode master (which contains version
 * and block files). If the destination directory does not exist, it will be
 * created. If the directory already exists, it will first be deleted.
 * 
 * @param parents parent directory where {@code dirName} is created
 * @param dirName directory under which storage directory is created
 * @return the array of created directories
 */
public static File[] createDataNodeStorageDirs(String[] parents,
    String dirName) throws Exception {
  File[] retVal = new File[parents.length];
  for (int i = 0; i < parents.length; i++) {
    File newDir = new File(parents[i], dirName);
    createEmptyDirs(new String[] {newDir.toString()});
    LocalFileSystem localFS = FileSystem.getLocal(new HdfsConfiguration());
    localFS.copyToLocalFile(new Path(datanodeStorage.toString(), "current"),
                            new Path(newDir.toString()),
                            false);
    retVal[i] = newDir;
  }
  return retVal;
}
 
Example 6
Source File: UpgradeUtilities.java    From big-c with Apache License 2.0
/**
 * Simulate the {@link DFSConfigKeys#DFS_NAMENODE_NAME_DIR_KEY} of a populated 
 * DFS filesystem.
 * For each parent directory, this method populates
 * <code>parent/dirName</code> with the contents of a namenode storage
 * directory copied from a singleton namenode master (which contains edits,
 * fsimage, version and time files). If the destination directory does not
 * exist, it will be created. If the directory already exists, it will first
 * be deleted.
 *
 * @param parents parent directory where {@code dirName} is created
 * @param dirName directory under which storage directory is created
 * @return the array of created directories
 */
public static File[] createNameNodeStorageDirs(String[] parents,
    String dirName) throws Exception {
  File[] retVal = new File[parents.length];
  for (int i = 0; i < parents.length; i++) {
    File newDir = new File(parents[i], dirName);
    createEmptyDirs(new String[] {newDir.toString()});
    LocalFileSystem localFS = FileSystem.getLocal(new HdfsConfiguration());
    localFS.copyToLocalFile(new Path(namenodeStorage.toString(), "current"),
                            new Path(newDir.toString()),
                            false);
    retVal[i] = newDir;
  }
  return retVal;
}
 
Example 7
Source File: UpgradeUtilities.java    From big-c with Apache License 2.0
/**
 * Simulate the {@link DFSConfigKeys#DFS_DATANODE_DATA_DIR_KEY} of a 
 * populated DFS filesystem.
 * For each parent directory, this method populates
 * <code>parent/dirName</code> with the contents of a datanode storage
 * directory copied from a singleton datanode master (which contains version
 * and block files). If the destination directory does not exist, it will be
 * created. If the directory already exists, it will first be deleted.
 * 
 * @param parents parent directory where {@code dirName} is created
 * @param dirName directory under which storage directory is created
 * @return the array of created directories
 */
public static File[] createDataNodeStorageDirs(String[] parents,
    String dirName) throws Exception {
  File[] retVal = new File[parents.length];
  for (int i = 0; i < parents.length; i++) {
    File newDir = new File(parents[i], dirName);
    createEmptyDirs(new String[] {newDir.toString()});
    LocalFileSystem localFS = FileSystem.getLocal(new HdfsConfiguration());
    localFS.copyToLocalFile(new Path(datanodeStorage.toString(), "current"),
                            new Path(newDir.toString()),
                            false);
    retVal[i] = newDir;
  }
  return retVal;
}
 
Example 8
Source File: UpgradeUtilities.java    From RDFS with Apache License 2.0
/**
 * Populates each parent directory with a copy of the singleton namenode
 * master storage, creating (or replacing) the directory first.
 */
public static void createFederatedNameNodeStorageDirs(String[] parents) 
    throws Exception {
  LocalFileSystem localFS = FileSystem.getLocal(new Configuration());
  for (int i = 0; i < parents.length; i++) {
    File newDir = new File(parents[i]);
    createEmptyDirs(new String[] {newDir.toString()});
    localFS.copyToLocalFile(new Path(namenodeStorage.toString()),
        new Path(newDir.toString()),
        false);
  }
}
 
Example 9
Source File: UpgradeUtilities.java    From RDFS with Apache License 2.0
/**
 * Populates <code>parent/dirName</code> for each parent directory with
 * storage content copied from <code>srcFile</code>, according to the given
 * node type: a namenode gets <code>current</code> and <code>image</code>
 * content, a datanode gets <code>current</code> and <code>storage</code>
 * content.
 *
 * @return the array of created directories
 */
public static File[] createStorageDirs(NodeType nodeType, String[] parents, String dirName,
    File srcFile) throws Exception {
  File[] retVal = new File[parents.length];
  for (int i = 0; i < parents.length; i++) {
    File newDir = new File(parents[i], dirName);
    createEmptyDirs(new String[] {newDir.toString()});
    LocalFileSystem localFS = FileSystem.getLocal(new Configuration());
    switch (nodeType) {
    case NAME_NODE:
      localFS.copyToLocalFile(new Path(srcFile.toString(), "current"),
                              new Path(newDir.toString()),
                              false);
      Path newImgDir = new Path(newDir.getParent(), "image");
      if (!localFS.exists(newImgDir))
        localFS.copyToLocalFile(
            new Path(srcFile.toString(), "image"),
            newImgDir,
            false);
      break;
    case DATA_NODE:
      localFS.copyToLocalFile(new Path(srcFile.toString(), "current"),
                              new Path(newDir.toString()),
                              false);
      Path newStorageFile = new Path(newDir.getParent(), "storage");
      if (!localFS.exists(newStorageFile))
        localFS.copyToLocalFile(
            new Path(srcFile.toString(), "storage"),
            newStorageFile,
            false);
      break;
    }
    retVal[i] = newDir;
  }
  return retVal;
}
 
Example 10
Source File: UpgradeUtilities.java    From hadoop-gpu with Apache License 2.0
/**
 * Simulate the <code>dfs.name.dir</code> or <code>dfs.data.dir</code>
 * of a populated DFS filesystem.
 *
 * For each parent directory, this method creates and populates the
 * directory <code>parent/dirName</code>. The contents of the new
 * directories will be appropriate for the given node type. If the
 * directory does not exist, it will be created. If the directory already
 * exists, it will first be deleted.
 *
 * By default, a singleton master populated storage
 * directory is created for a Namenode (contains edits, fsimage,
 * version, and time files) and a Datanode (contains version and
 * block files).  These directories are then
 * copied by this method to create new storage
 * directories of the appropriate type (Namenode or Datanode).
 *
 * @return the array of created directories
 */
public static File[] createStorageDirs(NodeType nodeType, String[] parents, String dirName) throws Exception {
  File[] retVal = new File[parents.length];
  for (int i = 0; i < parents.length; i++) {
    File newDir = new File(parents[i], dirName);
    createEmptyDirs(new String[] {newDir.toString()});
    LocalFileSystem localFS = FileSystem.getLocal(new Configuration());
    switch (nodeType) {
    case NAME_NODE:
      localFS.copyToLocalFile(new Path(namenodeStorage.toString(), "current"),
                              new Path(newDir.toString()),
                              false);
      Path newImgDir = new Path(newDir.getParent(), "image");
      if (!localFS.exists(newImgDir))
        localFS.copyToLocalFile(
            new Path(namenodeStorage.toString(), "image"),
            newImgDir,
            false);
      break;
    case DATA_NODE:
      localFS.copyToLocalFile(new Path(datanodeStorage.toString(), "current"),
                              new Path(newDir.toString()),
                              false);
      Path newStorageFile = new Path(newDir.getParent(), "storage");
      if (!localFS.exists(newStorageFile))
        localFS.copyToLocalFile(
            new Path(datanodeStorage.toString(), "storage"),
            newStorageFile,
            false);
      break;
    }
    retVal[i] = newDir;
  }
  return retVal;
}
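All of the snippets above use ChecksumFileSystem's three-argument overload with a trailing `copyCrc` flag. The base FileSystem class also exposes a two-argument form and a `delSrc` variant; a minimal sketch of those (hypothetical scratch paths, hadoop-common assumed on the classpath):

```java
import java.io.File;
import java.io.FileWriter;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocalFileSystem;
import org.apache.hadoop.fs.Path;

public class CopyOverloads {
  public static void main(String[] args) throws Exception {
    LocalFileSystem localFS = FileSystem.getLocal(new Configuration());

    // Create a small source file to work with (hypothetical location).
    File src = new File(System.getProperty("java.io.tmpdir"), "overload-src.txt");
    try (FileWriter w = new FileWriter(src)) {
      w.write("data");
    }
    Path srcPath = new Path(src.toString());

    // Two-argument form inherited from FileSystem: the source is kept.
    localFS.copyToLocalFile(srcPath,
        new Path(src.getParent(), "overload-copy.txt"));

    // delSrc form: passing true moves rather than copies, deleting the
    // source after a successful copy.
    localFS.copyToLocalFile(true, srcPath,
        new Path(src.getParent(), "overload-moved.txt"));
  }
}
```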