org.apache.hadoop.io.retry.Idempotent Java Examples

The following examples show how to use org.apache.hadoop.io.retry.Idempotent. Each snippet is taken from the Hadoop source tree; the originating source file is noted above it.
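In Hadoop's RPC layer, @Idempotent marks a protocol method that can safely be invoked again after a failure, because repeating it leaves server state unchanged; the retry machinery consults this annotation when deciding whether a failed call may be reissued. The sketch below illustrates the pattern with a local stand-in annotation and a hypothetical ToyProtocol interface (Hadoop is assumed not to be on the classpath, and these names are illustrative, not Hadoop's API):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class IdempotentSketch {
    // Local stand-in for org.apache.hadoop.io.retry.Idempotent; like the real
    // annotation, it is retained at runtime and placed on methods.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Idempotent {}

    // Hypothetical protocol: the read is marked retry-safe, the append is not.
    interface ToyProtocol {
        @Idempotent
        long getBlockSize(String path);

        void append(String path, byte[] data); // retrying would duplicate data
    }

    public static void main(String[] args) {
        // Reflectively check which methods a retry layer may safely reissue.
        for (Method m : ToyProtocol.class.getMethods()) {
            boolean retrySafe = m.isAnnotationPresent(Idempotent.class);
            System.out.println(m.getName() + " retry-safe: " + retrySafe);
        }
    }
}
```

The runtime retention is the important detail: without @Retention(RetentionPolicy.RUNTIME), isAnnotationPresent would return false for every method and the retry layer could not see the marker.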
Example #1
Source File: TestAnnotations.java    From hadoop with Apache License 2.0
@Test
public void checkAnnotations() {
  Method[] methods = NamenodeProtocols.class.getMethods();
  for (Method m : methods) {
    Assert.assertTrue(
        "Idempotent or AtMostOnce annotation is not present " + m,
        m.isAnnotationPresent(Idempotent.class)
            || m.isAnnotationPresent(AtMostOnce.class));
  }
}
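The test above enforces that every NamenodeProtocols method carries either @Idempotent or @AtMostOnce, so the retry machinery always knows how a failed call may be handled. To see why the marker matters at runtime, here is a minimal sketch of a retry proxy in the spirit of Hadoop's retry handling: it reissues only calls to methods carrying the annotation. Note this uses a local stand-in annotation and a hypothetical Flaky interface; it is not Hadoop's RetryInvocationHandler.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicInteger;

public class RetryProxySketch {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Idempotent {}

    interface Flaky {
        @Idempotent
        int read();
    }

    // Wrap a target so that @Idempotent methods are retried up to maxRetries
    // extra times on failure; non-annotated methods fail on the first error.
    @SuppressWarnings("unchecked")
    static <T> T retryProxy(Class<T> iface, T target, int maxRetries) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[]{iface},
            (proxy, method, args) -> {
                int attempts = method.isAnnotationPresent(Idempotent.class) ? maxRetries + 1 : 1;
                RuntimeException last = null;
                for (int i = 0; i < attempts; i++) {
                    try {
                        return method.invoke(target, args);
                    } catch (InvocationTargetException e) {
                        last = new RuntimeException(e.getCause());
                    }
                }
                throw last;
            });
    }

    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        // Fails on the first call, succeeds on the second.
        Flaky flaky = () -> {
            if (calls.incrementAndGet() == 1) throw new RuntimeException("transient failure");
            return 42;
        };
        Flaky proxied = retryProxy(Flaky.class, flaky, 2);
        System.out.println(proxied.read()); // first attempt fails, the retry returns 42
    }
}
```

Methods that mutate state non-idempotently must not be wrapped this way; in Hadoop those are instead annotated @AtMostOnce and rely on the server-side retry cache.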
 
Example #2
Source File: ResourceManagerAdministrationProtocol.java    From hadoop with Apache License 2.0
@Public
@Evolving
@Idempotent
public ReplaceLabelsOnNodeResponse replaceLabelsOnNode(
    ReplaceLabelsOnNodeRequest request) throws YarnException, IOException;
 
Example #3
Source File: DatanodeProtocol.java    From hadoop with Apache License 2.0
@Idempotent
public NamespaceInfo versionRequest() throws IOException;
 
Example #4
Source File: ResourceManagerAdministrationProtocol.java    From hadoop with Apache License 2.0
@Public
@Stable
@Idempotent
public RefreshServiceAclsResponse refreshServiceAcls(
    RefreshServiceAclsRequest request)
throws YarnException, IOException;
 
Example #5
Source File: DatanodeProtocol.java    From hadoop with Apache License 2.0
/**
 * Commit block synchronization in lease recovery
 */
@Idempotent
public void commitBlockSynchronization(ExtendedBlock block,
    long newgenerationstamp, long newlength,
    boolean closeFile, boolean deleteblock, DatanodeID[] newtargets,
    String[] newtargetstorages) throws IOException;
 
Example #6
Source File: ClientProtocol.java    From hadoop with Apache License 2.0
/**
 * Get a report on the current datanode storages.
 */
@Idempotent
public DatanodeStorageReport[] getDatanodeStorageReport(
    HdfsConstants.DatanodeReportType type) throws IOException;
 
Example #7
Source File: ResourceManagerAdministrationProtocol.java    From hadoop with Apache License 2.0
@Public
@Stable
@Idempotent
public RefreshUserToGroupsMappingsResponse refreshUserToGroupsMappings(
    RefreshUserToGroupsMappingsRequest request)
throws StandbyException, YarnException, IOException;
 
Example #8
Source File: ApplicationClientProtocol.java    From hadoop with Apache License 2.0
/**
 * <p>The interface used by clients to get information about <em>queues</em>
 * from the <code>ResourceManager</code>.</p>
 * 
 * <p>The client, via {@link GetQueueInfoRequest}, can ask for details such
 * as used/total resources, child queues, running applications etc.</p>
 *
 * <p>In secure mode, the <code>ResourceManager</code> verifies access before
 * providing the information.</p> 
 * 
 * @param request request to get queue information
 * @return queue information
 * @throws YarnException
 * @throws IOException
 */
@Public
@Stable
@Idempotent
public GetQueueInfoResponse getQueueInfo(
    GetQueueInfoRequest request) 
throws YarnException, IOException;
 
Example #9
Source File: ApplicationBaseProtocol.java    From hadoop with Apache License 2.0
/**
 * The interface used by clients to get a report of an Application Attempt
 * from the <code>ResourceManager</code> or
 * <code>ApplicationHistoryServer</code>
 * <p>
 * The client, via {@link GetApplicationAttemptReportRequest} provides the
 * {@link ApplicationAttemptId} of the application attempt.
 * <p>
 * In secure mode, the <code>ResourceManager</code> or
 * <code>ApplicationHistoryServer</code> verifies access to the method before
 * accepting the request.
 * <p>
 * The <code>ResourceManager</code> or <code>ApplicationHistoryServer</code>
 * responds with a {@link GetApplicationAttemptReportResponse} which includes
 * the {@link ApplicationAttemptReport} for the application attempt.
 * <p>
 * If the user does not have <code>VIEW_APP</code> access then the following
 * fields in the report will be set to stubbed values:
 * <ul>
 *   <li>host</li>
 *   <li>RPC port</li>
 *   <li>client token</li>
 *   <li>diagnostics - set to "N/A"</li>
 *   <li>tracking URL</li>
 * </ul>
 *
 * @param request
 *          request for an application attempt report
 * @return application attempt report
 * @throws YarnException
 * @throws IOException
 */
@Public
@Unstable
@Idempotent
public GetApplicationAttemptReportResponse getApplicationAttemptReport(
    GetApplicationAttemptReportRequest request) throws YarnException,
    IOException;
 
Example #10
Source File: DatanodeProtocol.java    From hadoop with Apache License 2.0
/**
 * sendHeartbeat() tells the NameNode that the DataNode is still
 * alive and well.  Includes some status info, too. 
 * It also gives the NameNode a chance to return 
 * an array of "DatanodeCommand" objects in HeartbeatResponse.
 * A DatanodeCommand tells the DataNode to invalidate local block(s), 
 * or to copy them to other DataNodes, etc.
 * @param registration datanode registration information
 * @param reports utilization report per storage
 * @param dnCacheCapacity the total cache capacity of the datanode, in bytes
 * @param dnCacheUsed the amount of cache used on the datanode, in bytes
 * @param xmitsInProgress number of transfers from this datanode to others
 * @param xceiverCount number of active transceiver threads
 * @param failedVolumes number of failed volumes
 * @param volumeFailureSummary info about volume failures
 * @return a {@link HeartbeatResponse} carrying commands for the datanode
 * @throws IOException on error
 */
@Idempotent
public HeartbeatResponse sendHeartbeat(DatanodeRegistration registration,
                                     StorageReport[] reports,
                                     long dnCacheCapacity,
                                     long dnCacheUsed,
                                     int xmitsInProgress,
                                     int xceiverCount,
                                     int failedVolumes,
                                     VolumeFailureSummary volumeFailureSummary)
    throws IOException;
 
Example #11
Source File: ApplicationClientProtocol.java    From hadoop with Apache License 2.0
/**
 * <p>The interface used by clients to get information about <em>queue
 * acls</em> for <em>current user</em> from the <code>ResourceManager</code>.
 * </p>
 *
 * <p>The <code>ResourceManager</code> responds with queue acls for all
 * existing queues.</p>
 *
 * @param request request to get queue acls for <em>current user</em>
 * @return queue acls for <em>current user</em>
 * @throws YarnException
 * @throws IOException
 */
@Public
@Stable
@Idempotent
public GetQueueUserAclsInfoResponse getQueueUserAcls(
    GetQueueUserAclsInfoRequest request)
throws YarnException, IOException;
 
Example #12
Source File: ApplicationClientProtocol.java    From hadoop with Apache License 2.0
/**
 * <p>The interface used by clients to get a report of all nodes
 * in the cluster from the <code>ResourceManager</code>.</p>
 * 
 * <p>The <code>ResourceManager</code> responds with a 
 * {@link GetClusterNodesResponse} which includes the 
 * {@link NodeReport} for all the nodes in the cluster.</p>
 * 
 * @param request request for report on all nodes
 * @return report on all nodes
 * @throws YarnException
 * @throws IOException
 */
@Public
@Stable
@Idempotent
public GetClusterNodesResponse getClusterNodes(
    GetClusterNodesRequest request) 
throws YarnException, IOException;
 
Example #13
Source File: ClientProtocol.java    From hadoop with Apache License 2.0
/** 
 * Get a datanode for an existing pipeline.
 * 
 * @param src the file being written
 * @param fileId the ID of the file being written
 * @param blk the block being written
 * @param existings the existing nodes in the pipeline
 * @param excludes the excluded nodes
 * @param numAdditionalNodes number of additional datanodes
 * @param clientName the name of the client
 * 
 * @return the located block.
 * 
 * @throws AccessControlException If access is denied
 * @throws FileNotFoundException If file <code>src</code> is not found
 * @throws SafeModeException create not allowed in safemode
 * @throws UnresolvedLinkException If <code>src</code> contains a symlink
 * @throws IOException If an I/O error occurred
 */
@Idempotent
public LocatedBlock getAdditionalDatanode(final String src,
    final long fileId, final ExtendedBlock blk,
    final DatanodeInfo[] existings,
    final String[] existingStorageIDs,
    final DatanodeInfo[] excludes,
    final int numAdditionalNodes, final String clientName
    ) throws AccessControlException, FileNotFoundException,
        SafeModeException, UnresolvedLinkException, IOException;
 
Example #14
Source File: ApplicationClientProtocol.java    From hadoop with Apache License 2.0
/**
 * <p>The interface used by clients to obtain a new {@link ApplicationId} for 
 * submitting new applications.</p>
 * 
 * <p>The <code>ResourceManager</code> responds with a new, monotonically
 * increasing, {@link ApplicationId} which is used by the client to submit
 * a new application.</p>
 *
 * <p>The <code>ResourceManager</code> also responds with details such 
 * as maximum resource capabilities in the cluster as specified in
 * {@link GetNewApplicationResponse}.</p>
 *
 * @param request request to get a new <code>ApplicationId</code>
 * @return response containing the new <code>ApplicationId</code> to be used
 * to submit an application
 * @throws YarnException
 * @throws IOException
 * @see #submitApplication(SubmitApplicationRequest)
 */
@Public
@Stable
@Idempotent
public GetNewApplicationResponse getNewApplication(
    GetNewApplicationRequest request)
throws YarnException, IOException;
 
Example #15
Source File: ResourceManagerAdministrationProtocol.java    From hadoop with Apache License 2.0
/**
 * <p>The interface used by administrators to update nodes' resources in the
 * <code>ResourceManager</code>.</p>
 * 
 * <p>The admin client is required to provide details such as a map from 
 * {@link NodeId} to {@link ResourceOption} required to update resources on 
 * a list of <code>RMNode</code> in <code>ResourceManager</code> etc.
 * via the {@link UpdateNodeResourceRequest}.</p>
 * 
 * @param request request to update resource for a node in cluster.
 * @return (empty) response on accepting update.
 * @throws YarnException
 * @throws IOException
 */
@Public
@Evolving
@Idempotent
public UpdateNodeResourceResponse updateNodeResource(
    UpdateNodeResourceRequest request) 
throws YarnException, IOException;
 
Example #16
Source File: ClientProtocol.java    From hadoop with Apache License 2.0
/**
 * Create a directory (or hierarchy of directories) with the given
 * name and permission.
 *
 * @param src The path of the directory being created
 * @param masked The masked permission of the directory being created
 * @param createParent create missing parent directory if true
 *
 * @return True if the operation succeeds.
 *
 * @throws AccessControlException If access is denied
 * @throws FileAlreadyExistsException If <code>src</code> already exists
 * @throws FileNotFoundException If parent of <code>src</code> does not exist
 *           and <code>createParent</code> is false
 * @throws NSQuotaExceededException If file creation violates quota restriction
 * @throws ParentNotDirectoryException If parent of <code>src</code> 
 *           is not a directory
 * @throws SafeModeException create not allowed in safemode
 * @throws UnresolvedLinkException If <code>src</code> contains a symlink
 * @throws SnapshotAccessControlException if path is in RO snapshot
 * @throws IOException If an I/O error occurred.
 *
 * RunTimeExceptions:
 * @throws InvalidPathException If <code>src</code> is invalid
 */
@Idempotent
public boolean mkdirs(String src, FsPermission masked, boolean createParent)
    throws AccessControlException, FileAlreadyExistsException,
    FileNotFoundException, NSQuotaExceededException,
    ParentNotDirectoryException, SafeModeException, UnresolvedLinkException,
    SnapshotAccessControlException, IOException;
 
Example #17
Source File: ClientProtocol.java    From hadoop with Apache License 2.0
/**
 * Fully replaces ACL of files and directories, discarding all existing
 * entries.
 */
@Idempotent
public void setAcl(String src, List<AclEntry> aclSpec) throws IOException;
 
Example #18
Source File: ClientProtocol.java    From hadoop with Apache License 2.0
/**
 * Get the block size for the given file.
 * @param filename The name of the file
 * @return The number of bytes in each block
 * @throws IOException
 * @throws UnresolvedLinkException if the path contains a symlink. 
 */
@Idempotent
public long getPreferredBlockSize(String filename) 
    throws IOException, UnresolvedLinkException;
 
Example #19
Source File: ZKFCProtocol.java    From hadoop with Apache License 2.0
/**
 * Request that this service yield from the active node election for the
 * specified time period.
 * 
 * If the node is not currently active, it simply prevents any attempts
 * to become active for the specified time period. Otherwise, it first
 * tries to transition the local service to standby state, and then quits
 * the election.
 * 
 * If the attempt to transition to standby succeeds, then the ZKFC receiving
 * this RPC will delete its own breadcrumb node in ZooKeeper. Thus, the
 * next node to become active will not run any fencing process. Otherwise,
 * the breadcrumb will be left, such that the next active will fence this
 * node.
 * 
 * After the specified time period elapses, the node will attempt to re-join
 * the election, provided that its service is healthy.
 * 
 * If the node has previously been instructed to cede active, and is still
 * within the specified time period, the later command's time period will
 * take precedence, resetting the timer.
 * 
 * A call to cedeActive which specifies a 0 or negative time period will
 * allow the target node to immediately rejoin the election, so long as
 * it is healthy.
 *  
 * @param millisToCede period for which the node should not attempt to
 * become active
 * @throws IOException if the operation fails
 * @throws AccessControlException if the operation is disallowed
 */
@Idempotent
public void cedeActive(int millisToCede)
    throws IOException, AccessControlException;
 
Example #20
Source File: RefreshUserMappingsProtocol.java    From hadoop with Apache License 2.0
/**
 * Refresh user to group mappings.
 * @throws IOException
 */
@Idempotent
public void refreshUserToGroupsMappings() throws IOException;
 
Example #21
Source File: ClientProtocol.java    From hadoop with Apache License 2.0
/**
 * Tells the namenode to reread the hosts and exclude files. 
 * @throws IOException
 */
@Idempotent
public void refreshNodes() throws IOException;
 
Example #22
Source File: ClientProtocol.java    From hadoop with Apache License 2.0
/**
 * Gets the ACLs of files and directories.
 */
@Idempotent
public AclStatus getAclStatus(String src) throws IOException;
 
Example #23
Source File: ClientProtocol.java    From hadoop with Apache License 2.0
/**
 * Roll the edit log.
 * Requires superuser privileges.
 * 
 * @throws AccessControlException if the superuser privilege is violated
 * @throws IOException if log roll fails
 * @return the txid of the new segment
 */
@Idempotent
public long rollEdits() throws AccessControlException, IOException;
 
Example #24
Source File: ClientProtocol.java    From hadoop with Apache License 2.0
/**
 * Enter, leave or get safe mode.
 * <p>
 * Safe mode is a name node state when it
 * <ol><li>does not accept changes to name space (read-only), and</li>
 * <li>does not replicate or delete blocks.</li></ol>
 * 
 * <p>
 * Safe mode is entered automatically at name node startup.
 * Safe mode can also be entered manually using
 * {@link #setSafeMode(HdfsConstants.SafeModeAction,boolean) setSafeMode(SafeModeAction.SAFEMODE_ENTER,false)}.
 * <p>
 * At startup the name node accepts data node reports collecting
 * information about block locations.
 * In order to leave safe mode it needs to collect a configurable
 * percentage called threshold of blocks, which satisfy the minimal 
 * replication condition.
 * The minimal replication condition is that each block must have at least
 * <tt>dfs.namenode.replication.min</tt> replicas.
 * When the threshold is reached the name node extends safe mode
 * for a configurable amount of time
 * to let the remaining data nodes check in before it
 * starts replicating missing blocks.
 * Then the name node leaves safe mode.
 * <p>
 * If safe mode is turned on manually using
 * {@link #setSafeMode(HdfsConstants.SafeModeAction,boolean) setSafeMode(SafeModeAction.SAFEMODE_ENTER,false)}
 * then the name node stays in safe mode until it is manually turned off
 * using {@link #setSafeMode(HdfsConstants.SafeModeAction,boolean) setSafeMode(SafeModeAction.SAFEMODE_LEAVE,false)}.
 * Current state of the name node can be verified using
 * {@link #setSafeMode(HdfsConstants.SafeModeAction,boolean) setSafeMode(SafeModeAction.SAFEMODE_GET,false)}
 * <h4>Configuration parameters:</h4>
 * <tt>dfs.safemode.threshold.pct</tt> is the threshold parameter.<br>
 * <tt>dfs.safemode.extension</tt> is the safe mode extension parameter.<br>
 * <tt>dfs.namenode.replication.min</tt> is the minimal replication parameter.
 * 
 * <h4>Special cases:</h4>
 * The name node does not enter safe mode at startup if the threshold is 
 * set to 0 or if the name space is empty.<br>
 * If the threshold is set to 1 then all blocks need to have at least 
 * minimal replication.<br>
 * If the threshold value is greater than 1 then the name node will not be 
 * able to turn off safe mode automatically.<br>
 * Safe mode can always be turned off manually.
 * 
 * @param action  <ul> <li>0 leave safe mode;</li>
 *                <li>1 enter safe mode;</li>
 *                <li>2 get safe mode state.</li></ul>
 * @param isChecked If true then action will be done only in ActiveNN.
 * 
 * @return <ul><li>0 if the safe mode is OFF or</li> 
 *         <li>1 if the safe mode is ON.</li></ul>
 *                   
 * @throws IOException
 */
@Idempotent
public boolean setSafeMode(HdfsConstants.SafeModeAction action, boolean isChecked) 
    throws IOException;
 
Example #25
Source File: ClientProtocol.java    From hadoop with Apache License 2.0
/**
 * Get the file info for a specific file or directory.
 * @param src The string representation of the path to the file
 *
 * @return object containing information regarding the file
 *         or null if file not found
 * @throws AccessControlException permission denied
 * @throws FileNotFoundException file <code>src</code> is not found
 * @throws UnresolvedLinkException if the path contains a symlink. 
 * @throws IOException If an I/O error occurred        
 */
@Idempotent
public HdfsFileStatus getFileInfo(String src) throws AccessControlException,
    FileNotFoundException, UnresolvedLinkException, IOException;
 
Example #26
Source File: ClientProtocol.java    From hadoop with Apache License 2.0
/**
 * Get a report on the system's current datanodes.
 * One DatanodeInfo object is returned for each DataNode.
 * Return live datanodes if type is LIVE; dead datanodes if type is DEAD;
 * otherwise all datanodes if type is ALL.
 */
@Idempotent
public DatanodeInfo[] getDatanodeReport(HdfsConstants.DatanodeReportType type)
    throws IOException;
 
Example #27
Source File: ZKFCProtocol.java    From hadoop with Apache License 2.0
/**
 * Request that this node try to become active through a graceful failover.
 * 
 * If the node is already active, this is a no-op and simply returns success
 * without taking any further action.
 * 
 * If the node is not healthy, it will throw an exception indicating that it
 * is not able to become active.
 * 
 * If the node is healthy and not active, it will try to initiate a graceful
 * failover to become active, returning only when it has successfully become
 * active. See {@link ZKFailoverController#gracefulFailoverToYou()} for the
 * implementation details.
 * 
 * If the node fails to successfully coordinate the failover, throws an
 * exception indicating the reason for failure.
 * 
 * @throws IOException if graceful failover fails
 * @throws AccessControlException if the operation is disallowed
 */
@Idempotent
public void gracefulFailover()
    throws IOException, AccessControlException;
 
Example #28
Source File: ClientProtocol.java    From hadoop with Apache License 2.0
/**
 * Start lease recovery.
 * Lightweight NameNode operation to trigger lease recovery
 * 
 * @param src path of the file to start lease recovery
 * @param clientName name of the current client
 * @return true if the file is already closed
 * @throws IOException
 */
@Idempotent
public boolean recoverLease(String src, String clientName) throws IOException;
 
Example #29
Source File: RefreshCallQueueProtocol.java    From hadoop with Apache License 2.0
/**
 * Refresh the callqueue.
 * @throws IOException
 */
@Idempotent
void refreshCallQueue() throws IOException;
 
Example #30
Source File: ClientProtocol.java    From hadoop with Apache License 2.0
/**
 * Get listing of all the snapshottable directories
 * 
 * @return Information about all the current snapshottable directories
 * @throws IOException If an I/O error occurred
 */
@Idempotent
public SnapshottableDirectoryStatus[] getSnapshottableDirListing()
    throws IOException;