Java Code Examples for org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled()

The following are Java code examples showing how to use the isSecurityEnabled() method of the org.apache.hadoop.security.UserGroupInformation class. The examples are drawn from several open-source projects, with the originating project and file noted above each one.
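
Before the project-specific examples, here is a minimal, self-contained sketch (not taken from any of the projects below) of the most common pattern around isSecurityEnabled(): when Kerberos security is enabled, log in from a keytab; otherwise fall back to the user running the process. The class name and configuration keys in this sketch are placeholders, not real Hadoop keys.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.UserGroupInformation;

public class IsSecurityEnabledSketch {

  // Hypothetical configuration keys, used only in this sketch.
  private static final String KEYTAB_FILE_KEY = "my.service.keytab.file";
  private static final String PRINCIPAL_KEY = "my.service.kerberos.principal";

  /**
   * Log in via Kerberos when security is enabled; otherwise return the
   * current (simple-auth) user.
   */
  public static UserGroupInformation loginIfSecure(Configuration conf)
      throws IOException {
    // isSecurityEnabled() reflects hadoop.security.authentication in the
    // configuration passed to setConfiguration().
    UserGroupInformation.setConfiguration(conf);

    if (UserGroupInformation.isSecurityEnabled()) {
      // Kerberos: log in from the keytab named by the placeholder keys above.
      SecurityUtil.login(conf, KEYTAB_FILE_KEY, PRINCIPAL_KEY);
      return UserGroupInformation.getLoginUser();
    }
    // Simple authentication: no keytab login is needed.
    return UserGroupInformation.getCurrentUser();
  }
}
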
Example 1
Project: hadoop-oss   File: HttpServer2.java
/**
 * Add an internal servlet in the server, specifying whether or not to
 * protect with Kerberos authentication.
 * Note: This method is to be used for adding servlets that facilitate
 * internal communication and not for user facing functionality. For
 * servlets added using this method, filters (except internal Kerberos
 * filters) are not enabled.
 *
 * @param name The name of the servlet (can be passed as null)
 * @param pathSpec The path spec for the servlet
 * @param clazz The servlet class
 * @param requireAuth Require Kerberos authentication to access servlet
 */
public void addInternalServlet(String name, String pathSpec,
    Class<? extends HttpServlet> clazz, boolean requireAuth) {
  ServletHolder holder = new ServletHolder(clazz);
  if (name != null) {
    holder.setName(name);
  }
  webAppContext.addServlet(holder, pathSpec);

  if(requireAuth && UserGroupInformation.isSecurityEnabled()) {
     LOG.info("Adding Kerberos (SPNEGO) filter to " + name);
     ServletHandler handler = webAppContext.getServletHandler();
     FilterMapping fmap = new FilterMapping();
     fmap.setPathSpec(pathSpec);
     fmap.setFilterName(SPNEGO_FILTER);
     fmap.setDispatches(Handler.ALL);
     handler.addFilterMapping(fmap);
  }
}
 
Example 2
Project: hadoop   File: TestRMWebappAuthentication.java
@Test
public void testSimpleAuth() throws Exception {

  rm.start();

  // ensure users can access web pages
  // this should work for secure and non-secure clusters
  URL url = new URL("http://localhost:8088/cluster");
  HttpURLConnection conn = (HttpURLConnection) url.openConnection();
  try {
    conn.getInputStream();
    assertEquals(Status.OK.getStatusCode(), conn.getResponseCode());
  } catch (Exception e) {
    fail("Fetching url failed");
  }

  if (UserGroupInformation.isSecurityEnabled()) {
    testAnonymousKerberosUser();
  } else {
    testAnonymousSimpleUser();
  }

  rm.stop();
}
 
Example 3
Project: hadoop   File: HttpServer.java
/**
 * Add an internal servlet in the server, specifying whether or not to
 * protect with Kerberos authentication. 
 * Note: This method is to be used for adding servlets that facilitate
 * internal communication and not for user facing functionality. For
 * servlets added using this method, filters (except internal Kerberos
 * filters) are not enabled. 
 * 
 * @param name The name of the servlet (can be passed as null)
 * @param pathSpec The path spec for the servlet
 * @param clazz The servlet class
 * @param requireAuth Require Kerberos authentication to access servlet
 */
public void addInternalServlet(String name, String pathSpec, 
    Class<? extends HttpServlet> clazz, boolean requireAuth) {
  ServletHolder holder = new ServletHolder(clazz);
  if (name != null) {
    holder.setName(name);
  }
  webAppContext.addServlet(holder, pathSpec);

  if(requireAuth && UserGroupInformation.isSecurityEnabled()) {
     LOG.info("Adding Kerberos (SPNEGO) filter to " + name);
     ServletHandler handler = webAppContext.getServletHandler();
     FilterMapping fmap = new FilterMapping();
     fmap.setPathSpec(pathSpec);
     fmap.setFilterName(SPNEGO_FILTER);
     fmap.setDispatches(Handler.ALL);
     handler.addFilterMapping(fmap);
  }
}
 
Example 4
Project: hadoop   File: HttpServer2.java
/**
 * Add an internal servlet in the server, specifying whether or not to
 * protect with Kerberos authentication.
 * Note: This method is to be used for adding servlets that facilitate
 * internal communication and not for user facing functionality. For
 * servlets added using this method, filters (except internal Kerberos
 * filters) are not enabled.
 *
 * @param name The name of the servlet (can be passed as null)
 * @param pathSpec The path spec for the servlet
 * @param clazz The servlet class
 * @param requireAuth Require Kerberos authentication to access servlet
 */
public void addInternalServlet(String name, String pathSpec,
    Class<? extends HttpServlet> clazz, boolean requireAuth) {
  ServletHolder holder = new ServletHolder(clazz);
  if (name != null) {
    holder.setName(name);
  }
  webAppContext.addServlet(holder, pathSpec);

  if(requireAuth && UserGroupInformation.isSecurityEnabled()) {
     LOG.info("Adding Kerberos (SPNEGO) filter to " + name);
     ServletHandler handler = webAppContext.getServletHandler();
     FilterMapping fmap = new FilterMapping();
     fmap.setPathSpec(pathSpec);
     fmap.setFilterName(SPNEGO_FILTER);
     fmap.setDispatches(Handler.ALL);
     handler.addFilterMapping(fmap);
  }
}
 
Example 5
Project: hadoop   File: GetJournalEditServlet.java
private boolean checkRequestorOrSendError(Configuration conf,
    HttpServletRequest request, HttpServletResponse response)
        throws IOException {
  if (UserGroupInformation.isSecurityEnabled()
      && !isValidRequestor(request, conf)) {
    response.sendError(HttpServletResponse.SC_FORBIDDEN,
        "Only Namenode and another JournalNode may access this servlet");
    LOG.warn("Received non-NN/JN request for edits from "
        + request.getRemoteHost());
    return false;
  }
  return true;
}
 
Example 6
Project: hadoop   File: LinuxContainerExecutor.java
void verifyUsernamePattern(String user) {
  if (!UserGroupInformation.isSecurityEnabled() &&
      !nonsecureLocalUserPattern.matcher(user).matches()) {
    throw new IllegalArgumentException("Invalid user name '" + user + "'," +
        " it must match '" + nonsecureLocalUserPattern.pattern() + "'");
  }
}
 
Example 7
Project: hadoop   File: HSAdminServer.java
@Override
protected void serviceStart() throws Exception {
  if (UserGroupInformation.isSecurityEnabled()) {
    loginUGI = UserGroupInformation.getLoginUser();
  } else {
    loginUGI = UserGroupInformation.getCurrentUser();
  }
  clientRpcServer.start();
}
 
Example 8
Project: hadoop   File: TestAMAuthorization.java
@Test
public void testAuthorizedAccess() throws Exception {
  MyContainerManager containerManager = new MyContainerManager();
  rm =
      new MockRMWithAMS(conf, containerManager);
  rm.start();

  MockNM nm1 = rm.registerNode("localhost:1234", 5120);

  Map<ApplicationAccessType, String> acls =
      new HashMap<ApplicationAccessType, String>(2);
  acls.put(ApplicationAccessType.VIEW_APP, "*");
  RMApp app = rm.submitApp(1024, "appname", "appuser", acls);

  nm1.nodeHeartbeat(true);

  int waitCount = 0;
  while (containerManager.containerTokens == null && waitCount++ < 20) {
    LOG.info("Waiting for AM Launch to happen..");
    Thread.sleep(1000);
  }
  Assert.assertNotNull(containerManager.containerTokens);

  RMAppAttempt attempt = app.getCurrentAppAttempt();
  ApplicationAttemptId applicationAttemptId = attempt.getAppAttemptId();
  waitForLaunchedState(attempt);

  // Create a client to the RM.
  final Configuration conf = rm.getConfig();
  final YarnRPC rpc = YarnRPC.create(conf);

  UserGroupInformation currentUser = UserGroupInformation
      .createRemoteUser(applicationAttemptId.toString());
  Credentials credentials = containerManager.getContainerCredentials();
  final InetSocketAddress rmBindAddress =
      rm.getApplicationMasterService().getBindAddress();
  Token<? extends TokenIdentifier> amRMToken =
      MockRMWithAMS.setupAndReturnAMRMToken(rmBindAddress,
        credentials.getAllTokens());
  currentUser.addToken(amRMToken);
  ApplicationMasterProtocol client = currentUser
      .doAs(new PrivilegedAction<ApplicationMasterProtocol>() {
        @Override
        public ApplicationMasterProtocol run() {
          return (ApplicationMasterProtocol) rpc.getProxy(ApplicationMasterProtocol.class, rm
            .getApplicationMasterService().getBindAddress(), conf);
        }
      });

  RegisterApplicationMasterRequest request = Records
      .newRecord(RegisterApplicationMasterRequest.class);
  RegisterApplicationMasterResponse response =
      client.registerApplicationMaster(request);
  Assert.assertNotNull(response.getClientToAMTokenMasterKey());
  if (UserGroupInformation.isSecurityEnabled()) {
    Assert
      .assertTrue(response.getClientToAMTokenMasterKey().array().length > 0);
  }
  Assert.assertEquals("Register response has bad ACLs", "*",
      response.getApplicationACLs().get(ApplicationAccessType.VIEW_APP));
}
 
Example 9
Project: hadoop   File: TestRMAppAttemptTransitions.java
private void verifyTokenCount(ApplicationAttemptId appAttemptId, int count) {
  verify(amRMTokenManager, times(count)).applicationMasterFinished(appAttemptId);
  if (UserGroupInformation.isSecurityEnabled()) {
    verify(clientToAMTokenManager, times(count)).unRegisterApplication(appAttemptId);
    if (count > 0) {
      assertNull(applicationAttempt.createClientToken("client"));
    }
  }
}
 
Example 10
Project: hadoop   File: ClientRMService.java
private boolean isAllowedDelegationTokenOp() throws IOException {
  if (UserGroupInformation.isSecurityEnabled()) {
    return EnumSet.of(AuthenticationMethod.KERBEROS,
                      AuthenticationMethod.KERBEROS_SSL,
                      AuthenticationMethod.CERTIFICATE)
        .contains(UserGroupInformation.getCurrentUser()
                .getRealAuthenticationMethod());
  } else {
    return true;
  }
}
 
Example 11
Project: Mastering-Apache-Storm   File: HdfsSecurityUtil.java
public static void login(Map conf, Configuration hdfsConfig)
		throws IOException {
	if (UserGroupInformation.isSecurityEnabled()) {
		String keytab = (String) conf.get(STORM_KEYTAB_FILE_KEY);
		if (keytab != null) {
			hdfsConfig.set(STORM_KEYTAB_FILE_KEY, keytab);
		}
		String userName = (String) conf.get(STORM_USER_NAME_KEY);
		if (userName != null) {
			hdfsConfig.set(STORM_USER_NAME_KEY, userName);
		}
		SecurityUtil.login(hdfsConfig, STORM_KEYTAB_FILE_KEY,
				STORM_USER_NAME_KEY);
	}
}
 
Example 12
Project: hadoop   File: SecureDataNodeStarter.java
/**
 * Acquire privileged resources (i.e., the privileged ports) for the data
 * node. The privileged resources consist of the port of the RPC server and
 * the port of HTTP (not HTTPS) server.
 */
@VisibleForTesting
public static SecureResources getSecureResources(Configuration conf)
    throws Exception {
  HttpConfig.Policy policy = DFSUtil.getHttpPolicy(conf);
  boolean isSecure = UserGroupInformation.isSecurityEnabled();

  // Obtain secure port for data streaming to datanode
  InetSocketAddress streamingAddr  = DataNode.getStreamingAddr(conf);
  int socketWriteTimeout = conf.getInt(
      DFSConfigKeys.DFS_DATANODE_SOCKET_WRITE_TIMEOUT_KEY,
      HdfsServerConstants.WRITE_TIMEOUT);

  ServerSocket ss = (socketWriteTimeout > 0) ? 
      ServerSocketChannel.open().socket() : new ServerSocket();
  ss.bind(streamingAddr, 0);

  // Check that we got the port we need
  if (ss.getLocalPort() != streamingAddr.getPort()) {
    throw new RuntimeException(
        "Unable to bind on specified streaming port in secure "
            + "context. Needed " + streamingAddr.getPort() + ", got "
            + ss.getLocalPort());
  }

  if (!SecurityUtil.isPrivilegedPort(ss.getLocalPort()) && isSecure) {
    throw new RuntimeException(
      "Cannot start secure datanode with unprivileged RPC ports");
  }

  System.err.println("Opened streaming server at " + streamingAddr);

  // Bind a port for the web server. The code intends to bind HTTP server to
  // privileged port only, as the client can authenticate the server using
  // certificates if they are communicating through SSL.
  final ServerSocketChannel httpChannel;
  if (policy.isHttpEnabled()) {
    httpChannel = ServerSocketChannel.open();
    InetSocketAddress infoSocAddr = DataNode.getInfoAddr(conf);
    httpChannel.socket().bind(infoSocAddr);
    InetSocketAddress localAddr = (InetSocketAddress) httpChannel.socket()
      .getLocalSocketAddress();

    if (localAddr.getPort() != infoSocAddr.getPort()) {
      throw new RuntimeException("Unable to bind on specified info port in secure " +
          "context. Needed " + streamingAddr.getPort() + ", got " + ss.getLocalPort());
    }
    System.err.println("Successfully obtained privileged resources (streaming port = "
        + ss + " ) (http listener port = " + localAddr.getPort() +")");

    if (localAddr.getPort() > 1023 && isSecure) {
      throw new RuntimeException(
          "Cannot start secure datanode with unprivileged HTTP ports");
    }
    System.err.println("Opened info server at " + infoSocAddr);
  } else {
    httpChannel = null;
  }

  return new SecureResources(ss, httpChannel);
}
 
Example 13
Project: hadoop   File: WebHdfsFileSystem.java
@Override
public synchronized void initialize(URI uri, Configuration conf
    ) throws IOException {
  super.initialize(uri, conf);
  setConf(conf);
  /** set user pattern based on configuration file */
  UserParam.setUserPattern(conf.get(
      DFSConfigKeys.DFS_WEBHDFS_USER_PATTERN_KEY,
      DFSConfigKeys.DFS_WEBHDFS_USER_PATTERN_DEFAULT));

  connectionFactory = URLConnectionFactory
      .newDefaultURLConnectionFactory(conf);

  ugi = UserGroupInformation.getCurrentUser();
  this.uri = URI.create(uri.getScheme() + "://" + uri.getAuthority());
  this.nnAddrs = resolveNNAddr();

  boolean isHA = HAUtil.isClientFailoverConfigured(conf, this.uri);
  boolean isLogicalUri = isHA && HAUtil.isLogicalUri(conf, this.uri);
  // In non-HA or non-logical URI case, the code needs to call
  // getCanonicalUri() in order to handle the case where no port is
  // specified in the URI
  this.tokenServiceName = isLogicalUri ?
      HAUtil.buildTokenServiceForLogicalUri(uri, getScheme())
      : SecurityUtil.buildTokenService(getCanonicalUri());

  if (!isHA) {
    this.retryPolicy =
        RetryUtils.getDefaultRetryPolicy(
            conf,
            DFSConfigKeys.DFS_HTTP_CLIENT_RETRY_POLICY_ENABLED_KEY,
            DFSConfigKeys.DFS_HTTP_CLIENT_RETRY_POLICY_ENABLED_DEFAULT,
            DFSConfigKeys.DFS_HTTP_CLIENT_RETRY_POLICY_SPEC_KEY,
            DFSConfigKeys.DFS_HTTP_CLIENT_RETRY_POLICY_SPEC_DEFAULT,
            SafeModeException.class);
  } else {

    int maxFailoverAttempts = conf.getInt(
        DFSConfigKeys.DFS_HTTP_CLIENT_FAILOVER_MAX_ATTEMPTS_KEY,
        DFSConfigKeys.DFS_HTTP_CLIENT_FAILOVER_MAX_ATTEMPTS_DEFAULT);
    int maxRetryAttempts = conf.getInt(
        DFSConfigKeys.DFS_HTTP_CLIENT_RETRY_MAX_ATTEMPTS_KEY,
        DFSConfigKeys.DFS_HTTP_CLIENT_RETRY_MAX_ATTEMPTS_DEFAULT);
    int failoverSleepBaseMillis = conf.getInt(
        DFSConfigKeys.DFS_HTTP_CLIENT_FAILOVER_SLEEPTIME_BASE_KEY,
        DFSConfigKeys.DFS_HTTP_CLIENT_FAILOVER_SLEEPTIME_BASE_DEFAULT);
    int failoverSleepMaxMillis = conf.getInt(
        DFSConfigKeys.DFS_HTTP_CLIENT_FAILOVER_SLEEPTIME_MAX_KEY,
        DFSConfigKeys.DFS_HTTP_CLIENT_FAILOVER_SLEEPTIME_MAX_DEFAULT);

    this.retryPolicy = RetryPolicies
        .failoverOnNetworkException(RetryPolicies.TRY_ONCE_THEN_FAIL,
            maxFailoverAttempts, maxRetryAttempts, failoverSleepBaseMillis,
            failoverSleepMaxMillis);
  }

  this.workingDir = getHomeDirectory();
  this.canRefreshDelegationToken = UserGroupInformation.isSecurityEnabled();
  this.disallowFallbackToInsecureCluster = !conf.getBoolean(
      CommonConfigurationKeys.IPC_CLIENT_FALLBACK_TO_SIMPLE_AUTH_ALLOWED_KEY,
      CommonConfigurationKeys.IPC_CLIENT_FALLBACK_TO_SIMPLE_AUTH_ALLOWED_DEFAULT);
  this.delegationToken = null;
}
 
Example 14
Project: hadoop   File: EditLogFileInputStream.java
public URLLog(URLConnectionFactory connectionFactory, URL url) {
  this.connectionFactory = connectionFactory;
  this.isSpnegoEnabled = UserGroupInformation.isSecurityEnabled();
  this.url = url;
}
 
Example 15
Project: hadoop   File: SaslDataTransferServer.java
/**
 * Receives SASL negotiation from a peer on behalf of a server.
 *
 * @param peer connection peer
 * @param underlyingOut connection output stream
 * @param underlyingIn connection input stream
 * @param xferPort data transfer port of DataNode accepting connection
 * @param datanodeId ID of DataNode accepting connection
 * @return new pair of streams, wrapped after SASL negotiation
 * @throws IOException for any error
 */
public IOStreamPair receive(Peer peer, OutputStream underlyingOut,
    InputStream underlyingIn, int xferPort, DatanodeID datanodeId)
    throws IOException {
  if (dnConf.getEncryptDataTransfer()) {
    LOG.debug(
      "SASL server doing encrypted handshake for peer = {}, datanodeId = {}",
      peer, datanodeId);
    return getEncryptedStreams(peer, underlyingOut, underlyingIn);
  } else if (!UserGroupInformation.isSecurityEnabled()) {
    LOG.debug(
      "SASL server skipping handshake in unsecured configuration for "
      + "peer = {}, datanodeId = {}", peer, datanodeId);
    return new IOStreamPair(underlyingIn, underlyingOut);
  } else if (SecurityUtil.isPrivilegedPort(xferPort)) {
    LOG.debug(
      "SASL server skipping handshake in secured configuration for "
      + "peer = {}, datanodeId = {}", peer, datanodeId);
    return new IOStreamPair(underlyingIn, underlyingOut);
  } else if (dnConf.getSaslPropsResolver() != null) {
    LOG.debug(
      "SASL server doing general handshake for peer = {}, datanodeId = {}",
      peer, datanodeId);
    return getSaslStreams(peer, underlyingOut, underlyingIn);
  } else if (dnConf.getIgnoreSecurePortsForTesting()) {
    // It's a secured cluster using non-privileged ports, but no SASL.  The
    // only way this can happen is if the DataNode has
    // ignore.secure.ports.for.testing configured, so this is a rare edge case.
    LOG.debug(
      "SASL server skipping handshake in secured configuration with no SASL "
      + "protection configured for peer = {}, datanodeId = {}",
      peer, datanodeId);
    return new IOStreamPair(underlyingIn, underlyingOut);
  } else {
    // The error message here intentionally does not mention
    // ignore.secure.ports.for.testing.  That's intended for dev use only.
    // This code path is not expected to execute ever, because DataNode startup
    // checks for invalid configuration and aborts.
    throw new IOException(String.format("Cannot create a secured " +
      "connection if DataNode listens on unprivileged port (%d) and no " +
      "protection is defined in configuration property %s.",
      datanodeId.getXferPort(), DFS_DATA_TRANSFER_PROTECTION_KEY));
  }
}
 
Example 16
Project: hadoop   File: SecureIOUtils.java
/**
 * Open the given File for read access, verifying the expected user/group
 * constraints if security is enabled.
 *
 * Note that this function provides no additional checks if Hadoop
 * security is disabled, since doing the checks would be too expensive
 * when native libraries are not available.
 *
 * @param f the file that we are trying to open
 * @param expectedOwner the expected user owner for the file
 * @param expectedGroup the expected group owner for the file
 * @throws IOException if an IO Error occurred, or security is enabled and
 * the user/group does not match
 */
public static FileInputStream openForRead(File f, String expectedOwner, 
    String expectedGroup) throws IOException {
  if (!UserGroupInformation.isSecurityEnabled()) {
    return new FileInputStream(f);
  }
  return forceSecureOpenForRead(f, expectedOwner, expectedGroup);
}
 
Example 17
Project: hadoop   File: SecureIOUtils.java
/**
 * Open the given File for random read access, verifying the expected user/
 * group constraints if security is enabled.
 * 
 * Note that this function provides no additional security checks if hadoop
 * security is disabled, since doing the checks would be too expensive when
 * native libraries are not available.
 * 
 * @param f file that we are trying to open
 * @param mode mode in which we want to open the random access file
 * @param expectedOwner the expected user owner for the file
 * @param expectedGroup the expected group owner for the file
 * @throws IOException if an IO error occurred or if the user/group does
 * not match when security is enabled.
 */
public static RandomAccessFile openForRandomRead(File f,
    String mode, String expectedOwner, String expectedGroup)
    throws IOException {
  if (!UserGroupInformation.isSecurityEnabled()) {
    return new RandomAccessFile(f, mode);
  }
  return forceSecureOpenForRandomRead(f, mode, expectedOwner, expectedGroup);
}
 
Example 18
Project: hadoop-oss   File: SecureIOUtils.java
/**
 * Open the given File for random read access, verifying the expected user/
 * group constraints if security is enabled.
 * 
 * Note that this function provides no additional security checks if hadoop
 * security is disabled, since doing the checks would be too expensive when
 * native libraries are not available.
 * 
 * @param f file that we are trying to open
 * @param mode mode in which we want to open the random access file
 * @param expectedOwner the expected user owner for the file
 * @param expectedGroup the expected group owner for the file
 * @throws IOException if an IO error occurred or if the user/group does
 * not match when security is enabled.
 */
public static RandomAccessFile openForRandomRead(File f,
    String mode, String expectedOwner, String expectedGroup)
    throws IOException {
  if (!UserGroupInformation.isSecurityEnabled()) {
    return new RandomAccessFile(f, mode);
  }
  return forceSecureOpenForRandomRead(f, mode, expectedOwner, expectedGroup);
}
 
Example 19
Project: hadoop-oss   File: SecureIOUtils.java
/**
 * Opens the {@link FSDataInputStream} on the requested file on local file
 * system, verifying the expected user/group constraints if security is
 * enabled.
 * @param file absolute path of the file
 * @param expectedOwner the expected user owner for the file
 * @param expectedGroup the expected group owner for the file
 * @throws IOException if an IO Error occurred or the user/group does not
 * match if security is enabled
 */
public static FSDataInputStream openFSDataInputStream(File file,
    String expectedOwner, String expectedGroup) throws IOException {
  if (!UserGroupInformation.isSecurityEnabled()) {
    return rawFilesystem.open(new Path(file.getAbsolutePath()));
  }
  return forceSecureOpenFSDataInputStream(file, expectedOwner, expectedGroup);
}
 
Example 20
Project: hadoop-oss   File: SecureIOUtils.java
/**
 * Open the given File for read access, verifying the expected user/group
 * constraints if security is enabled.
 *
 * Note that this function provides no additional checks if Hadoop
 * security is disabled, since doing the checks would be too expensive
 * when native libraries are not available.
 *
 * @param f the file that we are trying to open
 * @param expectedOwner the expected user owner for the file
 * @param expectedGroup the expected group owner for the file
 * @throws IOException if an IO Error occurred, or security is enabled and
 * the user/group does not match
 */
public static FileInputStream openForRead(File f, String expectedOwner, 
    String expectedGroup) throws IOException {
  if (!UserGroupInformation.isSecurityEnabled()) {
    return new FileInputStream(f);
  }
  return forceSecureOpenForRead(f, expectedOwner, expectedGroup);
}