Java Code Examples for org.apache.hadoop.metrics2.util.MBeans#register()
The following examples show how to use org.apache.hadoop.metrics2.util.MBeans#register().
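Before the examples, here is a minimal sketch of the pattern most of them follow: register an object with the platform MBean server under a service and bean name, keep the returned ObjectName, and unregister it on shutdown. The MyService class, MyServiceMXBean interface, and "MyServiceInfo" bean name are hypothetical and used only for illustration.

import javax.management.ObjectName;
import org.apache.hadoop.metrics2.util.MBeans;

// Hypothetical management interface; the "MXBean" suffix is what makes the
// implementing object acceptable to the platform MBean server.
interface MyServiceMXBean {
  int getActiveConnections();
}

public class MyService implements MyServiceMXBean {
  private ObjectName infoBeanName;

  @Override
  public int getActiveConnections() {
    return 0; // placeholder value for the sketch
  }

  public void start() {
    // Registers this object under an ObjectName of the form
    // "Hadoop:service=MyService,name=MyServiceInfo" and returns that name
    // (or null if registration failed; MBeans logs the error internally).
    infoBeanName = MBeans.register("MyService", "MyServiceInfo", this);
  }

  public void stop() {
    if (infoBeanName != null) {
      MBeans.unregister(infoBeanName);
      infoBeanName = null;
    }
  }
}

The examples below show the same register call in real Hadoop, Ozone, and HBase code, usually from a constructor or a registerMXBean-style helper.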
Example 1
Source File: HddsUtils.java From hadoop-ozone with Apache License 2.0 | 6 votes |
/**
 * Register the provided MBean with additional JMX ObjectName properties.
 * If additional properties are not supported then fallback to registering
 * without properties.
 *
 * @param serviceName - see {@link MBeans#register}
 * @param mBeanName - see {@link MBeans#register}
 * @param jmxProperties - additional JMX ObjectName properties.
 * @param mBean - the MBean to register.
 * @return the name used to register the MBean.
 */
public static ObjectName registerWithJmxProperties(
    String serviceName, String mBeanName,
    Map<String, String> jmxProperties, Object mBean) {
  try {

    // Check support for registering with additional properties.
    final Method registerMethod = MBeans.class.getMethod(
        "register", String.class, String.class,
        Map.class, Object.class);

    return (ObjectName) registerMethod.invoke(
        null, serviceName, mBeanName, jmxProperties, mBean);

  } catch (NoSuchMethodException | IllegalAccessException |
      InvocationTargetException e) {

    // Fallback
    if (LOG.isTraceEnabled()) {
      LOG.trace("Registering MBean {} without additional properties {}",
          mBeanName, jmxProperties);
    }
    return MBeans.register(serviceName, mBeanName, mBean);
  }
}
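For comparison, on a Hadoop release whose MBeans class already provides the four-argument overload that the reflective lookup above probes for, the properties can be passed directly. This is a hedged sketch; the class, service, bean, and property names are illustrative, not taken from the Ozone sources.

import java.util.HashMap;
import java.util.Map;
import javax.management.ObjectName;
import org.apache.hadoop.metrics2.util.MBeans;

// Hedged sketch: assumes MBeans exposes
// register(String, String, Map, Object) on this Hadoop version.
public final class JmxPropertiesSketch {

  private JmxPropertiesSketch() {
  }

  public static ObjectName registerWithUuid(Object mBean, String datanodeUuid) {
    // Each extra key/value pair becomes an additional property of the
    // resulting JMX ObjectName, alongside the usual service= and name= keys.
    Map<String, String> jmxProperties = new HashMap<>();
    jmxProperties.put("datanodeUuid", datanodeUuid);
    return MBeans.register("HddsDatanode", "VolumeInfo", jmxProperties, mBean);
  }
}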
Example 2
Source File: SCMPipelineManager.java From hadoop-ozone with Apache License 2.0 | 5 votes |
protected SCMPipelineManager(ConfigurationSource conf,
    NodeManager nodeManager,
    Table<PipelineID, Pipeline> pipelineStore,
    EventPublisher eventPublisher,
    PipelineStateManager pipelineStateManager,
    PipelineFactory pipelineFactory)
    throws IOException {
  this.lock = new ReentrantReadWriteLock();
  this.pipelineStore = pipelineStore;
  this.conf = conf;
  this.pipelineFactory = pipelineFactory;
  this.stateManager = pipelineStateManager;
  // TODO: See if thread priority needs to be set for these threads
  scheduler = new Scheduler("RatisPipelineUtilsThread", false, 1);
  this.backgroundPipelineCreator =
      new BackgroundPipelineCreator(this, scheduler, conf);
  this.eventPublisher = eventPublisher;
  this.nodeManager = nodeManager;
  this.metrics = SCMPipelineMetrics.create();
  this.pmInfoBean = MBeans.register("SCMPipelineManager",
      "SCMPipelineManagerInfo", this);
  this.pipelineWaitDefaultTimeout = conf.getTimeDuration(
      HddsConfigKeys.HDDS_PIPELINE_REPORT_INTERVAL,
      HddsConfigKeys.HDDS_PIPELINE_REPORT_INTERVAL_DEFAULT,
      TimeUnit.MILLISECONDS);
  this.isInSafeMode = new AtomicBoolean(conf.getBoolean(
      HddsConfigKeys.HDDS_SCM_SAFEMODE_ENABLED,
      HddsConfigKeys.HDDS_SCM_SAFEMODE_ENABLED_DEFAULT));
  // Pipeline creation is only allowed after the safemode prechecks have
  // passed, eg sufficient nodes have registered.
  this.pipelineCreationAllowed =
      new AtomicBoolean(!this.isInSafeMode.get());
}
Example 3
Source File: MetricsSourceAdapter.java From big-c with Apache License 2.0 | 5 votes |
synchronized void startMBeans() {
  if (mbeanName != null) {
    LOG.warn("MBean " + name + " already initialized!");
    LOG.debug("Stacktrace: ", new Throwable());
    return;
  }
  mbeanName = MBeans.register(prefix, name, this);
  LOG.debug("MBean for source " + name + " registered.");
}
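A registration like this is typically paired with a teardown that releases the JMX name. The following counterpart is a hedged sketch: the method and field names mirror the example above and are assumed, not quoted from MetricsSourceAdapter.

// Hedged counterpart to startMBeans() above: release the JMX registration
// and clear the cached ObjectName so a later startMBeans() can re-register.
synchronized void stopMBeans() {
  if (mbeanName != null) {
    MBeans.unregister(mbeanName);
    mbeanName = null;
  }
}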
Example 4
Source File: FsDatasetImpl.java From big-c with Apache License 2.0 | 5 votes |
/**
 * Register the FSDataset MBean using the name
 * "hadoop:service=DataNode,name=FSDatasetState-<datanodeUuid>"
 */
void registerMBean(final String datanodeUuid) {
  // We wrap to bypass standard mbean naming convention.
  // This wrapping can be removed in java 6 as it is more flexible in
  // package naming for mbeans and their impl.
  try {
    StandardMBean bean = new StandardMBean(this, FSDatasetMBean.class);
    mbeanName = MBeans.register("DataNode",
        "FSDatasetState-" + datanodeUuid, bean);
  } catch (NotCompliantMBeanException e) {
    LOG.warn("Error registering FSDatasetState MBean", e);
  }
  LOG.info("Registered FSDatasetState MBean");
}
Example 5
Source File: FsDatasetImpl.java From hadoop with Apache License 2.0 | 5 votes |
/**
 * Register the FSDataset MBean using the name
 * "hadoop:service=DataNode,name=FSDatasetState-<datanodeUuid>"
 */
void registerMBean(final String datanodeUuid) {
  // We wrap to bypass standard mbean naming convention.
  // This wrapping can be removed in java 6 as it is more flexible in
  // package naming for mbeans and their impl.
  try {
    StandardMBean bean = new StandardMBean(this, FSDatasetMBean.class);
    mbeanName = MBeans.register("DataNode",
        "FSDatasetState-" + datanodeUuid, bean);
  } catch (NotCompliantMBeanException e) {
    LOG.warn("Error registering FSDatasetState MBean", e);
  }
  LOG.info("Registered FSDatasetState MBean");
}
Example 6
Source File: ReplicationActivityStatus.java From hadoop-ozone with Apache License 2.0 | 5 votes |
public void start() {
  try {
    this.jmxObjectName = MBeans.register(
        "StorageContainerManager", "ReplicationActivityStatus", this);
  } catch (Exception ex) {
    LOG.error("JMX bean for ReplicationActivityStatus can't be registered",
        ex);
  }
}
Example 7
Source File: SCMConnectionManager.java From hadoop-ozone with Apache License 2.0 | 5 votes |
public SCMConnectionManager(ConfigurationSource conf) {
  this.mapLock = new ReentrantReadWriteLock();
  Long timeOut = getScmRpcTimeOutInMilliseconds(conf);
  this.rpcTimeout = timeOut.intValue();
  this.scmMachines = new HashMap<>();
  this.conf = conf;
  jmxBean = MBeans.register("HddsDatanode",
      "SCMConnectionManager", this);
}
Example 8
Source File: SCMNodeStorageStatMap.java From hadoop-ozone with Apache License 2.0 | 4 votes |
private void registerMXBean() {
  this.scmNodeStorageInfoBean = MBeans.register("StorageContainerManager",
      "scmNodeStorageInfo", this);
}
Example 9
Source File: SCMNodeManager.java From hadoop-ozone with Apache License 2.0 | 4 votes |
private void registerMXBean() {
  this.nmInfoBean = MBeans.register("SCMNodeManager",
      "SCMNodeManagerInfo", this);
}
Example 10
Source File: FairCallQueue.java From big-c with Apache License 2.0 | 4 votes |
private MetricsProxy(String namespace) { MBeans.register(namespace, "FairCallQueue", this); }
Example 11
Source File: SecondaryNameNode.java From hadoop with Apache License 2.0 | 4 votes |
/**
 * Initialize SecondaryNameNode.
 */
private void initialize(final Configuration conf,
    CommandLineOpts commandLineOpts) throws IOException {
  final InetSocketAddress infoSocAddr = getHttpAddress(conf);
  final String infoBindAddress = infoSocAddr.getHostName();
  UserGroupInformation.setConfiguration(conf);
  if (UserGroupInformation.isSecurityEnabled()) {
    SecurityUtil.login(conf,
        DFSConfigKeys.DFS_SECONDARY_NAMENODE_KEYTAB_FILE_KEY,
        DFSConfigKeys.DFS_SECONDARY_NAMENODE_KERBEROS_PRINCIPAL_KEY,
        infoBindAddress);
  }
  // initiate Java VM metrics
  DefaultMetricsSystem.initialize("SecondaryNameNode");
  JvmMetrics.create("SecondaryNameNode",
      conf.get(DFSConfigKeys.DFS_METRICS_SESSION_ID_KEY),
      DefaultMetricsSystem.instance());

  // Create connection to the namenode.
  shouldRun = true;
  nameNodeAddr = NameNode.getServiceAddress(conf, true);

  this.conf = conf;
  this.namenode = NameNodeProxies.createNonHAProxy(conf, nameNodeAddr,
      NamenodeProtocol.class, UserGroupInformation.getCurrentUser(),
      true).getProxy();

  // initialize checkpoint directories
  fsName = getInfoServer();
  checkpointDirs = FSImage.getCheckpointDirs(conf,
      "/tmp/hadoop/dfs/namesecondary");
  checkpointEditsDirs = FSImage.getCheckpointEditsDirs(conf,
      "/tmp/hadoop/dfs/namesecondary");
  checkpointImage = new CheckpointStorage(conf, checkpointDirs,
      checkpointEditsDirs);
  checkpointImage.recoverCreate(commandLineOpts.shouldFormat());
  checkpointImage.deleteTempEdits();

  namesystem = new FSNamesystem(conf, checkpointImage, true);

  // Disable quota checks
  namesystem.dir.disableQuotaChecks();

  // Initialize other scheduling parameters from the configuration
  checkpointConf = new CheckpointConf(conf);

  final InetSocketAddress httpAddr = infoSocAddr;

  final String httpsAddrString = conf.getTrimmed(
      DFSConfigKeys.DFS_NAMENODE_SECONDARY_HTTPS_ADDRESS_KEY,
      DFSConfigKeys.DFS_NAMENODE_SECONDARY_HTTPS_ADDRESS_DEFAULT);
  InetSocketAddress httpsAddr = NetUtils.createSocketAddr(httpsAddrString);

  HttpServer2.Builder builder = DFSUtil.httpServerTemplateForNNAndJN(conf,
      httpAddr, httpsAddr, "secondary",
      DFSConfigKeys.DFS_SECONDARY_NAMENODE_KERBEROS_INTERNAL_SPNEGO_PRINCIPAL_KEY,
      DFSConfigKeys.DFS_SECONDARY_NAMENODE_KEYTAB_FILE_KEY);

  nameNodeStatusBeanName = MBeans.register("SecondaryNameNode",
      "SecondaryNameNodeInfo", this);

  infoServer = builder.build();

  infoServer.setAttribute("secondary.name.node", this);
  infoServer.setAttribute("name.system.image", checkpointImage);
  infoServer.setAttribute(JspHelper.CURRENT_CONF, conf);
  infoServer.addInternalServlet("imagetransfer", ImageServlet.PATH_SPEC,
      ImageServlet.class, true);
  infoServer.start();

  LOG.info("Web server init done");

  HttpConfig.Policy policy = DFSUtil.getHttpPolicy(conf);
  int connIdx = 0;
  if (policy.isHttpEnabled()) {
    InetSocketAddress httpAddress = infoServer.getConnectorAddress(connIdx++);
    conf.set(DFSConfigKeys.DFS_NAMENODE_SECONDARY_HTTP_ADDRESS_KEY,
        NetUtils.getHostPortString(httpAddress));
  }

  if (policy.isHttpsEnabled()) {
    InetSocketAddress httpsAddress = infoServer.getConnectorAddress(connIdx);
    conf.set(DFSConfigKeys.DFS_NAMENODE_SECONDARY_HTTPS_ADDRESS_KEY,
        NetUtils.getHostPortString(httpsAddress));
  }

  legacyOivImageDir = conf.get(
      DFSConfigKeys.DFS_NAMENODE_LEGACY_OIV_IMAGE_DIR_KEY);

  LOG.info("Checkpoint Period :" + checkpointConf.getPeriod() + " secs "
      + "(" + checkpointConf.getPeriod() / 60 + " min)");
  LOG.info("Log Size Trigger :" + checkpointConf.getTxnCount() + " txns");
}
Example 12
Source File: DataNode.java From big-c with Apache License 2.0 | 4 votes |
private void registerMXBean() {
  dataNodeInfoBeanName = MBeans.register("DataNode", "DataNodeInfo", this);
}
Example 13
Source File: DataNode.java From hadoop with Apache License 2.0 | 4 votes |
private void registerMXBean() {
  dataNodeInfoBeanName = MBeans.register("DataNode", "DataNodeInfo", this);
}
Example 14
Source File: JournalNode.java From hadoop with Apache License 2.0 | 4 votes |
/**
 * Register JournalNodeMXBean
 */
private void registerJNMXBean() {
  journalNodeInfoBeanName = MBeans.register("JournalNode",
      "JournalNodeInfo", this);
}
Example 15
Source File: MetricsSystemImpl.java From big-c with Apache License 2.0 | 4 votes |
private void initSystemMBean() {
  checkNotNull(prefix, "prefix should not be null here!");
  if (mbeanName == null) {
    mbeanName = MBeans.register(prefix, MS_CONTROL_NAME, this);
  }
}
Example 16
Source File: DecayRpcScheduler.java From big-c with Apache License 2.0 | 4 votes |
private MetricsProxy(String namespace) { MBeans.register(namespace, "DecayRpcScheduler", this); }
Example 17
Source File: NameNode.java From big-c with Apache License 2.0 | 4 votes |
/**
 * Register NameNodeStatusMXBean
 */
private void registerNNSMXBean() {
  nameNodeStatusBeanName = MBeans.register("NameNode", "NameNodeStatus", this);
}
Example 18
Source File: SecondaryNameNode.java From big-c with Apache License 2.0 | 4 votes |
/**
 * Initialize SecondaryNameNode.
 */
private void initialize(final Configuration conf,
    CommandLineOpts commandLineOpts) throws IOException {
  final InetSocketAddress infoSocAddr = getHttpAddress(conf);
  final String infoBindAddress = infoSocAddr.getHostName();
  UserGroupInformation.setConfiguration(conf);
  if (UserGroupInformation.isSecurityEnabled()) {
    SecurityUtil.login(conf,
        DFSConfigKeys.DFS_SECONDARY_NAMENODE_KEYTAB_FILE_KEY,
        DFSConfigKeys.DFS_SECONDARY_NAMENODE_KERBEROS_PRINCIPAL_KEY,
        infoBindAddress);
  }
  // initiate Java VM metrics
  DefaultMetricsSystem.initialize("SecondaryNameNode");
  JvmMetrics.create("SecondaryNameNode",
      conf.get(DFSConfigKeys.DFS_METRICS_SESSION_ID_KEY),
      DefaultMetricsSystem.instance());

  // Create connection to the namenode.
  shouldRun = true;
  nameNodeAddr = NameNode.getServiceAddress(conf, true);

  this.conf = conf;
  this.namenode = NameNodeProxies.createNonHAProxy(conf, nameNodeAddr,
      NamenodeProtocol.class, UserGroupInformation.getCurrentUser(),
      true).getProxy();

  // initialize checkpoint directories
  fsName = getInfoServer();
  checkpointDirs = FSImage.getCheckpointDirs(conf,
      "/tmp/hadoop/dfs/namesecondary");
  checkpointEditsDirs = FSImage.getCheckpointEditsDirs(conf,
      "/tmp/hadoop/dfs/namesecondary");
  checkpointImage = new CheckpointStorage(conf, checkpointDirs,
      checkpointEditsDirs);
  checkpointImage.recoverCreate(commandLineOpts.shouldFormat());
  checkpointImage.deleteTempEdits();

  namesystem = new FSNamesystem(conf, checkpointImage, true);

  // Disable quota checks
  namesystem.dir.disableQuotaChecks();

  // Initialize other scheduling parameters from the configuration
  checkpointConf = new CheckpointConf(conf);

  final InetSocketAddress httpAddr = infoSocAddr;

  final String httpsAddrString = conf.getTrimmed(
      DFSConfigKeys.DFS_NAMENODE_SECONDARY_HTTPS_ADDRESS_KEY,
      DFSConfigKeys.DFS_NAMENODE_SECONDARY_HTTPS_ADDRESS_DEFAULT);
  InetSocketAddress httpsAddr = NetUtils.createSocketAddr(httpsAddrString);

  HttpServer2.Builder builder = DFSUtil.httpServerTemplateForNNAndJN(conf,
      httpAddr, httpsAddr, "secondary",
      DFSConfigKeys.DFS_SECONDARY_NAMENODE_KERBEROS_INTERNAL_SPNEGO_PRINCIPAL_KEY,
      DFSConfigKeys.DFS_SECONDARY_NAMENODE_KEYTAB_FILE_KEY);

  nameNodeStatusBeanName = MBeans.register("SecondaryNameNode",
      "SecondaryNameNodeInfo", this);

  infoServer = builder.build();

  infoServer.setAttribute("secondary.name.node", this);
  infoServer.setAttribute("name.system.image", checkpointImage);
  infoServer.setAttribute(JspHelper.CURRENT_CONF, conf);
  infoServer.addInternalServlet("imagetransfer", ImageServlet.PATH_SPEC,
      ImageServlet.class, true);
  infoServer.start();

  LOG.info("Web server init done");

  HttpConfig.Policy policy = DFSUtil.getHttpPolicy(conf);
  int connIdx = 0;
  if (policy.isHttpEnabled()) {
    InetSocketAddress httpAddress = infoServer.getConnectorAddress(connIdx++);
    conf.set(DFSConfigKeys.DFS_NAMENODE_SECONDARY_HTTP_ADDRESS_KEY,
        NetUtils.getHostPortString(httpAddress));
  }

  if (policy.isHttpsEnabled()) {
    InetSocketAddress httpsAddress = infoServer.getConnectorAddress(connIdx);
    conf.set(DFSConfigKeys.DFS_NAMENODE_SECONDARY_HTTPS_ADDRESS_KEY,
        NetUtils.getHostPortString(httpsAddress));
  }

  legacyOivImageDir = conf.get(
      DFSConfigKeys.DFS_NAMENODE_LEGACY_OIV_IMAGE_DIR_KEY);

  LOG.info("Checkpoint Period :" + checkpointConf.getPeriod() + " secs "
      + "(" + checkpointConf.getPeriod() / 60 + " min)");
  LOG.info("Log Size Trigger :" + checkpointConf.getTxnCount() + " txns");
}
Example 19
Source File: SnapshotManager.java From big-c with Apache License 2.0 | 4 votes |
public void registerMXBean() {
  mxBeanName = MBeans.register("NameNode", "SnapshotInfo", this);
}
Example 20
Source File: MBeanSourceImpl.java From hbase with Apache License 2.0 | 2 votes |
/**
 * Register an mbean with the underlying metrics system
 * @param serviceName Metrics service/system name
 * @param metricsName name of the metrics object to expose
 * @param theMbean the actual MBean
 * @return ObjectName from jmx
 */
@Override
public ObjectName register(String serviceName, String metricsName,
    Object theMbean) {
  return MBeans.register(serviceName, metricsName, theMbean);
}