Java Code Examples for org.apache.hadoop.hbase.util.Threads#newDaemonThreadFactory()

The following examples show how to use org.apache.hadoop.hbase.util.Threads#newDaemonThreadFactory(). Each example is taken from an open source project; the source file, project, and license are noted above it.
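Before the examples, here is a minimal sketch of the call itself, pieced together only from the usages below: the single-String overload returns a java.util.concurrent.ThreadFactory whose threads are daemon threads named with the given prefix (the MemStoreFlusher example also shows an overload that additionally takes an UncaughtExceptionHandler). The executor wiring and the "example-worker" name here are illustrative only, not taken from any of the listed projects.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

import org.apache.hadoop.hbase.util.Threads;

public class DaemonFactoryExample {
  public static void main(String[] args) {
    // The string argument becomes the name prefix of every thread the factory creates,
    // and each thread is flagged as a daemon so it will not keep the JVM alive.
    ThreadFactory factory = Threads.newDaemonThreadFactory("example-worker");

    // The factory plugs into any standard executor, just as the examples below pass it
    // to ThreadPoolExecutor, Disruptor, or NioEventLoopGroup constructors.
    ExecutorService pool = Executors.newFixedThreadPool(2, factory);
    pool.submit(() -> System.out.println(Thread.currentThread().getName()
        + " daemon=" + Thread.currentThread().isDaemon()));
    pool.shutdown();
  }
}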
Example 1
Source File: ThreadPoolManager.java    From phoenix with Apache License 2.0
/**
 * @param builder the {@link ThreadPoolBuilder} supplying the pool name, maximum size, and keep-alive time
 */
private static ShutdownOnUnusedThreadPoolExecutor getDefaultExecutor(ThreadPoolBuilder builder) {
  int maxThreads = builder.getMaxThreads();
  long keepAliveTime = builder.getKeepAliveTime();

  // we prefer starting a new thread to queuing (the opposite of the usual ThreadPoolExecutor)
  // since we are probably writing to a bunch of index tables in this case. Any pending requests
  // are then queued up in an infinite (Integer.MAX_VALUE) queue. However, we allow core threads
  // to timeout, so we tune up/down for bursty situations. We could be a bit smarter and more
  // closely manage the core-thread pool size to handle the bursty traffic (so we can always keep
  // some core threads on hand, rather than starting from scratch each time), but that would take
  // even more time. If we shut down the pool but are still submitting new tasks, we can just do the
  // usual policy and throw a RejectedExecutionException because we are shutting down anyway and
  // the worst thing is that this gets unloaded.
  ShutdownOnUnusedThreadPoolExecutor pool =
      new ShutdownOnUnusedThreadPoolExecutor(maxThreads, maxThreads, keepAliveTime,
          TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(),
          Threads.newDaemonThreadFactory(builder.getName() + "-"), builder.getName());
  pool.allowCoreThreadTimeOut(true);
  return pool;
}
 
Example 2
Source File: ThreadPoolManager.java    From phoenix with BSD 3-Clause "New" or "Revised" License
/**
 * @param builder the {@link ThreadPoolBuilder} supplying the pool name, maximum size, and keep-alive time
 * @return the default pool executor, with core threads allowed to time out
 */
private static ShutdownOnUnusedThreadPoolExecutor getDefaultExecutor(ThreadPoolBuilder builder) {
  int maxThreads = builder.getMaxThreads();
  long keepAliveTime = builder.getKeepAliveTime();

  // we prefer starting a new thread to queuing (the opposite of the usual ThreadPoolExecutor)
  // since we are probably writing to a bunch of index tables in this case. Any pending requests
  // are then queued up in an infinite (Integer.MAX_VALUE) queue. However, we allow core threads
  // to timeout, so we tune up/down for bursty situations. We could be a bit smarter and more
  // closely manage the core-thread pool size to handle the bursty traffic (so we can always keep
  // some core threads on hand, rather than starting from scratch each time), but that would take
  // even more time. If we shut down the pool but are still submitting new tasks, we can just do the
  // usual policy and throw a RejectedExecutionException because we are shutting down anyway and
  // the worst thing is that this gets unloaded.
  ShutdownOnUnusedThreadPoolExecutor pool =
      new ShutdownOnUnusedThreadPoolExecutor(maxThreads, maxThreads, keepAliveTime,
          TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(),
          Threads.newDaemonThreadFactory(builder.getName() + "-"), builder.getName());
  pool.allowCoreThreadTimeOut(true);
  return pool;
}
 
Example 3
Source File: MasterFifoRpcScheduler.java    From hbase with Apache License 2.0
@Override
public void start() {
  LOG.info(
    "Using {} as call queue; handlerCount={}; maxQueueLength={}; rsReportHandlerCount={}; "
        + "rsReportMaxQueueLength={}",
    this.getClass().getSimpleName(), handlerCount, maxQueueLength, rsReportHandlerCount,
    rsRsreportMaxQueueLength);
  this.executor = new ThreadPoolExecutor(handlerCount, handlerCount, 60, TimeUnit.SECONDS,
      new ArrayBlockingQueue<Runnable>(maxQueueLength),
      Threads.newDaemonThreadFactory("MasterFifoRpcScheduler.call.handler"),
      new ThreadPoolExecutor.CallerRunsPolicy());
  this.rsReportExecutor = new ThreadPoolExecutor(rsReportHandlerCount, rsReportHandlerCount, 60,
      TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(rsRsreportMaxQueueLength),
      Threads.newDaemonThreadFactory("MasterFifoRpcScheduler.RSReport.handler"),
      new ThreadPoolExecutor.CallerRunsPolicy());
}
 
Example 4
Source File: IncrementCoalescer.java    From hbase with Apache License 2.0
public IncrementCoalescer(ThriftHBaseServiceHandler hand) {
  this.handler = hand;
  LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
  pool = new ThreadPoolExecutor(CORE_POOL_SIZE, CORE_POOL_SIZE, 50,
      TimeUnit.MILLISECONDS, queue,
      Threads.newDaemonThreadFactory("IncrementCoalescer"));
  MBeans.register("thrift", "Thrift", this);
}
 
Example 5
Source File: FSHLog.java    From hbase with Apache License 2.0
/**
 * Create an edit log at the given <code>dir</code> location. You should never have to load an
 * existing log. If there is a log at startup, it should have already been processed and deleted
 * by the time the WAL object is started up.
 * @param fs filesystem handle
 * @param rootDir path to where logs and oldlogs are stored
 * @param logDir dir where wals are stored
 * @param archiveDir dir where wals are archived
 * @param conf configuration to use
 * @param listeners Listeners on WAL events. Listeners passed here will be registered before we do
 *          anything else; e.g. before the constructor calls {@link #rollWriter()}.
 * @param failIfWALExists If true IOException will be thrown if files related to this wal already
 *          exist.
 * @param prefix should always be hostname and port in distributed env and it will be URL encoded
 *          before being used. If prefix is null, "wal" will be used
 * @param suffix will be url encoded. null is treated as empty. non-empty must start with
 *          {@link org.apache.hadoop.hbase.wal.AbstractFSWALProvider#WAL_FILE_NAME_DELIMITER}
 */
public FSHLog(final FileSystem fs, final Path rootDir, final String logDir,
    final String archiveDir, final Configuration conf, final List<WALActionsListener> listeners,
    final boolean failIfWALExists, final String prefix, final String suffix) throws IOException {
  super(fs, rootDir, logDir, archiveDir, conf, listeners, failIfWALExists, prefix, suffix);
  this.minTolerableReplication = conf.getInt("hbase.regionserver.hlog.tolerable.lowreplication",
    CommonFSUtils.getDefaultReplication(fs, this.walDir));
  this.lowReplicationRollLimit = conf.getInt("hbase.regionserver.hlog.lowreplication.rolllimit",
    5);
  this.closeErrorsTolerated = conf.getInt("hbase.regionserver.logroll.errors.tolerated", 2);

  // This is the 'writer' -- a single threaded executor. This single thread 'consumes' what is
  // put on the ring buffer.
  String hostingThreadName = Thread.currentThread().getName();
  // Using BlockingWaitStrategy. Stuff that is going on here takes so long it makes no sense
  // spinning as other strategies do.
  this.disruptor = new Disruptor<>(RingBufferTruck::new,
      getPreallocatedEventCount(),
      Threads.newDaemonThreadFactory(hostingThreadName + ".append"),
      ProducerType.MULTI, new BlockingWaitStrategy());
  // Advance the ring buffer sequence so that it starts from 1 instead of 0,
  // because SyncFuture.NOT_DONE = 0.
  this.disruptor.getRingBuffer().next();
  int maxHandlersCount = conf.getInt(HConstants.REGION_SERVER_HANDLER_COUNT, 200);
  this.ringBufferEventHandler = new RingBufferEventHandler(
      conf.getInt("hbase.regionserver.hlog.syncer.count", 5), maxHandlersCount);
  this.disruptor.setDefaultExceptionHandler(new RingBufferExceptionHandler());
  this.disruptor.handleEventsWith(new RingBufferEventHandler[] { this.ringBufferEventHandler });
  // Starting up threads in constructor is a no no; Interface should have an init call.
  this.disruptor.start();
}
 
Example 6
Source File: SlowLogRecorder.java    From hbase with Apache License 2.0
/**
 * Initialize disruptor with configurable ringbuffer size
 */
public SlowLogRecorder(Configuration conf) {
  isOnlineLogProviderEnabled = conf.getBoolean(HConstants.SLOW_LOG_BUFFER_ENABLED_KEY,
    HConstants.DEFAULT_ONLINE_LOG_PROVIDER_ENABLED);

  if (!isOnlineLogProviderEnabled) {
    this.disruptor = null;
    this.logEventHandler = null;
    this.eventCount = 0;
    return;
  }

  this.eventCount = conf.getInt(SLOW_LOG_RING_BUFFER_SIZE,
    HConstants.DEFAULT_SLOW_LOG_RING_BUFFER_SIZE);

  // This is the 'writer' -- a single threaded executor. This single thread consumes what is
  // put on the ringbuffer.
  final String hostingThreadName = Thread.currentThread().getName();

  // disruptor initialization with BlockingWaitStrategy
  this.disruptor = new Disruptor<>(RingBufferEnvelope::new,
    getEventCount(),
    Threads.newDaemonThreadFactory(hostingThreadName + ".slowlog.append"),
    ProducerType.MULTI,
    new BlockingWaitStrategy());
  this.disruptor.setDefaultExceptionHandler(new DisruptorExceptionHandler());

  // initialize ringbuffer event handler
  final boolean isSlowLogTableEnabled = conf.getBoolean(HConstants.SLOW_LOG_SYS_TABLE_ENABLED_KEY,
    HConstants.DEFAULT_SLOW_LOG_SYS_TABLE_ENABLED_KEY);
  this.logEventHandler = new LogEventHandler(this.eventCount, isSlowLogTableEnabled, conf);
  this.disruptor.handleEventsWith(new LogEventHandler[]{this.logEventHandler});
  this.disruptor.start();
}
 
Example 7
Source File: MemStoreFlusher.java    From hbase with Apache License 2.0
synchronized void start(UncaughtExceptionHandler eh) {
  ThreadFactory flusherThreadFactory = Threads.newDaemonThreadFactory(
      server.getServerName().toShortString() + "-MemStoreFlusher", eh);
  for (int i = 0; i < flushHandlers.length; i++) {
    flushHandlers[i] = new FlushHandler("MemStoreFlusher." + i);
    flusherThreadFactory.newThread(flushHandlers[i]);
    flushHandlers[i].start();
  }
}
 
Example 8
Source File: FifoRpcScheduler.java    From hbase with Apache License 2.0
@Override
public void start() {
  LOG.info("Using {} as user call queue; handlerCount={}; maxQueueLength={}",
    this.getClass().getSimpleName(), handlerCount, maxQueueLength);
  this.executor = new ThreadPoolExecutor(
      handlerCount,
      handlerCount,
      60,
      TimeUnit.SECONDS,
      new ArrayBlockingQueue<>(maxQueueLength),
      Threads.newDaemonThreadFactory("FifoRpcScheduler.handler"),
      new ThreadPoolExecutor.CallerRunsPolicy());
}
 
Example 9
Source File: LogRollBackupSubprocedurePool.java    From hbase with Apache License 2.0
public LogRollBackupSubprocedurePool(String name, Configuration conf) {
  // configure the executor service
  long keepAlive =
      conf.getLong(LogRollRegionServerProcedureManager.BACKUP_TIMEOUT_MILLIS_KEY,
        LogRollRegionServerProcedureManager.BACKUP_TIMEOUT_MILLIS_DEFAULT);
  int threads = conf.getInt(CONCURENT_BACKUP_TASKS_KEY, DEFAULT_CONCURRENT_BACKUP_TASKS);
  this.name = name;
  executor =
      new ThreadPoolExecutor(1, threads, keepAlive, TimeUnit.SECONDS,
          new LinkedBlockingQueue<>(),
          Threads.newDaemonThreadFactory("rs(" + name + ")-backup"));
  taskPool = new ExecutorCompletionService<>(executor);
}
 
Example 10
Source File: AcidGuaranteesTestTool.java    From hbase with Apache License 2.0
private ExecutorService createThreadPool() {
  int maxThreads = 256;
  int coreThreads = 128;

  long keepAliveTime = 60;
  BlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<Runnable>(
      maxThreads * HConstants.DEFAULT_HBASE_CLIENT_MAX_TOTAL_TASKS);

  ThreadPoolExecutor tpe = new ThreadPoolExecutor(coreThreads, maxThreads, keepAliveTime,
      TimeUnit.SECONDS, workQueue, Threads.newDaemonThreadFactory(toString() + "-shared"));
  tpe.allowCoreThreadTimeOut(true);
  return tpe;
}
 
Example 11
Source File: SimpleRSProcedureManager.java    From hbase with Apache License 2.0
public SimpleSubprocedurePool(String name, Configuration conf) {
  this.name = name;
  executor = new ThreadPoolExecutor(1, 1, 500,
      TimeUnit.SECONDS, new LinkedBlockingQueue<>(),
      Threads.newDaemonThreadFactory("rs(" + name + ")-procedure"));
  taskPool = new ExecutorCompletionService<>(executor);
}
 
Example 12
Source File: TestOpenTableInCoprocessor.java    From hbase with Apache License 2.0
/**
 * @return a pool that has at most one thread at any time. A second action added to the pool (to
 *         run concurrently) will cause an exception.
 */
private ExecutorService getPool() {
  int maxThreads = 1;
  long keepAliveTime = 60;
  ThreadPoolExecutor pool =
      new ThreadPoolExecutor(1, maxThreads, keepAliveTime, TimeUnit.SECONDS,
          new SynchronousQueue<>(), Threads.newDaemonThreadFactory("hbase-table"));
  pool.allowCoreThreadTimeOut(true);
  return pool;
}
 
Example 13
Source File: TestAsyncWALReplay.java    From hbase with Apache License 2.0
@BeforeClass
public static void setUpBeforeClass() throws Exception {
  GROUP = new NioEventLoopGroup(1, Threads.newDaemonThreadFactory("TestAsyncWALReplay"));
  CHANNEL_CLASS = NioSocketChannel.class;
  Configuration conf = AbstractTestWALReplay.TEST_UTIL.getConfiguration();
  conf.set(WALFactory.WAL_PROVIDER, "asyncfs");
  AbstractTestWALReplay.setUpBeforeClass();
}
 
Example 14
Source File: TestRegionServerReportForDuty.java    From hbase with Apache License 2.0
/**
 * Tests region server reportForDuty with RS RPC retry
 */
@Test
public void testReportForDutyWithRSRpcRetry() throws Exception {
  ScheduledThreadPoolExecutor scheduledThreadPoolExecutor =
      new ScheduledThreadPoolExecutor(1, Threads.newDaemonThreadFactory("RSDelayedStart"));

  // Start a master and wait for it to become the active/primary master.
  // Use a random unique port
  cluster.getConfiguration().setInt(HConstants.MASTER_PORT, HBaseTestingUtility.randomFreePort());
  // Override the default RS RPC retry interval of 100ms to 300ms
  cluster.getConfiguration().setLong("hbase.regionserver.rpc.retry.interval", 300);
  // master has a rs. defaultMinToStart = 2
  boolean tablesOnMaster = LoadBalancer.isTablesOnMaster(testUtil.getConfiguration());
  cluster.getConfiguration().setInt(ServerManager.WAIT_ON_REGIONSERVERS_MINTOSTART,
    tablesOnMaster ? 2 : 1);
  cluster.getConfiguration().setInt(ServerManager.WAIT_ON_REGIONSERVERS_MAXTOSTART,
    tablesOnMaster ? 2 : 1);
  master = cluster.addMaster();
  rs = cluster.addRegionServer();
  LOG.debug("Starting master: " + master.getMaster().getServerName());
  master.start();
  // Delay the RS start so that the meta assignment fails in first attempt and goes to retry block
  scheduledThreadPoolExecutor.schedule(new Runnable() {
    @Override
    public void run() {
      rs.start();
    }
  }, 1000, TimeUnit.MILLISECONDS);

  waitForClusterOnline(master);
}
 
Example 15
Source File: HBaseConnection.java    From kylin with Apache License 2.0
public static ExecutorService getCoprocessorPool() {
    if (coprocessorPool != null) {
        return coprocessorPool;
    }

    synchronized (HBaseConnection.class) {
        if (coprocessorPool != null) {
            return coprocessorPool;
        }

        KylinConfig config = KylinConfig.getInstanceFromEnv();

        // copy from HConnectionImplementation.getBatchPool()
        int maxThreads = config.getHBaseMaxConnectionThreads();
        int coreThreads = config.getHBaseCoreConnectionThreads();
        long keepAliveTime = config.getHBaseConnectionThreadPoolAliveSeconds();
        LinkedBlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<Runnable>(maxThreads * 100);
        ThreadPoolExecutor tpe = new ThreadPoolExecutor(coreThreads, maxThreads, keepAliveTime, TimeUnit.SECONDS, workQueue, //
                Threads.newDaemonThreadFactory("kylin-coproc-"));
        tpe.allowCoreThreadTimeOut(true);

        logger.info("Creating coprocessor thread pool with max of {}, core of {}", maxThreads, coreThreads);

        coprocessorPool = tpe;
        return coprocessorPool;
    }
}
 
Example 16
Source File: HBaseConnection.java    From kylin-on-parquet-v2 with Apache License 2.0
public static ExecutorService getCoprocessorPool() {
    if (coprocessorPool != null) {
        return coprocessorPool;
    }

    synchronized (HBaseConnection.class) {
        if (coprocessorPool != null) {
            return coprocessorPool;
        }

        KylinConfig config = KylinConfig.getInstanceFromEnv();

        // copy from HConnectionImplementation.getBatchPool()
        int maxThreads = config.getHBaseMaxConnectionThreads();
        int coreThreads = config.getHBaseCoreConnectionThreads();
        long keepAliveTime = config.getHBaseConnectionThreadPoolAliveSeconds();
        LinkedBlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<Runnable>(maxThreads * 100);
        ThreadPoolExecutor tpe = new ThreadPoolExecutor(coreThreads, maxThreads, keepAliveTime, TimeUnit.SECONDS, workQueue, //
                Threads.newDaemonThreadFactory("kylin-coproc-"));
        tpe.allowCoreThreadTimeOut(true);

        logger.info("Creating coprocessor thread pool with max of {}, core of {}", maxThreads, coreThreads);

        coprocessorPool = tpe;
        return coprocessorPool;
    }
}
 
Example 17
Source File: TestAsyncFSWAL.java    From hbase with Apache License 2.0
@BeforeClass
public static void setUpBeforeClass() throws Exception {
  GROUP = new NioEventLoopGroup(1, Threads.newDaemonThreadFactory("TestAsyncFSWAL"));
  CHANNEL_CLASS = NioSocketChannel.class;
  AbstractTestFSWAL.setUpBeforeClass();
}
 
Example 18
Source File: ProcedureMember.java    From hbase with Apache License 2.0
/**
 * Default thread pool for the procedure
 *
 * @param memberName name of the member, used to label the pool's threads
 * @param procThreads the maximum number of threads to allow in the pool
 * @param keepAliveMillis the maximum time (ms) that excess idle threads will wait for new tasks
 */
public static ThreadPoolExecutor defaultPool(String memberName, int procThreads,
    long keepAliveMillis) {
  return new ThreadPoolExecutor(1, procThreads, keepAliveMillis, TimeUnit.MILLISECONDS,
      new SynchronousQueue<>(),
      Threads.newDaemonThreadFactory("member: '" + memberName + "' subprocedure"));
}
 
Example 19
Source File: ProcedureCoordinator.java    From hbase with Apache License 2.0
/**
 * Default thread pool for the procedure
 *
 * @param coordName name of the coordinator, used to label the pool's threads
 * @param opThreads the maximum number of threads to allow in the pool
 * @param keepAliveMillis the maximum time (ms) that excess idle threads will wait for new tasks
 */
public static ThreadPoolExecutor defaultPool(String coordName, int opThreads,
    long keepAliveMillis) {
  return new ThreadPoolExecutor(1, opThreads, keepAliveMillis, TimeUnit.MILLISECONDS,
      new SynchronousQueue<>(),
      Threads.newDaemonThreadFactory("(" + coordName + ")-proc-coordinator"));
}