Java Code Examples for org.apache.hadoop.hdfs.DFSInotifyEventInputStream#poll()
The following examples show how to use
org.apache.hadoop.hdfs.DFSInotifyEventInputStream#poll(). Each example is drawn from an open-source project; the source file and license are noted above its code.
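Before the project examples, here is a minimal, self-contained sketch of the basic pattern: obtain a DFSInotifyEventInputStream from HdfsAdmin and poll it for event batches. This sketch is not taken from any of the projects below; the NameNode URI is a placeholder, and the HDFS inotify interface generally requires superuser privileges, so treat it as an illustrative starting point only.

import java.net.URI;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSInotifyEventInputStream;
import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.inotify.Event;
import org.apache.hadoop.hdfs.inotify.EventBatch;

public class InotifyPollSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder NameNode URI; adjust for your cluster.
        HdfsAdmin admin = new HdfsAdmin(URI.create("hdfs://localhost:8020"), new Configuration());
        DFSInotifyEventInputStream eventStream = admin.getInotifyEventStream();
        while (true) {
            // Block for at most one second waiting for the next batch of events;
            // poll returns null if no batch arrives within the timeout.
            EventBatch batch = eventStream.poll(1, TimeUnit.SECONDS);
            if (batch == null) {
                continue;
            }
            for (Event event : batch.getEvents()) {
                System.out.println(event.getEventType() + " txid=" + batch.getTxid());
            }
        }
    }
}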
Example 1
Source File: GetHDFSEvents.java From localization_nifi with Apache License 2.0
private EventBatch getEventBatch(DFSInotifyEventInputStream eventStream, long duration, TimeUnit timeUnit, int retries)
        throws IOException, InterruptedException, MissingEventsException {
    // According to the inotify API we should retry a few times if poll throws an IOException.
    // Please see org.apache.hadoop.hdfs.DFSInotifyEventInputStream#poll for documentation.
    int i = 0;
    while (true) {
        try {
            i += 1;
            return eventStream.poll(duration, timeUnit);
        } catch (IOException e) {
            if (i > retries) {
                getLogger().debug("Failed to poll for event batch. Reached max retry times.", e);
                throw e;
            } else {
                getLogger().debug("Attempt {} failed to poll for event batch. Retrying.", new Object[]{i});
            }
        }
    }
}
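The timed poll(duration, timeUnit) blocks for at most the given interval and returns null if no batch arrives in that window, so callers of this helper must handle a null result. The bounded retry loop follows the guidance referenced in the comment: an IOException from poll may be transient, so the processor retries up to the configured number of attempts before rethrowing, while MissingEventsException is deliberately left to propagate to the caller.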
Example 2
Source File: GetHDFSEvents.java From nifi with Apache License 2.0
private EventBatch getEventBatch(DFSInotifyEventInputStream eventStream, long duration, TimeUnit timeUnit, int retries)
        throws IOException, InterruptedException, MissingEventsException {
    // According to the inotify API we should retry a few times if poll throws an IOException.
    // Please see org.apache.hadoop.hdfs.DFSInotifyEventInputStream#poll for documentation.
    int i = 0;
    while (true) {
        try {
            i += 1;
            return eventStream.poll(duration, timeUnit);
        } catch (IOException e) {
            if (i > retries) {
                getLogger().debug("Failed to poll for event batch. Reached max retry times.", e);
                throw e;
            } else {
                getLogger().debug("Attempt {} failed to poll for event batch. Retrying.", new Object[]{i});
            }
        }
    }
}
Example 3
Source File: HdfsFileWatcherPolicy.java From kafka-connect-fs with Apache License 2.0
@Override
public void run() {
    while (true) {
        try {
            DFSInotifyEventInputStream eventStream = admin.getInotifyEventStream();
            if (fs.getFileStatus(fs.getWorkingDirectory()) != null && fs.exists(fs.getWorkingDirectory())) {
                EventBatch batch = eventStream.poll();
                if (batch == null) continue;

                for (Event event : batch.getEvents()) {
                    switch (event.getEventType()) {
                        case CREATE:
                            if (!((Event.CreateEvent) event).getPath().endsWith("._COPYING_")) {
                                enqueue(((Event.CreateEvent) event).getPath());
                            }
                            break;
                        case APPEND:
                            if (!((Event.AppendEvent) event).getPath().endsWith("._COPYING_")) {
                                enqueue(((Event.AppendEvent) event).getPath());
                            }
                            break;
                        case RENAME:
                            if (((Event.RenameEvent) event).getSrcPath().endsWith("._COPYING_")) {
                                enqueue(((Event.RenameEvent) event).getDstPath());
                            }
                            break;
                        case CLOSE:
                            if (!((Event.CloseEvent) event).getPath().endsWith("._COPYING_")) {
                                enqueue(((Event.CloseEvent) event).getPath());
                            }
                            break;
                        default:
                            break;
                    }
                }
            }
        } catch (IOException ioe) {
            if (retrySleepMs > 0) {
                time.sleep(retrySleepMs);
            } else {
                log.warn("Error watching path [{}]. Stopping it...", fs.getWorkingDirectory(), ioe);
                throw new IllegalWorkerStateException(ioe);
            }
        } catch (Exception e) {
            log.warn("Stopping watcher due to an unexpected exception when watching path [{}].", fs.getWorkingDirectory(), e);
            throw new IllegalWorkerStateException(e);
        }
    }
}
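This watcher uses the no-argument poll(), which returns immediately with null when no events are pending, so the surrounding while loop simply moves on to the next iteration in that case. Paths ending in ._COPYING_ (the temporary suffix used while a file is being copied into HDFS) are ignored for CREATE, APPEND, and CLOSE events, and a RENAME away from a ._COPYING_ path is treated as a completed copy, so the destination path is enqueued instead.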