Java Code Examples for org.apache.commons.logging.Log#isDebugEnabled()
The following examples show how to use org.apache.commons.logging.Log#isDebugEnabled().
Each example is taken from an open-source project; the source file, project, and license are noted above the code.
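Before the project examples, here is a minimal, self-contained sketch of the idiom all of them share: guard a debug statement with Log#isDebugEnabled() so the (potentially expensive) message string is only built when DEBUG logging is actually on. The class name and message are illustrative only, not taken from any of the projects below.

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class DebugGuardExample {

    // Hypothetical class, used purely to illustrate the guard idiom
    private static final Log LOG = LogFactory.getLog(DebugGuardExample.class);

    public void process(String item) {
        // Only build the concatenated message when DEBUG is enabled for this logger
        if (LOG.isDebugEnabled()) {
            LOG.debug("Processing item [" + item + "]");
        }
        // ... real work would follow here ...
    }
}

The guard matters because the argument to debug() is evaluated eagerly in Java; without the check, the string concatenation happens even when the DEBUG output would be discarded.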
Example 1
Source File: Declaration.java From lams with GNU General Public License v2.0 | 6 votes |
/**
 * Attempt to load custom rules for the target class at the specified
 * pattern.
 * <p>
 * On return, any custom rules associated with the plugin class have
 * been loaded into the Rules object currently associated with the
 * specified digester object.
 */
public void configure(Digester digester, String pattern) throws PluginException {
    Log log = digester.getLogger();
    boolean debug = log.isDebugEnabled();
    if (debug) {
        log.debug("configure being called!");
    }

    if (!initialized) {
        throw new PluginAssertionFailure("Not initialized.");
    }

    if (ruleLoader != null) {
        ruleLoader.addRules(digester, pattern);
    }
}
Example 2
Source File: SchedulerAppUtils.java From big-c with Apache License 2.0 | 6 votes |
public static boolean isBlacklisted(SchedulerApplicationAttempt application,
        SchedulerNode node, Log LOG) {
    if (application.isBlacklisted(node.getNodeName())) {
        if (LOG.isDebugEnabled()) {
            LOG.debug("Skipping 'host' " + node.getNodeName() +
                    " for " + application.getApplicationId() +
                    " since it has been blacklisted");
        }
        return true;
    }

    if (application.isBlacklisted(node.getRackName())) {
        if (LOG.isDebugEnabled()) {
            LOG.debug("Skipping 'rack' " + node.getRackName() +
                    " for " + application.getApplicationId() +
                    " since it has been blacklisted");
        }
        return true;
    }

    return false;
}
Example 3
Source File: AnnotationUtils.java From spring4-understanding with Apache License 2.0 | 6 votes |
/**
 * Handle the supplied annotation introspection exception.
 * <p>If the supplied exception is an {@link AnnotationConfigurationException},
 * it will simply be thrown, allowing it to propagate to the caller, and
 * nothing will be logged.
 * <p>Otherwise, this method logs an introspection failure (in particular
 * {@code TypeNotPresentExceptions}) before moving on, assuming nested
 * Class values were not resolvable within annotation attributes and
 * thereby effectively pretending there were no annotations on the specified
 * element.
 * @param element the element that we tried to introspect annotations on
 * @param ex the exception that we encountered
 * @see #rethrowAnnotationConfigurationException
 */
static void handleIntrospectionFailure(AnnotatedElement element, Exception ex) {
    rethrowAnnotationConfigurationException(ex);

    Log loggerToUse = logger;
    if (loggerToUse == null) {
        loggerToUse = LogFactory.getLog(AnnotationUtils.class);
        logger = loggerToUse;
    }
    if (element instanceof Class && Annotation.class.isAssignableFrom((Class<?>) element)) {
        // Meta-annotation lookup on an annotation type
        if (loggerToUse.isDebugEnabled()) {
            loggerToUse.debug("Failed to introspect meta-annotations on [" + element + "]: " + ex);
        }
    }
    else {
        // Direct annotation lookup on regular Class, Method, Field
        if (loggerToUse.isInfoEnabled()) {
            loggerToUse.info("Failed to introspect annotations on [" + element + "]: " + ex);
        }
    }
}
Example 4
Source File: SimpleApplicationEventMulticaster.java From lams with GNU General Public License v2.0 | 6 votes |
@SuppressWarnings({"unchecked", "rawtypes"})
private void doInvokeListener(ApplicationListener listener, ApplicationEvent event) {
    try {
        listener.onApplicationEvent(event);
    }
    catch (ClassCastException ex) {
        String msg = ex.getMessage();
        if (msg == null || msg.startsWith(event.getClass().getName())) {
            // Possibly a lambda-defined listener which we could not resolve the generic event type for
            Log logger = LogFactory.getLog(getClass());
            if (logger.isDebugEnabled()) {
                logger.debug("Non-matching event type for listener: " + listener, ex);
            }
        }
        else {
            throw ex;
        }
    }
}
Example 5
Source File: FiCaSchedulerUtils.java From hadoop with Apache License 2.0 | 6 votes |
public static boolean isBlacklisted(FiCaSchedulerApp application,
        FiCaSchedulerNode node, Log LOG) {
    if (application.isBlacklisted(node.getNodeName())) {
        if (LOG.isDebugEnabled()) {
            LOG.debug("Skipping 'host' " + node.getNodeName() +
                    " for " + application.getApplicationId() +
                    " since it has been blacklisted");
        }
        return true;
    }

    if (application.isBlacklisted(node.getRackName())) {
        if (LOG.isDebugEnabled()) {
            LOG.debug("Skipping 'rack' " + node.getRackName() +
                    " for " + application.getApplicationId() +
                    " since it has been blacklisted");
        }
        return true;
    }

    return false;
}
Example 6
Source File: MethodUtils.java From commons-beanutils with Apache License 2.0 | 5 votes |
/**
 * Gets the class for the primitive type corresponding to the primitive wrapper class given.
 * For example, an instance of {@code Boolean.class} returns {@code boolean.class}.
 * @param wrapperType the wrapper class
 * @return the primitive type class corresponding to the given wrapper class,
 * null if no match is found
 */
public static Class<?> getPrimitiveType(final Class<?> wrapperType) {
    // does anyone know a better strategy than comparing names?
    if (Boolean.class.equals(wrapperType)) {
        return boolean.class;
    } else if (Float.class.equals(wrapperType)) {
        return float.class;
    } else if (Long.class.equals(wrapperType)) {
        return long.class;
    } else if (Integer.class.equals(wrapperType)) {
        return int.class;
    } else if (Short.class.equals(wrapperType)) {
        return short.class;
    } else if (Byte.class.equals(wrapperType)) {
        return byte.class;
    } else if (Double.class.equals(wrapperType)) {
        return double.class;
    } else if (Character.class.equals(wrapperType)) {
        return char.class;
    } else {
        final Log log = LogFactory.getLog(MethodUtils.class);
        if (log.isDebugEnabled()) {
            log.debug("Not a known primitive wrapper class: " + wrapperType);
        }
        return null;
    }
}
Example 7
Source File: PluginRules.java From lams with GNU General Public License v2.0 | 5 votes |
/**
 * Return a List of all registered Rule instances that match the specified
 * nodepath, or a zero-length List if there are no matches. If more
 * than one Rule instance matches, they <strong>must</strong> be returned
 * in the order originally registered through the <code>add()</code>
 * method.
 * <p>
 * @param namespaceURI Namespace URI for which to select matching rules,
 * or <code>null</code> to match regardless of namespace URI
 * @param path the path to the xml nodes to be matched.
 */
public List<Rule> match(String namespaceURI, String path) {
    Log log = LogUtils.getLogger(digester);
    boolean debug = log.isDebugEnabled();

    if (debug) {
        log.debug("Matching path [" + path + "] on rules object " + this.toString());
    }

    List<Rule> matches;
    if ((mountPoint != null) && (path.length() <= mountPoint.length())) {
        if (debug) {
            log.debug("Path [" + path + "] delegated to parent.");
        }

        matches = parent.match(namespaceURI, path);

        // Note that in the case where path equals mountPoint,
        // we deliberately return only the rules from the parent,
        // even though this object may hold some rules matching
        // this same path. See PluginCreateRule's begin, body and end
        // methods for the reason.
    } else {
        log.debug("delegating to decorated rules.");
        matches = decoratedRules.match(namespaceURI, path);
    }

    return matches;
}
Example 8
Source File: InitializationUtils.java From elasticsearch-hadoop with Apache License 2.0 | 5 votes |
public static boolean setValueReaderIfNotSet(Settings settings, Class<? extends ValueReader> clazz, Log log) {
    if (!StringUtils.hasText(settings.getSerializerValueReaderClassName())) {
        settings.setProperty(ConfigurationOptions.ES_SERIALIZATION_READER_VALUE_CLASS, clazz.getName());
        Log logger = (log != null ? log : LogFactory.getLog(clazz));
        if (logger.isDebugEnabled()) {
            logger.debug(String.format("Using pre-defined reader serializer [%s] as default",
                    settings.getSerializerValueReaderClassName()));
        }
        return true;
    }
    return false;
}
Example 9
Source File: RangerPerfTracerFactory.java From ranger with Apache License 2.0 | 5 votes |
static RangerPerfTracer getPerfTracer(Log logger, String tag, String data) {
    RangerPerfTracer ret = null;

    if (PerfDataRecorder.collectStatistics()) {
        ret = new RangerPerfCollectorTracer(logger, tag, data);
    } else if (logger.isDebugEnabled()) {
        ret = new RangerPerfTracer(logger, tag, data);
    }

    return ret;
}
Example 10
Source File: IOUtils.java From chimera with Apache License 2.0 | 5 votes |
/**
 * Closes the Closeable objects and <b>ignore</b> any {@link IOException} or
 * null pointers. Must only be used for cleanup in exception handlers.
 *
 * @param log the log to record problems to at debug level. Can be null.
 * @param closeables the objects to close.
 */
public static void cleanup(Log log, java.io.Closeable... closeables) {
    for (java.io.Closeable c : closeables) {
        if (c != null) {
            try {
                c.close();
            } catch (Throwable e) {
                if (log != null && log.isDebugEnabled()) {
                    log.debug("Exception in closing " + c, e);
                }
            }
        }
    }
}
Example 11
Source File: InitializationUtils.java From elasticsearch-hadoop with Apache License 2.0 | 5 votes |
public static boolean setFieldExtractorIfNotSet(Settings settings, Class<? extends FieldExtractor> clazz, Log log) {
    if (!StringUtils.hasText(settings.getMappingIdExtractorClassName())) {
        Log logger = (log != null ? log : LogFactory.getLog(clazz));
        String name = clazz.getName();
        settings.setProperty(ConfigurationOptions.ES_MAPPING_DEFAULT_EXTRACTOR_CLASS, name);
        if (logger.isDebugEnabled()) {
            logger.debug(String.format("Using pre-defined field extractor [%s] as default",
                    settings.getMappingIdExtractorClassName()));
        }
        return true;
    }
    return false;
}
Example 12
Source File: ExceptionWebSocketHandlerDecorator.java From spring4-understanding with Apache License 2.0 | 5 votes |
public static void tryCloseWithError(WebSocketSession session, Throwable exception, Log logger) {
    if (logger.isDebugEnabled()) {
        logger.debug("Closing due to exception for " + session, exception);
    }
    if (session.isOpen()) {
        try {
            session.close(CloseStatus.SERVER_ERROR);
        }
        catch (Throwable ex) {
            // ignore
        }
    }
}
Example 13
Source File: InitializationUtils.java From elasticsearch-hadoop with Apache License 2.0 | 5 votes |
/**
 * Retrieves the Elasticsearch cluster name and version from the settings, or, if they should be missing,
 * creates a bootstrap client and obtains their values.
 */
public static ClusterInfo discoverClusterInfo(Settings settings, Log log) {
    ClusterName remoteClusterName = null;
    EsMajorVersion remoteVersion = null;
    String clusterName = settings.getProperty(InternalConfigurationOptions.INTERNAL_ES_CLUSTER_NAME);
    String clusterUUID = settings.getProperty(InternalConfigurationOptions.INTERNAL_ES_CLUSTER_UUID);
    String version = settings.getProperty(InternalConfigurationOptions.INTERNAL_ES_VERSION);
    if (StringUtils.hasText(clusterName) && StringUtils.hasText(version)) { // UUID is optional for now
        if (log.isDebugEnabled()) {
            log.debug(String.format(
                    "Elasticsearch cluster [NAME:%s][UUID:%s][VERSION:%s] already present in configuration; skipping discovery",
                    clusterName, clusterUUID, version));
        }
        remoteClusterName = new ClusterName(clusterName, clusterUUID);
        remoteVersion = EsMajorVersion.parse(version);
        return new ClusterInfo(remoteClusterName, remoteVersion);
    }

    RestClient bootstrap = new RestClient(settings);
    // first get ES main action info
    try {
        ClusterInfo mainInfo = bootstrap.mainInfo();
        if (log.isDebugEnabled()) {
            log.debug(String.format("Discovered Elasticsearch cluster [%s/%s], version [%s]",
                    mainInfo.getClusterName().getName(),
                    mainInfo.getClusterName().getUUID(),
                    mainInfo.getMajorVersion()));
        }
        settings.setInternalClusterInfo(mainInfo);
        return mainInfo;
    } catch (EsHadoopException ex) {
        throw new EsHadoopIllegalArgumentException(String.format("Cannot detect ES version - "
                + "typically this happens if the network/Elasticsearch cluster is not accessible or when targeting "
                + "a WAN/Cloud instance without the proper setting '%s'", ConfigurationOptions.ES_NODES_WAN_ONLY), ex);
    } finally {
        bootstrap.close();
    }
}
Example 14
Source File: RangerPerfTracer.java From ranger with Apache License 2.0 | 4 votes |
public static boolean isPerfTraceEnabled(Log logger) {
    return logger.isDebugEnabled();
}
Example 15
Source File: ScriptResourceHelper.java From alfresco-core with GNU Lesser General Public License v3.0 | 4 votes |
/**
 * Recursively resolve imports in the specified scripts, adding the imports to the
 * specific list of scriptlets to combine later.
 *
 * @param location Script location - used to ensure duplicates are not added
 * @param script The script to recursively resolve imports for
 * @param scripts The collection of scriptlets to execute with imports resolved and removed
 */
private static void recurseScriptImports(
        String location, String script, ScriptResourceLoader loader, Map<String, String> scripts, Log logger) {
    int index = 0;
    // skip any initial whitespace
    for (; index < script.length(); index++) {
        if (Character.isWhitespace(script.charAt(index)) == false) {
            break;
        }
    }
    // look for the "<import" directive marker
    if (script.startsWith(IMPORT_PREFIX, index)) {
        // skip whitespace between "<import" and "resource"
        boolean afterWhitespace = false;
        index += IMPORT_PREFIX.length() + 1;
        for (; index < script.length(); index++) {
            if (Character.isWhitespace(script.charAt(index)) == false) {
                afterWhitespace = true;
                break;
            }
        }
        if (afterWhitespace == true && script.startsWith(IMPORT_RESOURCE, index)) {
            // found an import line!
            index += IMPORT_RESOURCE.length();
            int resourceStart = index;
            for (; index < script.length(); index++) {
                if (script.charAt(index) == '"' && script.charAt(index + 1) == '>') {
                    // found end of import line - so we have a resource path
                    String resource = script.substring(resourceStart, index);
                    if (logger.isDebugEnabled())
                        logger.debug("Found script resource import: " + resource);
                    if (scripts.containsKey(resource) == false) {
                        // load the script resource (and parse any recursive includes...)
                        String includedScript = loader.loadScriptResource(resource);
                        if (includedScript != null) {
                            if (logger.isDebugEnabled())
                                logger.debug("Successfully located script '" + resource + "'");
                            recurseScriptImports(resource, includedScript, loader, scripts, logger);
                        }
                    } else {
                        if (logger.isDebugEnabled())
                            logger.debug("Note: already imported resource: " + resource);
                    }

                    // continue scanning this script for additional includes
                    // skip the last two characters of the import directive
                    for (index += 2; index < script.length(); index++) {
                        if (Character.isWhitespace(script.charAt(index)) == false) {
                            break;
                        }
                    }
                    recurseScriptImports(location, script.substring(index), loader, scripts, logger);
                    return;
                }
            }
            // if we get here, we failed to find the end of an import line
            throw new ScriptException(
                    "Malformed 'import' line - must be first in file, no comments and strictly of the form:" +
                    "\r\n<import resource=\"...\">");
        } else {
            throw new ScriptException(
                    "Malformed 'import' line - must be first in file, no comments and strictly of the form:" +
                    "\r\n<import resource=\"...\">");
        }
    } else {
        // no (further) includes found - include the original script content
        if (logger.isDebugEnabled())
            logger.debug("Imports resolved, adding resource '" + location);
        if (logger.isTraceEnabled())
            logger.trace(script);
        scripts.put(location, script);
    }
}
Example 16
Source File: RestService.java From elasticsearch-hadoop with Apache License 2.0 | 4 votes |
@SuppressWarnings("unchecked")
public static List<PartitionDefinition> findPartitions(Settings settings, Log log) {
    Version.logVersion();

    InitializationUtils.validateSettings(settings);
    ClusterInfo clusterInfo = InitializationUtils.discoverClusterInfo(settings, log);
    InitializationUtils.validateSettingsForReading(settings);

    List<NodeInfo> nodes = InitializationUtils.discoverNodesIfNeeded(settings, log);
    InitializationUtils.filterNonClientNodesIfNeeded(settings, log);
    InitializationUtils.filterNonDataNodesIfNeeded(settings, log);
    InitializationUtils.filterNonIngestNodesIfNeeded(settings, log);

    RestRepository client = new RestRepository(settings);
    try {
        boolean indexExists = client.resourceExists(true);

        List<List<Map<String, Object>>> shards = null;

        if (!indexExists) {
            if (settings.getIndexReadMissingAsEmpty()) {
                log.info(String.format("Index [%s] missing - treating it as empty", settings.getResourceRead()));
                shards = Collections.emptyList();
            } else {
                throw new EsHadoopIllegalArgumentException(
                        String.format("Index [%s] missing and settings [%s] is set to false",
                                settings.getResourceRead(), ConfigurationOptions.ES_INDEX_READ_MISSING_AS_EMPTY));
            }
        } else {
            shards = client.getReadTargetShards();
            if (log.isTraceEnabled()) {
                log.trace("Creating splits for shards " + shards);
            }
        }

        log.info(String.format("Reading from [%s]", settings.getResourceRead()));

        MappingSet mapping = null;
        if (!shards.isEmpty()) {
            mapping = client.getMappings();
            if (log.isDebugEnabled()) {
                log.debug(String.format("Discovered resolved mapping {%s} for [%s]",
                        mapping.getResolvedView(), settings.getResourceRead()));
            }
            // validate if possible
            FieldPresenceValidation validation = settings.getReadFieldExistanceValidation();
            if (validation.isRequired()) {
                MappingUtils.validateMapping(SettingsUtils.determineSourceFields(settings),
                        mapping.getResolvedView(), validation, log);
            }
        }

        final Map<String, NodeInfo> nodesMap = new HashMap<String, NodeInfo>();
        if (nodes != null) {
            for (NodeInfo node : nodes) {
                nodesMap.put(node.getId(), node);
            }
        }

        final List<PartitionDefinition> partitions;
        if (clusterInfo.getMajorVersion().onOrAfter(EsMajorVersion.V_5_X)
                && settings.getMaxDocsPerPartition() != null) {
            partitions = findSlicePartitions(client.getRestClient(), settings, mapping, nodesMap, shards, log);
        } else {
            partitions = findShardPartitions(settings, mapping, nodesMap, shards, log);
        }
        Collections.shuffle(partitions);
        return partitions;
    } finally {
        client.close();
    }
}
Example 17
Source File: InitializationUtils.java From elasticsearch-hadoop with Apache License 2.0 | 4 votes |
public static void filterNonDataNodesIfNeeded(Settings settings, Log log) {
    if (!settings.getNodesDataOnly()) {
        return;
    }

    RestClient bootstrap = new RestClient(settings);
    try {
        String message = "No data nodes with HTTP-enabled available";
        List<NodeInfo> dataNodes = bootstrap.getHttpDataNodes();
        if (dataNodes.isEmpty()) {
            throw new EsHadoopIllegalArgumentException(message);
        }
        if (log.isDebugEnabled()) {
            log.debug(String.format("Found data nodes %s", dataNodes));
        }

        List<String> toRetain = new ArrayList<String>(dataNodes.size());
        for (NodeInfo node : dataNodes) {
            toRetain.add(node.getPublishAddress());
        }
        List<String> ddNodes = SettingsUtils.discoveredOrDeclaredNodes(settings);
        // remove non-data nodes
        ddNodes.retainAll(toRetain);
        if (log.isDebugEnabled()) {
            log.debug(String.format("Filtered discovered only nodes %s to data-only %s",
                    SettingsUtils.discoveredOrDeclaredNodes(settings), ddNodes));
        }

        if (ddNodes.isEmpty()) {
            if (settings.getNodesDiscovery()) {
                message += String.format(
                        "; looks like the data nodes discovered have been removed; is the cluster in a stable state? %s",
                        dataNodes);
            } else {
                message += String.format(
                        "; node discovery is disabled and none of nodes specified fit the criterion %s",
                        SettingsUtils.discoveredOrDeclaredNodes(settings));
            }
            throw new EsHadoopIllegalArgumentException(message);
        }

        SettingsUtils.setDiscoveredNodes(settings, ddNodes);
    } finally {
        bootstrap.close();
    }
}
Example 18
Source File: PluginRules.java From lams with GNU General Public License v2.0 | 4 votes |
/**
 * Register a new Rule instance matching the specified pattern.
 *
 * @param pattern Nesting pattern to be matched for this Rule.
 * This parameter treats equally patterns that begin with and without
 * a leading slash ('/').
 * @param rule Rule instance to be registered
 */
public void add(String pattern, Rule rule) {
    Log log = LogUtils.getLogger(digester);
    boolean debug = log.isDebugEnabled();

    if (debug) {
        log.debug("add entry" + ": mapping pattern [" + pattern + "]" +
                " to rule of type [" + rule.getClass().getName() + "]");
    }

    // allow patterns with a leading slash character
    if (pattern.startsWith("/")) {
        pattern = pattern.substring(1);
    }

    if (mountPoint != null && !pattern.equals(mountPoint) && !pattern.startsWith(mountPoint + "/")) {
        // This can only occur if a plugin attempts to add a
        // rule with a pattern that doesn't start with the
        // prefix passed to the addRules method. Plugins mustn't
        // add rules outside the scope of the tag they were specified
        // on, so refuse this.

        // alas, can't throw exception
        log.warn("An attempt was made to add a rule with a pattern that " +
                "is not at or below the mountpoint of the current" +
                " PluginRules object." +
                " Rule pattern: " + pattern +
                ", mountpoint: " + mountPoint +
                ", rule type: " + rule.getClass().getName());
        return;
    }

    decoratedRules.add(pattern, rule);

    if (rule instanceof InitializableRule) {
        try {
            ((InitializableRule) rule).postRegisterInit(pattern);
        } catch (PluginConfigurationException e) {
            // Currently, Digester doesn't handle exceptions well
            // from the add method. The workaround is for the
            // initialisable rule to remember that its initialisation
            // failed, and to throw the exception when begin is
            // called for the first time.
            if (debug) {
                log.debug("Rule initialisation failed", e);
            }
            // throw e; -- alas, can't do this
            return;
        }
    }

    if (debug) {
        log.debug("add exit" + ": mapped pattern [" + pattern + "]" +
                " to rule of type [" + rule.getClass().getName() + "]");
    }
}
Example 19
Source File: LogFormatUtils.java From java-technology-stack with MIT License | 3 votes |
/**
 * Use this to log a message with different levels of detail (or different
 * messages) at TRACE vs DEBUG log levels. Effectively, a substitute for:
 * <pre class="code">
 * if (logger.isDebugEnabled()) {
 *   String str = logger.isTraceEnabled() ? "..." : "...";
 *   if (logger.isTraceEnabled()) {
 *     logger.trace(str);
 *   }
 *   else {
 *     logger.debug(str);
 *   }
 * }
 * </pre>
 * @param logger the logger to use to log the message
 * @param messageFactory function that accepts a boolean set to the value
 * of {@link Log#isTraceEnabled()}
 */
public static void traceDebug(Log logger, Function<Boolean, String> messageFactory) {
    if (logger.isDebugEnabled()) {
        String logMessage = messageFactory.apply(logger.isTraceEnabled());
        if (logger.isTraceEnabled()) {
            logger.trace(logMessage);
        }
        else {
            logger.debug(logMessage);
        }
    }
}
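A short usage sketch for the helper above: the lambda receives the result of isTraceEnabled(), so a more detailed message is produced only at TRACE level while DEBUG gets a summary. The class, logger, message text, and payload variable are hypothetical and not taken from the Spring source; the snippet assumes it sits in the same package as the LogFormatUtils class shown above.

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class TraceDebugUsage {

    private static final Log logger = LogFactory.getLog(TraceDebugUsage.class);

    public static void main(String[] args) {
        String payload = "{\"id\": 42}";  // hypothetical request body
        // At DEBUG only a summary is logged; at TRACE the full payload is appended
        LogFormatUtils.traceDebug(logger, traceOn ->
                "Decoded request" + (traceOn ? ", payload: " + payload : ""));
    }
}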