Python tensorpack.utils.logger.error() Examples
The following are 3 code examples of tensorpack.utils.logger.error().
You may also want to check out all available functions/classes of the module tensorpack.utils.logger.
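tensorpack.utils.logger mirrors the interface of Python's standard logging module (info(), warn(), error(), and so on). As a minimal, self-contained sketch of the same log-instead-of-silently-failing pattern, here is an analogue using the stdlib logging module (used here only so the snippet runs without tensorpack installed; safe_parse is a hypothetical helper, not part of tensorpack):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("demo")

def safe_parse(value):
    """Parse an int, logging an error (rather than failing silently) on bad input."""
    try:
        return int(value)
    except ValueError:
        logger.error("Could not parse value %r", value)
        return None

print(safe_parse("42"))    # parses normally
print(safe_parse("oops"))  # returns None and logs an error
```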
Example #1
Source File: load-resnet.py, from webvision-2.0-benchmarks (Apache License 2.0)
def convert_param_name(param):
    resnet_param = {}
    for k, v in six.iteritems(param):
        try:
            newname = name_conversion(k)
        except Exception:
            logger.error("Exception when processing caffe layer {}".format(k))
            raise
        logger.info("Name Transform: " + k + ' --> ' + newname)
        resnet_param[newname] = v
    return resnet_param
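The core pattern here is log-and-reraise: logger.error() records which key failed, and the bare raise re-raises the original exception so the caller still sees it. A runnable sketch of that pattern, using stdlib logging and a hypothetical name_conversion_demo() standing in for the example's name_conversion():

```python
import logging

logger = logging.getLogger("convert")

def name_conversion_demo(name):
    # Hypothetical stand-in for the example's name_conversion()
    if not name.startswith("caffe."):
        raise ValueError("unknown layer name: " + name)
    return name.replace("caffe.", "resnet.", 1)

def convert_params(params):
    out = {}
    for k, v in params.items():
        try:
            newname = name_conversion_demo(k)
        except Exception:
            # Record which key failed, then re-raise so the caller
            # still gets the original exception and traceback.
            logger.error("Exception when processing layer %s", k)
            raise
        out[newname] = v
    return out

print(convert_params({"caffe.conv1": 1}))  # keys renamed to resnet.*
```

The bare raise (with no argument) is what preserves the original traceback; logging first simply attaches context that would otherwise be lost.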
Example #2
Source File: load-resnet.py, from tensorpack (Apache License 2.0)
def convert_param_name(param):
    resnet_param = {}
    for k, v in six.iteritems(param):
        try:
            newname = name_conversion(k)
        except Exception:
            logger.error("Exception when processing caffe layer {}".format(k))
            raise
        logger.info("Name Transform: " + k + ' --> ' + newname)
        resnet_param[newname] = v
    return resnet_param
Example #3
Source File: shufflenet.py, from tensorpack (Apache License 2.0)
def get_config(model, nr_tower):
    batch = TOTAL_BATCH_SIZE // nr_tower
    logger.info("Running on {} towers. Batch size per tower: {}".format(nr_tower, batch))

    dataset_train = get_data('train', batch)
    dataset_val = get_data('val', batch)

    step_size = 1280000 // TOTAL_BATCH_SIZE
    max_iter = 3 * 10**5
    max_epoch = (max_iter // step_size) + 1
    callbacks = [
        ModelSaver(),
        ScheduledHyperParamSetter('learning_rate',
                                  [(0, 0.5), (max_iter, 0)],
                                  interp='linear', step_based=True),
        EstimatedTimeLeft()
    ]
    infs = [ClassificationError('wrong-top1', 'val-error-top1'),
            ClassificationError('wrong-top5', 'val-error-top5')]
    if nr_tower == 1:
        # single-GPU inference with queue prefetch
        callbacks.append(InferenceRunner(QueueInput(dataset_val), infs))
    else:
        # multi-GPU inference (with mandatory queue prefetch)
        callbacks.append(DataParallelInferenceRunner(
            dataset_val, infs, list(range(nr_tower))))
    return TrainConfig(
        model=model,
        dataflow=dataset_train,
        callbacks=callbacks,
        steps_per_epoch=step_size,
        max_epoch=max_epoch,
    )
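The ScheduledHyperParamSetter in this example interpolates the learning rate linearly from 0.5 at step 0 down to 0 at max_iter. A minimal sketch of that interpolation (an illustration of the schedule's shape, not tensorpack's internal implementation; the defaults mirror the values used above):

```python
def linear_lr(step, max_iter=300000, base_lr=0.5):
    """Linearly interpolate the LR from base_lr at step 0 down to 0 at max_iter."""
    step = min(step, max_iter)  # clamp: LR stays at 0 past the endpoint
    return base_lr * (1.0 - step / max_iter)

print(linear_lr(0))       # start of training: full base LR
print(linear_lr(150000))  # halfway: half the base LR
print(linear_lr(300000))  # end of schedule: 0
```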