Python tensorflow.python.ops.gen_nn_ops.fused_batch_norm_grad() Examples
The following are five code examples of tensorflow.python.ops.gen_nn_ops.fused_batch_norm_grad(), collected from open-source projects.
The source project and file are noted above each example. You may also want to check out the other available functions/classes of the module tensorflow.python.ops.gen_nn_ops.
Example #1
Source File: nn_grad.py, from lambda-packs (MIT License)
# Excerpt from nn_grad.py; this import is implied by the surrounding module.
from tensorflow.python.ops import gen_nn_ops


def _FusedBatchNormGrad(op, *grad):
  """Return the gradients for the 3 inputs of BatchNorm.

  Args:
    op: The BatchNormOp for which we need to compute gradients.
    *grad: An argument list for tensors of gradients wrt the outputs
           with grad[0] as grad_y.

  Returns:
    grad_x: gradient for x, which is scale * rsqrt(variance + epsilon) *
            [grad_y - mean(grad_y) - (x - mean(x)) *
            mean(grad_y * (x - mean(x))) / (variance + epsilon)]
    grad_scale: gradient for scale, which is sum(grad_y *
                (x - mean(x)) * rsqrt(variance + epsilon))
    grad_offset: gradient for offset, which is sum(grad_y)
  """
  return gen_nn_ops.fused_batch_norm_grad(
      grad[0],
      op.inputs[0],
      op.inputs[1],
      op.outputs[3],
      op.outputs[4],
      epsilon=op.get_attr("epsilon"),
      data_format=op.get_attr("data_format"),
      is_training=op.get_attr("is_training"))
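For context, here is a minimal TF 1.x graph-mode sketch (shapes and values are illustrative, not from the original page) showing how this registered gradient is exercised: differentiating the output of tf.nn.fused_batch_norm dispatches to _FusedBatchNormGrad, which in turn calls gen_nn_ops.fused_batch_norm_grad.

import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x, matching these sources

# Illustrative NHWC input with C = 3 channels.
x = tf.constant(np.random.randn(2, 4, 4, 3).astype(np.float32))
scale = tf.constant(np.random.rand(3).astype(np.float32))
offset = tf.zeros([3])

# Forward pass in training mode.
y, batch_mean, batch_var = tf.nn.fused_batch_norm(
    x, scale, offset, epsilon=1e-3, is_training=True)

# tf.gradients seeds grad_y with ones and routes through the registered
# _FusedBatchNormGrad above.
grad_x, grad_scale, grad_offset = tf.gradients(y, [x, scale, offset])

with tf.Session() as sess:
    # grad_offset is sum(grad_y) per channel: 2 * 4 * 4 = 32 here.
    print(sess.run(grad_offset))

Note that with grad_y all ones, the docstring formula gives grad_x = 0: both grad_y - mean(grad_y) and mean(grad_y * (x - mean(x))) vanish, so the offset gradient (the per-channel element count) is the easiest term to check numerically.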
Examples #2-#4
Source Files: nn_grad.py, from auto-alt-text-lambda-api (MIT License), deep_image_model (Apache License 2.0), and keras-lambda (MIT License)
All three projects vendor the same _FusedBatchNormGrad shown in Example #1, verbatim, so the code is not repeated here.
Example #5
Source File: nn_grad.py, from Serverless-Deep-Learning-with-TensorFlow-and-AWS-Lambda (MIT License)
# Excerpt from nn_grad.py; these imports are implied by the surrounding module.
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import gen_nn_ops


def _BaseFusedBatchNormGrad(op, use_v2, *grad):
  """Return the gradients for the 3 inputs of BatchNorm.

  Args:
    op: The BatchNormOp for which we need to compute gradients.
    use_v2: Boolean indicating whether to use the V2 version of the fused batch
            norm gradient.
    *grad: An argument list for tensors of gradients wrt the outputs
           with grad[0] as grad_y.

  Returns:
    grad_x: gradient for x, which is scale * rsqrt(variance + epsilon) *
            [grad_y - mean(grad_y) - (x - mean(x)) *
            mean(grad_y * (x - mean(x))) / (variance + epsilon)]
            in training mode; grad_y * scale * rsqrt(pop_variance + epsilon)
            in freeze mode.
    grad_scale: gradient for scale, which is
                sum(grad_y * (x - mean(x)) * rsqrt(variance + epsilon))
                in training mode;
                sum(grad_y * (x - pop_mean) * rsqrt(pop_variance + epsilon))
                in freeze mode.
    grad_offset: gradient for offset, which is sum(grad_y) in training mode;
                 sum(grad_y) in freeze mode.
  """
  x = op.inputs[0]
  grad_y = grad[0]
  scale = op.inputs[1]
  epsilon = op.get_attr("epsilon")
  data_format = op.get_attr("data_format")
  is_training = op.get_attr("is_training")
  grad_fun = (gen_nn_ops.fused_batch_norm_grad_v2 if use_v2
              else gen_nn_ops.fused_batch_norm_grad)
  if is_training:
    return grad_fun(
        grad_y,
        x,
        scale,
        op.outputs[3],
        op.outputs[4],
        epsilon=epsilon,
        data_format=data_format,
        is_training=is_training)
  else:
    pop_mean = op.inputs[3]
    pop_var = op.inputs[4]
    if data_format == b"NCHW":
      x = array_ops.transpose(x, [0, 2, 3, 1])
      grad_y = array_ops.transpose(grad_y, [0, 2, 3, 1])
    dx, dscale, doffset, _, _ = grad_fun(
        grad_y,
        x,
        scale,
        pop_mean,
        pop_var,
        epsilon=epsilon,
        data_format='NHWC',
        is_training=is_training)
    if data_format == b"NCHW":
      dx = array_ops.transpose(dx, [0, 3, 1, 2])
    return dx, dscale, doffset, None, None
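In TensorFlow's nn_grad.py of this era, the helper above is wired to the two op gradients roughly as follows (a sketch based on the TensorFlow source; only the use_v2 flag differs between the two registrations):

from tensorflow.python.framework import ops

@ops.RegisterGradient("FusedBatchNorm")
def _FusedBatchNormGrad(op, *grad):
  return _BaseFusedBatchNormGrad(op, False, *grad)

@ops.RegisterGradient("FusedBatchNormV2")
def _FusedBatchNormV2Grad(op, *grad):
  return _BaseFusedBatchNormGrad(op, True, *grad)

The transposes in the inference (is_training=False) branch reflect a design constraint visible in the code itself: the freeze-mode gradient is always computed with data_format='NHWC', so NCHW inputs are transposed to NHWC before the call and the resulting dx is transposed back afterwards.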