# Python numpy.e Examples

The following are 30 code examples showing how to use numpy.e. They are extracted from open source projects; the project, author, source file, and license are listed above each example.
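Note that numpy.e is not callable: it is a plain Python float holding Euler's number, so code reads its value directly rather than invoking it. A quick sanity check:

```python
import numpy as np

# np.e is a module-level float constant, not a function.
print(type(np.e))     # <class 'float'>
print(np.e)           # 2.718281828459045
print(np.log(np.e))   # the natural log of e is 1.0
```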


Example 1
Project: neuropythy   Author: noahbenson   File: core.py   License: GNU Affero General Public License v3.0

```python
def to_potential(f):
    '''
    to_potential(f) yields f if f is a potential function; if f is not, but f can be converted to
    a potential function, that conversion is performed then the result is yielded.
    to_potential(Ellipsis) yields a potential function whose output is simply its input (i.e., the
    identity function).
    to_potential(None) is equivalent to to_potential(0).

    The following can be converted into potential functions:
      * Anything for which pimms.is_array(x, 'number') yields True (i.e., arrays of constants).
      * Any tuple (g, h) where g(x) yields a potential value and h(x) yields a jacobian matrix for
        the parameter vector x.
    '''
    if   is_potential(f): return f
    elif f is Ellipsis:   return identity
    elif pimms.is_array(f, 'number'): return const_potential(f)
    elif isinstance(f, tuple) and len(f) == 2: return PotentialLambda(f, f)
    else: raise ValueError('Could not convert object of type %s to potential function' % type(f))
```
Example 2
Project: neuropythy   Author: noahbenson   File: core.py   License: GNU Affero General Public License v3.0

```python
def sigmoid(f=Ellipsis, mu=0, sigma=1, scale=1, invert=False, normalize=False):
    '''
    sigmoid() yields a potential function that is equivalent to the integral of gaussian(), i.e.,
    the error function, but scaled to match gaussian().
    sigmoid(f) is equivalent to compose(sigmoid(), f).

    All options that are accepted by the gaussian() function are accepted by sigmoid() with the same
    default values and are handled in an equivalent manner with the exception of the invert option;
    when a sigmoid is inverted, the function approaches its maximum value at -inf and approaches 0
    at inf.

    Note that because sigmoid() explicitly matches gaussian(), the base formula used is as follows:
      f(x) = scale * sigma * sqrt(pi/2) * erf((x - mu) / (sqrt(2) * sigma))
    In Mathematica notation: k*sig*Sqrt[Pi/2] Erf[(x - mu)/sig/Sqrt[2]]
    '''
    f = to_potential(f)
    F = erf((f - mu) / (sigma * np.sqrt(2.0)))
    if invert: F = 1 - F
    F = np.sqrt(np.pi / 2) * scale * F
    if normalize: F = F / (np.sqrt(2.0*np.pi) * sigma)
    return F
```
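Setting the potential-function machinery aside, the arithmetic inside sigmoid() can be sketched for plain scalars with the standard library's math.erf. The helper sigmoid_value below is hypothetical (it is not part of neuropythy) and simply mirrors the body of the function above:

```python
import math
import numpy as np

def sigmoid_value(x, mu=0.0, sigma=1.0, scale=1.0, invert=False):
    # Same steps as sigmoid() above, for a scalar x.
    F = math.erf((x - mu) / (sigma * np.sqrt(2.0)))
    if invert:
        F = 1 - F
    return np.sqrt(np.pi / 2) * scale * F

print(sigmoid_value(0.0))    # erf(0) == 0, so the curve crosses 0 at x == mu
print(sigmoid_value(10.0))   # approaches scale * sqrt(pi/2) as x grows
```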
Example 3
Project: recruit   Author: Frank-qlu   File: test_io.py   License: Apache License 2.0

```python
def test_closing_fid(self):
    # Test that issue #1517 (too many opened files) remains closed
    # It might be a "weak" test since failed to get triggered on
    # e.g. Debian sid of 2012 Jul 05 but was reported to
    # trigger the failure on Ubuntu 10.04:
    # http://projects.scipy.org/numpy/ticket/1517#comment:2
    with temppath(suffix='.npz') as tmp:
        np.savez(tmp, data='LOVELY LOAD')
        # We need to check if the garbage collector can properly close
        # numpy npz file returned by np.load when their reference count
        # goes to zero.  Python 3 running in debug mode raises a
        # ResourceWarning when file closing is left to the garbage
        # collector, so we catch the warnings.  Because ResourceWarning
        # is unknown in Python < 3.x, we take the easy way out and
        # catch all warnings.
        with suppress_warnings() as sup:
            sup.filter(Warning)  # TODO: specify exact message
            for i in range(1, 1025):
                try:
                    np.load(tmp)["data"]
                except Exception as e:
                    msg = "Failed to load data from a file: %s" % e
                    raise AssertionError(msg)
```
Example 4
Project: recruit   Author: Frank-qlu   File: test_io.py   License: Apache License 2.0

```python
def test_complex_negative_exponent(self):
    # Previous to 1.15, some formats generated x+-yj, gh 7895
    ncols = 2
    nrows = 2
    a = np.zeros((ncols, nrows), dtype=np.complex128)
    re = np.pi
    im = np.e
    a[:] = re - 1.0j * im
    c = BytesIO()
    np.savetxt(c, a, fmt='%.3e')
    c.seek(0)
    lines = c.readlines()
    assert_equal(
        lines,
        [b' (3.142e+00-2.718e+00j)  (3.142e+00-2.718e+00j)\n',
         b' (3.142e+00-2.718e+00j)  (3.142e+00-2.718e+00j)\n'])
```
Example 5
Project: recruit   Author: Frank-qlu   File: test_io.py   License: Apache License 2.0

```python
def test_complex_misformatted(self):
    # test for backward compatibility
    # some complex formats used to generate x+-yj
    a = np.zeros((2, 2), dtype=np.complex128)
    re = np.pi
    im = np.e
    a[:] = re - 1.0j * im
    c = BytesIO()
    np.savetxt(c, a, fmt='%.16e')
    c.seek(0)
    txt = c.read()
    c.seek(0)
    # misformat the sign on the imaginary part, gh 7895
    txt_bad = txt.replace(b'e+00-', b'e00+-')
    assert_(txt_bad != txt)
    c.write(txt_bad)
    c.seek(0)
    res = np.loadtxt(c, dtype=complex)
    assert_equal(res, a)
```
Example 6
Project: lambda-packs   Author: ryfeus   File: _multivariate.py   License: MIT License

```python
def entropy(self, mean=None, cov=1):
    """
    Compute the differential entropy of the multivariate normal.

    Parameters
    ----------
    %(_mvn_doc_default_callparams)s

    Returns
    -------
    h : scalar
        Entropy of the multivariate normal distribution

    Notes
    -----
    %(_mvn_doc_callparams_note)s

    """
    dim, mean, cov = self._process_parameters(None, mean, cov)
    _, logdet = np.linalg.slogdet(2 * np.pi * np.e * cov)
    return 0.5 * logdet
```
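The slogdet line implements h = 0.5 * log det(2*pi*e*Sigma). For a diagonal covariance this must agree with summing the one-dimensional entropies 0.5 * log(2*pi*e*sigma**2), which can be checked with NumPy alone, independent of scipy's parameter-processing plumbing:

```python
import numpy as np

cov = np.diag([1.0, 4.0])
_, logdet = np.linalg.slogdet(2 * np.pi * np.e * cov)
h = 0.5 * logdet

# Entropy is additive over independent coordinates.
h_sum = sum(0.5 * np.log(2 * np.pi * np.e * s2) for s2 in (1.0, 4.0))
print(h, h_sum)  # the two values agree
```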
Example 7
Project: lambda-packs   Author: ryfeus   File: test_io.py   License: MIT License

```python
def test_closing_fid(self):
    # Test that issue #1517 (too many opened files) remains closed
    # It might be a "weak" test since failed to get triggered on
    # e.g. Debian sid of 2012 Jul 05 but was reported to
    # trigger the failure on Ubuntu 10.04:
    # http://projects.scipy.org/numpy/ticket/1517#comment:2
    with temppath(suffix='.npz') as tmp:
        np.savez(tmp, data='LOVELY LOAD')
        # We need to check if the garbage collector can properly close
        # numpy npz file returned by np.load when their reference count
        # goes to zero.  Python 3 running in debug mode raises a
        # ResourceWarning when file closing is left to the garbage
        # collector, so we catch the warnings.  Because ResourceWarning
        # is unknown in Python < 3.x, we take the easy way out and
        # catch all warnings.
        with suppress_warnings() as sup:
            sup.filter(Warning)  # TODO: specify exact message
            for i in range(1, 1025):
                try:
                    np.load(tmp)["data"]
                except Exception as e:
                    msg = "Failed to load data from a file: %s" % e
                    raise AssertionError(msg)
```
Example 8
Project: lambda-packs   Author: ryfeus   File: test_io.py   License: MIT License

```python
def test_invalid_raise(self):
    # Test invalid raise
    data = ["1, 1, 1, 1, 1"] * 50
    for i in range(5):
        data[10 * i] = "2, 2, 2, 2 2"
    data.insert(0, "a, b, c, d, e")
    mdata = TextIO("\n".join(data))
    #
    kwargs = dict(delimiter=",", dtype=None, names=True)
    # XXX: is there a better way to get the return value of the
    # callable in assert_warns ?
    ret = {}

    def f(_ret={}):
        _ret['mtest'] = np.ndfromtxt(mdata, invalid_raise=False, **kwargs)
    assert_warns(ConversionWarning, f, _ret=ret)
    mtest = ret['mtest']
    assert_equal(len(mtest), 45)
    assert_equal(mtest, np.ones(45, dtype=[(_, int) for _ in 'abcde']))
    #
    mdata.seek(0)
    assert_raises(ValueError, np.ndfromtxt, mdata,
                  delimiter=",", names=True)
```
Example 9
Project: auto-alt-text-lambda-api   Author: abhisuri97   File: test_io.py   License: MIT License

```python
def test_closing_fid(self):
    # Test that issue #1517 (too many opened files) remains closed
    # It might be a "weak" test since failed to get triggered on
    # e.g. Debian sid of 2012 Jul 05 but was reported to
    # trigger the failure on Ubuntu 10.04:
    # http://projects.scipy.org/numpy/ticket/1517#comment:2
    with temppath(suffix='.npz') as tmp:
        np.savez(tmp, data='LOVELY LOAD')
        # We need to check if the garbage collector can properly close
        # numpy npz file returned by np.load when their reference count
        # goes to zero.  Python 3 running in debug mode raises a
        # ResourceWarning when file closing is left to the garbage
        # collector, so we catch the warnings.  Because ResourceWarning
        # is unknown in Python < 3.x, we take the easy way out and
        # catch all warnings.
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")
            for i in range(1, 1025):
                try:
                    np.load(tmp)["data"]
                except Exception as e:
                    msg = "Failed to load data from a file: %s" % e
                    raise AssertionError(msg)
```
Example 10
Project: auto-alt-text-lambda-api   Author: abhisuri97   File: test_io.py   License: MIT License

```python
def test_invalid_raise(self):
    # Test invalid raise
    data = ["1, 1, 1, 1, 1"] * 50
    for i in range(5):
        data[10 * i] = "2, 2, 2, 2 2"
    data.insert(0, "a, b, c, d, e")
    mdata = TextIO("\n".join(data))
    #
    kwargs = dict(delimiter=",", dtype=None, names=True)
    # XXX: is there a better way to get the return value of the
    # callable in assert_warns ?
    ret = {}

    def f(_ret={}):
        _ret['mtest'] = np.ndfromtxt(mdata, invalid_raise=False, **kwargs)
    assert_warns(ConversionWarning, f, _ret=ret)
    mtest = ret['mtest']
    assert_equal(len(mtest), 45)
    assert_equal(mtest, np.ones(45, dtype=[(_, int) for _ in 'abcde']))
    #
    mdata.seek(0)
    assert_raises(ValueError, np.ndfromtxt, mdata,
                  delimiter=",", names=True)
```
Example 11
Project: vnpy_crypto   Author: birforce   File: test_io.py   License: MIT License

```python
def test_closing_fid(self):
    # Test that issue #1517 (too many opened files) remains closed
    # It might be a "weak" test since failed to get triggered on
    # e.g. Debian sid of 2012 Jul 05 but was reported to
    # trigger the failure on Ubuntu 10.04:
    # http://projects.scipy.org/numpy/ticket/1517#comment:2
    with temppath(suffix='.npz') as tmp:
        np.savez(tmp, data='LOVELY LOAD')
        # We need to check if the garbage collector can properly close
        # numpy npz file returned by np.load when their reference count
        # goes to zero.  Python 3 running in debug mode raises a
        # ResourceWarning when file closing is left to the garbage
        # collector, so we catch the warnings.  Because ResourceWarning
        # is unknown in Python < 3.x, we take the easy way out and
        # catch all warnings.
        with suppress_warnings() as sup:
            sup.filter(Warning)  # TODO: specify exact message
            for i in range(1, 1025):
                try:
                    np.load(tmp)["data"]
                except Exception as e:
                    msg = "Failed to load data from a file: %s" % e
                    raise AssertionError(msg)
```
Example 12
Project: vnpy_crypto   Author: birforce   File: test_io.py   License: MIT License

```python
def test_invalid_raise(self):
    # Test invalid raise
    data = ["1, 1, 1, 1, 1"] * 50
    for i in range(5):
        data[10 * i] = "2, 2, 2, 2 2"
    data.insert(0, "a, b, c, d, e")
    mdata = TextIO("\n".join(data))
    #
    kwargs = dict(delimiter=",", dtype=None, names=True)
    # XXX: is there a better way to get the return value of the
    # callable in assert_warns ?
    ret = {}

    def f(_ret={}):
        _ret['mtest'] = np.ndfromtxt(mdata, invalid_raise=False, **kwargs)
    assert_warns(ConversionWarning, f, _ret=ret)
    mtest = ret['mtest']
    assert_equal(len(mtest), 45)
    assert_equal(mtest, np.ones(45, dtype=[(_, int) for _ in 'abcde']))
    #
    mdata.seek(0)
    assert_raises(ValueError, np.ndfromtxt, mdata,
                  delimiter=",", names=True)
```
Example 13
Project: vnpy_crypto   Author: birforce   File: infotheo.py   License: MIT License

```python
def gencrossentropy(px, py, pxpy, alpha=1, logbase=2, measure='T'):
    """
    Generalized cross-entropy measures.

    Parameters
    ----------
    px : array-like
        Discrete probability distribution of random variable X
    py : array-like
        Discrete probability distribution of random variable Y
    pxpy : 2d array-like, optional
        Joint probability distribution of X and Y.  If pxpy is None, X and Y
        are assumed to be independent.
    logbase : int or np.e, optional
        Default is 2 (bits)
    measure : str, optional
        The measure is the type of generalized cross-entropy desired. 'T' is
        the cross-entropy version of the Tsallis measure.  'CR' is Cressie-Read
        measure.

    """
```
Example 14
Project: garage   Author: rlworkgroup   File: diagonal_gaussian.py   License: MIT License

```python
def entropy_sym(self, dist_info_vars, name='entropy_sym'):
    """Symbolic entropy of a distribution.

    Args:
        dist_info_vars (dict): Symbolic parameters of a distribution.
        name (str): TensorFlow scope name.

    Returns:
        tf.Tensor: Symbolic entropy of the distribution.

    """
    with tf.name_scope(name):
        log_std_var = dist_info_vars['log_std']
        return tf.reduce_sum(log_std_var +
                             np.log(np.sqrt(2 * np.pi * np.e)),
                             axis=-1)
```
Example 15
Project: RaptorX-Contact   Author: j3xugit   File: DistanceUtils.py   License: GNU General Public License v3.0

```python
def CalcDistProb(data=None, bins=None, invalidDistanceSeparated=False):

    labelMatrices = [ ]
    for distm in data:
        #m, _, _ = DiscretizeDistMatrix(distm, subType=subType)
        m, _, _ = DiscretizeDistMatrix(distm, bins=bins, invalidDistanceSeparated=invalidDistanceSeparated)
        labelMatrices.append(m)

    ## need fix here
    #probs = CalcLabelProb( labelMatrices, config.responseProbDims['Discrete' + subType] )
    if invalidDistanceSeparated:
        probs = CalcLabelProb( labelMatrices, len(bins) + 1 )
    else:
        probs = CalcLabelProb( labelMatrices, len(bins) )

    return probs

## d needs to be positive, cannot be -1
## cutoffs is the distance boundary array
## return the largest index position such that cutoffs[position] <= d, i.e., d < cutoffs[position+1]
```
Example 16
Project: GCNet   Author: xvjiarui   File: balanced_l1_loss.py   License: Apache License 2.0

```python
def balanced_l1_loss(pred,
                     target,
                     beta=1.0,
                     alpha=0.5,
                     gamma=1.5,
                     reduction='mean'):
    assert beta > 0
    assert pred.size() == target.size() and target.numel() > 0

    diff = torch.abs(pred - target)
    b = np.e**(gamma / alpha) - 1
    loss = torch.where(
        diff < beta, alpha / b *
        (b * diff + 1) * torch.log(b * diff / beta + 1) - alpha * diff,
        gamma * diff + gamma / b - alpha * beta)

    return loss
```
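The choice b = e**(gamma/alpha) - 1 is what makes this loss continuous: at diff == beta the term log(b * diff / beta + 1) collapses to gamma/alpha, and the two torch.where branches coincide. A NumPy-only check of that seam (no torch required; branch names are ours, not the library's):

```python
import numpy as np

alpha, beta, gamma = 0.5, 1.0, 1.5
b = np.e ** (gamma / alpha) - 1  # chosen so that log(b + 1) == gamma / alpha

def branch_small(d):
    # branch used when diff < beta
    return alpha / b * (b * d + 1) * np.log(b * d / beta + 1) - alpha * d

def branch_large(d):
    # branch used when diff >= beta
    return gamma * d + gamma / b - alpha * beta

# The piecewise definition is continuous at the threshold d == beta.
print(branch_small(beta), branch_large(beta))
```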
Example 17
Project: mmdetection   Author: open-mmlab   File: balanced_l1_loss.py   License: Apache License 2.0

```python
def balanced_l1_loss(pred,
                     target,
                     beta=1.0,
                     alpha=0.5,
                     gamma=1.5,
                     reduction='mean'):
    """Calculate balanced L1 loss.

    Please see the `Libra R-CNN <https://arxiv.org/pdf/1904.02701.pdf>`_

    Args:
        pred (torch.Tensor): The prediction with shape (N, 4).
        target (torch.Tensor): The learning target of the prediction with
            shape (N, 4).
        beta (float): The loss is a piecewise function of prediction and target
            and ``beta`` serves as a threshold for the difference between the
            prediction and target. Defaults to 1.0.
        alpha (float): The denominator ``alpha`` in the balanced L1 loss.
            Defaults to 0.5.
        gamma (float): The ``gamma`` in the balanced L1 loss.
            Defaults to 1.5.
        reduction (str, optional): The method that reduces the loss to a
            scalar. Options are "none", "mean" and "sum".

    Returns:
        torch.Tensor: The calculated loss
    """
    assert beta > 0
    assert pred.size() == target.size() and target.numel() > 0

    diff = torch.abs(pred - target)
    b = np.e**(gamma / alpha) - 1
    loss = torch.where(
        diff < beta, alpha / b *
        (b * diff + 1) * torch.log(b * diff / beta + 1) - alpha * diff,
        gamma * diff + gamma / b - alpha * beta)

    return loss
```
Example 18
Project: neuropythy   Author: noahbenson   File: core.py   License: GNU Affero General Public License v3.0

```python
def part(f, ii=None, input_len=None):
    '''
    part(u, ii) for constant or constant potential u yields a constant-potential form of u[ii].
    part(f, ii) for potential function f yields a potential function g(x) that is equivalent to
    f(x)[ii].
    part(ii) is equivalent to part(identity, ii); i.e., part of the input parameters to the function.
    '''
    if ii is None: return PotentialPart(f, input_len=input_len)
    f = to_potential(f)
    if is_const_potential(f): return PotentialConstant(f.c[ii])
    else:                     return compose(PotentialPart(ii, input_len=input_len), f)
```
Example 19
Project: neuropythy   Author: noahbenson   File: core.py   License: GNU Affero General Public License v3.0

```python
def exp(x):
    x = to_potential(x)
    if is_const_potential(x): return PotentialConstant(np.exp(x.c))
    else:                     return ConstantPowerPotential(np.e, x)
```
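The fallback branch works because exp(x) is just the constant-base power e**x; numerically, np.e ** x and np.exp(x) agree to floating-point precision:

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 5)
# Raising the constant np.e to a power matches the dedicated exp routine.
print(np.allclose(np.e ** x, np.exp(x)))  # True
```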
Example 20
Project: neuropythy   Author: noahbenson   File: core.py   License: GNU Affero General Public License v3.0

```python
def gaussian(f=Ellipsis, mu=0, sigma=1, scale=1, invert=False, normalize=False):
    '''
    gaussian() yields a potential function f(x) that calculates a Gaussian function over x; the
    formula used is given below.
    gaussian(g) yields a function h(x) such that, if f(x) is yielded by gaussian(), h(x) = f(g(x)).

    The formula employed by the Gaussian function is as follows, with mu, sigma, and scale all being
    parameters that one can provide via optional arguments:
      scale * exp(-0.5 * ((x - mu) / sigma)**2)

    The following optional arguments may be given:
      * mu (default: 0) specifies the mean of the Gaussian.
      * sigma (default: 1) specifies the standard deviation (sigma) parameter of the Gaussian.
      * scale (default: 1) specifies the scale to use.
      * invert (default: False) specifies whether the Gaussian should be inverted. If inverted, then
        the formula, scale * exp(...), is replaced with scale * (1 - exp(...)).
      * normalize (default: False) specifies whether the result should be multiplied by the inverse
        of the area under the uninverted and unscaled curve; i.e., if normalize is True, the entire
        result is multiplied by 1/sqrt(2*pi*sigma**2).
    '''
    f = to_potential(f)
    F = exp(-0.5 * ((f - mu) / sigma)**2)
    if invert: F = 1 - F
    F = F * scale
    if normalize: F = F / (np.sqrt(2.0*np.pi) * sigma)
    return F
```
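For plain floats or arrays, the same arithmetic can be followed without the potential-function wrapper. The helper gaussian_value below is a hypothetical scalar mirror of the body above, not neuropythy code; with normalize=True and scale=1 its peak is the familiar normal-density maximum 1/sqrt(2*pi*sigma**2):

```python
import numpy as np

def gaussian_value(x, mu=0.0, sigma=1.0, scale=1.0, invert=False, normalize=False):
    # Same steps as gaussian() above, for plain floats/arrays.
    F = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    if invert:
        F = 1 - F
    F = F * scale
    if normalize:
        F = F / (np.sqrt(2.0 * np.pi) * sigma)
    return F

print(gaussian_value(0.0))                  # raw curve peaks at `scale` (here 1.0)
print(gaussian_value(0.0, normalize=True))  # about 0.3989, i.e. 1/sqrt(2*pi)
```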
Example 21
Project: lirpg   Author: Hwhitetooth   File: distributions.py   License: MIT License

```python
def entropy(self):
    return tf.reduce_sum(self.logstd + .5 * np.log(2.0 * np.pi * np.e), axis=-1)
```
Example 22
Project: pywr   Author: pywr   File: test_license.py   License: GNU General Public License v3.0

```python
def test_simple_model_with_exponential_license(simple_linear_model):
    m = simple_linear_model
    si = ScenarioIndex(0, np.array([0], dtype=np.int32))

    annual_total = 365
    # Exponential licence with max_value of e should give a hard constraint of 1.0 when on track
    lic = AnnualExponentialLicense(m, m.nodes["Input"], annual_total, np.e)
    # Apply licence to the model
    m.nodes["Input"].max_flow = lic
    m.nodes["Output"].max_flow = 10.0
    m.nodes["Output"].cost = -10.0
    m.setup()

    m.step()

    # Licence is a hard constraint of 1.0
    # timestepper.current is now end of the first day
    assert_allclose(m.nodes["Output"].flow, 1.0)
    # Check the constraint for the next timestep.
    assert_allclose(lic.value(m.timestepper._next, si), 1.0)

    # Now constrain the demand so that licence is not fully used
    m.nodes["Output"].max_flow = 0.5
    m.step()

    assert_allclose(m.nodes["Output"].flow, 0.5)
    # Check the constraint for the next timestep. The available amount should now be larger
    # due to the reduced use
    remaining = (annual_total - 1.5)
    assert_allclose(lic.value(m.timestepper._next, si), np.exp(-remaining / (365 - 2) + 1))

    # Unconstrain the demand
    m.nodes["Output"].max_flow = 10.0
    m.step()
    assert_allclose(m.nodes["Output"].flow, np.exp(-remaining / (365 - 2) + 1))
    # Licence should now be on track for an expected value of 1.0
    remaining -= np.exp(-remaining / (365 - 2) + 1)
    assert_allclose(lic.value(m.timestepper._next, si), np.exp(-remaining / (365 - 3) + 1))
```
Example 23
Project: pywr   Author: pywr   File: test_license.py   License: GNU General Public License v3.0

```python
def test_simple_model_with_hyperbola_license(simple_linear_model):
    m = simple_linear_model
    si = ScenarioIndex(0, np.array([0], dtype=np.int32))

    annual_total = 365
    # Hyperbola licence with value of 1.0 should give a hard constraint of 1.0 when on track
    lic = AnnualHyperbolaLicense(m, m.nodes["Input"], annual_total, 1.0)
    # Apply licence to the model
    m.nodes["Input"].max_flow = lic
    m.nodes["Output"].max_flow = 10.0
    m.nodes["Output"].cost = -10.0
    m.setup()

    m.step()

    # Licence is a hard constraint of 1.0
    # timestepper.current is now end of the first day
    assert_allclose(m.nodes["Output"].flow, 1.0)
    # Check the constraint for the next timestep.
    assert_allclose(lic.value(m.timestepper._next, si), 1.0)

    # Now constrain the demand so that licence is not fully used
    m.nodes["Output"].max_flow = 0.5
    m.step()

    assert_allclose(m.nodes["Output"].flow, 0.5)
    # Check the constraint for the next timestep. The available amount should now be larger
    # due to the reduced use
    remaining = (annual_total - 1.5)
    assert_allclose(lic.value(m.timestepper._next, si), (365 - 2) / remaining)

    # Unconstrain the demand
    m.nodes["Output"].max_flow = 10.0
    m.step()
    assert_allclose(m.nodes["Output"].flow, (365 - 2) / remaining)
    # Licence should now be on track for an expected value of 1.0
    remaining -= (365 - 2) / remaining
    assert_allclose(lic.value(m.timestepper._next, si), (365 - 3) / remaining)
```
Example 24
Project: HardRLWithYoutube   Author: MaxSobolMark   File: distributions.py   License: MIT License

```python
def entropy(self):
    return tf.reduce_sum(self.logstd + .5 * np.log(2.0 * np.pi * np.e), axis=-1)
```
Example 25
Project: AerialDetection   Author: dingjiansw101   File: losses.py   License: Apache License 2.0

```python
def balanced_l1_loss(pred,
                     target,
                     beta=1.0,
                     alpha=0.5,
                     gamma=1.5,
                     reduction='none'):
    assert beta > 0
    assert pred.size() == target.size() and target.numel() > 0

    diff = torch.abs(pred - target)
    b = np.e**(gamma / alpha) - 1
    loss = torch.where(
        diff < beta, alpha / b *
        (b * diff + 1) * torch.log(b * diff / beta + 1) - alpha * diff,
        gamma * diff + gamma / b - alpha * beta)

    reduction_enum = F._Reduction.get_enum(reduction)
    # none: 0, elementwise_mean: 1, sum: 2
    if reduction_enum == 0:
        return loss
    elif reduction_enum == 1:
        return loss.sum() / pred.numel()
    elif reduction_enum == 2:
        return loss.sum()

    return loss
```
Example 26
Project: Reinforcement_Learning_for_Traffic_Light_Control   Author: quantumiracle   File: distributions.py   License: Apache License 2.0

```python
def entropy(self):
    return tf.reduce_sum(self.logstd + .5 * np.log(2.0 * np.pi * np.e), axis=-1)
```
Example 27
Project: Reinforcement_Learning_for_Traffic_Light_Control   Author: quantumiracle   File: distributions.py   License: Apache License 2.0

```python
def entropy(self):
    return tf.reduce_sum(self.logstd + .5 * np.log(2.0 * np.pi * np.e), axis=-1)
```
Example 28
Project: Reinforcement_Learning_for_Traffic_Light_Control   Author: quantumiracle   File: distributions.py   License: Apache License 2.0

```python
def entropy(self):
    return tf.reduce_sum(self.logstd + .5 * np.log(2.0 * np.pi * np.e), axis=-1)
```
Example 29
Project: video-caption-openNMT.pytorch   Author: xiadingZ   File: ciderD_scorer.py   License: MIT License

```python
def __iadd__(self, other):
    '''add an instance (e.g., from another sentence).'''

    if type(other) is tuple:
        ## avoid creating new CiderScorer instances
        self.cook_append(other[0], other[1])
    else:
        self.ctest.extend(other.ctest)
        self.crefs.extend(other.crefs)

    return self
```
Example 30
Project: video-caption-openNMT.pytorch   Author: xiadingZ   File: cider_scorer.py   License: MIT License

```python
def __iadd__(self, other):
    '''add an instance (e.g., from another sentence).'''

    if type(other) is tuple:
        ## avoid creating new CiderScorer instances
        self.cook_append(other[0], other[1])
    else:
        self.ctest.extend(other.ctest)
        self.crefs.extend(other.crefs)

    return self
```