Losses
absolute_difference
absolute_difference(weights=1.0, name='AbsoluteDifference', scope=None, collect=True)
Adds an Absolute Difference loss to the training procedure.
`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a `Tensor` of shape `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.

Args:
 weights: Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension).
 name: operation name.
 scope: operation scope.
 collect: whether to collect this metric under the metric collection.

Returns: A scalar `Tensor` representing the loss value.
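As an illustration of the `weights` behavior described above, here is a minimal NumPy sketch; it mirrors the broadcasting rules, it is not a call into the library itself:

```python
import numpy as np

predictions = np.array([[0.2, 0.8], [0.5, 0.5]])
labels      = np.array([[0.0, 1.0], [1.0, 0.0]])

# Element-wise absolute differences, shape [batch_size, num_outputs].
per_element = np.abs(predictions - labels)

# Scalar weight: the final loss is simply scaled by the given value.
loss_scalar = 2.0 * per_element.mean()

# Per-sample weights of shape [batch_size]: each sample's loss is
# rescaled by the corresponding element before averaging.
sample_weights = np.array([1.0, 0.5])
loss_weighted = (per_element.mean(axis=1) * sample_weights).mean()
```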
log_loss
log_loss(weights=1.0, epsilon=1e-07, name='LogLoss', scope=None, collect=True)
mean_squared_error
mean_squared_error(weights=1.0, name='MeanSquaredError', scope=None, collect=True)
Computes Mean Square Loss.

Args:
 weights: Coefficients for the loss, a scalar.
 scope: scope to add the op to.
 name: name of the op.
 collect: add to losses collection.

Returns: A scalar `Tensor` representing the loss value.
Raises:
 ValueError: If `predictions` shape doesn't match `labels` shape, or `weights` is `None`.
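For reference, a rough NumPy sketch of the quantity being computed (not the library call itself):

```python
import numpy as np

predictions = np.array([0.5, 2.0, 3.5])
labels      = np.array([1.0, 2.0, 3.0])
weights     = 1.0  # scalar coefficient, as in the signature above

# Mean of the squared element-wise differences, scaled by `weights`.
mse = weights * np.mean((predictions - labels) ** 2)
```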
huber_loss
huber_loss(weights=1.0, clip=0.0, name='HuberLoss', scope=None, collect=True)
Computes Huber Loss for DQN.

Args:
 weights: Coefficients for the loss, a scalar.
 scope: scope to add the op to.
 name: name of the op.
 collect: add to losses collection.

Returns: A scalar `Tensor` representing the loss value.
Raises:
 ValueError: If `predictions` shape doesn't match `labels` shape, or `weights` is `None`.
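For reference, a minimal NumPy sketch of the standard Huber formulation, assuming `clip` plays the role of the delta threshold at which the loss switches from quadratic to linear; this illustrates the general formula, not the library's exact implementation:

```python
import numpy as np

def huber(predictions, labels, delta=1.0):
    """Quadratic for small errors, linear for large ones."""
    error = np.abs(predictions - labels)
    quadratic = 0.5 * error ** 2
    linear = delta * (error - 0.5 * delta)
    return np.mean(np.where(error <= delta, quadratic, linear))

print(huber(np.array([0.5, 3.0]), np.array([0.0, 0.0]), delta=1.0))
```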
clipped_delta_loss
clipped_delta_loss(weights=1.0, clip_value_min=-1.0, clip_value_max=1.0, name='HuberLoss', scope=None, collect=True)
Computes clipped delta Loss for DQN.

Args:
 weights: Coefficients for the loss, a scalar.
 scope: scope to add the op to.
 name: name of the op.
 collect: add to losses collection.

Returns: A scalar `Tensor` representing the loss value.
Raises:
 ValueError: If `predictions` shape doesn't match `labels` shape, or `weights` is `None`.
softmax_cross_entropy
softmax_cross_entropy(weights=1.0, label_smoothing=0, name='SoftmaxCrossEntropy', scope=None, collect=True)
Computes Softmax Cross entropy (softmax categorical cross entropy).
Computes softmax cross entropy between y_pred (logits) and y_true (labels).
Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.
**WARNING:** This op expects unscaled logits, since it performs a `softmax` on `y_pred` internally for efficiency. Do not call this op with the output of `softmax`, as it will produce incorrect results.
`y_pred` and `y_true` must have the same shape `[batch_size, num_classes]` and the same dtype (either `float32` or `float64`). It is also required that `y_true` (labels) are binary arrays (for example, class 2 out of a total of 5 different classes will be defined as `[0., 1., 0., 0., 0.]`).

Args:
 weights: Coefficients for the loss, a scalar.
 label_smoothing: If greater than `0` then smooth the labels.
 scope: scope to add the op to.
 name: name of the op.
 collect: add to losses collection.

Returns: A scalar `Tensor` representing the loss value.
Raises:
 ValueError: If `predictions` shape doesn't match `labels` shape, or `weights` is `None`.
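A small NumPy sketch of the computation described above, with `y_pred` given as raw, unscaled logits and `y_true` as one-hot rows (illustrative values only, not a call into the library):

```python
import numpy as np

# Raw, unscaled logits -- do NOT apply softmax before passing them in.
y_pred = np.array([[2.0, 0.5, -1.0],
                   [0.1, 1.5,  0.3]])
# One-hot labels: exactly one class per row.
y_true = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])

# Numerically stable softmax followed by categorical cross entropy.
shifted = y_pred - y_pred.max(axis=1, keepdims=True)
softmax = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
loss = -np.mean(np.sum(y_true * np.log(softmax), axis=1))
```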
sigmoid_cross_entropy
sigmoid_cross_entropy(weights=1.0, label_smoothing=0, name='SigmoidCrossEntropy', scope=None, collect=True)
Computes Sigmoid cross entropy (binary cross entropy).
Computes sigmoid cross entropy between y_pred (logits) and y_true (labels).
Measures the probability error in discrete classification tasks in which each class is independent and not mutually exclusive. For instance, one could perform multilabel classification where a picture can contain both an elephant and a dog at the same time.
For brevity, let `x = logits`, `z = targets`. The logistic loss is

`x - x * z + log(1 + exp(-x))`

To ensure stability and avoid overflow, the implementation uses

`max(x, 0) - x * z + log(1 + exp(-abs(x)))`

`y_pred` and `y_true` must have the same type and shape.

Args:
 weights: Coefficients for the loss, a scalar.
 label_smoothing: If greater than `0` then smooth the labels.
 scope: scope to add the op to.
 name: name of the op.
 collect: add to losses collection.

Returns: A scalar `Tensor` representing the loss value.
Raises:
 ValueError: If `predictions` shape doesn't match `labels` shape, or `weights` is `None`.
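A quick NumPy check of the two logistic-loss formulations given above; they are equivalent, and the second avoids computing `exp(-x)` for very negative `x` (illustrative only):

```python
import numpy as np

x = np.array([-3.0, -1.0, 0.0, 2.0])   # logits
z = np.array([ 0.0,  1.0, 1.0, 0.0])   # targets

# Direct form: x - x*z + log(1 + exp(-x)); exp(-x) overflows for very
# negative x.
direct = x - x * z + np.log(1.0 + np.exp(-x))

# Stable form used by the implementation; agrees with the direct form
# wherever the latter does not overflow.
stable = np.maximum(x, 0.0) - x * z + np.log(1.0 + np.exp(-np.abs(x)))

assert np.allclose(direct, stable)
```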
hinge_loss
hinge_loss(weights=1.0, name='HingeLoss', scope=None, collect=True)
Hinge Loss.

Args:
 weights: Coefficients for the loss, a scalar.
 name: name of the op.
 scope: The scope for the operations performed in computing the loss.
 collect: add to losses collection.

Returns: A scalar `Tensor` representing the loss value.
Raises:
 ValueError: If `predictions` shape doesn't match `labels` shape, or `weights` is `None`.
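For reference, a minimal sketch of the usual hinge formulation, `mean(max(0, 1 - labels * logits))`, assuming labels in `{-1, 1}` (the implementation may instead accept `{0, 1}` labels and convert them internally; that detail is not documented here):

```python
import numpy as np

logits = np.array([ 0.8, -0.3,  2.0])
labels = np.array([ 1.0,  1.0, -1.0])   # assumed to be in {-1, 1}

hinge = np.mean(np.maximum(0.0, 1.0 - labels * logits))
```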
cosine_distance
cosine_distance(dim, weights=1.0, name='CosineDistance', scope=None, collect=True)
Adds a cosine-distance loss to the training procedure.
Note that the function assumes that `predictions` and `labels` are already unit-normalized.

WARNING: `weights` also supports dimensions of 1, but the broadcasting does not work as advertised: you'll wind up with a weighted sum instead of a weighted mean for any but the last dimension. This will be cleaned up soon, so please do not rely on the current behavior for anything but the shapes documented for `weights` below.
Args:
 dim: The dimension along which the cosine distance is computed.
 weights: Coefficients for the loss, a scalar.
 name: name of the op.
 scope: The scope for the operations performed in computing the loss.
 collect: add to losses collection.

Returns: A scalar `Tensor` representing the loss value.
Raises:
 ValueError: If `predictions` shape doesn't match `labels` shape, or `weights` is `None`.
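Since the function assumes unit-normalized inputs, normalize `predictions` and `labels` first. A rough NumPy sketch of the distance computed along the last axis (illustrative only, not the library call):

```python
import numpy as np

predictions = np.array([[1.0, 2.0, 2.0]])
labels      = np.array([[0.0, 3.0, 4.0]])

# Unit-normalize along the last axis before computing the distance.
predictions /= np.linalg.norm(predictions, axis=-1, keepdims=True)
labels      /= np.linalg.norm(labels, axis=-1, keepdims=True)

# Cosine distance = 1 - cosine similarity, reduced over the batch.
cosine_distance = np.mean(1.0 - np.sum(predictions * labels, axis=-1))
```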
kullback_leibler_divergence
kullback_leibler_divergence(weights=1.0, name='KullbackLeiberDivergence', scope=None, collect=False)
Adds a Kullback-Leibler divergence loss to the training procedure.

Args:
 name: name of the op.
 scope: The scope for the operations performed in computing the loss.
 collect: add to losses collection.

Returns: A scalar `Tensor` representing the loss value.
Raises:
 ValueError: If `predictions` shape doesn't match `labels` shape, or `weights` is `None`.
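For reference, a small NumPy sketch of the standard KL divergence, `sum(p * log(p / q))`, with values clipped to avoid `log(0)`; this illustrates the formula, not the library's exact implementation:

```python
import numpy as np

p = np.array([0.1, 0.6, 0.3])   # "true" distribution (labels)
q = np.array([0.2, 0.5, 0.3])   # predicted distribution

eps = 1e-7                      # guard against log(0) / division by zero
p = np.clip(p, eps, 1.0)
q = np.clip(q, eps, 1.0)

kl = np.sum(p * np.log(p / q))
```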
poisson_loss
poisson_loss(weights=1.0, name='PoissonLoss', scope=None, collect=False)
Adds a Poisson loss to the training procedure.

Args:
 name: name of the op.
 scope: The scope for the operations performed in computing the loss.
 collect: add to losses collection.

Returns: A scalar `Tensor` representing the loss value.
Raises:
 ValueError: If `predictions` shape doesn't match `labels` shape, or `weights` is `None`.
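A minimal NumPy sketch of the usual Poisson loss, `mean(predictions - labels * log(predictions))`, as used in Keras-style implementations (the exact form used here is an assumption):

```python
import numpy as np

predictions = np.array([1.5, 0.5, 2.0])   # predicted rates (must be > 0)
labels      = np.array([1.0, 0.0, 3.0])   # observed counts

eps = 1e-7  # guard against log(0)
poisson = np.mean(predictions - labels * np.log(predictions + eps))
```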