Losses

absolute_difference

absolute_difference(weights=1.0, name='AbsoluteDifference', scope=None, collect=True)

Adds an Absolute Difference loss to the training procedure.

weights acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weights is a Tensor of shape [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the weights vector. If the shape of weights matches the shape of predictions, then the loss of each measurable element of predictions is scaled by the corresponding value of weights.

  • Args:

    • weights: Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding losses dimension).
    • name: operation name.
    • scope: operation scope.
    • collect: whether to add this loss to the losses collection.
  • Returns: A scalar Tensor representing the loss value.
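
To make the weighting semantics concrete, here is a rough numpy sketch of the three cases described above (scalar, per-sample, and per-element weights). The values are illustrative, and the library's exact reduction (for example, dividing by the sum of the weights rather than taking a plain mean) may differ:

    import numpy as np

    predictions = np.array([[0.2, 0.8], [0.6, 0.4]])
    labels = np.array([[0.0, 1.0], [1.0, 0.0]])
    per_element = np.abs(predictions - labels)

    # Scalar weights: the loss is simply scaled by the given value.
    loss_scalar = 2.0 * per_element.mean()

    # weights of shape [batch_size]: each sample's loss is rescaled.
    sample_weights = np.array([1.0, 0.5])
    loss_per_sample = (per_element * sample_weights[:, None]).mean()

    # weights with the same shape as predictions: each element is rescaled.
    element_weights = np.ones_like(predictions)
    loss_per_element = (per_element * element_weights).mean()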


log_loss

log_loss(weights=1.0, epsilon=1e-07, name='LogLoss', scope=None, collect=True)

Adds a Log Loss term to the training procedure. The epsilon argument guards the logarithms against zero-valued predictions.
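
As a sketch of what epsilon does, the usual log loss computation guards both logarithms against zero-valued inputs; this numpy version illustrates the formula and is not the library's exact implementation:

    import numpy as np

    def log_loss(y_true, y_pred, epsilon=1e-07):
        # epsilon keeps log() away from zero-valued predictions.
        return np.mean(-y_true * np.log(y_pred + epsilon)
                       - (1.0 - y_true) * np.log(1.0 - y_pred + epsilon))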

mean_squared_error

mean_squared_error(weights=1.0, name='MeanSquaredError', scope=None, collect=True)

Computes the Mean Squared Error loss.

  • Args:

    • weights: Coefficients for the loss, as a scalar.
    • scope: scope to add the op to.
    • name: name of the op.
    • collect: add to losses collection.
  • Returns: A scalar Tensor representing the loss value.

  • Raises:

    • ValueError: If predictions shape doesn't match labels shape, or weights is None.
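
For reference, a minimal numpy equivalent of the scalar-weighted case, where weights simply scales the mean:

    import numpy as np

    def mean_squared_error(y_true, y_pred, weights=1.0):
        return weights * np.mean(np.square(y_pred - y_true))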

huber_loss

huber_loss(weights=1.0, clip=0.0, name='HuberLoss', scope=None, collect=True)

Computes Huber Loss for DQN.

References: the Wikipedia article on the Huber loss; DeepMind's DQN work.

  • Args:

    • weights: Coefficients for the loss, as a scalar.
    • scope: scope to add the op to.
    • name: name of the op.
    • collect: add to losses collection.
  • Returns: A scalar Tensor representing the loss value.

  • Raises:

    • ValueError: If predictions shape doesn't match labels shape, or weights is None.
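
A minimal sketch of the standard Huber formulation used in DQN-style training: quadratic for small errors, linear beyond the clip threshold. Whether clip=0.0 disables clipping in this implementation is an assumption; treat this as an illustration of the formula only:

    import numpy as np

    def huber_loss(y_true, y_pred, clip=1.0):
        error = np.abs(y_pred - y_true)
        quadratic = np.minimum(error, clip)   # |e| <= clip: quadratic region
        linear = error - quadratic            # |e| >  clip: linear region
        return np.mean(0.5 * quadratic ** 2 + clip * linear)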

clipped_delta_loss

clipped_delta_loss(weights=1.0, clip_value_min=-1.0, clip_value_max=1.0, name='HuberLoss', scope=None, collect=True)

Computes the clipped-delta loss for DQN.

References: the Wikipedia article on the Huber loss; DeepMind's DQN work.

  • Args:

    • weights: Coefficients for the loss, as a scalar.
    • scope: scope to add the op to.
    • name: name of the op.
    • collect: add to losses collection.
  • Returns: A scalar Tensor representing the loss value.

  • Raises:

    • ValueError: If predictions shape doesn't match labels shape, or weights is None.
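
The core of the clipped-delta idea is elementwise clipping of the prediction error before it enters the loss; a sketch, with the final reduction left as an assumption:

    import numpy as np

    def clipped_delta(y_true, y_pred, clip_value_min=-1.0, clip_value_max=1.0):
        delta = np.clip(y_pred - y_true, clip_value_min, clip_value_max)
        # How the clipped delta is reduced to a scalar depends on the
        # implementation; mean(|delta|) here is only illustrative.
        return np.mean(np.abs(delta))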

softmax_cross_entropy

softmax_cross_entropy(weights=1.0, label_smoothing=0, name='SoftmaxCrossEntropy', scope=None, collect=True)

Computes softmax cross entropy (softmax categorical cross entropy).

Computes softmax cross entropy between y_pred (logits) and y_true (labels).

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

  • **WARNING:** This op expects unscaled logits, since it performs a softmax on y_pred internally for efficiency. Do not call this op with the output of softmax, as it will produce incorrect results.

y_pred and y_true must have the same shape [batch_size, num_classes] and the same dtype (either float32 or float64). y_true (labels) must also consist of binary one-hot arrays; for example, class 2 out of 5 classes is encoded as [0., 1., 0., 0., 0.].

  • Args:

    • weights: Coefficients for the loss, as a scalar.
    • label_smoothing: If greater than 0 then smooth the labels.
    • scope: scope to add the op to.
    • name: name of the op.
    • collect: add to losses collection.
  • Returns: A scalar Tensor representing the loss value.

  • Raises:

    • ValueError: If predictions shape doesn't match labels shape, or weights is None.
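
The warning above matters in practice: the op applies the softmax itself, so it must receive raw logits. A numerically stable numpy sketch of the computation (illustrative, not the library's code):

    import numpy as np

    def softmax_cross_entropy(y_true, logits):
        # Stable log-softmax on the unscaled logits.
        z = logits - logits.max(axis=-1, keepdims=True)
        log_softmax = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
        # One-hot labels pick out the log-probability of the true class.
        return -np.mean(np.sum(y_true * log_softmax, axis=-1))

    logits = np.array([[2.0, 0.5, -1.0]])   # raw scores, NOT softmax output
    y_true = np.array([[1.0, 0.0, 0.0]])    # one-hot labels
    loss = softmax_cross_entropy(y_true, logits)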

sigmoid_cross_entropy

sigmoid_cross_entropy(weights=1.0, label_smoothing=0, name='SigmoidCrossEntropy', scope=None, collect=True)

Computes sigmoid cross entropy (binary cross entropy).

Computes sigmoid cross entropy between y_pred (logits) and y_true (labels).

Measures the probability error in discrete classification tasks in which each class is independent and not mutually exclusive. For instance, one could perform multilabel classification where a picture can contain both an elephant and a dog at the same time.

For brevity, let x = logits, z = targets. The logistic loss is

x - x * z + log(1 + exp(-x))

To ensure stability and avoid overflow, the implementation uses

max(x, 0) - x * z + log(1 + exp(-abs(x)))

y_pred and y_true must have the same type and shape.

  • Args:

    • weights: Coefficients for the loss, as a scalar.
    • label_smoothing: If greater than 0 then smooth the labels.
    • scope: scope to add the op to.
    • name: name of the op.
    • collect: add to losses collection.
  • Returns: A scalar Tensor representing the loss value.

  • Raises:

    • ValueError: If predictions shape doesn't match labels shape, or weights is None.
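
The two formulas above are algebraically identical, but only the second is safe for large negative logits, where exp(-x) overflows. A small numpy check:

    import numpy as np

    x = np.array([-1000.0, 0.0, 1000.0])   # logits
    z = np.array([0.0, 1.0, 1.0])          # targets

    # Naive form: exp(-x) overflows to inf for x = -1000.
    naive = x - x * z + np.log1p(np.exp(-x))

    # Stable form: exp(-|x|) never overflows; gives the correct values.
    stable = np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))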

hinge_loss

hinge_loss(weights=1.0, name='HingeLoss', scope=None, collect=True)

Adds a hinge loss to the training procedure.

  • Args:

    • weights: Coefficients for the loss, as a scalar.
    • name: name of the op.
    • scope: The scope for the operations performed in computing the loss.
    • collect: add to losses collection.
  • Returns: A scalar Tensor representing the loss value.

  • Raises:

    • ValueError: If predictions shape doesn't match labels shape, or weights is None.
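
For reference, the standard hinge formulation this name suggests is mean(max(0, 1 - y * y_pred)) with labels in {-1, +1}; whether the implementation instead expects {0, 1} labels and converts them internally is an assumption to verify:

    import numpy as np

    def hinge_loss(y_true, y_pred):
        # y_true in {-1, +1}; {0, 1} labels would be mapped via 2*y - 1.
        return np.mean(np.maximum(0.0, 1.0 - y_true * y_pred))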

cosine_distance

cosine_distance(dim, weights=1.0, name='CosineDistance', scope=None, collect=True)

Adds a cosine-distance loss to the training procedure.

Note that the function assumes that predictions and labels are already unit-normalized.

  • WARNING: weights also supports dimensions of 1, but the broadcasting does not work as advertised, you'll wind up with weighted sum instead of weighted mean for any but the last dimension. This will be cleaned up soon, so please do not rely on the current behavior for anything but the shapes documented for weights below.

  • Args:

    • dim: The dimension along which the cosine distance is computed.
    • weights: Coefficients for the loss, as a scalar.
    • name: name of the op.
    • scope: The scope for the operations performed in computing the loss.
    • collect: add to losses collection.
  • Returns: A scalar Tensor representing the loss value.

  • Raises:

    • ValueError: If predictions shape doesn't match labels shape, or weights is None.
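
Since the inputs are assumed unit-normalized, the cosine distance reduces to one minus the dot product along dim; a minimal sketch:

    import numpy as np

    def cosine_distance(labels, predictions, dim):
        # Both inputs must already be unit-normalized along `dim`.
        return np.mean(1.0 - np.sum(labels * predictions, axis=dim))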

kullback_leibler_divergence

kullback_leibler_divergence(weights=1.0, name='KullbackLeiberDivergence', scope=None, collect=False)

Adds a Kullback-Leibler divergence loss to the training procedure.

  • Args:

    • weights: Coefficients for the loss, as a scalar.
    • name: name of the op.
    • scope: The scope for the operations performed in computing the loss.
    • collect: add to losses collection.
  • Returns: A scalar Tensor representing the loss value.

  • Raises:

    • ValueError: If predictions shape doesn't match labels shape, or weights is None.
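
A minimal numpy sketch of the usual KL divergence formula, sum(p * log(p / q)); the epsilon clipping is an assumption added here to avoid log(0), not documented library behavior:

    import numpy as np

    def kl_divergence(y_true, y_pred, epsilon=1e-07):
        p = np.clip(y_true, epsilon, 1.0)
        q = np.clip(y_pred, epsilon, 1.0)
        return np.mean(np.sum(p * np.log(p / q), axis=-1))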

poisson_loss

poisson_loss(weights=1.0, name='PoissonLoss', scope=None, collect=False)

Adds a Poisson loss to the training procedure.

  • Args:

    • weights: Coefficients for the loss, as a scalar.
    • name: name of the op.
    • scope: The scope for the operations performed in computing the loss.
    • collect: add to losses collection.
  • Returns: A scalar Tensor representing the loss value.

  • Raises:

    • ValueError: If predictions shape doesn't match labels shape, or weights is None.
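
A minimal numpy sketch of the conventional Poisson loss, mean(y_pred - y_true * log(y_pred)); the epsilon guard on the logarithm is an assumption, not documented library behavior:

    import numpy as np

    def poisson_loss(y_true, y_pred, epsilon=1e-07):
        return np.mean(y_pred - y_true * np.log(y_pred + epsilon))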