Computes the crossentropy loss between the labels and predictions.
Inherits From: Loss
```python
tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=False,
    ignore_class=None,
    reduction='sum_over_batch_size',
    name='sparse_categorical_crossentropy'
)
```
Use this crossentropy loss function when there are two or more label classes. Labels are expected to be provided as integers. If you want to provide labels using a one-hot representation, use the CategoricalCrossentropy loss instead. There should be `num_classes` floating point values per feature for y_pred and a single floating point value per feature for y_true.
In the snippet below, there is a single floating point value per example for y_true and num_classes floating point values per example for y_pred. The shape of y_true is [batch_size] and the shape of y_pred is [batch_size, num_classes].
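To see the two label formats side by side, here is a minimal sketch; the one-hot targets are hand-built for comparison and are not part of the API above:

```python
import keras

# Hypothetical toy targets; the one-hot row for class i has a 1 at index i.
y_true_sparse = [1, 2]
y_true_onehot = [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
y_pred = [[0.05, 0.95, 0.0], [0.1, 0.8, 0.1]]

sparse_loss = keras.losses.SparseCategoricalCrossentropy()(y_true_sparse, y_pred)
onehot_loss = keras.losses.CategoricalCrossentropy()(y_true_onehot, y_pred)
# Both calls compute the same crossentropy (~1.177); only the label
# encoding differs.
```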
Examples:
```python
y_true = [1, 2]
y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
# Using 'auto'/'sum_over_batch_size' reduction type.
scce = keras.losses.SparseCategoricalCrossentropy()
scce(y_true, y_pred)
1.177
```

```python
# Calling with 'sample_weight'.
scce(y_true, y_pred, sample_weight=np.array([0.3, 0.7]))
0.814
```

```python
# Using 'sum' reduction type.
scce = keras.losses.SparseCategoricalCrossentropy(reduction="sum")
scce(y_true, y_pred)
2.354
```

```python
# Using 'none' reduction type.
scce = keras.losses.SparseCategoricalCrossentropy(reduction=None)
scce(y_true, y_pred)
array([0.0513, 2.303], dtype=float32)
```
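The constructor arguments from_logits and ignore_class are not exercised above. The sketch below uses illustrative values only: with from_logits=True, y_pred is taken as raw, unnormalized scores; with ignore_class set, labels equal to that value are skipped when computing the loss:

```python
import keras

y_true = [1, 2]

# from_logits=True: y_pred holds raw, unnormalized scores and the
# softmax is applied inside the loss.
logits = [[2.0, 6.0, 0.5], [0.2, 1.5, 3.1]]
scce_logits = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_from_logits = scce_logits(y_true, logits)

# ignore_class=-1: entries labeled -1 (e.g. padding or a "void" class
# in segmentation maps) are excluded from the loss computation.
y_pred = [[0.05, 0.95, 0.0], [0.1, 0.8, 0.1]]
y_true_padded = [1, -1]
scce_ignore = keras.losses.SparseCategoricalCrossentropy(ignore_class=-1)
loss_ignoring_void = scce_ignore(y_true_padded, y_pred)
```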
Usage with the compile() API:
```python
model.compile(optimizer='sgd',
              loss=keras.losses.SparseCategoricalCrossentropy())
```
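A minimal end-to-end sketch follows; the model architecture and random data here are hypothetical, chosen only to show the expected integer-label shape:

```python
import numpy as np
import keras

# Hypothetical 10-class classifier on 4-dimensional inputs.
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="sgd",
              loss=keras.losses.SparseCategoricalCrossentropy())

# Integer labels of shape [batch_size]; no one-hot encoding needed.
x = np.random.random((32, 4)).astype("float32")
y = np.random.randint(0, 10, size=(32,))
model.fit(x, y, epochs=1)
```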
Methods

call

```python
call(
    y_true, y_pred
)
```

from_config
```python
@classmethod
from_config(
    config
)
```
get_config
```python
get_config()
```

__call__
```python
__call__(
    y_true, y_pred, sample_weight=None
)
```

Call self as a function.
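Since the method listing gives no usage, here is a small sketch of the get_config / from_config round trip, standard Keras loss serialization behavior with illustrative values:

```python
import keras

scce = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
config = scce.get_config()  # plain dict of constructor arguments
restored = keras.losses.SparseCategoricalCrossentropy.from_config(config)
# `restored` is a new instance configured identically to `scce`.
```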