Hello,

I frequently run into the same issue: I have a model with a single output and would like to use multiple weighted losses. This is quite common today for neural vocoders. As far as I can see, this pattern is problematic in Keras, because the compile function requires as many losses as outputs. So whenever I vary the number of losses, I need to change the number of outputs.
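To illustrate the mismatch (model is a single-output model; spectral_loss and waveform_loss are hypothetical loss functions):

# single-output model: compile accepts exactly one loss
model.compile(optimizer="adam", loss=spectral_loss)

# what I would like for the same single output; as far as I can tell this
# fails, because the list of losses is matched against the model's outputs
model.compile(optimizer="adam",
              loss=[spectral_loss, waveform_loss],
              loss_weights=[1.0, 0.5])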

An alternative is to create a composed loss that internally applies the weights and computes the combined loss. The problem is that I then no longer see the values of the individual losses as metrics.

My question here is: Am I missing something in the API, or is this common use case not supported at all? In the latter case, this would be a feature request.

Many thanks for your help.

Comment From: edge7

Hi, in that case I usually use a combined loss and then add the individual losses to the metrics to check their values, something like:

final_model.compile(
    optimizer=optimizer,
    loss=get_segm_loss,
    metrics=[
        custom_iou_metric,
        metric_cont,
        spatial_loss,
        callable_generalized_cross_loss,
    ],
)

where get_segm_loss is the combined loss, and the metrics include the individual losses that together make up get_segm_loss. Hope it makes sense.
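For concreteness, the combined loss could be a weighted sum of the parts, e.g. (the exact composition and weights here are just an assumption):

def get_segm_loss(y_true, y_pred):
    # weighted sum of the individual losses that are also tracked as metrics
    return (spatial_loss(y_true, y_pred)
            + 0.5 * callable_generalized_cross_loss(y_true, y_pred))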

Comment From: roebel

Thanks for the comment!

I had thought about such a solution. Unfortunately, it involves calculating all losses twice, and some of my losses are a bit costly: for neural vocoders this may mean computing the mel spectrogram of the synthesized signal.

But I think this can be done better if the combined loss internally computes both the loss and the metrics, and exposes those metrics so that they can be picked up via the model's metrics property. The loss would look like this:

import keras
from dataclasses import dataclass
from typing import List, Union


@dataclass
class LossCfg:
    # assumed minimal config holder (not shown in the original):
    # a loss (instance or serialized config) and its weight
    loss: Union[keras.losses.Loss, dict, str]
    weight: float = 1.0


class MultiLoss(keras.losses.Loss):
    def __init__(self, loss_specs: List[LossCfg]):
        super().__init__(reduction="sum")
        # deserialize losses given as configs, keep Loss instances as they are
        self.losses = [
            lc.loss if isinstance(lc.loss, keras.losses.Loss)
            else keras.losses.deserialize(lc.loss)
            for lc in loss_specs
        ]
        # one Mean metric per partial loss, so the parts stay observable
        self.metrics = [keras.metrics.Mean(name=ll.name) for ll in self.losses]
        self.weights = [lc.weight for lc in loss_specs]

    def call(self, y_true, y_pred):
        loss = 0.0
        for lf, lw, ml in zip(self.losses, self.weights, self.metrics):
            partial_loss = lf(y_true, y_pred)
            ml.update_state(partial_loss)  # record the individual value
            loss += lw * partial_loss
        return loss

I would then need to instantiate the loss, store it somewhere, and override the metrics property of the model so that it also gathers the metrics from the loss.
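A minimal sketch of such an override, assuming a subclassed model that holds a reference to the MultiLoss instance (MultiLossModel is just an illustrative name):

class MultiLossModel(keras.Model):
    def __init__(self, *args, multi_loss=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.multi_loss = multi_loss

    @property
    def metrics(self):
        # expose the loss-internal metrics next to the compiled ones so
        # that they get reset between epochs and show up in the logs
        extra = list(self.multi_loss.metrics) if self.multi_loss else []
        return super().metrics + extra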

What is unfortunate in that case is that these metrics do not go through the official CompiledMetrics container, which means, as far as I can see, that I would have to handle training in distributed environments (multiple GPUs) myself.

I might fix this by not using metrics.Mean directly but a derived class that overrides the update_state method so that it takes an additional argument ignored:

class MeanParts(keras.metrics.Mean):
    ...
    def update_state(self, values, sample_weight=None, ignored=True):
        # by default the call is ignored (e.g. when coming from
        # CompiledMetrics); MultiLoss passes ignored=False to record the value
        if not ignored:
            super().update_state(values, sample_weight)

so that I can trigger the update from within the MultiLoss but ignore, by default, the calls issued by the CompiledMetrics instance. I could then add the metrics I retrieve from the loss to the list of metrics passed to the compile method. This looks really nice. I'll try it out.
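As a sketch (loss_specs as above, with MultiLoss now building MeanParts instead of Mean instances):

multi_loss = MultiLoss(loss_specs)
model.compile(
    optimizer="adam",
    loss=multi_loss,
    # calls from CompiledMetrics use the default ignored=True and are
    # no-ops; only MultiLoss updates these metrics with ignored=False
    metrics=list(multi_loss.metrics),
)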

Thanks again for your comment.

Comment From: roebel

For info: it works nicely with minor changes to the MeanParts metric.

class MeanParts(keras.metrics.Mean):
    ...
    def mean_update_state(self, values, sample_weight=None):
        # explicit entry point used by MultiLoss to record the partial loss
        super().update_state(values, sample_weight)

    def update_state(self, *args, **kwargs):
        # no-op: silently ignore the calls from the CompiledMetrics container
        pass
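The corresponding change in MultiLoss.call is then to record the partial losses through the explicit entry point, i.e. ml.mean_update_state(partial_loss) instead of ml.update_state(partial_loss), while the calls coming from the CompiledMetrics container simply fall through to the no-op.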

Thanks again @edge7
