# flambe.metric¶

## Package Contents¶

class flambe.metric.Metric[source]

Base Metric interface.

Objects implementing this interface should take in a sequence of examples and provide as output a processed list of the same size.

compute(self, pred: torch.Tensor, target: torch.Tensor)

Computes the metric over the given prediction and target.

Parameters:

- pred (torch.Tensor) – The model predictions
- target (torch.Tensor) – The ground truth targets

Returns: The computed metric

Return type: torch.Tensor
__call__(self, *args, **kwargs)

Makes the Metric a callable.

__str__(self)

Return the name of the Metric (for use in logging).
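The interface above can be illustrated with a minimal stand-in. The `Metric` base class below is a hypothetical pure-Python stub, not the real flambe implementation (which operates on torch tensors); it is shown only to make the subclassing pattern concrete, and `MeanAbsoluteError` is an invented example metric:

```python
class Metric:
    """Stand-in for flambe.metric.Metric, illustrating the interface."""

    def compute(self, pred, target):
        raise NotImplementedError

    def __call__(self, *args, **kwargs):
        # Makes the metric a callable, delegating to compute
        return self.compute(*args, **kwargs)

    def __str__(self):
        # Name of the metric, for use in logging
        return type(self).__name__


class MeanAbsoluteError(Metric):
    """Example metric: mean absolute error over two sequences."""

    def compute(self, pred, target):
        return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)


metric = MeanAbsoluteError()
print(str(metric))                      # MeanAbsoluteError
print(metric([1.0, 2.0], [1.0, 4.0]))   # 1.0
```

Because `__call__` delegates to `compute`, a configured metric instance can be passed around and invoked like a plain function.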

class flambe.metric.MultiLabelCrossEntropy(weight: Optional[torch.Tensor] = None, ignore_index: Optional[int] = None, reduction: str = 'mean')[source]
compute(self, pred: torch.Tensor, target: torch.Tensor)

Computes the multilabel cross entropy loss.

Parameters:

- pred (torch.Tensor) – input logits of shape (B x N)
- target (torch.LongTensor) – target tensor of shape (B x N)

Returns: loss – Multi-label cross-entropy loss, of shape (B)

Return type: torch.Tensor
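To make the shape conventions concrete, here is a sketch of the same quantity in plain Python: each row of logits is log-softmaxed and weighted by the (possibly soft) target row, giving one loss value per example. This is an illustration of the math, not the library's torch implementation:

```python
import math


def log_softmax(logits):
    # Numerically stable log-softmax over one row of logits
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - lse for x in logits]


def multilabel_cross_entropy(pred, target):
    # One loss value per example: -sum_j target[j] * log_softmax(pred)[j]
    return [-sum(t * lp for t, lp in zip(tgt, log_softmax(row)))
            for row, tgt in zip(pred, target)]


# A single example (B=1, N=3) with all mass on the first class
loss = multilabel_cross_entropy([[2.0, 0.0, 0.0]], [[1.0, 0.0, 0.0]])
```

With a one-hot target this reduces to the usual softmax cross-entropy on the target class.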
class flambe.metric.MultiLabelNLLLoss(weight: Optional[torch.Tensor] = None, ignore_index: Optional[int] = None, reduction: str = 'mean')[source]
compute(self, pred: torch.Tensor, target: torch.Tensor)

Computes the Negative log likelihood loss for multilabel.

Parameters:

- pred (torch.Tensor) – input logits of shape (B x N)
- target (torch.LongTensor) – target tensor of shape (B x N)

Returns: loss – Multi-label negative log likelihood loss, of shape (B)

Return type: torch.float
class flambe.metric.Accuracy[source]
compute(self, pred: torch.Tensor, target: torch.Tensor)

Computes the accuracy.

Parameters:

- pred (torch.Tensor) – input logits of shape (B x N)
- target (torch.LongTensor) – target tensor of shape (B) or (B x N)

Returns: accuracy – single-label accuracy, of shape (B)

Return type: torch.Tensor
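For the (B)-shaped target case, the computation amounts to comparing the argmax of each logit row against the target index. A plain-Python sketch of that logic (the real metric works on torch tensors):

```python
def single_label_accuracy(pred, target):
    # pred: list of per-class logit rows (B x N); target: list of class indices (B)
    correct = [max(range(len(row)), key=row.__getitem__) == t
               for row, t in zip(pred, target)]
    return sum(correct) / len(correct)


# Three examples, two classes: predictions are classes 1, 0, 1
acc = single_label_accuracy([[0.1, 2.0], [3.0, -1.0], [0.0, 0.5]], [1, 0, 0])
# Two of three match the targets, so acc == 2/3
```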
class flambe.metric.Perplexity[source]

Token-level perplexity, computed as exp(cross_entropy).

compute(self, pred: torch.Tensor, target: torch.Tensor)

Compute the perplexity given the input and target.

Parameters:

- pred (torch.Tensor) – input logits of shape (B x N)
- target (torch.LongTensor) – target tensor of shape (B)

Returns: The computed perplexity

Return type: torch.float
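The exp(cross_entropy) definition can be sketched in plain Python: average the per-token negative log-likelihood of the target class, then exponentiate. This mirrors the math only; the real metric operates on torch tensors:

```python
import math


def perplexity(pred, target):
    # pred: logit rows (B x N); target: class indices (B)
    nll = []
    for row, t in zip(pred, target):
        m = max(row)
        lse = m + math.log(sum(math.exp(x - m) for x in row))
        nll.append(lse - row[t])          # -log softmax(row)[t]
    return math.exp(sum(nll) / len(nll))  # exp(mean cross-entropy)


# Uniform logits over 2 classes: cross-entropy is ln(2), so perplexity is 2
ppl = perplexity([[0.0, 0.0], [0.0, 0.0]], [0, 1])
```

A perplexity of N over an N-class vocabulary means the model is no better than uniform guessing.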
class flambe.metric.BPC[source]

Bits per character, computed as log_2(perplexity).

compute(self, pred: torch.Tensor, target: torch.Tensor)

Compute the bits per character given the input and target.

Parameters:

- pred (torch.Tensor) – input logits of shape (B x N)
- target (torch.LongTensor) – target tensor of shape (B)

Returns: The computed bits per character

Return type: torch.float
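Since BPC = log_2(perplexity) and perplexity = exp(cross_entropy), the two differ only by a change of logarithm base: BPC = cross_entropy / ln 2. A one-line sketch of that identity:

```python
import math


def bits_per_character(cross_entropy_nats):
    # BPC = log2(perplexity) = log2(exp(H)) = H / ln 2
    return cross_entropy_nats / math.log(2)


# Cross-entropy of ln(2) nats -> perplexity 2 -> exactly 1 bit per character
bpc = bits_per_character(math.log(2))
```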
class flambe.metric.AUC(max_fpr=1.0)[source]
compute(self, pred: torch.Tensor, target: torch.Tensor)

Compute AUC at the given max false positive rate.

Parameters:

- pred (torch.Tensor) – The model predictions
- target (torch.Tensor) – The binary targets

Returns: The computed AUC

Return type: torch.Tensor
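For the default case max_fpr=1.0, ROC AUC equals the probability that a randomly chosen positive example scores above a randomly chosen negative one (ties counted as half). A plain-Python sketch of that rank-based formulation, shown for intuition only (it does not implement the max_fpr truncation):

```python
def roc_auc(pred, target):
    # Probability that a random positive outscores a random negative,
    # counting ties as half; equivalent to the area under the ROC curve.
    pos = [p for p, t in zip(pred, target) if t == 1]
    neg = [p for p, t in zip(pred, target) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


# One positive ranked below one negative out of four pairs -> AUC 0.75
auc = roc_auc([0.9, 0.8, 0.3, 0.1], [1, 0, 1, 0])
```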
class flambe.metric.BinaryPrecision(threshold: float = 0.5, positive_label: int = 1)[source]

Compute Binary Precision.

An example is considered negative when its score is below the specified threshold. Binary precision is computed as follows:

 |True Positives| / (|True Positives| + |False Positives|) 

compute_binary(self, pred: torch.Tensor, target: torch.Tensor)

Compute binary precision.

Parameters:

- pred (torch.Tensor) – Predictions made by the model. Each should be a probability 0 <= p <= 1 for each sample, with 1 being the positive class.
- target (torch.Tensor) – Ground truth. Each label should be either 0 or 1.

Returns: The computed binary metric

Return type: torch.float
__str__(self)

Return the name of the Metric (for use in logging).
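The thresholded precision computation can be sketched in plain Python. Whether a score exactly at the threshold counts as positive is an implementation detail of the library; `>=` is assumed here for illustration:

```python
def binary_precision(pred, target, threshold=0.5):
    # An example is predicted positive when its score reaches the threshold
    predicted_pos = [p >= threshold for p in pred]
    tp = sum(pp and t == 1 for pp, t in zip(predicted_pos, target))
    fp = sum(pp and t == 0 for pp, t in zip(predicted_pos, target))
    return tp / (tp + fp)


# Three predicted positives, two of them correct -> precision 2/3
prec = binary_precision([0.9, 0.6, 0.4, 0.7], [1, 0, 0, 1])
```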

class flambe.metric.BinaryRecall(threshold: float = 0.5, positive_label: int = 1)[source]

Compute binary recall.

An example is considered negative when its score is below the specified threshold. Binary recall is computed as follows:

 |True Positives| / (|True Positives| + |False Negatives|) 

compute_binary(self, pred: torch.Tensor, target: torch.Tensor)

Compute binary recall.

Parameters:

- pred (torch.Tensor) – Predictions made by the model. Each should be a probability 0 <= p <= 1 for each sample, with 1 being the positive class.
- target (torch.Tensor) – Ground truth. Each label should be either 0 or 1.

Returns: The computed binary metric

Return type: torch.float
__str__(self)

Return the name of the Metric (for use in logging).

class flambe.metric.BinaryAccuracy[source]

Compute binary accuracy.

 (|True Positives| + |True Negatives|) / N 

compute_binary(self, pred: torch.Tensor, target: torch.Tensor)

Compute binary accuracy.

Parameters:

- pred (torch.Tensor) – Predictions made by the model. Each should be a probability 0 <= p <= 1 for each sample, with 1 being the positive class.
- target (torch.Tensor) – Ground truth. Each label should be either 0 or 1.

Returns: The computed binary metric

Return type: torch.float
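A plain-Python sketch of the formula above, thresholding at 0.5 as the other binary metrics do (the exact tie-breaking at the threshold is assumed, not taken from the source):

```python
def binary_accuracy(pred, target, threshold=0.5):
    # (|true positives| + |true negatives|) / N
    correct = sum((p >= threshold) == (t == 1) for p, t in zip(pred, target))
    return correct / len(pred)


# Predictions 1, 0, 1, 0 against targets 1, 0, 0, 1: two correct out of four
acc = binary_accuracy([0.9, 0.2, 0.6, 0.4], [1, 0, 0, 1])
```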
class flambe.metric.F1(threshold: float = 0.5, positive_label: int = 1, eps: float = 1e-08)[source]
compute_binary(self, pred: torch.Tensor, target: torch.Tensor)

Compute the F1 score, the harmonic mean of precision and recall.

Parameters:

- pred (torch.Tensor) – Predictions made by the model. Each should be a probability 0 <= p <= 1 for each sample, with 1 being the positive class.
- target (torch.Tensor) – Ground truth. Each label should be either 0 or 1.

Returns: The computed binary metric

Return type: torch.float
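F1 ties together the precision and recall definitions above. The sketch below computes all three in plain Python; where exactly the real implementation applies the `eps` stabilizer is an assumption made here to avoid division by zero:

```python
def binary_f1(pred, target, threshold=0.5, eps=1e-8):
    predicted_pos = [p >= threshold for p in pred]
    tp = sum(pp and t == 1 for pp, t in zip(predicted_pos, target))
    fp = sum(pp and t == 0 for pp, t in zip(predicted_pos, target))
    fn = sum((not pp) and t == 1 for pp, t in zip(predicted_pos, target))
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    # Harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall + eps)


# Perfect predictions: precision and recall are both 1, so F1 is (nearly) 1
f1 = binary_f1([0.9, 0.1], [1, 0])
```

Because F1 is a harmonic mean, it is dragged toward whichever of precision or recall is lower, which is why it is preferred over plain accuracy on imbalanced binary problems.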