# flambe.metric.dev.binary¶

## Module Contents¶

class flambe.metric.dev.binary.BinaryMetric(threshold: float = 0.5)[source]
__str__(self)[source]

Return the name of the Metric (for use in logging).

compute(self, pred: torch.Tensor, target: torch.Tensor)[source]

Compute the metric given predictions and targets.

Parameters:

- pred (torch.Tensor) – The model predictions.
- target (torch.Tensor) – The binary targets.

Returns:

- float – The computed binary metric.
compute_binary(self, pred: torch.Tensor, target: torch.Tensor)[source]

Compute a binary-input metric.

Parameters:

- pred (torch.Tensor) – Predictions made by the model. Each prediction should be a probability 0 <= p <= 1 for each sample, 1 being the positive class.
- target (torch.Tensor) – Ground truth. Each label should be either 0 or 1.

Returns:

- torch.float – The computed binary metric.
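The binary metrics share one preprocessing step: probability scores are binarized at the configured threshold before the metric is computed. A minimal illustrative sketch (not the flambe implementation, which operates on torch tensors; whether a score exactly at the threshold counts as positive is assumed to be `>=` here):

```python
# Binarize probability scores at the default threshold of 0.5.
pred = [0.91, 0.08, 0.55, 0.49]
threshold = 0.5
labels = [1 if p >= threshold else 0 for p in pred]
# labels == [1, 0, 1, 0]
```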
class flambe.metric.dev.binary.BinaryAccuracy[source]

Compute binary accuracy.

 (|True Positives| + |True Negatives|) / N 

compute_binary(self, pred: torch.Tensor, target: torch.Tensor)[source]

Compute binary accuracy.

Parameters:

- pred (torch.Tensor) – Predictions made by the model. Each prediction should be a probability 0 <= p <= 1 for each sample, 1 being the positive class.
- target (torch.Tensor) – Ground truth. Each label should be either 0 or 1.

Returns:

- torch.float – The computed binary metric.
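The accuracy formula above can be sketched in plain Python (illustrative only, not the flambe implementation, which operates on torch tensors; the default threshold of 0.5 is assumed):

```python
def binary_accuracy(pred, target, threshold=0.5):
    # Binarize probabilities: scores >= threshold count as the positive class.
    labels = [1 if p >= threshold else 0 for p in pred]
    # Accuracy = (TP + TN) / N, i.e. the fraction of correct predictions.
    correct = sum(1 for l, t in zip(labels, target) if l == t)
    return correct / len(target)

binary_accuracy([0.9, 0.2, 0.7, 0.4], [1, 0, 0, 0])  # 3 of 4 correct -> 0.75
```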
class flambe.metric.dev.binary.BinaryPrecision(threshold: float = 0.5, positive_label: int = 1)[source]

Compute Binary Precision.

An example is considered negative when its score is below the specified threshold. Binary precision is computed as follows:

 |True Positives| / (|True Positives| + |False Positives|) 

compute_binary(self, pred: torch.Tensor, target: torch.Tensor)[source]

Compute binary precision.

Parameters:

- pred (torch.Tensor) – Predictions made by the model. Each prediction should be a probability 0 <= p <= 1 for each sample, 1 being the positive class.
- target (torch.Tensor) – Ground truth. Each label should be either 0 or 1.

Returns:

- torch.float – The computed binary metric.
__str__(self)[source]

Return the name of the Metric (for use in logging).
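The precision formula can be sketched in plain Python (an illustrative sketch, not the flambe implementation; the `positive_label` option is omitted for brevity and the positive class is assumed to be 1):

```python
def binary_precision(pred, target, threshold=0.5):
    # An example is negative when its score falls below the threshold.
    labels = [1 if p >= threshold else 0 for p in pred]
    tp = sum(1 for l, t in zip(labels, target) if l == 1 and t == 1)
    fp = sum(1 for l, t in zip(labels, target) if l == 1 and t == 0)
    # Precision = |TP| / (|TP| + |FP|); defined as 0 when nothing is predicted positive.
    return tp / (tp + fp) if (tp + fp) else 0.0

binary_precision([0.9, 0.8, 0.3], [1, 0, 1])  # 1 TP, 1 FP -> 0.5
```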

class flambe.metric.dev.binary.BinaryRecall(threshold: float = 0.5, positive_label: int = 1)[source]

Compute binary recall.

An example is considered negative when its score is below the specified threshold. Binary recall is computed as follows:

 |True Positives| / (|True Positives| + |False Negatives|) 

compute_binary(self, pred: torch.Tensor, target: torch.Tensor)[source]

Compute binary recall.

Parameters:

- pred (torch.Tensor) – Predictions made by the model. Each prediction should be a probability 0 <= p <= 1 for each sample, 1 being the positive class.
- target (torch.Tensor) – Ground truth. Each label should be either 0 or 1.

Returns:

- torch.float – The computed binary metric.
__str__(self)[source]

Return the name of the Metric (for use in logging).
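The recall formula can be sketched the same way (an illustrative sketch, not the flambe implementation; the `positive_label` option is omitted and the positive class is assumed to be 1):

```python
def binary_recall(pred, target, threshold=0.5):
    # An example is negative when its score falls below the threshold.
    labels = [1 if p >= threshold else 0 for p in pred]
    tp = sum(1 for l, t in zip(labels, target) if l == 1 and t == 1)
    fn = sum(1 for l, t in zip(labels, target) if l == 0 and t == 1)
    # Recall = |TP| / (|TP| + |FN|); defined as 0 when there are no positives.
    return tp / (tp + fn) if (tp + fn) else 0.0

binary_recall([0.9, 0.2, 0.6], [1, 1, 0])  # 1 TP, 1 FN -> 0.5
```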

class flambe.metric.dev.binary.F1(threshold: float = 0.5, positive_label: int = 1, eps: float = 1e-08)[source]
compute_binary(self, pred: torch.Tensor, target: torch.Tensor)[source]

Compute the F1 score, the harmonic mean of precision and recall.

Parameters:

- pred (torch.Tensor) – Predictions made by the model. Each prediction should be a probability 0 <= p <= 1 for each sample, 1 being the positive class.
- target (torch.Tensor) – Ground truth. Each label should be either 0 or 1.

Returns:

- torch.float – The computed binary metric.
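The F1 computation can be sketched in plain Python (an illustrative sketch, not the flambe implementation; exactly where the `eps` stabilizer enters the computation is an assumption made here to avoid division by zero):

```python
def binary_f1(pred, target, threshold=0.5, eps=1e-8):
    # Binarize probabilities at the threshold, then count confusion-matrix cells.
    labels = [1 if p >= threshold else 0 for p in pred]
    tp = sum(1 for l, t in zip(labels, target) if l == 1 and t == 1)
    fp = sum(1 for l, t in zip(labels, target) if l == 1 and t == 0)
    fn = sum(1 for l, t in zip(labels, target) if l == 0 and t == 1)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    # F1 is the harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall + eps)
```

With `pred = [0.9, 0.2, 0.6]` and `target = [1, 1, 0]` this yields one true positive, one false positive, and one false negative, so precision and recall are both 0.5 and F1 is approximately 0.5.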