LanguageModel(embedder: Embedder, output_layer: Module, dropout: float = 0, pad_index: int = 0, tie_weights: bool = False, tie_weight_attr: str = 'embedding')¶
Implements a LanguageModel for sequential classification.
This model can be used for language modeling, as well as other sequential classification tasks. The model produces predictions for the full sequence, so the effective number of training examples is the batch size multiplied by the sequence length.
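To make the "batch size multiplied by sequence length" point concrete, here is a small sketch (plain Python, with hypothetical dimensions) of how per-position predictions are flattened into individual examples:

```python
# Per-position predictions for one batch, shaped seq_len x batch_size x n_out.
seq_len, batch_size, n_out = 4, 3, 5
preds = [[[0.0] * n_out for _ in range(batch_size)] for _ in range(seq_len)]

# Each (time step, batch element) pair counts as one example,
# so flattening the first two dimensions yields seq_len * batch_size examples.
flat = [vec for step in preds for vec in step]
print(len(flat))  # 4 * 3 = 12 examples
```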
forward(self, data: Tensor, target: Optional[Tensor] = None)¶
Run a forward pass through the network.
Parameters:
data (Tensor) – The input data.
target (Optional[Tensor]) – An optional target sequence.
Returns:
The output predictions, of shape seq_len x batch_size x n_out.
Return type:
Union[Tensor, Tuple[Tensor, Tensor]]
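A minimal PyTorch sketch of the forward contract described above. The module below is a hypothetical stand-in, not the library's implementation: the embedder, recurrent encoder, and vocabulary size are assumptions chosen only to illustrate the documented seq_len x batch_size x n_out output shape and the tie_weights option:

```python
import torch
from torch import nn


class TinyLM(nn.Module):
    """Illustrative language model: embed -> encode -> project to vocab."""

    def __init__(self, vocab: int = 100, hidden: int = 32,
                 dropout: float = 0.0, tie_weights: bool = False):
        super().__init__()
        self.embedding = nn.Embedding(vocab, hidden)
        self.rnn = nn.LSTM(hidden, hidden)  # expects seq_len x batch x hidden
        self.drop = nn.Dropout(dropout)
        self.out = nn.Linear(hidden, vocab)
        if tie_weights:
            # Share the output projection with the embedding matrix
            # (both are vocab x hidden), mirroring tie_weights=True.
            self.out.weight = self.embedding.weight

    def forward(self, data, target=None):
        emb = self.embedding(data)            # seq_len x batch x hidden
        encoded, _ = self.rnn(emb)            # seq_len x batch x hidden
        preds = self.out(self.drop(encoded))  # seq_len x batch x n_out
        if target is not None:
            return preds, target              # tuple when a target is given
        return preds


seq_len, batch_size = 7, 4
data = torch.randint(0, 100, (seq_len, batch_size))
out = TinyLM(tie_weights=True)(data)
print(out.shape)  # torch.Size([7, 4, 100])
```

Note how the return type matches the documented Union: a single prediction tensor, or a (predictions, target) tuple when target is passed.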