In this chapter we'll introduce the neural architectures and AllenNLP abstractions that are commonly used for building NLP models.
The main modeling operations done on natural language inputs include summarizing sequences,
contextualizing sequences (that is, computing contextualized embeddings from sequences), modeling
spans within a longer sequence, and modeling similarities between sequences using attention. In the
following sections we'll learn the AllenNLP abstractions for these operations. All of these torch.nn.Modules can be used in whatever PyTorch code you want, whether or not you use the rest of what's in AllenNLP.
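To make these operations concrete, here is a minimal sketch assuming an AllenNLP 1.x-style API. The specific classes chosen (BagOfEmbeddingsEncoder, PytorchSeq2SeqWrapper, EndpointSpanExtractor, DotProductAttention) are just one implementation of each abstraction, and the tensor sizes and inputs are made up for illustration:

```python
import torch
from allennlp.modules.seq2vec_encoders import BagOfEmbeddingsEncoder
from allennlp.modules.seq2seq_encoders import PytorchSeq2SeqWrapper
from allennlp.modules.span_extractors import EndpointSpanExtractor
from allennlp.modules.attention import DotProductAttention

batch_size, seq_len, embedding_dim = 2, 7, 16
embeddings = torch.randn(batch_size, seq_len, embedding_dim)
mask = torch.ones(batch_size, seq_len, dtype=torch.bool)

# Summarizing a sequence: a Seq2VecEncoder maps
# (batch, seq_len, dim) -> (batch, dim).
seq2vec = BagOfEmbeddingsEncoder(embedding_dim=embedding_dim)
summary = seq2vec(embeddings, mask)  # shape: (2, 16)

# Contextualizing a sequence: a Seq2SeqEncoder maps
# (batch, seq_len, dim) -> (batch, seq_len, dim').
seq2seq = PytorchSeq2SeqWrapper(
    torch.nn.LSTM(embedding_dim, 8, batch_first=True, bidirectional=True)
)
contextualized = seq2seq(embeddings, mask)  # shape: (2, 7, 16)

# Modeling spans: a SpanExtractor turns inclusive (start, end) indices
# into one embedding per span; EndpointSpanExtractor concatenates the
# endpoint vectors, so the output dimension is 2 * 16 = 32.
span_extractor = EndpointSpanExtractor(input_dim=16)
spans = torch.tensor([[[0, 2], [3, 6]], [[1, 1], [2, 5]]])
span_embeddings = span_extractor(contextualized, spans)  # shape: (2, 2, 32)

# Attention: score a query vector against every position of a sequence,
# returning normalized weights over the sequence.
attention = DotProductAttention()
query = torch.randn(batch_size, 16)
weights = attention(query, contextualized, mask)  # shape: (2, 7)
```

Because these are ordinary torch.nn.Modules, you can swap any of them for another implementation with the same input and output shapes (for example, a transformer-based Seq2SeqEncoder in place of the LSTM wrapper) without touching the surrounding code.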
Though if you have a large number of spans, more than a hundred or so, representing each one as a SpanField inside a ListField will be slow, and you'll be better off just using an ArrayField.
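For illustration, here is a minimal sketch of the two options, assuming the AllenNLP 1.x data API; the toy tokens and span indices are made up:

```python
import numpy
from allennlp.data.tokenizers import Token
from allennlp.data.fields import ArrayField, ListField, SpanField, TextField
from allennlp.data.token_indexers import SingleIdTokenIndexer

# A toy token sequence for the spans to point into.
tokens = [Token(t) for t in "the quick brown fox jumps over it".split()]
text_field = TextField(tokens, {"tokens": SingleIdTokenIndexer()})

# Inclusive (start, end) token indices.
spans = [(0, 2), (3, 6), (5, 5)]

# A handful of spans: a ListField of SpanFields keeps the field
# abstractions (validation against the sequence, padding, etc.).
list_field = ListField(
    [SpanField(start, end, text_field) for start, end in spans]
)

# Hundreds of spans: a single ArrayField of shape (num_spans, 2)
# avoids the per-span object and padding overhead.
array_field = ArrayField(numpy.array(spans, dtype=numpy.int64))
```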