PyTorch Implementation of "Monotonic Chunkwise Attention" (ICLR 2018)
Repository for Attention Algorithm
Feature Selection Gates with Gradient Routing
Transliteration via sequence-to-sequence transduction with hard monotonic attention, based on our EMNLP 2018 paper
45k context transformer for splice site prediction implemented with PyTorch.
Recurrent visual attention in PyTorch, applied to Catch and MNIST classification
End-to-end trainable autoregressive and non-autoregressive transducers using hard attention
📝 Streamline text processing in Arabic and English with ChunkWise, a library offering 31 chunking strategies for NLP and RAG systems.