# seq2seq

**Repository Path**: mirrors_keon/seq2seq

## Basic Information

- **Project Name**: seq2seq
- **Description**: Minimal Seq2Seq model with Attention for Neural Machine Translation in PyTorch
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2022-01-10
- **Last Updated**: 2026-01-25

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# mini seq2seq

Minimal Seq2Seq model with attention for neural machine translation in PyTorch.

This implementation focuses on the following:

- A modular structure that can be reused in other projects
- Minimal code for readability
- Full utilization of batching and the GPU

It relies on [torchtext](https://github.com/pytorch/text) to keep dataset management and preprocessing to a minimum (see the data-loading sketch at the end of this README).

## Model description

* Encoder: bidirectional GRU
* Decoder: GRU with an attention mechanism
* Attention: [Neural Machine Translation by Jointly Learning to Align and Translate](https://arxiv.org/abs/1409.0473) (Bahdanau et al., 2015); a minimal sketch of the attention scoring appears at the end of this README

![](http://www.wildml.com/wp-content/uploads/2015/12/Screen-Shot-2015-12-30-at-1.16.08-PM.png)

## Requirements

* GPU & CUDA
* Python 3
* PyTorch
* torchtext
* spaCy
* numpy
* Visdom (optional)

Download the tokenizers:

```
python -m spacy download de
python -m spacy download en
```

(Note: spaCy 3.x removed these shortcut names; on recent spaCy versions, download `de_core_news_sm` and `en_core_web_sm` instead.)

## References

Based on the following implementations:

* [PyTorch Tutorial](http://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html)
* [@spro/practical-pytorch](https://github.com/spro/practical-pytorch)
* [@AuCson/PyTorch-Batch-Attention-Seq2seq](https://github.com/AuCson/PyTorch-Batch-Attention-Seq2seq)
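
## Appendix: Bahdanau attention (sketch)

A minimal sketch of the additive attention named in the model description, not the repository's exact code. The class name `BahdanauAttention`, the tensor shapes, and the assumption that encoder outputs have already been reduced to `hidden_size` (e.g., by summing or projecting the two GRU directions) are illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BahdanauAttention(nn.Module):
    """Additive attention: score(s, h_i) = v^T tanh(W [s; h_i])."""

    def __init__(self, hidden_size):
        super().__init__()
        # Projects the concatenated decoder state and encoder output.
        self.attn = nn.Linear(hidden_size * 2, hidden_size)
        self.v = nn.Parameter(torch.rand(hidden_size))

    def forward(self, hidden, encoder_outputs):
        # hidden: (batch, hidden) -- previous decoder state s_{t-1}
        # encoder_outputs: (seq_len, batch, hidden) -- assumed already
        # reduced to hidden_size (bidirectional GRUs emit 2 * hidden)
        seq_len = encoder_outputs.size(0)
        h = hidden.unsqueeze(1).repeat(1, seq_len, 1)           # (batch, seq, hidden)
        enc = encoder_outputs.transpose(0, 1)                   # (batch, seq, hidden)
        energy = torch.tanh(self.attn(torch.cat([h, enc], 2)))  # (batch, seq, hidden)
        scores = energy @ self.v                                # (batch, seq)
        # Normalized weights over source positions.
        return F.softmax(scores, dim=1)


# Example: weights over 10 source positions for a batch of 4.
attn = BahdanauAttention(256)
weights = attn(torch.zeros(4, 256), torch.zeros(10, 4, 256))  # shape (4, 10)
```

The weights are then used to take a weighted sum of the encoder outputs, producing the context vector fed into the decoder GRU at each step.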
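
## Appendix: torchtext data loading (sketch)

A sketch of how spaCy tokenizers plug into the torchtext pipeline this README relies on. It assumes the legacy torchtext API (0.8 and earlier; from 0.9 these classes lived under `torchtext.legacy` before being removed). The field names, special tokens, and batch size are illustrative, not taken from the repository.

```python
import spacy
from torchtext.data import Field, BucketIterator
from torchtext.datasets import Multi30k

# On spaCy 3.x these are the downloadable model names; older
# spaCy accepted the shortcuts 'de' and 'en'.
spacy_de = spacy.load('de_core_news_sm')
spacy_en = spacy.load('en_core_web_sm')


def tokenize_de(text):
    return [tok.text for tok in spacy_de.tokenizer(text)]


def tokenize_en(text):
    return [tok.text for tok in spacy_en.tokenizer(text)]


# Fields define tokenization and the special tokens added to each sentence.
SRC = Field(tokenize=tokenize_de, init_token='<sos>', eos_token='<eos>', lower=True)
TRG = Field(tokenize=tokenize_en, init_token='<sos>', eos_token='<eos>', lower=True)

# Multi30k is a standard small German-English translation dataset.
train, val, test = Multi30k.splits(exts=('.de', '.en'), fields=(SRC, TRG))
SRC.build_vocab(train, min_freq=2)
TRG.build_vocab(train, min_freq=2)

# BucketIterator groups sentences of similar length to minimize padding.
train_iter, val_iter, test_iter = BucketIterator.splits(
    (train, val, test), batch_size=32)
```

Bucketing by length is what makes the "full utilization of batches" claim practical: batches of similar-length sentences waste little computation on padding tokens.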