Update: 23/07/2018 15:12:17
- Thesis - Contributions
- Stable and Effective Trainable Greedy Decoding for Sequence to Sequence Learning
- Soft Actor-Critic
- SEARNN
- Neural Lattice Language Models
- MaskGAN
- Latent Alignment and Variational Attention
- Generating Contradictory, Neutral, and Entailing Sentences
- Discrete Autoencoders for Sequence Models
- Differentiable Dynamic Programming for Structured Prediction and Attention
- Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement
- Breaking the Softmax Bottleneck
- Bi-Directional Block Self-Attention for Fast and Memory-Efficient Sequence Modeling
- A Stochastic Decoder for Neural Machine Translation
- Using Stochastic Computation Graphs Formalism for Optimization of Sequence-to-Sequence Model
- Translating Phrases in Neural Machine Translation
- Towards Neural Phrase-based Machine Translation
- Synthetic and Natural Noise Both Break Neural Machine Translation
- Sequence Modeling via Segmentations
- Differentiable Lower Bound for Expected BLEU Score
- Data Noising as Smoothing in Neural Network Language Models
- Comparative Study of CNN and RNN for Natural Language Processing
- Classical Structured Prediction Losses for Sequence to Sequence Learning
- Reward Augmented Maximum Likelihood for Neural Structured Prediction
- Multimodal Pivots for Image Caption Translation
- An Actor-Critic Algorithm for Sequence Prediction
- Sequence Level Training with Recurrent Neural Networks
- Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks
- Neural Machine Translation of Rare Words with Subword Units
- Minimum Risk Training for Neural Machine Translation
- Sequence to Sequence Learning with Neural Networks
- On Using Very Large Target Vocabulary for Neural Machine Translation
- Neural Machine Translation by Jointly Learning to Align and Translate