Update: 23/07/2018 15:12:17
  1. Thesis - Contributions
  2. Stable and Effective Trainable Greedy Decoding for Sequence to Sequence Learning
  3. Soft Actor-Critic
  4. SEARNN: Training RNNs with Global-Local Losses
  5. Neural Lattice Language Models
  6. MaskGAN
  7. Latent Alignment and Variational Attention
  8. Generating Contradictory, Neutral, and Entailing Sentences
  9. Discrete Autoencoders for Sequence Models
  10. Differentiable Dynamic Programming for Structured Prediction and Attention
  11. Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement
  12. Breaking the Softmax Bottleneck
  13. Bi-Directional Block Self-Attention for Fast and Memory-Efficient Sequence Modeling
  14. A Stochastic Decoder for Neural Machine Translation
  15. Using stochastic computation graphs formalism for optimization of sequence-to-sequence model
  16. Translating Phrases in Neural Machine Translation
  17. Towards Neural Phrase-based Machine Translation
  18. Synthetic and Natural Noise Both Break Neural Machine Translation
  19. Sequence Modeling via Segmentations
  20. Differentiable lower bound for expected BLEU score
  21. Data Noising as Smoothing in Neural Network Language Models
  22. Comparative Study of CNN and RNN for Natural Language Processing
  23. Classical Structured Prediction Losses for Sequence to Sequence Learning
  24. Reward Augmented Maximum Likelihood for Neural Structured Prediction
  25. Multimodal Pivots for Image Caption Translation
  26. An Actor-Critic Algorithm for Sequence Prediction
  27. Sequence Level Training with Recurrent Neural Networks
  28. Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks
  29. Neural Machine Translation of Rare Words with Subword Units
  30. Minimum Risk Training for Neural Machine Translation
  31. Sequence to Sequence Learning with Neural Networks
  32. On Using Very Large Target Vocabulary for Neural Machine Translation
  33. Neural Machine Translation by Jointly Learning to Align and Translate