Neural machine translation with characters and hierarchical encoding

We propose a Neural Machine Translation model with a hierarchical char2word encoder that uses individual characters as both input and output.
By Alexander Rosenberg Johansen, Jonas Meinertz Hansen, Elias Khazen Obeid, Casper Kaae Sønderby, Ole Winther

Most existing Neural Machine Translation models use groups of characters or whole words as input and output units. We propose a hierarchical char2word encoder model that uses individual characters as both input and output.

We first argue that this hierarchical representation of the character encoder reduces computational complexity, and we show that it improves translation performance.

Secondly, by qualitatively studying attention plots from the decoder, we show that the model learns to compress common words into a single embedding, whereas rare words, such as names and places, are represented character by character.
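
As a rough illustration of the char2word idea, here is a minimal PyTorch sketch. The framework choice, the class name Char2WordEncoder, the dimensions, and the per-word batching are all assumptions made for this sketch, not details taken from the paper: a character-level GRU reads each word's characters and its final hidden state becomes that word's embedding; a second, word-level GRU then encodes the resulting sequence.

```python
import torch
import torch.nn as nn

class Char2WordEncoder(nn.Module):
    """Hierarchical char2word encoder (sketch): a character-level GRU
    compresses each word into a single embedding, and a word-level GRU
    then encodes the sequence of word embeddings."""

    def __init__(self, n_chars, char_dim=64, word_dim=256):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_rnn = nn.GRU(char_dim, word_dim, batch_first=True)
        self.word_rnn = nn.GRU(word_dim, word_dim, batch_first=True)

    def forward(self, char_ids):
        # char_ids: (n_words, max_word_len) -- character indices for one
        # sentence, one row per word. Padding handling and batching across
        # sentences are omitted to keep the sketch short.
        chars = self.char_emb(char_ids)        # (n_words, max_word_len, char_dim)
        _, h_last = self.char_rnn(chars)       # (1, n_words, word_dim)
        word_embs = h_last.squeeze(0)          # one embedding per word
        states, _ = self.word_rnn(word_embs.unsqueeze(0))
        return states                          # (1, n_words, word_dim)

# Toy usage with a hypothetical 30-symbol alphabet and a two-word sentence.
encoder = Char2WordEncoder(n_chars=30)
sentence = torch.tensor([[8, 9, 0],    # characters of word 1 (0 = pad)
                         [25, 15, 0]]) # characters of word 2
print(encoder(sentence).shape)         # torch.Size([1, 2, 256])
```

A character-level decoder with attention would attend over these word-level states; since there are only as many states as words rather than characters, each decoding step scores a much shorter sequence, which is one way to see the computational saving described above.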

Download
