Conversational agents have been on the rise in both the academic world (in terms of research) and the commercial world (in terms of applications). This paper investigates the task of building a non-goal-driven conversational agent using neural generative models and analyzes how the conversation context is handled. It compares a simpler Encoder-Decoder with a Hierarchical Recurrent Encoder-Decoder architecture, which includes an additional module that models the conversation context using information from previous utterances. We found that the hierarchical model was able to extract relevant context information and include it in the generated output. However, it performed worse (by 35-40%) than the simple Encoder-Decoder model in terms of both grammatically correct output and meaningful responses. Despite these results, experiments demonstrate that conversations about similar topics appear close to each other in the context space, due to the increased frequency of specific topic-related words, which leaves promising directions for future research on how the context of a conversation can be exploited.
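
To make the architectural comparison concrete, here is a minimal sketch of a hierarchical encoder-decoder of the kind the abstract describes: an utterance-level encoder summarizes each turn into a vector, a context-level encoder runs over those vectors across turns, and the decoder conditions on the resulting context state. This is an illustrative assumption, not the paper's implementation; all module names, dimensions, and the toy data are hypothetical.

```python
# Minimal HRED-style sketch (illustrative only; not the authors' code).
import torch
import torch.nn as nn

class HREDSketch(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Utterance-level encoder: one hidden state per utterance.
        self.utt_encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Context-level encoder: the extra module that carries
        # information across dialogue turns.
        self.ctx_encoder = nn.GRU(hid_dim, hid_dim, batch_first=True)
        # Decoder: generates the response conditioned on the context state.
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, dialogue, reply):
        # dialogue: (batch, n_turns, turn_len) token ids of previous utterances
        # reply:    (batch, reply_len) token ids of the response
        b, n_turns, turn_len = dialogue.shape
        flat = dialogue.view(b * n_turns, turn_len)
        _, utt_h = self.utt_encoder(self.embed(flat))       # (1, b*n_turns, hid)
        utt_vecs = utt_h.squeeze(0).view(b, n_turns, -1)    # (b, n_turns, hid)
        _, ctx_h = self.ctx_encoder(utt_vecs)               # (1, b, hid)
        dec_out, _ = self.decoder(self.embed(reply), ctx_h) # decode from context
        return self.out(dec_out)                            # (b, reply_len, vocab)

# Toy forward pass with random token ids.
model = HREDSketch()
dialogue = torch.randint(0, 1000, (2, 3, 10))  # 2 dialogues, 3 turns, 10 tokens
reply = torch.randint(0, 1000, (2, 8))
print(model(dialogue, reply).shape)  # torch.Size([2, 8, 1000])
```

The simpler Encoder-Decoder baseline corresponds to dropping `ctx_encoder` and conditioning the decoder on the last utterance's encoding alone.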
