
Alex Day

Dependency Tree Collapse for N-Gram Generation

Capstone Project, NLP · 2 min read

Introduction

Throughout the past semester I have been working on my senior capstone project for my CS degree (detailed further on its [project page]({% link _projects/sentence_to_emoji.md %})). The project is to create Emoji summaries for sentences, and one of the integral parts of the algorithm is splitting a sentence into the sequence of n-grams that represents it. The initial algorithm took a very naive approach: generate every possible combination of n-grams, summarize them all, and return the summary with the highest score. While this worked, it had some downsides. The main disadvantage was that some n-grams contained two big ideas from the sentence but were translated to only one Emoji, which was clearly not ideal. The approach detailed in this post uses information about the dependencies between the words of a sentence, in the hope of producing a more representative output.

Dependency Trees

Dependency trees are a way to map the dependencies between the words of a sentence. The example in figure 1 maps the dependencies within the sentence "I finished my homework just before the class started". Because the node containing "I" is a child of the "finished" node, there is a direct relation between the two words: "I" is the subject that actually "finished" something. In this dependency tree (and in the algorithm) the labels of the relations that link words together are neither shown nor used.

Figure 1. Dependency tree for the example sentence, with each word's part of speech labeled and the dependencies shown as edges.
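To see this tree in code, here is a minimal sketch that prints each word's head and children using spaCy (the same library the implementation below relies on). The `en_core_web_sm` model name is an assumption, and the exact parse depends on the model version:

```python
# Minimal sketch: inspect the dependencies behind figure 1 with spaCy.
# Assumes `python -m spacy download en_core_web_sm` has been run.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("I finished my homework just before the class started")

for token in doc:
    children = [child.text for child in token.children]
    print(f"{token.text:10} head={token.head.text:10} children={children}")
```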

Rules for Collapse

The primary assumption behind this algorithm is that some dependencies can be collapsed, or simplified, to produce the larger n-grams that make up the sentence. Right now the algorithm executes only two rules, but it is easy to imagine others that could be added. The rules are as follows (a toy sketch of both appears after the list):

1. If a node has only one child, the two nodes can be combined into one.
2. If multiple leaves are on the same level and have the same parent, they can be combined into one node.
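Here is a toy sketch of both rules on a hand-built tree. The `Node` class and the `(position, word)` pairs are illustrative assumptions, not the project's actual data structures:

```python
# Toy sketch of the two collapse rules; not the project's actual code.

class Node:
    def __init__(self, words, children=()):
        # words holds (position, word) pairs so sentence order can be
        # restored by sorting after the collapse
        self.words = list(words)
        self.children = list(children)

def collapse_once(node):
    """Apply rule 1 (single child) and rule 2 (sibling leaves) bottom-up."""
    for child in node.children:
        collapse_once(child)

    # Rule 1: a node with exactly one child absorbs that child.
    if len(node.children) == 1:
        only = node.children[0]
        node.words += only.words
        node.children = only.children

    # Rule 2: leaf children sharing this parent merge into one leaf.
    leaves = [c for c in node.children if not c.children]
    if len(leaves) > 1:
        merged = Node([w for leaf in leaves for w in leaf.words])
        node.children = [c for c in node.children if c.children] + [merged]

# "the class started": "started" heads "class", which heads "the";
# rule 1 collapses the whole chain into a single node.
the = Node([(6, "the")])
cls = Node([(7, "class")], [the])
started = Node([(8, "started")], [cls])
collapse_once(started)
print(sorted(started.words))  # [(6, 'the'), (7, 'class'), (8, 'started')]
```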

Child Dependency


Figure 2. The same tree from above, except there is a direct node-to-node relation with no other children; these two nodes are marked to be combined into one n-gram.


Figure 3. The same tree as in figure 2, but the two marked nodes have been collapsed into a single node.

Neighbor Dependency

Figure 4. Steps 1 and 2 of collapsing neighbor dependencies: leaves at the same level that share a parent are combined into a single node.

Results

This algorithm is used directly in the sentence-to-Emoji algorithm, so it makes the most sense to present results relative to that application. Table 1 below compares the results of the old exhaustive n-gram sequencing algorithm with those of this dependency-informed algorithm.

Table 1. Results from the tree collapse algorithm compared to the exhaustive algorithm (n-grams separated by " / ").

| Sentence | Exhaustive | Tree Collapse |
| --- | --- | --- |
| The student drew a snowflake on the chalk board | The student drew / a / snowflake / on / the chalk board | The student / drew / a snowflake / the chalk / on board |
| I finished the homework just before class started | I / finished the homework just before class started | I / finished / the homework / just before class / started |
| Can you calculate the number of giraffes that have ever existed? | can / you calculate the number of giraffe that have ever existed | can you / calculate / number / that have ever / of giraffes existed |

It is relatively clear that the tree collapse n-gram generation gives a more comprehensive n-gram sequence for each sentence. However, this may be more a reflection of the naivety of the initial algorithm: the exhaustive algorithm relies entirely on the dataset used for the output Emoji, and that lacklustre dataset is something that will be improved shortly.

Implementation

The algorithm is implemented in Python 3 with spaCy and NLTK. The general flow is that it performs the collapses detailed above and returns a list of lists, where each inner list is an n-gram split into its constituent words. It is implemented this way so that each word can also carry its initial location within the input, allowing the n-grams to be sorted afterwards. Sorting is trivial, and it (along with the rest of the code) is contained within this Jupyter notebook within the algorithms repo.
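As a rough sketch of that flow, not the notebook's exact code, the collapse might look like the following, assuming spaCy's parser and the `en_core_web_sm` model:

```python
# Rough sketch of the described flow (an assumption, not the notebook's
# code): collapse a spaCy parse into n-grams of (position, word) pairs.
import spacy

nlp = spacy.load("en_core_web_sm")

def collapse(token):
    """Collapse a token's subtree, returning a list of n-grams."""
    ngrams = []
    node = [(token.i, token.text)]
    children = list(token.children)

    # Rule 1: a chain of single children merges into the parent node.
    while len(children) == 1:
        only = children[0]
        node.append((only.i, only.text))
        children = list(only.children)

    # Rule 2: leaf children merge into one n-gram; other subtrees recurse.
    leaves = []
    for child in children:
        if not list(child.children):
            leaves.append((child.i, child.text))
        else:
            ngrams.extend(collapse(child))
    if leaves:
        ngrams.append(leaves)

    ngrams.append(node)
    return ngrams

def sentence_to_ngrams(sentence):
    doc = nlp(sentence)
    root = next(tok for tok in doc if tok.head is tok)  # sentence root
    return [sorted(ngram) for ngram in collapse(root)]

print(sentence_to_ngrams("I finished the homework just before class started"))
```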

Future Work

The main future iteration of this algorithm involves generating more candidate n-gram sequences. The current implementation collapses the tree only once to produce a single n-gram sequence, but it would not be hard to collapse the tree further to produce more. Further collapses could improve the summaries of longer sentences and could be combined with other n-gram sequence scoring techniques, as sketched below.
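As a hedged illustration of that direction, reusing the toy `Node` and `collapse_once` helpers from the rules section, repeated collapsing could record one candidate n-gram sequence per pass. This is purely an assumed shape for the future work, not anything implemented yet:

```python
def gather_ngrams(node):
    """Collect every node's word list, each sorted back into sentence order."""
    ngrams = [sorted(node.words)]
    for child in node.children:
        ngrams.extend(gather_ngrams(child))
    return ngrams

def candidate_sequences(root):
    """Collapse repeatedly, yielding one n-gram sequence per pass."""
    sequences = []
    while root.children:  # each pass shrinks the tree, so this terminates
        collapse_once(root)
        sequences.append(gather_ngrams(root))
    return sequences
```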

Related Reading and Works

Online Dependency Tree Parser

Part of Speech Tagging on Wikipedia

Part of Speech Tagging in spaCy

Part of Speech Tagging Translator Code (as of 11/11/19)