GPU Parsing with Interaction Nets
This is my Master's Dissertation, which explores how to parse context-free grammars on a GPU. Specifically, I developed my parsing algorithm using interaction nets, a highly parallelizable model of computation. The paper can be found here.
Abstract
There is a keen interest in lowering the parsing times of code and data in a variety of settings, including compilation and low-latency data processing. In recent years, graphics processing units (GPUs) have been at the forefront of optimizing many algorithms well beyond graphical applications. This dissertation employs interaction nets, an inherently parallel model of computation, to overcome the sequential execution of parsing algorithms.
Until now, it has remained unclear to what extent interaction nets are a worthwhile model of computation for providing more efficient parallel implementations of algorithms. Despite being touted as an inherently parallel model of computation, there has been little work investigating whether this advantage actually manifests in practical tasks. We extend an existing parallel parsing algorithm to include LR grammars, thereby designing a novel parallel LR parser. Additionally, we develop the fastest-yet interaction net evaluator designed to run on GPUs, although it does not match the current state-of-the-art interaction net evaluator implemented on a CPU. We then implement our parallel parsing algorithm using interaction nets. However, the lackluster performance of our interaction net evaluator results in poor parsing times for our interaction net parser. Nevertheless, this constitutes the first non-trivial practical use of interaction nets.
In our evaluation, we identify two main flaws with using interaction nets for complex computations. First, interaction nets require an immense amount of memory even for small input sizes. Second, the majority of their execution time is spent duplicating parts of the net. Combined, these two factors make interaction nets an unviable model of computation for complex algorithms.
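To give a flavor of how computation proceeds in this model, here is a minimal, purely illustrative sketch (not the dissertation's evaluator) of interaction-net-style reduction in Python: unary-number addition with hypothetical agents `Zero`, `Succ`, and `Add`. Each step rewrites exactly one active pair by a local rule, and because distinct active pairs never overlap, they could in principle be reduced in parallel, which is the property the dissertation exploits.

```python
# Illustrative interaction-net-style reduction (hypothetical agents, not
# the dissertation's implementation): unary addition via local rewrites.
from dataclasses import dataclass

@dataclass
class Zero: pass           # agent with no auxiliary ports

@dataclass
class Succ:                # successor agent: one auxiliary port
    pred: object

@dataclass
class Add:                 # addition agent: interacts with its left port
    left: object
    right: object

def step(net):
    """Apply one interaction rule, returning (new_net, changed)."""
    if isinstance(net, Add):
        if isinstance(net.left, Zero):   # rule: Add >< Zero  =>  right
            return net.right, True
        if isinstance(net.left, Succ):   # rule: Add >< Succ  =>  Succ(Add(..))
            return Succ(Add(net.left.pred, net.right)), True
        inner, changed = step(net.left)
        return Add(inner, net.right), changed
    if isinstance(net, Succ):
        inner, changed = step(net.pred)
        return Succ(inner), changed
    return net, False

def normalize(net):
    """Reduce until no active pair remains (here, sequentially)."""
    changed = True
    while changed:
        net, changed = step(net)
    return net

def to_int(net):
    n = 0
    while isinstance(net, Succ):
        n, net = n + 1, net.pred
    return n

two = Succ(Succ(Zero()))
three = Succ(Succ(Succ(Zero())))
print(to_int(normalize(Add(two, three))))  # → 5
```

A real evaluator also needs duplication agents to copy subnets shared by several consumers; as the evaluation notes, that duplication work is exactly where most of the execution time goes.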