Declarative programming has long been hailed as a promising approach to parallel programming because it makes it easier to reason about programs while hiding the implementation details of parallelism from the programmer. However, its advantage is also its disadvantage, since it leaves the programmer with no straightforward way to optimize programs for performance. In this paper, we introduce Coordinated Linear Meld (CLM), a concurrent forward-chaining linear logic programming language with a declarative way to coordinate the execution of parallel programs, allowing the programmer to specify arbitrary scheduling and data-partitioning policies. Our approach allows the programmer to first write a graph-based declarative program and then, optionally, to use coordination to fine-tune its parallel performance. We specify the set of coordination facts, discuss their implementation in a parallel virtual machine, and show, through examples, how they can be used to optimize parallel execution. We compare the performance of CLM programs against the original, uncoordinated Linear Meld and against several other frameworks.
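The abstract's central idea, that a scheduling policy chosen by the programmer can speed up a graph program without changing its meaning, can be illustrated outside CLM. The following is a minimal Python sketch (not CLM code; the graph, function names, and counters are hypothetical) of single-source shortest paths by edge relaxation, run under two schedules: arbitrary FIFO order, and a priority order that processes the node with the smallest tentative distance first, the kind of policy a CLM programmer might request with a coordination fact. Both schedules reach the same fixed point; only the order of work differs.

```python
import heapq
from collections import deque

# Hypothetical weighted digraph as an adjacency list.
GRAPH = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 1), ("d", 5)],
    "c": [("d", 1)],
    "d": [],
}

def sssp(source, prioritized):
    """Single-source shortest paths by repeated edge relaxation.

    prioritized=True mimics a coordination policy that always schedules
    the node with the smallest tentative distance (Dijkstra-like order);
    prioritized=False processes nodes in arbitrary FIFO order. Both
    converge to the same distances; a good priority policy can avoid
    redundant relaxations on larger graphs.
    """
    dist = {v: float("inf") for v in GRAPH}
    dist[source] = 0
    relaxations = 0  # count of edges examined, for comparing schedules
    if prioritized:
        work = [(0, source)]
        pop = lambda: heapq.heappop(work)[1]
        push = lambda v: heapq.heappush(work, (dist[v], v))
    else:
        work = deque([source])
        pop = work.popleft
        push = work.append
    while work:
        u = pop()
        for v, w in GRAPH[u]:
            relaxations += 1
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                push(v)  # re-schedule v: its distance improved
    return dist, relaxations
```

Because the program is confluent, the choice between the two schedules affects only performance, which is exactly the separation between logic and coordination that the paper advocates.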
Mon 14 Mar (displayed time zone: Belfast)
11:35 - 12:50 | Language Implementation & DSL | Main conference at Mallorca+Menorca | Chair(s): Michael D. Bond (Ohio State University)
11:35 (25m) Talk | Declarative Coordination of Graph-Based Parallel Programs | Main conference | Flavio Cruz; Ricardo Rocha (FCUP, Universidade do Porto, Portugal); Seth Copen Goldstein (Carnegie Mellon University)
12:00 (25m) Talk | Distributed Halide | Main conference
12:25 (25m) Talk | Parallel Type-checking with Haskell using Saturating LVars and Stream Generators | Main conference | Ryan R. Newton (Indiana University); Omer S. Agacan (Indiana University); Peter Fogg (edX); Sam Tobin-Hochstadt (Indiana University)