Declarative programming has been hailed as a promising approach to parallel programming because it makes it easier to reason about programs while hiding the implementation details of parallelism from the programmer. However, this advantage is also a disadvantage, since it leaves the programmer with no straightforward way to optimize programs for performance. In this paper, we introduce Coordinated Linear Meld (CLM), a concurrent forward-chaining linear logic programming language with a declarative way to coordinate the execution of parallel programs, allowing the programmer to specify arbitrary scheduling and data-partitioning policies. Our approach lets the programmer first write a graph-based declarative program and then, optionally, use coordination to fine-tune its parallel performance. We specify the set of coordination facts, discuss their implementation in a parallel virtual machine, and show through examples how they can be used to optimize parallel execution. Finally, we compare the performance of CLM programs against the original uncoordinated Linear Meld and several other frameworks.
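As an illustration of the coordination facts mentioned in the abstract, the sketch below shows how a single-source shortest-paths rule might be annotated with a set-priority fact so that nodes holding smaller tentative distances are scheduled first. This is a hedged approximation based on the Linear Meld family of languages: the fact name set-priority, the priority-ordering convention, and the exact rule syntax are assumptions, not verbatim CLM code from the paper.

    // Approximate Linear-Meld-style syntax (assumed, not verbatim CLM).
    type route edge(node, node, int).    // persistent graph edges with weights
    type linear shortest(node, int).     // best distance found so far for a node
    type linear relax(node, int).        // candidate distance awaiting processing

    // A better candidate replaces the current distance, propagates new
    // candidates to the neighbors, and adjusts the node's scheduling priority
    // (assuming the scheduler is configured so that smaller distances run first).
    shortest(A, D1), D1 > D2, relax(A, D2)
       -o shortest(A, D2),
          {B, W | !edge(A, B, W) -o relax(B, D2 + W)},
          set-priority(A, float(D2)).

    // A worse candidate is simply discarded.
    shortest(A, D1), D1 <= D2, relax(A, D2)
       -o shortest(A, D1).

Without the set-priority annotation the program remains correct; the coordination fact only changes the order in which the runtime picks nodes to process, which is exactly the kind of optional, performance-only tuning the abstract describes.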
Mon 14 Mar. Times are displayed in Greenwich Mean Time (Belfast).
11:35 - 12:50: Language Implementation & DSL (Main conference), at Mallorca+Menorca. Chair(s): Michael D. Bond (Ohio State University)
11:35 - 12:00 Talk: Declarative Coordination of Graph-Based Parallel Programs (Main conference). Flavio Cruz, Ricardo Rocha (FCUP, Universidade do Porto, Portugal), Seth Copen Goldstein (Carnegie Mellon University).
12:00 - 12:25 Talk: Distributed Halide (Main conference).
12:25 - 12:50 Talk: Parallel Type-checking with Haskell using Saturating LVars and Stream Generators (Main conference). Ryan R. Newton (Indiana University), Omer S. Agacan (Indiana University), Peter Fogg (edX), Sam Tobin-Hochstadt (Indiana University).