Work Stealing for Interactive Services to Meet Target Latency
Interactive web services increasingly drive critical business workloads such as search, advertising, games, shopping, and finance. Whereas optimization of parallel programs and distributed server systems has historically focused on average latency and throughput, the primary metric for interactive applications is instead consistent responsiveness, i.e., minimizing the number of requests that miss a target latency. This paper is the first to show how to generalize work stealing, which is traditionally used to minimize the makespan of a single parallel job, to optimize for a target latency in interactive services with multiple parallel requests.
We design a new adaptive work-stealing policy, called tail-control, that reduces the number of requests that miss a target latency. It uses instantaneous request progress, system load, and a target latency to choose when to parallelize requests with stealing, when to admit new requests, and when to limit the parallelism of large requests. We implement this approach in the Intel Threading Building Blocks (TBB) library and evaluate it on real-world and synthetic workloads. The tail-control policy substantially reduces the number of requests exceeding the desired target latency and delivers up to 58% relative improvement over various baseline policies. This generalization of work stealing for multiple requests effectively optimizes the number of requests that complete within a target latency, a key metric for interactive services.
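To make the decision structure of such a policy concrete, here is a minimal sketch of a tail-control-style parallelism decision. All names, signatures, and thresholds below are illustrative assumptions, not the paper's actual API: the idea is only that, under load, requests identified as large (by work already consumed) are serialized so that the many small requests can still meet the target latency.

```python
# Hypothetical sketch of a tail-control-style parallelism decision.
# All names and thresholds are illustrative assumptions, not the paper's API.
from dataclasses import dataclass

@dataclass
class Request:
    work_done: float  # processing time consumed so far (ms)
    elapsed: float    # wall-clock time since arrival (ms)

def decide_parallelism(req: Request, active_requests: int, workers: int,
                       target_latency: float, large_threshold: float) -> int:
    """Return how many workers this request may use for stealing.

    Heuristic in the spirit of tail-control: under high load, a request
    that has already consumed a lot of work is treated as "large" and
    serialized, preserving workers for small requests near the target.
    """
    remaining_budget = target_latency - req.elapsed
    if remaining_budget <= 0:
        # Already past the target: run serially rather than hurt others.
        return 1
    overloaded = active_requests > workers
    if overloaded and req.work_done > large_threshold:
        # Identified as a large request under load: limit its parallelism.
        return 1
    # Otherwise allow stealing, bounded by a fair share of the workers.
    return max(1, workers // max(1, active_requests))
```

For example, with 8 workers and 2 active requests, a small on-track request would be granted 4 workers, while the same request under a 16-request overload with heavy accumulated work would be serialized.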
Mon 14 Mar
|16:20 - 16:45|
Yangzihao Wang, Andrew Davidson, Yuechao Pan, Yuduo Wu, Andy Riffel, and John D. Owens (University of California, Davis)
|16:45 - 17:10|
Saman Ashkiani (University of California, Davis), Andrew Davidson (University of California, Davis), Ulrich Meyer (Goethe-Universität Frankfurt am Main), and John D. Owens (University of California, Davis)
|17:10 - 17:35|
Keep Calm and React with Foresight: Strategies for Low-Latency and Energy-Efficient Elastic Data Stream Processing
|17:35 - 18:00|
Jing Li (Washington University in St. Louis), Kunal Agrawal (Washington University in St. Louis), Sameh Elnikety (Microsoft Research), Yuxiong He (Microsoft Research), I-Ting Angelina Lee (Washington University in St. Louis), Chenyang Lu (Washington University in St. Louis), and Kathryn S. McKinley (Microsoft Research)