Algorithms with Predictions


Schedule

Date Speaker Title
November 5 Nicole Megow Non-Clairvoyant Scheduling with Predictions
November 12 Vasilis Gkatzelis Learning-Augmented Mechanism Design
November 19 Antonios Antoniadis Different Benchmarks for Online Metric Algorithms with Predictions

Non-Clairvoyant Scheduling with Predictions

Nicole Megow

November 5th

Uncertainty poses a significant challenge in scheduling and planning tasks, where jobs may have unknown processing times or machines may run at unknown speeds. However, assuming a complete lack of a priori information is overly pessimistic. With the rise of machine-learning methods and data-driven applications, access to predictions about input data or algorithmic actions becomes feasible. Yet blindly trusting these predictions may lead to very poor solutions, since they come with no quality guarantees.

In this talk, we explore recent advancements in the popular framework of Algorithms with Predictions, which integrates such error-prone predictions into online algorithm design. We examine various prediction models and error measures, showcasing learning-augmented algorithms for non-clairvoyant scheduling with strong error-dependent performance guarantees. We demonstrate the potential of imperfect predictions to enhance scheduling efficiency and address uncertainty in real-world scenarios.
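To make the flavour of such algorithms concrete, below is a minimal sketch (a simplification in Python, not the specific algorithms presented in the talk) of a "preferential round-robin" style schedule for minimizing total completion time on a single machine: a parameter lam splits the machine between the job with the smallest predicted processing time and an equal round-robin share, so that accurate predictions speed up the schedule while the round-robin part bounds the damage caused by bad ones. The function name, the parameter lam, and the discretization step dt are illustrative choices, not notation from the talk.

    def preferential_round_robin(true_times, predicted_times, lam=0.8, dt=1e-3):
        """Simulate the time-shared schedule; return the total completion time."""
        remaining = list(true_times)          # unknown to the scheduler, used only to simulate
        alive = set(range(len(true_times)))   # indices of unfinished jobs
        t = 0.0
        total_completion_time = 0.0
        while alive:
            # the job that trusting the predictions would run first
            favourite = min(alive, key=lambda j: predicted_times[j])
            rr_share = (1.0 - lam) / len(alive)  # equal share for every unfinished job
            for j in list(alive):
                rate = rr_share + (lam if j == favourite else 0.0)
                remaining[j] -= rate * dt
                if remaining[j] <= 0.0:
                    alive.remove(j)
                    total_completion_time += t + dt  # job j completes in this step
            t += dt
        return total_completion_time

    # accurate predictions essentially recover shortest-job-first,
    # while reversed predictions still finish thanks to the round-robin share
    print(preferential_round_robin([1, 2, 4], [1, 2, 4]))
    print(preferential_round_robin([1, 2, 4], [4, 2, 1]))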


Learning-Augmented Mechanism Design

Vasilis Gkatzelis

November 12th

This talk will introduce the model of “learning-augmented mechanism design” (or “mechanism design with predictions”), an alternative model for the design and analysis of mechanisms in settings that involve strategic, self-interested agents. Aiming to complement the traditional approach in computer science, which analyzes the performance of algorithms on worst-case instances, recent work on “algorithms with predictions” has developed algorithms that are enhanced with machine-learned predictions regarding the optimal solution. These algorithms can use the predictions to guide their decisions, and the goal is to achieve much stronger performance guarantees when the predictions are accurate (consistency) while maintaining good worst-case guarantees even when they are very inaccurate (robustness). This talk will focus on the adaptation of this framework to mechanism design, and in particular on mechanisms that leverage such unreliable predictions to achieve improved outcomes in settings involving strategic agents.
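Concretely, for a cost-minimization setting these two guarantees are usually formalized as follows (generic notation, not taken from the talk): a mechanism M that receives an instance I together with a prediction \hat{o} is \alpha-consistent and \beta-robust if

\[
\frac{\mathrm{cost}\big(M(I,\hat{o})\big)}{\mathrm{OPT}(I)} \le \alpha \quad \text{whenever the prediction } \hat{o} \text{ is correct for } I,
\qquad
\frac{\mathrm{cost}\big(M(I,\hat{o})\big)}{\mathrm{OPT}(I)} \le \beta \quad \text{for every prediction } \hat{o},
\]

and the interesting regime is a consistency \alpha close to 1 combined with a robustness \beta that remains bounded.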


Different Benchmarks for Online Metric Algorithms with Predictions

Antonios Antoniadis

November 19th

Learning-augmented algorithms aim to combine the efficacy of machine-learning approaches in handling practical inputs with the performance guarantees offered by the traditional worst-case analysis of (usually) online algorithms. For numerous problems, machine-learning techniques can readily generate predictions about the structure of the input. Several algorithmic results in the area can be seen as meta-algorithms that dynamically "combine" a number of different algorithms. In this talk we illustrate two variations of this idea with the help of the Metrical Task System (MTS) problem, which captures online problems such as caching, k-server and convex body chasing.
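For reference, a metrical task system is specified by a metric space (S, d) of states and an initial state s_0; at each time t a cost vector c_t over the states is revealed, the algorithm moves to a state s_t of its choice, and it pays both the movement and the service cost, so that its total cost on a sequence of length T is

\[
\mathrm{cost}(\mathrm{ALG}) \;=\; \sum_{t=1}^{T} \Big( d(s_{t-1}, s_t) + c_t(s_t) \Big),
\]

compared against the optimal offline choice of states for the same sequence.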

First, we utilize results from the theory of online algorithms to develop a learning-augmented algorithm that "combines" (i) a prediction-sensitive online algorithm, which yields enhanced performance when the predictions are sufficiently accurate, and (ii) a classical online algorithm that disregards predictions. Our learning-augmented algorithm asymptotically matches the performance of the better of the two algorithms in hindsight, a standard objective in the field.
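A schematic version of this "best of two" idea is sketched below, heavily simplified: the movement cost between the two algorithms' states is abstracted into a fixed switch_cost, and the function name and the parameter eps are illustrative rather than taken from the talk. Both algorithms are simulated on the request sequence, the combiner follows one of them, and it switches only once the followed algorithm has become worse in hindsight by a factor 1 + eps, which keeps the number of switches, and hence the total switching cost, under control.

    def combine_two(costs_a, costs_b, switch_cost=1.0, eps=0.5):
        """costs_a[t], costs_b[t]: simulated cost of algorithm A / B at step t."""
        total_a = total_b = 0.0   # cumulative simulated costs
        paid = 0.0                # cost actually incurred by the combiner
        following = "A"
        for ca, cb in zip(costs_a, costs_b):
            total_a += ca
            total_b += cb
            paid += ca if following == "A" else cb
            # switch when the followed algorithm is noticeably worse in hindsight
            if following == "A" and total_a > (1 + eps) * total_b:
                following, paid = "B", paid + switch_cost
            elif following == "B" and total_b > (1 + eps) * total_a:
                following, paid = "A", paid + switch_cost
        return paid

    # toy run: A (say, prediction-based) is cheap early on, B (classical) later
    print(combine_two([0.0, 0.0, 5.0, 5.0, 5.0], [2.0, 2.0, 1.0, 1.0, 1.0]))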

Second, since the performance of different predictors may vary over time, it may be desirable to instead use as a benchmark a dynamic combination that follows different algorithms at different times. To this end, we design learning-augmented algorithms against the benchmark of the best (in hindsight) unconstrained combination of k algorithms. We show that our algorithm obtains a competitive ratio of O(k^2), and that this is best possible. Finally, we discuss how our algorithms can be adapted to access predictors in a bandit-like fashion, querying only one algorithm at a time.
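In the notation of the MTS formulation above, one natural way to write this dynamic benchmark (our notation, with s^i_t denoting the state of algorithm i after the t-th request and s^i_0 = s_0 for every i) is

\[
\mathrm{DYN} \;=\; \min_{i_1,\dots,i_T \in \{1,\dots,k\}} \sum_{t=1}^{T} \Big( d\big(s^{\,i_{t-1}}_{t-1}, s^{\,i_t}_{t}\big) + c_t\big(s^{\,i_t}_{t}\big) \Big),
\]

i.e. the cheapest way to follow some algorithm at each time step while also paying the movement cost of changing algorithms; the guarantee mentioned above then reads \mathrm{cost}(\mathrm{ALG}) \le O(k^2) \cdot \mathrm{DYN}, up to the usual additive constant.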