I am a Professor at MIT's Electrical Engineering and Computer Science department, a member of CSAIL, and affiliated with LIDS and ORC. I am also an investigator in the Foundations of Data Science Institute (FODSI). [Short Bio] [Honors and Awards]

Constantinos (aka "Costis," with the accent on the 'i') Daskalakis is a Professor of Electrical Engineering and Computer Science at MIT. He holds a Diploma in Electrical and Computer Engineering from the National Technical University of Athens, and a PhD in Electrical Engineering and Computer Science from UC Berkeley. He works on Computation Theory and its interface with Game Theory, Economics, Probability Theory, Machine Learning and Statistics. He has resolved long-standing open problems about the computational complexity of Nash equilibrium, and the mathematical structure and computational complexity of multi-item auctions. His current work focuses on high-dimensional statistics and learning from biased, dependent, or strategic data. He has been honored with the ACM Doctoral Dissertation Award, the Kalai Prize from the Game Theory Society, the Sloan Fellowship in Computer Science, the SIAM Outstanding Paper Prize, the Microsoft Research Faculty Fellowship, the Simons Investigator Award, the Rolf Nevanlinna Prize from the International Mathematical Union, the ACM Grace Murray Hopper Award, and the Bodossaki Foundation Distinguished Young Scientists Award.

2020 Kanellakis Lecture at Brown

Research Interests: Theory of Computation, and its interface with Economics, Game Theory, Machine Learning, Statistics and Probability Theory
Recent Publications/Preprints
- Constantinos Daskalakis, Maxwell Fishelson, Noah Golowich: Near-Optimal No-Regret Learning in General Games.
In the 35th Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2021. Oral. arXiv
- Constantinos Daskalakis, Patroklos Stefanou, Rui Yao, Manolis Zampetakis: Efficient Truncated Linear Regression with Unknown Noise Variance.
In the 35th Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2021.
- Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Surbhi Goel, Anthimos Vardis Kandiros: Statistical Estimation from Dependent Data.
In the 38th International Conference on Machine Learning, ICML 2021. arXiv.
- Constantinos Daskalakis, Vasilis Kontonis, Christos Tzamos, Manolis Zampetakis: A Statistical Taylor Theorem and Extrapolation of Truncated Densities.
In the 34th Annual Conference on Learning Theory, COLT 2021. arXiv.
- Constantinos Daskalakis, Qinxuan Pan: Sample-Optimal and Efficient Learning of Tree Ising models.
In the 53rd ACM Symposium on Theory of Computing, STOC 2021. arXiv. twitter summary
- Constantinos Daskalakis, Stratis Skoulakis, Manolis Zampetakis: The Complexity of Constrained Min-Max Optimization.
In the 53rd ACM Symposium on Theory of Computing, STOC 2021. arXiv. twitter summary
- Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Anthimos Vardis Kandiros: Learning Ising Models from One or Multiple Samples.
In the 53rd ACM Symposium on Theory of Computing, STOC 2021. arXiv. twitter summary
- Jelena Diakonikolas, Constantinos Daskalakis, Michael I. Jordan: Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization.
In the 24th International Conference on Artificial Intelligence and Statistics (AISTATS), AISTATS 2021. arXiv
- Mucong Ding, Constantinos Daskalakis, Soheil Feizi: GANs with Conditional Independence Graphs: On Subadditivity of Probability Divergences.
In the 24th International Conference on Artificial Intelligence and Statistics (AISTATS), AISTATS 2021. Oral. arXiv
- Fotini Christia, Michael Curry, Constantinos Daskalakis, Erik Demaine, John P. Dickerson, MohammadTaghi Hajiaghayi, Adam Hesterberg, Marina Knittel, Aidan Milliff: Scalable Equilibrium Computation in Multi-agent Influence Games on Networks.
In the 34th AAAI Conference on Artificial Intelligence, AAAI 2021.
- Noah Golowich, Sarath Pattathil, Constantinos Daskalakis: Tight last-iterate convergence rates for no-regret learning in multi-player games.
In the 34th Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2020. arXiv
- Constantinos Daskalakis, Dylan Foster, Noah Golowich: Independent Policy Gradient Methods for Competitive Reinforcement Learning.
In the 34th Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2020. arXiv
- Constantinos Daskalakis, Dhruv Rohatgi, Manolis Zampetakis: Truncated Linear Regression in High Dimensions.
In the 34th Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2020. arXiv
- Constantinos Daskalakis, Dhruv Rohatgi, Manolis Zampetakis: Constant-Expansion Suffices for Compressed Sensing with Generative Priors.
In the 34th Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2020. Spotlight. arXiv
- Constantinos Daskalakis, Manolis Zampetakis: More Revenue from Two Samples via Factor Revealing SDPs.
In the 21st ACM Conference on Economics and Computation, EC 2020. arXiv
- Constantinos Daskalakis, Maxwell Fishelson, Brendan Lucier, Vasilis Syrgkanis, Santhoshini Velusamy: Simple, Credible, and Approximately-Optimal Auctions.
In the 21st ACM Conference on Economics and Computation, EC 2020. arXiv
- Johannes Brustle, Yang Cai, Constantinos Daskalakis: Multi-Item Mechanisms without Item-Independence: Learnability via Robustness.
In the 21st ACM Conference on Economics and Computation, EC 2020. arXiv
- Qi Lei, Jason D. Lee, Alexandros G. Dimakis, Constantinos P. Daskalakis: SGD Learns One-Layer Networks in WGANs.
In the 37th International Conference on Machine Learning, ICML 2020. arXiv
- Noah Golowich, Sarath Pattathil, Constantinos Daskalakis, Asuman Ozdaglar: Last Iterate is Slower than Averaged Iterate in Smooth Convex-Concave Saddle Point Problems.
In the 33rd Annual Conference on Learning Theory, COLT 2020. arXiv
- Constantinos Daskalakis, Nishanth Dikkala, Ioannis Panageas: Logistic regression with peer-group effects via inference in higher-order Ising models.
In the 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020. arXiv.
- Constantinos Daskalakis, Andrew Ilyas, Manolis Zampetakis: A Theoretical and Practical Framework for Regression and Classification from Truncated Samples.
In the 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020. arXiv.
- Constantinos Daskalakis, Themis Gouleakis, Christos Tzamos, Manolis Zampetakis: Computationally and Statistically Efficient Truncated Regression.
In the 32nd Annual Conference on Learning Theory, COLT 2019. arXiv.
- Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Siddhartha Jayanti: Generalization and learning under Dobrushin's condition.
In the 32nd Annual Conference on Learning Theory, COLT 2019. arXiv.
- Constantinos Daskalakis, Nishanth Dikkala, Ioannis Panageas: Regression from Dependent Observations.
In the 51st Annual ACM Symposium on the Theory of Computing, STOC 2019. arXiv
- Ajil Jalal, Andrew Ilyas, Constantinos Daskalakis, Alexandros G. Dimakis: The Robust Manifold Defense: Adversarial Training using Generative Models. arXiv, 2019.
- Constantinos Daskalakis, Ioannis Panageas: Last-Iterate Convergence: Zero-Sum Games and Constrained Min-Max Optimization.
In the 10th Innovations in Theoretical Computer Science (ITCS) conference, ITCS 2019. arXiv.
- Constantinos Daskalakis, Ioannis Panageas: The Limit Points of (Optimistic) Gradient Descent in Min-Max Optimization.
In the 32nd Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2018. arXiv.
- Constantinos Daskalakis, Nishanth Dikkala, Siddhartha Jayanti: HOGWILD!-Gibbs can be PanAccurate.
In the 32nd Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2018. arXiv.
- Nima Anari, Constantinos Daskalakis, Wolfgang Maass, Christos Papadimitriou, Amin Saberi, Santosh Vempala: Smoothed Analysis of Discrete Tensor Decomposition and Assemblies of Neurons.
In the 32nd Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2018. arXiv.
- Jayadev Acharya, Arnab Bhattacharyya, Constantinos Daskalakis, Saravanan Kandasamy: Learning and Testing Causal Models with Interventions.
In the 32nd Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2018. arXiv.
- Constantinos Daskalakis, Themis Gouleakis, Christos Tzamos, Manolis Zampetakis: Efficient Statistics, in High Dimensions, from Truncated Samples.
In the 59th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2018. arXiv.
- Shipra Agrawal, Constantinos Daskalakis, Vahab Mirrokni, Balasubramanian Sivan: Robust Repeated Auctions under Heterogeneous Buyer Behavior.
In the 19th ACM conference on Economics and Computation, EC 2018. arXiv.
- Constantinos Daskalakis, Nishanth Dikkala, Nick Gravin: Testing Symmetric Markov Chains from a Single Trajectory.
In the 31st Annual Conference on Learning Theory, COLT 2018. arXiv.
- Constantinos Daskalakis, Christos Tzamos, Manolis Zampetakis: Bootstrapping EM via EM and Convergence Analysis in the Naive Bayes Model.
In the 21st International Conference on Artificial Intelligence and Statistics, AISTATS 2018.
- Constantinos Daskalakis, Andrew Ilyas, Vasilis Syrgkanis, Haoyang Zeng: Training GANs with Optimism.
In the 6th International Conference on Learning Representations, ICLR 2018. arXiv.
- Constantinos Daskalakis, Christos Tzamos and Manolis Zampetakis: A Converse to Banach's Fixed Point Theorem and its CLS Completeness.
In the 50th Annual ACM Symposium on the Theory of Computing, STOC 2018. arXiv
- Constantinos Daskalakis, Gautam Kamath and John Wright: Which Distribution Distances are Sublinearly Testable?
In the 29th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018. arXiv
- Constantinos Daskalakis, Nishanth Dikkala and Gautam Kamath: Testing Ising Models.
In the 29th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018. arXiv
Journal version in IEEE Transactions on Information Theory, 65(11): 6829-6852, 2019. ieee
- Constantinos Daskalakis, Nishanth Dikkala and Gautam Kamath: Concentration of Multilinear Functions of the Ising Model with Applications to Network Data.
In the 31st Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2017. arXiv
- Yang Cai and Constantinos Daskalakis: Learning Multi-Item Auctions with (or without) Samples.
In the 58th IEEE Symposium on Foundations of Computer Science (FOCS), FOCS 2017. arxiv
- Constantinos Daskalakis and Yasushi Kawase: Optimal Stopping Rules for Sequential Hypothesis Testing.
In the 25th Annual European Symposium on Algorithms (ESA), ESA 2017. pdf
- Bryan Cai, Constantinos Daskalakis and Gautam Kamath: Priv'IT: Private and Sample Efficient Identity Testing.
In the 34th International Conference on Machine Learning, ICML 2017. arXiv
- Constantinos Daskalakis, Christos Tzamos and Manolis Zampetakis: Ten Steps of EM Suffice for Mixtures of Two Gaussians.
In the 30th Annual Conference on Learning Theory, COLT 2017. Preliminary version presented at NeurIPS 2016 Workshop on Non-Convex Optimization for Machine Learning. arXiv
- Constantinos Daskalakis and Qinxuan Pan: Square Hellinger Subadditivity for Bayesian Networks and its Applications to Identity Testing.
In the 30th Annual Conference on Learning Theory, COLT 2017. arXiv
Selected Publications
Equilibrium Complexity:
- Constantinos Daskalakis, Paul W. Goldberg and Christos H. Papadimitriou: The Complexity of Computing a Nash Equilibrium.
In the 38th ACM Symposium on Theory of Computing, STOC 2006.
Journal version as SIAM Journal on Computing, 39(1), 195--259, May 2009. (Invited, special issue for STOC 2006.) pdf
Expository article in Communications of the ACM 52(2):89--97, 2009. (Invited.) pdf
- Constantinos Daskalakis: On the Complexity of Approximating a Nash Equilibrium.
In the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011.
ACM Transactions on Algorithms (TALG), 9(3): 23, 2013. Special Issue for SODA 2011. Invited. pdf
- Constantinos Daskalakis and Christos Papadimitriou: Approximate Nash Equilibria in Anonymous Games.
Journal of Economic Theory, 156:207--245, 2015. pdf
Journal version of papers in FOCS 2007, FOCS 2008, and STOC 2009.
- Constantinos Daskalakis, Stratis Skoulakis, Manolis Zampetakis: The Complexity of Constrained Min-Max Optimization.
In the 53rd ACM Symposium on Theory of Computing, STOC 2021. arXiv. twitter summary
Equilibrium Computation and Machine Learning
- Constantinos Daskalakis, Maxwell Fishelson, Noah Golowich: Near-Optimal No-Regret Learning in General Games.
In the 35th Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2021. Oral. arXiv
- Constantinos Daskalakis, Stratis Skoulakis, Manolis Zampetakis: The Complexity of Constrained Min-Max Optimization.
In the 53rd ACM Symposium on Theory of Computing, STOC 2021. arXiv. twitter summary
- Constantinos Daskalakis, Andrew Ilyas, Vasilis Syrgkanis, Haoyang Zeng: Training GANs with Optimism.
In the 6th International Conference on Learning Representations, ICLR 2018. arXiv.
- Noah Golowich, Sarath Pattathil, Constantinos Daskalakis: Tight last-iterate convergence rates for no-regret learning in multi-player games.
In the 34th Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2020. arXiv
Game Theory
- Constantinos Daskalakis and Qinxuan Pan: A Counter-Example to Karlin's Strong Conjecture for Fictitious Play.
In the 55th IEEE Symposium on Foundations of Computer Science, FOCS 2014. arxiv
- Yang Cai, Ozan Candogan, Constantinos Daskalakis and Christos Papadimitriou: Zero-sum Polymatrix Games: A Generalization of Minmax.
Mathematics of Operations Research, 41(2): 648-655, 2016. pdf
Truncated Statistics ∩ Machine Learning ∩ Missing Data
- Constantinos Daskalakis, Themis Gouleakis, Christos Tzamos, Manolis Zampetakis: Efficient Statistics, in High Dimensions, from Truncated Samples.
In the 59th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2018. arXiv.
- Constantinos Daskalakis, Themis Gouleakis, Christos Tzamos, Manolis Zampetakis: Computationally and Statistically Efficient Truncated Regression.
In the 32nd Annual Conference on Learning Theory, COLT 2019. arXiv.
- Constantinos Daskalakis, Dhruv Rohatgi, Manolis Zampetakis: Truncated Linear Regression in High Dimensions.
In the 34th Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2020. arXiv
Statistical Physics ∩ Machine Learning ∩ Dependent Data
- Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Siddhartha Jayanti: Generalization and learning under Dobrushin's condition.
In the 32nd Annual Conference on Learning Theory, COLT 2019. arXiv.
- Constantinos Daskalakis, Nishanth Dikkala, Ioannis Panageas: Regression from Dependent Observations.
In the 51st Annual ACM Symposium on the Theory of Computing, STOC 2019. arXiv
- Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Anthimos Vardis Kandiros: Learning Ising Models from One or Multiple Samples.
In the 53rd ACM Symposium on Theory of Computing, STOC 2021. arXiv.
Phylogenetics:
- Constantinos Daskalakis, Elchanan Mossel and Sebastien Roch: Optimal Phylogenetic Reconstruction.
In the 38th ACM Symposium on Theory of Computing, STOC 2006. arXiv
Journal version as Probability Theory and Related Fields, 149(1-2):149-189, 2011.
Coding Theory:
- Sanjeev Arora, Constantinos Daskalakis and David Steurer: Message-Passing Algorithms and Improved LP Decoding.
In the 41st ACM Symposium On Theory of Computing, STOC 2009.
Journal version as IEEE Transactions on Information Theory, 58(12):7260--7271, 2012. pdf
Probability Theory:
- Constantinos Daskalakis and Christos Papadimitriou: Sparse Covers for Sums of Indicators.
Probability Theory and Related Fields, 162(3):679-705, 2015. arXiv
- Constantinos Daskalakis, Anindya De, Gautam Kamath and Christos Tzamos: A Size-Free CLT for Poisson Multinomials and its Applications.
In the 48th ACM Symposium on Theory of Computing, STOC 2016. arXiv
- Constantinos Daskalakis, Nishanth Dikkala and Gautam Kamath: Concentration of Multilinear Functions of the Ising Model with Applications to Network Data.
In the 31st Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2017. arXiv
Mechanism Design:
- Yang Cai, Constantinos Daskalakis and Matt Weinberg: An Algorithmic Characterization of Multi-Dimensional Mechanisms.
In the 44th ACM Symposium on Theory of Computing, STOC 2012. ECCC Report.
- Yang Cai, Constantinos Daskalakis and Matt Weinberg: Understanding Incentives: Mechanism Design becomes Algorithm Design.
In the 54th IEEE Symposium on Foundations of Computer Science, FOCS 2013. arxiv
- Constantinos Daskalakis, Alan Deckelbaum and Christos Tzamos: Mechanism Design via Optimal Transport.
In the 14th ACM Conference on Electronic Commerce, EC 2013. Best Paper and Best Student Paper Award. pdf
- Constantinos Daskalakis, Alan Deckelbaum and Christos Tzamos: Strong Duality for a Multiple-Good Monopolist.
In the 16th ACM Conference on Economics and Computation (EC), EC 2015. arXiv
Journal version as Econometrica, 85(3):735-767, 2017.
- Yang Cai, Constantinos Daskalakis and Christos H. Papadimitriou: Optimum Statistical Estimation with Strategic Data Sources.
In the 28th Annual Conference on Learning Theory (COLT), COLT 2015. arXiv
- Constantinos Daskalakis, Christos Papadimitriou and Christos Tzamos: Does Information Revelation Improve Revenue?.
In the 17th ACM Conference on Economics and Computation (EC), EC 2016. Invited to Special Issue. pdf
- Constantinos Daskalakis and Vasilis Syrgkanis: Learning in Auctions: Regret is Hard, Envy is Easy.
In the 57th IEEE Symposium on Foundations of Computer Science (FOCS), FOCS 2016. arxiv
- Yang Cai and Constantinos Daskalakis: Learning Multi-Item Auctions with (or without) Samples.
In the 58th IEEE Symposium on Foundations of Computer Science (FOCS), FOCS 2017. arxiv
Property Testing ∩ Statistics:
- Jayadev Acharya, Constantinos Daskalakis and Gautam Kamath: Optimal Testing for Properties of Distributions.
In the 29th Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2015. Spotlight Paper. arXiv
- Constantinos Daskalakis and Qinxuan Pan: Square Hellinger Subadditivity for Bayesian Networks and its Applications to Identity Testing.
In the 30th Annual Conference on Learning Theory, COLT 2017. arXiv
- Bryan Cai, Constantinos Daskalakis and Gautam Kamath: Priv'IT: Private and Sample Efficient Identity Testing.
In the 34th International Conference on Machine Learning, ICML 2017. arXiv
- Constantinos Daskalakis, Gautam Kamath and John Wright: Which Distribution Distances are Sublinearly Testable?
In the 29th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018. arXiv
- Constantinos Daskalakis, Nishanth Dikkala and Gautam Kamath: Testing Ising Models.
In the 29th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018. arXiv
Journal version in IEEE Transactions on Information Theory, 65(11): 6829-6852, 2019. ieee
Other Machine Learning and Statistics:
- Constantinos Daskalakis, Ilias Diakonikolas and Rocco A. Servedio: Learning Poisson Binomial Distributions.
In the 44th ACM Symposium on Theory of Computing, STOC 2012. arXiv
Algorithmica, 72(1):316-357, 2015. Special Issue on New Theoretical Challenges in Machine Learning. Invited. arXiv
- Constantinos Daskalakis, Christos Tzamos and Manolis Zampetakis: Ten Steps of EM Suffice for Mixtures of Two Gaussians.
In the 30th Annual Conference on Learning Theory, COLT 2017. Preliminary version presented at NeurIPS 2016 Workshop on Non-Convex Optimization for Machine Learning. arXiv
- Constantinos Daskalakis, Christos Tzamos and Manolis Zampetakis: A Converse to Banach's Fixed Point Theorem and its CLS Completeness.
In the 50th Annual ACM Symposium on the Theory of Computing, STOC 2018. arXiv
- Constantinos Daskalakis, Dhruv Rohatgi, Manolis Zampetakis: Constant-Expansion Suffices for Compressed Sensing with Generative Priors.
In the 34th Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2020. Spotlight. arXiv
- Constantinos Daskalakis, Qinxuan Pan: Sample-Optimal and Efficient Learning of Tree Ising models.
In the 53rd ACM Symposium on Theory of Computing, STOC 2021. arXiv. twitter summary
Research Highlights, Slides, Videos
Quick Description and Links: I work on Computation Theory and its interface with Game Theory, Economics, Probability Theory, Machine Learning and Statistics. My work has studied Equilibrium Complexity, Mechanism Design, and the interface of Property Testing and High-Dimensional Statistics.
My current work studies Machine Learning settings that deviate from the classical, yet overly idealized, paradigm wherein multiple independent samples from the distribution of interest are available for learning. We consider settings where data is biased, dependent, or strategic.
Learn more about this work here:
Equilibrium Computation and Machine Learning
A central question in Machine Learning is whether the success of gradient descent and its variants at finding local optima of non-convex objectives in minimization problems can be replicated in min-max optimization problems, and more generally in equilibrium and multi-objective optimization problems. The question is motivated in part by adversarial training applications, such as GANs and adversarial attacks, as well as by broader multi-agent learning settings wherein agents learn, choose actions, and receive rewards in a shared environment. The challenge encountered in practice is that gradient descent and its variants have a hard time solving min-max problems whose objectives are not convex-concave: they exhibit oscillatory behavior and rarely converge to meaningful solutions.
We build bridges to the complexity of fixed-point and equilibrium computation problems to prove intractability results for min-max optimization, establishing a stark separation between minimization and min-maximization. In particular, we show that any first-order method, such as gradient descent, which accesses the objective only through its values and gradients, must make exponentially many queries to compute even an approximate local min-max equilibrium of a Lipschitz and smooth objective; and that no method at all (first-order, second-order, or otherwise) can do so in polynomial time, subject to the complexity-theoretic assumption that the classes P and PPAD are different. Indeed, we show that the problem is PPAD-complete.
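To make the oscillation phenomenon concrete, here is a minimal numerical sketch (my own illustration, not code from the papers below): on the bilinear objective f(x, y) = x·y, whose unique min-max equilibrium is the origin, simultaneous gradient descent-ascent (GDA) spirals outward, while an optimistic variant of the kind studied in the "Training GANs with Optimism" paper below converges. The step size and iteration count are arbitrary demo choices.

```python
# Bilinear toy problem f(x, y) = x * y: min over x, max over y.
# GDA diverges; optimistic GDA (a gradient step with negative momentum
# on the previous gradient) converges to the equilibrium (0, 0).

def gda(x, y, lr=0.1, steps=500):
    for _ in range(steps):
        gx, gy = y, x                      # grad_x f = y, grad_y f = x
        x, y = x - lr * gx, y + lr * gy    # simultaneous descent/ascent
    return x, y

def ogda(x, y, lr=0.1, steps=500):
    px, py = 0.0, 0.0                      # previous gradients
    for _ in range(steps):
        gx, gy = y, x
        x, y = x - lr * (2 * gx - px), y + lr * (2 * gy - py)
        px, py = gx, gy
    return x, y

print(gda(1.0, 1.0))    # far from (0, 0): the iterates spiral outward
print(ogda(1.0, 1.0))   # close to (0, 0): the last iterate converges
```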
- Constantinos Daskalakis, Stratis Skoulakis, Manolis Zampetakis: The Complexity of Constrained Min-Max Optimization.
In the 53rd ACM Symposium on Theory of Computing, STOC 2021. arXiv. twitter summary

On balance, these results establish a methodological roadblock to applying or extending Deep Learning in multi-agent settings. While postulating a complex parametric model for a single learning agent and training that model via gradient descent has been a successful framework in single-agent settings, giving several different agents complex models and having them train those models via competing gradient descent procedures is just not going to work, even in two-agent settings, and even when one compromises on the solution concept, namely shooting for approximate and local min-max equilibria. Indeed, our work advocates that progress in multi-agent learning will be unlocked via modeling and methodological insights, as well as better clarity about the type of solution that is reasonable and attainable, combining ideas from Game Theory and Economics with Machine Learning and Optimization. Consistent with that view, we identify families of multi-agent learning problems with structure wherein equilibrium learning is tractable:
- Two-Player Zero-Sum Stochastic Games:
- Constantinos Daskalakis, Dylan Foster, Noah Golowich: Independent Policy Gradient Methods for Competitive Reinforcement Learning.
In the 34th Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2020. arXiv
- Min-Max Problems with co-hypomonotone objectives:
- Jelena Diakonikolas, Constantinos Daskalakis, Michael I. Jordan: Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization.
In the 24th International Conference on Artificial Intelligence and Statistics (AISTATS), AISTATS 2021. arXiv
- Fast Equilibrium Learning in multi-player games with convex objectives:
- Constantinos Daskalakis, Maxwell Fishelson, Noah Golowich: Near-Optimal No-Regret Learning in General Games.
In the 35th Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2021. Oral. arXiv
- Last-Iterate Convergence Results for Gradient Descent, Extra-Gradient, and Optimistic Gradient Descent in min-max problems with (non)convex-(non)concave objectives:
- Noah Golowich, Sarath Pattathil, Constantinos Daskalakis: Tight last-iterate convergence rates for no-regret learning in multi-player games.
In the 34th Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2020. arXiv
- Noah Golowich, Sarath Pattathil, Constantinos Daskalakis, Asuman Ozdaglar: Last Iterate is Slower than Averaged Iterate in Smooth Convex-Concave Saddle Point Problems.
In the 33rd Annual Conference on Learning Theory, COLT 2020. arXiv
- Constantinos Daskalakis, Ioannis Panageas: Last-Iterate Convergence: Zero-Sum Games and Constrained Min-Max Optimization.
In the 10th Innovations in Theoretical Computer Science (ITCS) conference, ITCS 2019. arXiv.
- Constantinos Daskalakis, Ioannis Panageas: The Limit Points of (Optimistic) Gradient Descent in Min-Max Optimization.
In the 32nd Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2018. arXiv.
- Constantinos Daskalakis, Andrew Ilyas, Vasilis Syrgkanis, Haoyang Zeng: Training GANs with Optimism.
In the 6th International Conference on Learning Representations, ICLR 2018. arXiv.
Learning from Biased Data
Censoring and truncation are common ways in which training data end up being biased samples from the distribution of interest. They can be caused by instrument failures or saturation effects, badly designed experiments or data-collection campaigns, ethical or privacy constraints preventing the use of all available data, or societal biases that are reflected in the available data. It is well understood that training a model on censored or truncated data can give rise to a biased model, which makes incorrect predictions, or predictions that are disproportionately correct across subpopulations, and which replicates the biases in its training data. In the accompanying figure, data produced by a linear model (on the left) are truncated according to their label (on the right), making the naive linear fit incorrect. Is there a way to recover from this?
Inference from truncated and censored samples has a long history, going back to Galton, Pearson & Lee, and Fisher, who developed techniques for estimating Normal distributions from truncated samples using, respectively, curve fitting, the method of moments, and maximum likelihood estimation. The topic was further developed in Statistics and Econometrics for more general distributions and regression problems, and truncation and censoring have also received much study in Machine Learning and the field of domain adaptation. Despite several decades of work on these problems and their practical importance, however, known estimation methods are computationally inefficient or have suboptimal statistical rates, especially in high-dimensional settings, even for the most basic problems such as truncated Gaussian estimation and truncated linear regression.
We build connections to high-dimensional probability, harmonic analysis and optimization to develop computationally and statistically efficient methods for solving high-dimensional, parametric or non-parametric statistical learning tasks with truncated data, which had evaded efficient methods. These include estimation from truncated data of multi-dimensional Gaussians; linear, logistic and probit regression models; and non-parametric multi-dimensional distributions with smooth log-densities. The figure shows an application of our method to truncated linear regression, wherein we aim to estimate a linear model, as well as the noise variance, when the observations from the model are truncated according to their labels. The naive fit is shown in red, the true model in blue, and our prediction in green.
Interestingly, our results are obtained by instantiating to these learning tasks a broader learning framework, based on maximum likelihood estimation and stochastic gradient descent, which is modular and can accommodate much broader learning tasks beyond those for which theorems can be obtained, including models that are expressible via Deep Neural Networks. As such, we develop a Machine Learning framework that can exploit software and hardware optimizations as well as models developed in Deep Learning to extend its reach to the more uncharted territory of dealing with truncation and censoring phenomena in the data.
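Below is a minimal sketch of the MLE-via-SGD recipe just described, specialized to truncated linear regression with known noise variance. It is my own illustration of the idea, not the authors' released code; the truncation set (y > 0), the rejection-sampling gradient estimator, and all hyperparameters are assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma = 3, 2000, 1.0
w_true = np.array([1.0, -2.0, 0.5])

# Generate data, then keep only samples whose label survives truncation y > 0.
X = rng.normal(size=(n, d))
y = X @ w_true + sigma * rng.normal(size=n)
X, y = X[y > 0], y[y > 0]

def sgd_truncated_mle(X, y, lr=0.1, epochs=50, m=64):
    """Ascend the truncated log-likelihood. The per-sample gradient is
    (y_i - E[z | z ~ N(w.x_i, sigma^2), z > 0]) * x_i, with the conditional
    mean estimated by rejection sampling from the current model."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            z = X[i] @ w + sigma * rng.normal(size=m)
            z = z[z > 0]          # rejection-sample into the survival set
            if len(z) == 0:       # current model rarely survives here: skip
                continue
            w += (lr / len(y)) * (y[i] - z.mean()) * X[i]
    return w

print("naive least squares:", np.linalg.lstsq(X, y, rcond=None)[0])  # biased
print("truncated MLE (SGD):", sgd_truncated_mle(X, y))               # near w_true
```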
- Constantinos Daskalakis, Themis Gouleakis, Christos Tzamos, Manolis Zampetakis: Efficient Statistics, in High Dimensions, from Truncated Samples.
In the 59th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2018. arXiv.
- Constantinos Daskalakis, Themis Gouleakis, Christos Tzamos, Manolis Zampetakis: Computationally and Statistically Efficient Truncated Regression.
In the 32nd Annual Conference on Learning Theory, COLT 2019. arXiv.
- Constantinos Daskalakis, Andrew Ilyas, Manolis Zampetakis: A Theoretical and Practical Framework for Regression and Classification from Truncated Samples.
In the 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020. arXiv.
- Constantinos Daskalakis, Dhruv Rohatgi, Manolis Zampetakis: Truncated Linear Regression in High Dimensions.
In the 34th Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2020. arXiv
Learning from Dependent Data and Statistical Physics
Statistical inference and machine learning methods commonly assume access to independent observations from the phenomenon of interest. This assumption that the available data are independent samples is, of course, often too strong. In many applications, variables are observed on the nodes of a network, or on some spatial or temporal domain, and are dependent. Examples abound in financial, epidemiological, geographical, and meteorological applications, and dependencies naturally arise in social networks through peer effects, which have recently been studied in topics as diverse as criminal activity, welfare participation, school achievement, participation in retirement plans, and obesity. Estimating models that combine peer and individual effects to predict behavior has been challenging, and for many standard statistical inference tasks even consistency results are elusive. Is it possible to have a statistical learning theory for dependent data?
We develop a PAC-learning framework which extends VC dimension and other measures of learning complexity to the data dependent setting, wherein training samples and test samples are all jointly sampled from a high-dimensional distribution. This general framework accommodates weakly dependent data, where the strength of the dependencies is measured via classical notions of temperature in statistical physics.
- Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Siddhartha Jayanti: Generalization and learning under Dobrushin's condition.
In the 32nd Annual Conference on Learning Theory, COLT 2019. arXiv.

In more structured models, such as autoregressive linear models and networked logistic regression models, we have developed estimation methods that accommodate strong data dependencies, as long as the temperature is lower bounded by some constant:
- Constantinos Daskalakis, Nishanth Dikkala, Ioannis Panageas: Regression from Dependent Observations.
In the 51st Annual ACM Symposium on the Theory of Computing, STOC 2019. arXiv
- Constantinos Daskalakis, Nishanth Dikkala, Ioannis Panageas: Logistic regression with peer-group effects via inference in higher-order Ising models.
In the 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020. arXiv.

As a byproduct of our work, we show concentration and anti-concentration of measure results for functions of dependent variables, advancing the state of the art in high-dimensional probability and statistical physics on this topic. These results can be leveraged to obtain a framework for learning Ising models from one, a few, or many samples:
- Constantinos Daskalakis, Nishanth Dikkala and Gautam Kamath: Concentration of Multilinear Functions of the Ising Model with Applications to Network Data.
In the 31st Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2017. arXiv
- Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Anthimos Vardis Kandiros: Learning Ising Models from One or Multiple Samples.
In the 53rd ACM Symposium on Theory of Computing, STOC 2021. arXiv

We thus unify under one umbrella two strands of literature on estimating Ising models: (1) a renaissance of recent work in Computer Science on estimating them from multiple independent samples under minimal assumptions about the model's interaction matrix; and (2) a growing literature in Probability Theory on estimating them from one sample in restrictive settings.
In particular, we provide guarantees for one-sample estimation, which quantify the estimation error in terms of the metric entropy of a family of interaction matrices. As corollaries of our main theorem, we derive bounds when the model's interaction matrix is a (sparse) linear combination of known matrices, or it belongs to a finite set, or to a high-dimensional manifold. Our result handles multiple independent samples by viewing them as one sample from a larger model, and can be used to derive estimation bounds that are qualitatively similar to state-of-the-art in the multiple-sample literature.
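As a concrete, simplified illustration of the one-sample regime, the sketch below estimates a single interaction strength beta for an Ising model p(x) ∝ exp(beta·xᵀAx/2) with a known, normalized interaction matrix A, by maximizing the pseudolikelihood of one Gibbs-sampled configuration. This is my own toy rendition of the setting, not the paper's estimator; the random graph, the normalization keeping the interaction strength in a high-temperature regime, and the grid search are demo assumptions.

```python
import numpy as np

def gibbs_sample_ising(A, beta, sweeps=300, rng=None):
    """Draw one approximate sample x in {-1, +1}^n via Gibbs sampling."""
    rng = rng or np.random.default_rng(0)
    n = A.shape[0]
    x = rng.choice([-1.0, 1.0], size=n)
    for _ in range(sweeps):
        for i in range(n):
            field = beta * (A[i] @ x)            # local field at node i
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
            x[i] = 1.0 if rng.random() < p_plus else -1.0
    return x

def mple_beta(A, x):
    """Maximize sum_i log p(x_i | x_-i) = sum_i log sigmoid(2 b x_i (A x)_i)
    over a grid of candidate interaction strengths b."""
    fields = A @ x
    grid = np.linspace(0.0, 1.0, 201)
    logpl = lambda b: -np.sum(np.log1p(np.exp(-2.0 * b * x * fields)))
    return max(grid, key=logpl)

rng = np.random.default_rng(1)
n = 200
G = (rng.random((n, n)) < 0.05).astype(float)    # sparse random graph
A = np.triu(G, 1); A = A + A.T                   # symmetric, zero diagonal
A /= max(A.sum(axis=1).max(), 1.0)               # keep ||A||_inf <= 1
x = gibbs_sample_ising(A, beta=0.5, rng=rng)
print("one-sample estimate of beta:", mple_beta(A, x))  # rough; improves with n
```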
Equilibrium Complexity
My Ph.D. research examines whether rational, strategic agents may arrive, through their interaction, at a state in which no single one of them would be better off by changing their own strategy, given the strategies of the others. Such a state is called a Nash equilibrium, in honor of John Nash, who showed that such a state always exists. It is commonly used in Game Theory to predict the outcome arising from the interaction of strategic individuals in situations of conflict.
With Paul Goldberg and Christos Papadimitriou, I show that in complex systems Nash equilibrium can be computationally unachievable. This implies that it is not always justifiable or relevant to study the Nash equilibria of a system.
- Constantinos Daskalakis, Paul W. Goldberg and Christos H. Papadimitriou: The Complexity of Computing a Nash Equilibrium.
In the 38th ACM Symposium on Theory of Computing, STOC 2006.
Journal version as SIAM Journal on Computing, 39(1), 195--259, May 2009. (Invited, special issue for STOC 2006.) pdf
Expository article in Communications of the ACM 52(2):89--97, 2009. (Invited.) pdf
My dissertation, focusing on this work, received the 2008 ACM Doctoral Dissertation Award. With Paul Goldberg and Christos Papadimitriou, I also received the inaugural Kalai Prize from the Game Theory Society, with the following citation: "This paper made key conceptual and technical contributions in an illustrious line of work on the complexity of computing Nash equilibrium. It also highlights the necessity of constructing practical algorithms that compute equilibria efficiently on important subclasses of games." Here is a blog post from the congress by Paul. The same work later received the 2011 SIAM Outstanding Paper Prize.
In more recent work I've shown that even arriving at an approximate Nash equilibrium can be computationally intractable, and I've identified families of games with structure where (approximate) Nash equilibria are tractable (the classical tractable case, two-player zero-sum games, is sketched after the resources below).
- Constantinos Daskalakis: On the Complexity of Approximating a Nash Equilibrium.
In the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011.
ACM Transactions on Algorithms (TALG), 9(3): 23, 2013. Special Issue for SODA 2011. Invited. pdf
- Constantinos Daskalakis and Christos Papadimitriou: Approximate Nash Equilibria in Anonymous Games.
Journal of Economic Theory, 156:207--245, 2015. pdf
Journal version of papers in FOCS 2007, FOCS 2008, and STOC 2009.
- Yang Cai, Ozan Candogan, Constantinos Daskalakis and Christos Papadimitriou: Zero-sum Polymatrix Games: A Generalization of Minmax.
Mathematics of Operations Research, 41(2): 648-655, 2016. pdf
Journal version of papers in ICALP 2009 and SODA 2011.

Further Resources:
- video recording of tutorial on Complexity of Equilibria, at the Simons Institute for the Theory of Computing, August 2015
- survey article on the complexity of Nash equilibria, which appeared in a Computer Science Review special volume dedicated to Christos Papadimitriou's work
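A classical counterpoint to these hardness results, and the simplest instance of the tractable structure mentioned above: in two-player zero-sum games, a minimax (Nash) equilibrium can be computed in polynomial time by linear programming. The sketch below is the standard textbook LP, offered only as an illustration; it is not code from the papers above.

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_row_strategy(R):
    """Max-min strategy for the row player of payoff matrix R (row maximizes).
    Solves: max v  s.t.  (R^T x)_j >= v for all columns j,  x in the simplex."""
    m, n = R.shape
    c = np.zeros(m + 1); c[-1] = -1.0                       # linprog minimizes, so min -v
    A_ub = np.hstack([-R.T, np.ones((n, 1))])               # v - (R^T x)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])   # sum_i x_i = 1
    b_eq = np.ones(1)
    bounds = [(0, None)] * m + [(None, None)]               # x >= 0, v free
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]                             # (strategy, game value)

# Rock-paper-scissors: the unique equilibrium is uniform, with value 0.
R = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
print(zero_sum_row_strategy(R))   # approx ([1/3, 1/3, 1/3], 0.0)
```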
Mechanism Design
Mechanism design, sometimes called "inverse Game Theory," studies how to design incentives that promote a desired objective in the outcome of the strategic interaction between agents. Examples include designing an election protocol that ensures the election of the best candidate, designing a procedure for allocating public resources in order to maximize the social welfare from the allocation, and designing an auction for selling paintings that maximizes the auctioneer's revenue. In all these cases, the design challenge lies in the fact that the designer's objective is not aligned with that of each strategic agent: a voter wants to maximize the chance that her favorite candidate is elected, while a bidder wants to maximize his own utility. So the right incentives need to be put in place in order to incentivize the agents to behave in a manner that promotes the designer's objective, as much as this is feasible.
Mechanism design has received a lot of study in Economics since the 1960s, and has found a plethora of applications in practice such as in the design of online markets and the sale of radio spectrum to telecoms by governments, but many challenges remain. My group has focused on the challenge of multi-dimensional mechanism design, which targets mechanism design settings in which the preferences of the agents are multi-dimensional, e.g. selling multiple items in an auction, multiple radio-frequencies, etc. While maximizing social welfare in this setting can be achieved with the celebrated VCG mechanism, the understanding of other objectives such as revenue optimization is quite limited. With Yang Cai and Matt Weinberg, I provide an algorithmic framework for computing optimal mechanisms for arbitrary objectives.
- Yang Cai, Constantinos Daskalakis and Matt Weinberg: Understanding Incentives: Mechanism Design becomes Algorithm Design.
In the 54th IEEE Symposium on Foundations of Computer Science, FOCS 2013. arxiv

An application of our framework provides an algorithmic generalization of Myerson's celebrated single-item revenue-optimal auction to the multi-item setting (a minimal sketch of the single-item baseline appears at the end of this subsection). Given the auctioneer's information about the bidders, in the form of a distribution over their preferences over items and bundles of items, our algorithmic framework efficiently computes a revenue-optimal auction:
- Yang Cai, Constantinos Daskalakis and Matt Weinberg: An Algorithmic Characterization of Multi-Dimensional Mechanisms.
In the 44th ACM Symposium on Theory of Computing, STOC 2012. ECCC Report.
- Yang Cai, Constantinos Daskalakis and Matt Weinberg: Optimal Multi-Dimensional Mechanism Design: Reducing Revenue to Welfare Maximization.
In the 53rd Annual IEEE Symposium on Foundations of Computer Science (FOCS), FOCS 2012. arxiv

In work with Alan Deckelbaum and Christos Tzamos, we obtain analytical characterizations of revenue-optimal multi-item mechanisms in the single-buyer setting, using optimal transport theory. These results are contained in our Econometrica paper below, whose preliminary version received the Best Paper Award at the ACM Conference on Economics and Computation in 2013. Also below is a survey article I wrote about multi-item auctions.
- Constantinos Daskalakis, Alan Deckelbaum and Christos Tzamos: Mechanism Design via Optimal Transport.
In the 14th ACM Conference on Electronic Commerce, EC 2013. Best Paper and Best Student Paper Award. pdf
- Constantinos Daskalakis, Alan Deckelbaum and Christos Tzamos: Strong Duality for a Multiple-Good Monopolist.
In the 16th ACM Conference on Economics and Computation (EC), EC 2015.
Journal version as Econometrica, 85(3):735-767, 2017. arXiv
- Constantinos Daskalakis: Multi-Item Auctions Defying Intuition?
Newsletter of the ACM Special Interest Group on E-commerce, 14(1), July 2015. pdf
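For context on what the multi-item results above generalize, here is a minimal sketch of Myerson's single-item revenue-optimal auction in the simplest textbook case: i.i.d. bidders with values drawn from Uniform[0,1], where the virtual value is phi(v) = 2v - 1 and the optimal mechanism reduces to a second-price auction with reserve price 1/2. This toy special case is my illustration, not the papers' algorithms, which handle general distributions and multiple items.

```python
import numpy as np

def virtual_value(v):
    # phi(v) = v - (1 - F(v)) / f(v); for Uniform[0,1], phi(v) = 2v - 1.
    return 2.0 * np.asarray(v) - 1.0

def myerson_uniform_auction(bids):
    """Returns (winner index or None, payment). For Uniform[0,1] i.i.d.
    bidders this is a second-price auction with reserve 0.5."""
    bids = np.asarray(bids, dtype=float)
    if virtual_value(bids).max() < 0:     # all virtual values negative: no sale
        return None, 0.0
    winner = int(np.argmax(bids))         # highest bid = highest virtual value
    others = np.delete(bids, winner)
    # The winner pays the smallest bid with which she would still have won.
    payment = max(0.5, others.max()) if others.size else 0.5
    return winner, payment

print(myerson_uniform_auction([0.3, 0.9, 0.7]))  # (1, 0.7): competition sets price
print(myerson_uniform_auction([0.2, 0.6]))       # (1, 0.5): the reserve binds
print(myerson_uniform_auction([0.1, 0.3]))       # (None, 0.0): no sale
```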
Further resources:
- Slides on Algorithmic Bayesian Mechanism Design from an EC 2014 tutorial with Cai and Weinberg.
- video recording of tutorial on Algorithmic Mechanism Design, at the Simons Institute for the Theory of Computing
High-Dimensional Statistics & Property Testing
Statistics has traditionally focused on the asymptotic analysis of tests, as the number of samples tends to infinity. Moreover, statistical tests typically only detect certain types of deviations from the null hypothesis, or are designed to select between a null and an alternative hypothesis that are fixed distributions or are from parametric families of distributions. Motivated by applications where the sample size is small relative to the support of the distribution, which arises naturally in high-dimensional settings, the field of Property Testing has aimed at rethinking the foundations of Statistics in the small sample regime, placing emphasis on accommodating composite and non-parametric hypotheses, as well as detecting all rather than some deviations from the null.
In a NeurIPS'15 spotlight paper with Acharya and Kamath, we provide statistical tests for non-parametric properties of distributions, such as monotonicity, unimodality, log-concavity, and monotone hazard rate, in the non-asymptotic regime and with tight control of type II errors. Our tests are derived from a general testing framework we provide, based on a robust and sample-optimal goodness-of-fit test.
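To give a flavor of the statistic at the heart of such goodness-of-fit tests, here is a simplified sketch of a chi-squared-style identity test: given samples from an unknown distribution q over [k], test q = p against the alternative that q is far from p in total variation. The particular alternative, sample sizes, and the absence of a calibrated acceptance threshold are my illustrative simplifications; the paper below pins down how the sample size must scale for optimality.

```python
import numpy as np

def identity_statistic(samples, p):
    """Z = sum_i ((N_i - n p_i)^2 - N_i) / (n p_i). Under Poissonized sampling
    E[Z] = 0 when q = p; Z grows with the chi-squared distance otherwise."""
    n = len(samples)
    N = np.bincount(samples, minlength=len(p))
    return float(np.sum(((N - n * p) ** 2 - N) / (n * p)))

rng = np.random.default_rng(0)
k, n = 100, 5000
p = np.full(k, 1.0 / k)                               # null: uniform over [k]

null_samples = rng.integers(0, k, size=n)             # drawn from p itself
q = p.copy(); q[: k // 2] *= 1.3; q[k // 2 :] *= 0.7  # alternative, d_TV = 0.15
alt_samples = rng.choice(k, size=n, p=q)

print("Z under the null:       ", identity_statistic(null_samples, p))  # near 0
print("Z under the alternative:", identity_statistic(alt_samples, p))   # large
# A real test accepts q = p when Z falls below a threshold calibrated to the
# desired error probabilities.
```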
- Jayadev Acharya, Constantinos Daskalakis and Gautam Kamath: Optimal Testing for Properties of Distributions.
In the 29th Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2015. Spotlight Paper. arXiv
video recording from a talk at UT Austin, October 2015.

Here is also a survey-ish paper I wrote on non-asymptotic statistical hypothesis testing with Kamath and Wright:
- Constantinos Daskalakis, Gautam Kamath and John Wright: Which Distribution Distances are Sublinearly Testable?
In the 29th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018. arXiv

Our more recent work has focused on high-dimensional testing problems:
- Constantinos Daskalakis and Qinxuan Pan: Square Hellinger Subadditivity for Bayesian Networks and its Applications to Identity Testing.
In the 30th Annual Conference on Learning Theory, COLT 2017. arXiv
- Constantinos Daskalakis, Nishanth Dikkala and Gautam Kamath: Testing Ising Models.
In the 29th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018. arXiv
Journal version in IEEE Transactions on Information Theory, 65(11): 6829-6852, 2019. ieee

on testing causal models:
- Jayadev Acharya, Arnab Bhattacharyya, Constantinos Daskalakis, Saravanan Kandasamy: Learning and Testing Causal Models with Interventions.
In the 32nd Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2018. arXiv.

and on testing while respecting the privacy of the sample:
- Bryan Cai, Constantinos Daskalakis and Gautam Kamath: Priv'IT: Private and Sample Efficient Identity Testing.
In the 34th International Conference on Machine Learning, ICML 2017. arXiv

Here are slides from a tutorial I gave on this line of work at Greek Stochastics Theta.
Links to Selected Talks, Videos, Slides
talk recording: Three ways Machine Learning fails and what to do about them, Columbia University Distinguished Lecture in CS, September 2020
talk recording: The Complexity of Min-Max Optimization, Montreal Machine Learning and Optimization, July 2020
talk recording: Robust Learning from Censored Data, MIT-MSR Trustworthy and Robust AI Collaboration Workshop, June 2020
talk recording: Equilibria, Fixed Points, and Computational Complexity, Nevanlinna Prize Lecture, International Congress of Mathematicians, Rio de Janeiro, 1 August 2018
talk recording: Complexity and Algorithmic Game Theory I (Equilibrium Complexity) and II (Mechanism Design), at the Simons Institute for the Theory of Computing, Special Semester on Economics and Computation, August 2015
talk recording: Testing Properties of Distributions, UT Austin, October 2015
slides: Distribution Property Testing Tutorial, at Greek Stochastics Theta workshop
slides: Algorithmic Bayesian Mechanism Design, from EC 2014 tutorial with Yang Cai and Matt Weinberg.
my students
Current PhD students: Yuval Dagan, Max Fishelson, Andrew Ilyas, Noah Golowich, Vardis Kandiros
Undergraduate researchers: Enrico Micali, Dhruv Rohatgi, Patroklos Stefanou, Rui Yao
Graduated students (in order of graduation):
Yang Cai (Yale CS and Economics Assistant Professor)
Matt Weinberg (Princeton CS Assistant Professor)
Alan Deckelbaum (Renaissance Technologies)
Christos Tzamos (UW-Madison CS Assistant Professor)
Gautam Kamath (University of Waterloo CS Assistant Professor)
Nishanth Dikkala (Google SVC)
Manolis Zampetakis (Postdoc, UC Berkeley)
Postdocs:
Nick Gravin (Associate Professor, Shanghai University of Finance and Economics)
Nima Haghpanah (Assistant Professor, Penn State Economics)
Ioannis Panageas (Assistant Professor, Singapore University of Technology and Design)
some general audience articles about my work
teaching
- 6.867: Machine Learning, Fall 2020
- 6.046/18.410: Design and Analysis of Algorithms, Fall 2019
- 6.890: Learning-Augmented Algorithms, Spring 2019
- 6.853: Topics in Algorithmic Game Theory: Algorithmic Game Theory and Data Science, Spring 2019
- 6.046/18.410: Design and Analysis of Algorithms, Fall 2018
- 6.883: Science of Deep Learning: Bridging Theory and Practice, Spring 2018
- 6.853: Topics in Algorithmic Game Theory: Algorithmic Game Theory and Data Science, Spring 2017
- 6.006: Introduction to Algorithms, Spring 2017
- 6.046/18.410: Design and Analysis of Algorithms, Spring 2016
- 6.891: Games, Decision, and Computation, Spring 2015
- 6.046/18.410: Design and Analysis of Algorithms, Fall 2014
- 6.S080: Introduction to Inference, Spring 2014
- 6.891: Games, Decision, and Computation, Fall 2013
- 6.046/18.410: Design and Analysis of Algorithms, Spring 2013
- 6.006: Introduction to Algorithms, Spring 2012
- 6.853: Topics in Algorithmic Game Theory, Fall 2011
- 6.896: Probability and Computation, Spring 2011
- 6.006: Introduction to Algorithms, Fall 2010
- 6.896: Topics in Algorithmic Game Theory, Spring 2010
- 6.006: Introduction to Algorithms, Fall 2009
service & outreach
Editorial
- Editorial Board, International Journal of Game Theory (IJGT), 2012-2018
- Advisory Editor, Games and Economic Behavior (GEB)
- Special Issue Editor, Games and Economic Behavior (GEB) special issue for STOC, FOCS, SODA 2013
- Special Issue Editor, SIAM Journal on Computing (SICOMP) special issue of STOC 2015
- Conference Program Committees: SODA 2008, EC 2009, SAGT 2009, WAOA 2009, STOC 2010, ICALP 2010, EC 2011, EC 2012, EC 2013, STOC 2013, ITCS 2014, EC 2014, ITCS 2015, STOC 2015, EC 2015, EC 2016, EC 2017 (general chair), ICML 2018, COLT 2020, EC 2020, NeurIPS 2020
Organization
- Scientific Advisory Board, Simons Institute for the Theory of Computing, 2018-2020
- Organizer of semesters on Economics and Computation (Fall '15), "Causality" (Spring '22), and "Learning and Games" (Spring '22) at the Simons Institute for the Theory of Computing, UC Berkeley
- ACM EC 14, tutorial on Multi-Dimensional Mechanism Design (with SLIDES)
- FOCS 2012 Workshop on Bayesian Mechanism Design (with SLIDES)
- Greece Economic and Algorithmic Theory (GREAT) Week: part 1, part 2, and part 3 (to be clear "GREAT week" is a pun on "NYCE day")
- Cambridge Area Economics and Computation (CAEC) Day
- MIT PRIMES Program Supervisor
What a misfortune, although you are made
for fine and great works
this unjust fate of yours always
denies you encouragement and success;
that base customs should block you;
and pettiness and indifference.
And how terrible the day when you yield
(the day when you give up and yield),
and you leave on foot for Susa,
and you go to the monarch Artaxerxes
who favorably places you in his court,
and offers you satrapies and the like.
And you accept them with despair
these things that you do not want.
Your soul seeks other things, weeps for other things;
the praise of the public and the Sophists,
the hard-won and inestimable Well Done;
the Agora, the Theater, and the Laurels.
How can Artaxerxes give you these,
where will you find these in a satrapy;
and what life can you live without these.
Constantine P. Cavafy (1910).