I am an Assistant Professor in the School of Computer Science at Georgia Tech.
International Colloquium on Automata, Languages, and Programming (ICALP), 2022, Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Methods
[c7] Sivakanth Gopi, Yin Tat Lee, Daogao Liu, Ruoqi Shen, Kevin Tian: Private Convex Optimization in General Norms.
Multicalibrated Partitions for Importance Weights. Parikshit Gopalan, Omer Reingold, Vatsal Sharan, Udi Wieder. ALT, 2022. arXiv.
COLT, 2022.
I was fortunate to work with Prof. Zhongzhi Zhang.
Roy Frostig, Rong Ge, Sham M. Kakade, Aaron Sidford.
Contact: dwoodruf (at) cs (dot) cmu (dot) edu or dpwoodru (at) gmail (dot) com. CV (updated July, 2021).
I maintain a mailing list for my graduate students and the broader Stanford community that is interested in the work of my research group.
with Yair Carmon, Arun Jambulapati and Aaron Sidford
With Michael Kapralov, Yin Tat Lee, Cameron Musco, and Christopher Musco.
I am affiliated with the Stanford Theory Group and the Stanford Operations Research Group.
Emphasis will be on providing mathematical tools for combinatorial optimization.
Stanford, CA 94305
Deeparnab Chakrabarty, Andrei Graur, Haotian Jiang, Aaron Sidford.
Abstract. We prove that deterministic first-order methods, even applied to arbitrarily smooth functions, cannot achieve convergence rates in $\epsilon$ better than $\epsilon^{-8/5}$, which is within $\epsilon^{-1/15}\log\frac{1}{\epsilon}$ of the best known rate for such methods.
I am an assistant professor in the department of Management Science and Engineering and the department of Computer Science at Stanford University.
Research interests: data streams, machine learning, numerical linear algebra, sketching, and sparse recovery.
You interact with data structures even more often than with algorithms (think Google, your mail server, and even your network routers).
Neural Information Processing Systems (NeurIPS), 2021, Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss
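Restating the quoted lower bound in display form (the symbols here are assumptions for exposition only: $\epsilon$ is the accuracy parameter and $N(\epsilon)$ the number of first-order queries; this notation is not taken from the paper):
\[
N_{\mathrm{deterministic}}(\epsilon) \;=\; \Omega\!\left(\epsilon^{-8/5}\right),
\qquad
\frac{N_{\mathrm{best\ known}}(\epsilon)}{\epsilon^{-8/5}} \;=\; O\!\left(\epsilon^{-1/15}\log\tfrac{1}{\epsilon}\right).
\]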
I am broadly interested in mathematics and theoretical computer science.
The paper, Efficient Convex Optimization Requires Superlinear Memory, was co-authored with Stanford professor Gregory Valiant as well as current Stanford student Annie Marsden and alumnus Vatsal Sharan.
"Team-convex-optimization for solving discounted and average-reward MDPs!"
[pdf] [slides]
Google Scholar; Probability on trees and .
"A short version of the conference publication under the same title."
the Operations Research group.
D Garber, E Hazan, C Jin, SM Kakade, C Musco, P Netrapalli, A Sidford.
CV; Theory Group; Data Science; CSE 535: Theory of Optimization and Continuous Algorithms.
"A low-bias low-cost estimator of subproblem solution suffices for acceleration!"
Done under the mentorship of M. Malliaris.
Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant, Efficient Convex Optimization Requires Superlinear Memory.
Aaron Sidford, Introduction to Optimization Theory; Lap Chi Lau, Convexity and Optimization; Nisheeth Vishnoi, Algorithms for .
Jan van den Brand, Yin Tat Lee, Yang P. Liu, Thatchaphol Saranurak, Aaron Sidford, Zhao Song, Di Wang: Minimum Cost Flows, MDPs, and ℓ1-Regression in Nearly Linear Time for Dense Instances.
ACM-SIAM Symposium on Discrete Algorithms (SODA), 2022, Stochastic Bias-Reduced Gradient Methods
theory and graph applications. With Yair Carmon, John C. Duchi, and Oliver Hinder. I am a fifth year Ph.D. student in Computer Science at Stanford University co-advised by Gregory Valiant and John Duchi. SODA 2023: 4667-4767. From 2016 to 2018, I also worked in
Aaron Sidford.
Simple MAP inference via low-rank relaxations.
I often do not respond to emails about applications.
2013. pdf, Fourier Transformation at a Representation, Annie Marsden.
with Yang P. Liu and Aaron Sidford.
With Bill Fefferman, Soumik Ghosh, Umesh Vazirani, and Zixin Zhou (2022).
Prof. Sidford's paper was chosen from more than 150 accepted papers at the conference.
Many of my results use fast matrix multiplication.
Eigenvalues of the Laplacian and their relationship to the connectedness of a graph. Links.
With Cameron Musco, Praneeth Netrapalli, Aaron Sidford, Shashanka Ubaru, and David P. Woodruff.
I graduated with a PhD from Princeton University in 2018.
In this talk, I will present a new algorithm for solving linear programs.
I am generally interested in algorithms and learning theory, particularly developing algorithms for machine learning with provable guarantees.
CS265/CME309: Randomized Algorithms and Probabilistic Analysis, Fall 2019.
Nima Anari, Yang P. Liu, Thuy-Duong Vuong, Maximum Flow and Minimum-Cost Flow in Almost Linear Time, FOCS 2022, Best Paper.
In submission.
with Yair Carmon, Aaron Sidford and Kevin Tian
2022 - current: Assistant Professor, Georgia Institute of Technology (Georgia Tech).
2022: Visiting researcher, Max Planck Institute for Informatics.
[i14] Yair Carmon, Arun Jambulapati, Yujia Jin, Yin Tat Lee, Daogao Liu, Aaron Sidford, Kevin Tian: ReSQueing Parallel and Private Stochastic Convex Optimization.
Journal of Machine Learning Research, 2017 (arXiv).
SODA 2023: 5068-5089.
Nearly Optimal Communication and Query Complexity of Bipartite Matching.
In Symposium on Theory of Computing (STOC 2020) (arXiv).
Constant Girth Approximation for Directed Graphs in Subquadratic Time, With Shiri Chechik, Yang P. Liu, and Omer Rotem.
Leverage Score Sampling for Faster Accelerated Regression and ERM, With Naman Agarwal, Sham Kakade, Rahul Kidambi, Yin Tat Lee, and Praneeth Netrapalli, In International Conference on Algorithmic Learning Theory (ALT 2020) (arXiv).
Near-optimal Approximate Discrete and Continuous Submodular Function Minimization, In Symposium on Discrete Algorithms (SODA 2020) (arXiv).
Fast and Space Efficient Spectral Sparsification in Dynamic Streams, With Michael Kapralov, Aida Mousavifar, Cameron Musco, Christopher Musco, Navid Nouri, and Jakab Tardos, In Conference on Neural Information Processing Systems (NeurIPS 2019).
Complexity of Highly Parallel Non-Smooth Convex Optimization, With Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, and Yuanzhi Li.
Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG.
A Direct $\tilde{O}(1/\epsilon)$ Iteration Parallel Algorithm for Optimal Transport, In Conference on Neural Information Processing Systems (NeurIPS 2019) (arXiv).
A General Framework for Efficient Symmetric Property Estimation, With Moses Charikar and Kirankumar Shiragur.
Parallel Reachability in Almost Linear Work and Square Root Depth, In Symposium on Foundations of Computer Science (FOCS 2019) (arXiv).
With Deeparnab Chakrabarty, Yin Tat Lee, Sahil Singla, and Sam Chiu-wai Wong.
Deterministic Approximation of Random Walks in Small Space, With Jack Murtagh, Omer Reingold, and Salil P. Vadhan, In International Workshop on Randomization and Computation (RANDOM 2019).
A Rank-1 Sketch for Matrix Multiplicative Weights, With Yair Carmon, John C. Duchi, and Kevin Tian, In Conference on Learning Theory (COLT 2019) (arXiv).
Near-optimal method for highly smooth convex optimization.
Efficient profile maximum likelihood for universal symmetric property estimation, In Symposium on Theory of Computing (STOC 2019) (arXiv).
Memory-sample tradeoffs for linear regression with small error.
Perron-Frobenius Theory in Nearly Linear Time: Positive Eigenvectors, M-matrices, Graph Kernels, and Other Applications, With AmirMahdi Ahmadinejad, Arun Jambulapati, and Amin Saberi, In Symposium on Discrete Algorithms (SODA 2019) (arXiv).
Exploiting Numerical Sparsity for Efficient Learning: Faster Eigenvector Computation and Regression, In Conference on Neural Information Processing Systems (NeurIPS 2018) (arXiv).
Near-Optimal Time and Sample Complexities for Solving Discounted Markov Decision Process with a Generative Model, With Mengdi Wang, Xian Wu, Lin F. Yang, and Yinyu Ye.
Coordinate Methods for Accelerating Regression and Faster Approximate Maximum Flow, In Symposium on Foundations of Computer Science (FOCS 2018).
Solving Directed Laplacian Systems in Nearly-Linear Time through Sparse LU Factorizations, With Michael B. Cohen, Jonathan A. Kelner, Rasmus Kyng, John Peebles, Richard Peng, and Anup B. Rao, In Symposium on Foundations of Computer Science (FOCS 2018) (arXiv).
Efficient Convex Optimization with Membership Oracles, In Conference on Learning Theory (COLT 2018) (arXiv).
Accelerating Stochastic Gradient Descent for Least Squares Regression, With Prateek Jain, Sham M. Kakade, Rahul Kidambi, and Praneeth Netrapalli.
Approximating Cycles in Directed Graphs: Fast Algorithms for Girth and Roundtrip Spanners.
International Conference on Machine Learning (ICML), 2022, Semi-Streaming Bipartite Matching in Fewer Passes and Optimal Space
BayLearn, 2019.
"Computing a stationary solution for multi-agent RL is hard: Indeed, CCE for simultaneous games and NE for turn-based games are both PPAD-hard."
There will be a talk every day from 16:00-18:00 CEST from July 26 to August 13.
arXiv preprint arXiv:2301.00457, 2023. arXiv.
Before attending Stanford, I graduated from MIT in May 2018.
University of Cambridge MPhil.
Full CV is available here.
Prof. Erik Demaine. TAs: Timothy Kaler, Aaron Sidford.
Data structures play a central role in modern computer science.
Aaron Sidford, Gregory Valiant, Honglin Yuan. COLT, 2022. arXiv | pdf.
The Complexity of Infinite-Horizon General-Sum Stochastic Games, With Yujia Jin, Vidya Muthukumar, Aaron Sidford, To appear in Innovations in Theoretical Computer Science (ITCS 2023) (arXiv).
Optimal and Adaptive Monteiro-Svaiter Acceleration, With Yair Carmon, Danielle Hausler, Arun Jambulapati, and Yujia Jin, To appear in Advances in Neural Information Processing Systems (NeurIPS 2022) (arXiv).
On the Efficient Implementation of High Accuracy Optimality of Profile Maximum Likelihood, With Moses Charikar, Zhihao Jiang, and Kirankumar Shiragur.
Improved Lower Bounds for Submodular Function Minimization, With Deeparnab Chakrabarty, Andrei Graur, and Haotian Jiang, In Symposium on Foundations of Computer Science (FOCS 2022) (arXiv).
RECAPP: Crafting a More Efficient Catalyst for Convex Optimization, With Yair Carmon, Arun Jambulapati, and Yujia Jin, International Conference on Machine Learning (ICML 2022) (arXiv).
Efficient Convex Optimization Requires Superlinear Memory, With Annie Marsden, Vatsal Sharan, and Gregory Valiant, Conference on Learning Theory (COLT 2022).
Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Method, Conference on Learning Theory (COLT 2022) (arXiv).
Big-Step-Little-Step: Efficient Gradient Methods for Objectives with Multiple Scales, With Jonathan A. Kelner, Annie Marsden, Vatsal Sharan, Gregory Valiant, and Honglin Yuan.
Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching, With Arun Jambulapati, Yujia Jin, and Kevin Tian, International Colloquium on Automata, Languages and Programming (ICALP 2022) (arXiv).
Fully-Dynamic Graph Sparsifiers Against an Adaptive Adversary, With Aaron Bernstein, Jan van den Brand, Maximilian Probst, Danupon Nanongkai, Thatchaphol Saranurak, and He Sun.
Faster Maxflow via Improved Dynamic Spectral Vertex Sparsifiers, With Jan van den Brand, Yu Gao, Arun Jambulapati, Yin Tat Lee, Yang P. Liu, and Richard Peng, In Symposium on Theory of Computing (STOC 2022) (arXiv).
Semi-Streaming Bipartite Matching in Fewer Passes and Optimal Space, With Sepehr Assadi, Arun Jambulapati, Yujia Jin, and Kevin Tian, In Symposium on Discrete Algorithms (SODA 2022) (arXiv).
Algorithmic trade-offs for girth approximation in undirected graphs, With Avi Kadria, Liam Roditty, Virginia Vassilevska Williams, and Uri Zwick, In Symposium on Discrete Algorithms (SODA 2022).
Computing Lewis Weights to High Precision, With Maryam Fazel, Yin Tat Lee, and Swati Padmanabhan.
With Hilal Asi, Yair Carmon, Arun Jambulapati, and Yujia Jin, In Advances in Neural Information Processing Systems (NeurIPS 2021) (arXiv).
Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss, In Conference on Learning Theory (COLT 2021) (arXiv).
The Bethe and Sinkhorn Permanents of Low Rank Matrices and Implications for Profile Maximum Likelihood, With Nima Anari, Moses Charikar, and Kirankumar Shiragur.
Towards Tight Bounds on the Sample Complexity of Average-reward MDPs, In International Conference on Machine Learning (ICML 2021) (arXiv).
Minimum cost flows, MDPs, and ℓ1-regression in nearly linear time for dense instances, With Jan van den Brand, Yin Tat Lee, Yang P. Liu, Thatchaphol Saranurak, Zhao Song, and Di Wang, In Symposium on Theory of Computing (STOC 2021) (arXiv).
Ultrasparse Ultrasparsifiers and Faster Laplacian System Solvers, In Symposium on Discrete Algorithms (SODA 2021) (arXiv).
Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration, In Innovations in Theoretical Computer Science (ITCS 2021) (arXiv).
Acceleration with a Ball Optimization Oracle, With Yair Carmon, Arun Jambulapati, Qijia Jiang, Yujia Jin, Yin Tat Lee, and Kevin Tian, In Conference on Neural Information Processing Systems (NeurIPS 2020).
Instance Based Approximations to Profile Maximum Likelihood, In Conference on Neural Information Processing Systems (NeurIPS 2020) (arXiv).
Large-Scale Methods for Distributionally Robust Optimization, With Daniel Levy*, Yair Carmon*, and John C. Duchi (* denotes equal contribution).
High-precision Estimation of Random Walks in Small Space, With AmirMahdi Ahmadinejad, Jonathan A. Kelner, Jack Murtagh, John Peebles, and Salil P. Vadhan, In Symposium on Foundations of Computer Science (FOCS 2020) (arXiv).
Bipartite Matching in Nearly-linear Time on Moderately Dense Graphs, With Jan van den Brand, Yin Tat Lee, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak, Zhao Song, and Di Wang, In Symposium on Foundations of Computer Science (FOCS 2020).
With Yair Carmon, Yujia Jin, and Kevin Tian.
Unit Capacity Maxflow in Almost $O(m^{4/3})$ Time, Invited to the special issue (arXiv before merge).
Solving Discounted Stochastic Two-Player Games with Near-Optimal Time and Sample Complexity, In International Conference on Artificial Intelligence and Statistics (AISTATS 2020) (arXiv).
Efficiently Solving MDPs with Stochastic Mirror Descent, In International Conference on Machine Learning (ICML 2020) (arXiv).
Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond, With Oliver Hinder and Nimit Sharad Sohoni, In Conference on Learning Theory (COLT 2020) (arXiv).
Solving Tall Dense Linear Programs in Nearly Linear Time, With Jan van den Brand, Yin Tat Lee, and Zhao Song, In Symposium on Theory of Computing (STOC 2020).
Algorithms, Optimization and Numerical Analysis.
However, even restarting can be a hard task here.
In Symposium on Foundations of Computer Science (FOCS 2017) (arXiv).
"Convex Until Proven Guilty": Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions, With Yair Carmon, John C. Duchi, and Oliver Hinder, In International Conference on Machine Learning (ICML 2017) (arXiv).
Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs, With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, Anup B. Rao, and Adrian Vladu, In Symposium on Theory of Computing (STOC 2017).
Subquadratic Submodular Function Minimization, With Deeparnab Chakrabarty, Yin Tat Lee, and Sam Chiu-wai Wong, In Symposium on Theory of Computing (STOC 2017) (arXiv).
Faster Algorithms for Computing the Stationary Distribution, Simulating Random Walks, and More, With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, and Adrian Vladu, In Symposium on Foundations of Computer Science (FOCS 2016) (arXiv).
With Michael B. Cohen, Yin Tat Lee, Gary L. Miller, and Jakub Pachocki, In Symposium on Theory of Computing (STOC 2016) (arXiv).
With Alina Ene, Gary L. Miller, and Jakub Pachocki.
Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm, With Prateek Jain, Chi Jin, Sham M. Kakade, and Praneeth Netrapalli, In Conference on Learning Theory (COLT 2016) (arXiv).
Principal Component Projection Without Principal Component Analysis, With Roy Frostig, Cameron Musco, and Christopher Musco, In International Conference on Machine Learning (ICML 2016) (arXiv).
Faster Eigenvector Computation via Shift-and-Invert Preconditioning, With Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, and Praneeth Netrapalli.
Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis.
We establish lower bounds on the complexity of finding $\epsilon$-stationary points of smooth, non-convex high-dimensional functions using first-order methods.
I also completed my undergraduate degree (in mathematics) at MIT.
Publications and Preprints.
Roy Frostig, Sida Wang, Percy Liang, Chris Manning. AISTATS, 2021.
Anup B. Rao.
I am a senior researcher in the Algorithms group at Microsoft Research Redmond.
Earlier publication venues include FOCS 2015, COLT 2015, ICML 2015, ITCS 2015, FOCS 2013, STOC 2013, a book chapter in Building Bridges II: Mathematics of Laszlo Lovasz (2020), and the Journal of Machine Learning Research (2017).
"General variance reduction framework for solving saddle-point problems & Improved runtimes for matrix games." [pdf] [talk] [poster]
In particular, this work presents a sharp analysis of: (1) mini-batching, a method of averaging many stochastic gradient samples per iteration.
with Yair Carmon, Danielle Hausler, Arun Jambulapati and Aaron Sidford
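To make the mini-batching idea above concrete, here is a minimal sketch for least squares (a toy under stated assumptions: the function name, step size, and batch size are illustrative and not taken from the paper); each step averages batch_size per-sample gradients before updating.

import numpy as np

def minibatch_sgd(A, y, batch_size=32, steps=500, lr=0.1, seed=0):
    # Least squares via SGD; each step averages `batch_size` sample gradients.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(steps):
        idx = rng.integers(0, n, size=batch_size)      # draw a mini-batch
        residual = A[idx] @ x - y[idx]
        grad = A[idx].T @ residual / batch_size        # averaged stochastic gradient
        x -= lr * grad
    return x

# Tiny usage example on synthetic data.
rng = np.random.default_rng(1)
A = rng.normal(size=(1000, 5))
x_true = rng.normal(size=5)
y = A @ x_true + 0.01 * rng.normal(size=1000)
print(np.linalg.norm(minibatch_sgd(A, y) - x_true))    # small recovery error

Averaging over the batch reduces the variance of each gradient estimate; the trade-off between batch size and the number of steps is the kind of question the sharp analysis above addresses.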
They will share a $10,000 prize, with financial sponsorship provided by Google Inc.
Yair Carmon, Arun Jambulapati, Yujia Jin, Yin Tat Lee, Daogao Liu, Aaron Sidford, and Kevin Tian.
About Me.
in Chemistry at the University of Chicago.
Li Chen, Rasmus Kyng, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, Sushant Sachdeva.
Online Edge Coloring via Tree Recurrences and Correlation Decay, STOC 2022.
If you have been admitted to Stanford, please reach out to discuss the possibility of rotating or working together.
View Full Stanford Profile.
COLT, 2022. arXiv | code | conference pdf (alphabetical authorship).
Annie Marsden, John Duchi and Gregory Valiant, Misspecification in Prediction Problems and Robustness via Improper Learning.
Allen Liu.
I have the great privilege and good fortune of advising the following PhD students:
I have also had the great privilege and good fortune of advising the following PhD students who have now graduated: Kirankumar Shiragur (co-advised with Moses Charikar) - PhD 2022, AmirMahdi Ahmadinejad (co-advised with Amin Saberi) - PhD 2020, Yair Carmon (co-advised with John Duchi) - PhD 2020.
Conference Publications
2023: The Complexity of Infinite-Horizon General-Sum Stochastic Games. With Yujia Jin, Vidya Muthukumar, Aaron Sidford. To appear in Innovations in Theoretical Computer Science (ITCS 2023) (arXiv).
2022: Optimal and Adaptive Monteiro-Svaiter Acceleration. With Yair Carmon,
small tool to obtain upper bounds of such algebraic algorithms.
arXiv | conference pdf (alphabetical authorship), Jonathan Kelner, Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant, Honglin Yuan, Big-Step-Little-Step: Gradient Methods for Objectives with Multiple Scales.
Authors: Michael B. Cohen, Jonathan Kelner, Rasmus Kyng, John Peebles, Richard Peng, Anup B. Rao, Aaron Sidford.
Abstract: We show how to solve directed Laplacian systems in nearly-linear time.
Discrete Mathematics and Algorithms: An Introduction to Combinatorial Optimization: I used these notes to accompany the course Discrete Mathematics and Algorithms.
With Prateek Jain, Sham M. Kakade, Rahul Kidambi, and Praneeth Netrapalli.
Honorable Mention for the 2015 ACM Doctoral Dissertation Award went to Aaron Sidford of the Massachusetts Institute of Technology, and Siavash Mirarab of the University of Texas at Austin.
Conference on Learning Theory (COLT), 2021, Towards Tight Bounds on the Sample Complexity of Average-reward MDPs
In each setting we provide faster exact and approximate algorithms.
"Collection of new upper and lower sample complexity bounds for solving average-reward MDPs."
Sidford received his PhD from the department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology where he was advised by Professor Jonathan Kelner.
475 Via Ortega
To appear as a contributed talk at QIP 2023; Quantum Pseudoentanglement.
Publications by categories in reverse chronological order.
(ACM Doctoral Dissertation Award, Honorable Mention.)
pdf, Sequential Matrix Completion.
Navajo Math Circles Instructor.
Department of Electrical Engineering, Stanford University, 94305, Stanford, CA, USA
resume/cv; publications.
Email: sidford@stanford.edu.
NeurIPS Smooth Games Optimization and Machine Learning Workshop, 2019, Variance Reduction for Matrix Games
Google Scholar
The Complexity of Infinite-Horizon General-Sum Stochastic Games
The Complexity of Optimizing Single and Multi-player Games
A Near-Optimal Method for Minimizing the Maximum of N Convex Loss Functions
On the Sample Complexity for Average-reward Markov Decision Processes
Stochastic Methods for Matrix Games and its Applications
Acceleration with a Ball Optimization Oracle
Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG
Many of these algorithms are iterative and solve a sequence of smaller subproblems, whose solution can be maintained via the aforementioned dynamic algorithms.
BayLearn, 2021, On the Sample Complexity of Average-reward MDPs
Title. United States. with Sepehr Assadi, Arun Jambulapati, Aaron Sidford and Kevin Tian
Management Science & Engineering We organize regular talks and if you are interested and are Stanford affiliated, feel free to reach out (from a Stanford email). Lower bounds for finding stationary points II: first-order methods. of practical importance. Our algorithm combines the derandomized square graph operation (Rozenman and Vadhan, 2005), which we recently used for solving Laplacian systems in nearly logarithmic space (Murtagh, Reingold, Sidford, and Vadhan, 2017), with ideas from (Cheng, Cheng, Liu, Peng, and Teng, 2015), which gave an algorithm that is time-efficient (while ours is . I received my PhD from the department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology where I was advised by Professor Jonathan Kelner.
with Kevin Tian and Aaron Sidford
With Yosheb Getachew, Yujia Jin, Aaron Sidford, and Kevin Tian (2023).
Stanford University
Efficient Convex Optimization Requires Superlinear Memory.
CoRR abs/2101.05719 (2021).
to appear in Innovations in Theoretical Computer Science (ITCS), 2022, Optimal and Adaptive Monteiro-Svaiter Acceleration
Research Institute for Interdisciplinary Sciences (RIIS) at
In IEEE 54th Annual Symposium on Foundations of Computer Science (FOCS), 2013.
Given a linear program with n variables, m > n constraints, and bit complexity L, our algorithm runs in $\tilde{O}(\sqrt{n} L)$ iterations, each consisting of solving $\tilde{O}(1)$ linear systems and additional nearly linear time computation.
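To illustrate the structure described above, where each iteration reduces to solving a small number of linear systems, here is a minimal textbook-style primal-dual path-following sketch for a standard-form LP (a toy under stated assumptions: dense linear algebra, a strictly feasible starting point supplied by the caller, and illustrative parameter choices; it is not the algorithm from the talk or the papers above).

import numpy as np

def ipm_lp(A, b, c, x, y, s, sigma=0.5, tol=1e-8, max_iter=100):
    # Toy feasible-start primal-dual interior point method for
    # min c^T x  subject to  Ax = b, x >= 0.
    # Each iteration solves one (normal-equation) linear system.
    m, n = A.shape
    for _ in range(max_iter):
        mu = x @ s / n                                       # duality measure
        if mu < tol:
            break
        d = x / s                                            # diagonal scaling
        rhs = b - sigma * mu * (A @ (1.0 / s))
        dy = np.linalg.solve(A @ (d[:, None] * A.T), rhs)    # the linear system
        ds = -A.T @ dy
        dx = d * (A.T @ dy) - x + sigma * mu / s
        # Fraction-to-the-boundary step keeps x and s strictly positive.
        alpha = 1.0
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < 0
            if np.any(neg):
                alpha = min(alpha, 0.9 * np.min(-v[neg] / dv[neg]))
        x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x

# Tiny usage example with a strictly feasible starting point.
A = np.array([[1.0, 1.0, 1.0]])
c = np.array([1.0, 2.0, 3.0])
x0 = np.ones(3); b = A @ x0                 # x0 is feasible by construction
y0 = np.zeros(1); s0 = c - A.T @ y0         # s0 = c > 0
print(ipm_lp(A, b, c, x0, y0, s0))          # approaches [3, 0, 0], the cheapest vertex

The single np.linalg.solve call per iteration plays the role of the linear-system solves in the statement above; the fast methods replace it with structured solvers.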
", "About how and why coordinate (variance-reduced) methods are a good idea for exploiting (numerical) sparsity of data. he Complexity of Infinite-Horizon General-Sum Stochastic Games, Yujia Jin, Vidya Muthukumar, Aaron Sidford, Innovations in Theoretical Computer Science (ITCS 202, air Carmon, Danielle Hausler, Arun Jambulapati, and Yujia Jin, Advances in Neural Information Processing Systems (NeurIPS 2022), Moses Charikar, Zhihao Jiang, and Kirankumar Shiragur, Advances in Neural Information Processing Systems (NeurIPS 202, n Symposium on Foundations of Computer Science (FOCS 2022) (, International Conference on Machine Learning (ICML 2022) (, Conference on Learning Theory (COLT 2022) (, International Colloquium on Automata, Languages and Programming (ICALP 2022) (, In Symposium on Theory of Computing (STOC 2022) (, In Symposium on Discrete Algorithms (SODA 2022) (, In Advances in Neural Information Processing Systems (NeurIPS 2021) (, In Conference on Learning Theory (COLT 2021) (, In International Conference on Machine Learning (ICML 2021) (, In Symposium on Theory of Computing (STOC 2021) (, In Symposium on Discrete Algorithms (SODA 2021) (, In Innovations in Theoretical Computer Science (ITCS 2021) (, In Conference on Neural Information Processing Systems (NeurIPS 2020) (, In Symposium on Foundations of Computer Science (FOCS 2020) (, In International Conference on Artificial Intelligence and Statistics (AISTATS 2020) (, In International Conference on Machine Learning (ICML 2020) (, In Conference on Learning Theory (COLT 2020) (, In Symposium on Theory of Computing (STOC 2020) (, In International Conference on Algorithmic Learning Theory (ALT 2020) (, In Symposium on Discrete Algorithms (SODA 2020) (, In Conference on Neural Information Processing Systems (NeurIPS 2019) (, In Symposium on Foundations of Computer Science (FOCS 2019) (, In Conference on Learning Theory (COLT 2019) (, In Symposium on Theory of Computing (STOC 2019) (, In Symposium on Discrete Algorithms (SODA 2019) (, In Conference on Neural Information Processing Systems (NeurIPS 2018) (, In Symposium on Foundations of Computer Science (FOCS 2018) (, In Conference on Learning Theory (COLT 2018) (, In Symposium on Discrete Algorithms (SODA 2018) (, In Innovations in Theoretical Computer Science (ITCS 2018) (, In Symposium on Foundations of Computer Science (FOCS 2017) (, In International Conference on Machine Learning (ICML 2017) (, In Symposium on Theory of Computing (STOC 2017) (, In Symposium on Foundations of Computer Science (FOCS 2016) (, In Symposium on Theory of Computing (STOC 2016) (, In Conference on Learning Theory (COLT 2016) (, In International Conference on Machine Learning (ICML 2016) (, In International Conference on Machine Learning (ICML 2016). ReSQueing Parallel and Private Stochastic Convex Optimization. ", "Improved upper and lower bounds on first-order queries for solving \(\min_{x}\max_{i\in[n]}\ell_i(x)\). In Sidford's dissertation, Iterative Methods, Combinatorial .
2021 - 2022 Postdoc, Simons Institute & UC .
"Geometric median in nearly linear time." In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016.
My research is on the design and theoretical analysis of efficient algorithms and data structures.
We are excited to have Professor Sidford join the Management Science & Engineering faculty starting Fall 2016.
with Aaron Sidford
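As a simple point of reference for the geometric median problem cited above, here is the classical Weiszfeld iteration (a baseline heuristic sketch with illustrative defaults, not the nearly linear time algorithm from that paper).

import numpy as np

def weiszfeld(points, iters=200, eps=1e-12):
    # Classical Weiszfeld iteration for the geometric median:
    # repeatedly re-average the points with weights 1 / distance.
    x = points.mean(axis=0)                     # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1)
        d = np.maximum(d, eps)                  # guard against division by zero
        w = 1.0 / d
        x = (w[:, None] * points).sum(axis=0) / w.sum()
    return x

# Usage: the geometric median minimizes the sum of Euclidean distances.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
print(weiszfeld(pts))   # pulled toward the cluster near the origin, unlike the mean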
In Innovations in Theoretical Computer Science (ITCS 2018) (arXiv), Derandomization Beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space.
Optimization Algorithms: I used variants of these notes to accompany the courses Introduction to Optimization Theory and Optimization Algorithms which I created.
Stanford University.
Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems.
Here is a slightly more formal third-person biography, and here is a recent-ish CV.
Annie Marsden, Vatsal Sharan, Aaron Sidford, and Gregory Valiant, Efficient Convex Optimization Requires Superlinear Memory.
In Symposium on Foundations of Computer Science (FOCS 2020). Invited to the special issue (arXiv).
which is why I created a