Aryan Mokhtari
2020 – today
2024

- [j26] Qiujiang Jin, Tongzheng Ren, Nhat Ho, Aryan Mokhtari: Statistical and Computational Complexities of BFGS Quasi-Newton Method for Generalized Linear Models. Trans. Mach. Learn. Res. 2024 (2024)
- [c76] Ruichen Jiang, Parameswaran Raman, Shoham Sabach, Aryan Mokhtari, Mingyi Hong, Volkan Cevher: Krylov Cubic Regularized Newton: A Subspace Second-Order Method with Dimension-Free Convergence Rate. AISTATS 2024: 4411-4419
- [c75] Liam Collins, Hamed Hassani, Mahdi Soltanolkotabi, Aryan Mokhtari, Sanjay Shakkottai: Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks. ICML 2024
- [i68] Ruichen Jiang, Parameswaran Raman, Shoham Sabach, Aryan Mokhtari, Mingyi Hong, Volkan Cevher: Krylov Cubic Regularized Newton: A Subspace Second-Order Method with Dimension-Free Convergence Rate. CoRR abs/2401.03058 (2024)
- [i67] Jincheng Cao, Ruichen Jiang, Erfan Yazdandoost Hamedani, Aryan Mokhtari: An Accelerated Gradient Method for Simple Bilevel Optimization with Convex Lower-level Problem. CoRR abs/2402.08097 (2024)
- [i66] Liam Collins, Advait Parulekar, Aryan Mokhtari, Sujay Sanghavi, Sanjay Shakkottai: In-Context Learning with Transformers: Softmax Attention Adapts to Function Lipschitzness. CoRR abs/2402.11639 (2024)
- [i65] Ruichen Jiang, Michal Derezinski, Aryan Mokhtari: Stochastic Newton Proximal Extragradient Method. CoRR abs/2406.01478 (2024)
- [i64] Ruichen Jiang, Ali Kavis, Qiujiang Jin, Sujay Sanghavi, Aryan Mokhtari: Adaptive and Optimal Second-order Optimistic Methods for Minimax Optimization. CoRR abs/2406.02016 (2024)
- [i63] Devyani Maladkar, Ruichen Jiang, Aryan Mokhtari: Convergence Analysis of Adaptive Gradient Methods under Refined Smoothness and Noise Assumptions. CoRR abs/2406.04592 (2024)

2023
- [j25] Qiujiang Jin, Aryan Mokhtari: Non-asymptotic superlinear convergence of standard quasi-Newton methods. Math. Program. 200(1): 425-473 (2023)
- [j24] Mohammad Fereydounian, Aryan Mokhtari, Ramtin Pedarsani, Hamed Hassani: Provably Private Distributed Averaging Consensus: An Information-Theoretic Approach. IEEE Trans. Inf. Theory 69(11): 7317-7335 (2023)
- [j23] Isidoros Tziotis, Zebang Shen, Ramtin Pedarsani, Hamed Hassani, Aryan Mokhtari: Straggler-Resilient Personalized Federated Learning. Trans. Mach. Learn. Res. 2023 (2023)
- [c74] Ruichen Jiang, Nazanin Abolfazli, Aryan Mokhtari, Erfan Yazdandoost Hamedani: A Conditional Gradient-based Method for Simple Bilevel Optimization with Convex Lower-level Problem. AISTATS 2023: 10305-10323
- [c73] Advait Parulekar, Liam Collins, Karthikeyan Shanmugam, Aryan Mokhtari, Sanjay Shakkottai: InfoNCE Loss Provably Learns Cluster-Preserving Representations. COLT 2023: 1914-1961
- [c72] Ruichen Jiang, Qiujiang Jin, Aryan Mokhtari: Online Learning Guided Curvature Approximation: A Quasi-Newton Method with Global Non-Asymptotic Superlinear Convergence. COLT 2023: 1962-1992
- [c71] Jerry Gu, Liam Collins, Debashri Roy, Aryan Mokhtari, Sanjay Shakkottai, Kaushik R. Chowdhury: Meta-Learning for Image-Guided Millimeter-Wave Beam Selection in Unseen Environments. ICASSP 2023: 1-5
- [c70] Parikshit Hegde, Gustavo de Veciana, Aryan Mokhtari: Network Adaptive Federated Learning: Congestion and Lossy Compression. INFOCOM 2023: 1-10
- [c69] Jincheng Cao, Ruichen Jiang, Nazanin Abolfazli, Erfan Yazdandoost Hamedani, Aryan Mokhtari: Projection-Free Methods for Stochastic Simple Bilevel Optimization with Convex Lower-level Problem. NeurIPS 2023
- [c68] Ruichen Jiang, Aryan Mokhtari: Accelerated Quasi-Newton Proximal Extragradient: Faster Rate for Smooth Convex Optimization. NeurIPS 2023
- [c67] Nived Rajaraman, Devvrit, Aryan Mokhtari, Kannan Ramchandran: Greedy Pruning with Group Lasso Provably Generalizes for Matrix Sensing. NeurIPS 2023
- [i62] Parikshit Hegde, Gustavo de Veciana, Aryan Mokhtari: Network Adaptive Federated Learning: Congestion and Lossy Compression. CoRR abs/2301.04430 (2023)
- [i61] Advait Parulekar, Liam Collins, Karthikeyan Shanmugam, Aryan Mokhtari, Sanjay Shakkottai: InfoNCE Loss Provably Learns Cluster-Preserving Representations. CoRR abs/2302.07920 (2023)
- [i60] Ruichen Jiang, Qiujiang Jin, Aryan Mokhtari: Online Learning Guided Curvature Approximation: A Quasi-Newton Method with Global Non-Asymptotic Superlinear Convergence. CoRR abs/2302.08580 (2023)
- [i59] Nived Rajaraman, Devvrit, Aryan Mokhtari, Kannan Ramchandran: Greedy Pruning with Group Lasso Provably Generalizes for Matrix Sensing and Neural Networks with Quadratic Activations. CoRR abs/2303.11453 (2023)
- [i58] Ruichen Jiang, Aryan Mokhtari: Accelerated Quasi-Newton Proximal Extragradient: Faster Rate for Smooth Convex Optimization. CoRR abs/2306.02212 (2023)
- [i57] Zhan Gao, Aryan Mokhtari, Alec Koppel: Limited-Memory Greedy Quasi-Newton Method with Non-asymptotic Superlinear Convergence Rate. CoRR abs/2306.15444 (2023)
- [i56] Liam Collins, Hamed Hassani, Mahdi Soltanolkotabi, Aryan Mokhtari, Sanjay Shakkottai: Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks. CoRR abs/2307.06887 (2023)
- [i55] Jincheng Cao, Ruichen Jiang, Nazanin Abolfazli, Erfan Yazdandoost Hamedani, Aryan Mokhtari: Projection-Free Methods for Stochastic Simple Bilevel Optimization with Convex Lower-level Problem. CoRR abs/2308.07536 (2023)

2022
- [j22] Amirhossein Reisizadeh, Isidoros Tziotis, Hamed Hassani, Aryan Mokhtari, Ramtin Pedarsani: Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity. IEEE J. Sel. Areas Inf. Theory 3(2): 197-205 (2022)
- [c66] Arman Adibi, Aryan Mokhtari, Hamed Hassani: Minimax Optimization: The Case of Convex-Submodular. AISTATS 2022: 3556-3580
- [c65] Sen Lin, Ming Shi, Anish Arora, Raef Bassily, Elisa Bertino, Constantine Caramanis, Kaushik R. Chowdhury, Eylem Ekici, Atilla Eryilmaz, Stratis Ioannidis, Nan Jiang, Gauri Joshi, Jim Kurose, Yingbin Liang, Zhiqiang Lin, Jia Liu, Mingyan Liu, Tommaso Melodia, Aryan Mokhtari, Rob Nowak, Sewoong Oh, Srini Parthasarathy, Chunyi Peng, Hulya Seferoglu, Ness B. Shroff, Sanjay Shakkottai, Kannan Srinivasan, Ameet Talwalkar, Aylin Yener, Lei Ying: Leveraging Synergies Between AI and Networking to Build Next Generation Edge Networks. CIC 2022: 16-25
- [c64] Liam Collins, Aryan Mokhtari, Sanjay Shakkottai: How Does the Task Landscape Affect MAML Performance? CoLLAs 2022: 23-59
- [c63] Matthew Faw, Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari, Sanjay Shakkottai, Rachel A. Ward: The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance. COLT 2022: 313-355
- [c62] Amirhossein Reisizadeh, Isidoros Tziotis, Hamed Hassani, Aryan Mokhtari, Ramtin Pedarsani: Adaptive Node Participation for Straggler-Resilient Federated Learning. ICASSP 2022: 8762-8766
- [c61] Liam Collins, Aryan Mokhtari, Sewoong Oh, Sanjay Shakkottai: MAML and ANIL Provably Learn Representations. ICML 2022: 4238-4310
- [c60] Qiujiang Jin, Alec Koppel, Ketan Rajawat, Aryan Mokhtari: Sharpened Quasi-Newton Methods: Faster Superlinear Rate and Larger Local Convergence Neighborhood. ICML 2022: 10228-10250
- [c59] Liam Collins, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai: FedAvg with Fine Tuning: Local Updates Lead to Representation Learning. NeurIPS 2022
- [c58] Mao Ye, Ruichen Jiang, Haoxiang Wang, Dhruv Choudhary, Xiaocong Du, Bhargav Bhushanam, Aryan Mokhtari, Arun Kejariwal, Qiang Liu: Future gradient descent for adapting the temporal shifting data distribution in online recommendation systems. UAI 2022: 2256-2266
- [i54] Liam Collins, Aryan Mokhtari, Sewoong Oh, Sanjay Shakkottai: MAML and ANIL Provably Learn Representations. CoRR abs/2202.03483 (2022)
- [i53] Matthew Faw, Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari, Sanjay Shakkottai, Rachel A. Ward: The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance. CoRR abs/2202.05791 (2022)
- [i52] Mohammad Fereydounian, Aryan Mokhtari, Ramtin Pedarsani, Hamed Hassani: Provably Private Distributed Averaging Consensus: An Information-Theoretic Approach. CoRR abs/2202.09398 (2022)
- [i51] Ruichen Jiang, Aryan Mokhtari: Generalized Optimistic Methods for Convex-Concave Saddle Point Problems. CoRR abs/2202.09674 (2022)
- [i50] Liam Collins, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai: FedAvg with Fine Tuning: Local Updates Lead to Representation Learning. CoRR abs/2205.13692 (2022)
- [i49] Isidoros Tziotis, Zebang Shen, Ramtin Pedarsani, Hamed Hassani, Aryan Mokhtari: Straggler-Resilient Personalized Federated Learning. CoRR abs/2206.02078 (2022)
- [i48] Ruichen Jiang, Nazanin Abolfazli, Aryan Mokhtari, Erfan Yazdandoost Hamedani: Generalized Frank-Wolfe Algorithm for Bilevel Optimization. CoRR abs/2206.08868 (2022)
- [i47] Mao Ye, Ruichen Jiang, Haoxiang Wang, Dhruv Choudhary, Xiaocong Du, Bhargav Bhushanam, Aryan Mokhtari, Arun Kejariwal, Qiang Liu: Future Gradient Descent for Adapting the Temporal Shifting Data Distribution in Online Recommendation Systems. CoRR abs/2209.01143 (2022)

2021
- [c57] Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, Mehrdad Mahdavi: Federated Learning with Compression: Unified Analysis and Sharp Guarantees. AISTATS 2021: 2350-2358
- [c56] Liam Collins, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai: Exploiting Shared Representations for Personalized Federated Learning. ICML 2021: 2089-2099
- [c55] Alireza Fallah, Kristian Georgiev, Aryan Mokhtari, Asuman E. Ozdaglar: On the Convergence Theory of Debiased Model-Agnostic Meta-Reinforcement Learning. NeurIPS 2021: 3096-3107
- [c54] Qiujiang Jin, Aryan Mokhtari: Exploiting Local Convergence of Quasi-Newton Methods Globally: Adaptive Sample Size Approach. NeurIPS 2021: 3824-3835
- [c53] Alireza Fallah, Aryan Mokhtari, Asuman E. Ozdaglar: Generalization of Model-Agnostic Meta-Learning Algorithms: Recurring and Unseen Tasks. NeurIPS 2021: 5469-5480
- [i46] Alireza Fallah, Aryan Mokhtari, Asuman E. Ozdaglar: Generalization of Model-Agnostic Meta-Learning Algorithms: Recurring and Unseen Tasks. CoRR abs/2102.03832 (2021)
- [i45] Liam Collins, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai: Exploiting Shared Representations for Personalized Federated Learning. CoRR abs/2102.07078 (2021)
- [i44] Qiujiang Jin, Aryan Mokhtari: Exploiting Local Convergence of Quasi-Newton Methods Globally: Adaptive Sample Size Approach. CoRR abs/2106.05445 (2021)
- [i43] Arman Adibi, Aryan Mokhtari, Hamed Hassani: Minimax Optimization: The Case of Convex-Submodular. CoRR abs/2111.01262 (2021)

2020
- [j21] Aryan Mokhtari, Hamed Hassani, Amin Karbasi: Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization. J. Mach. Learn. Res. 21: 105:1-105:49 (2020)
- [j20] Aryan Mokhtari, Alec Koppel, Martin Takác, Alejandro Ribeiro: A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning. J. Mach. Learn. Res. 21: 120:1-120:51 (2020)
- [j19] Aryan Mokhtari, Alejandro Ribeiro: Stochastic Quasi-Newton Methods. Proc. IEEE 108(11): 1906-1922 (2020)
- [j18] Aryan Mokhtari, Asuman E. Ozdaglar, Sarath Pattathil: Convergence Rate of O(1/k) for Optimistic Gradient and Extragradient Methods in Smooth Convex-Concave Saddle Point Problems. SIAM J. Optim. 30(4): 3230-3251 (2020)
- [j17] Hamed Hassani, Amin Karbasi, Aryan Mokhtari, Zebang Shen: Stochastic Conditional Gradient++: (Non)Convex Minimization and Continuous Submodular Maximization. SIAM J. Optim. 30(4): 3315-3344 (2020)
- [j16] Aryan Mokhtari, Alec Koppel: High-Dimensional Nonconvex Stochastic Optimization by Doubly Stochastic Successive Convex Approximation. IEEE Trans. Signal Process. 68: 6287-6302 (2020)
- [c52] Alireza Fallah, Aryan Mokhtari, Asuman E. Ozdaglar: On the Convergence Theory of Gradient-Based Model-Agnostic Meta-Learning Algorithms. AISTATS 2020: 1082-1092
- [c51] Aryan Mokhtari, Asuman E. Ozdaglar, Sarath Pattathil: A Unified Analysis of Extra-gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach. AISTATS 2020: 1497-1507
- [c50] Saeed Soori, Konstantin Mishchenko, Aryan Mokhtari, Maryam Mehri Dehnavi, Mert Gürbüzbalaban: DAve-QN: A Distributed Averaged Quasi-Newton Method with Local Superlinear Convergence Rate. AISTATS 2020: 1965-1976
- [c49] Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ali Jadbabaie, Ramtin Pedarsani: FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization. AISTATS 2020: 2021-2031
- [c48] Majid Jahani, Xi He, Chenxin Ma, Aryan Mokhtari, Dheevatsa Mudigere, Alejandro Ribeiro, Martin Takác: Efficient Distributed Hessian Free Algorithm for Large-scale Empirical Risk Minimization via Accumulating Sample Strategy. AISTATS 2020: 2634-2644
- [c47] Mingrui Zhang, Lin Chen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi: Quantized Frank-Wolfe: Faster Optimization, Lower Communication, and Projection Free. AISTATS 2020: 3696-3706
- [c46] Mingrui Zhang, Zebang Shen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi: One Sample Stochastic Frank-Wolfe. AISTATS 2020: 4012-4023
- [c45] Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani: Quantized Decentralized Stochastic Learning over Directed Graphs. ICML 2020: 9324-9333
- [c44] Alireza Fallah, Aryan Mokhtari, Asuman E. Ozdaglar: Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach. NeurIPS 2020
- [c43] Arman Adibi, Aryan Mokhtari, Hamed Hassani: Submodular Meta-Learning. NeurIPS 2020
- [c42] Liam Collins, Aryan Mokhtari, Sanjay Shakkottai: Task-Robust Model-Agnostic Meta-Learning. NeurIPS 2020
- [c41] Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari: Second Order Optimality in Decentralized Non-Convex Optimization via Perturbed Gradient Tracking. NeurIPS 2020
- [i42] Liam Collins, Aryan Mokhtari, Sanjay Shakkottai: Distribution-Agnostic Model-Agnostic Meta-Learning. CoRR abs/2002.04766 (2020)
- [i41] Alireza Fallah, Aryan Mokhtari, Asuman E. Ozdaglar: Provably Convergent Policy Gradient Methods for Model-Agnostic Meta-Reinforcement Learning. CoRR abs/2002.05135 (2020)
- [i40] Alireza Fallah, Aryan Mokhtari, Asuman E. Ozdaglar: Personalized Federated Learning: A Meta-Learning Approach. CoRR abs/2002.07948 (2020)
- [i39] Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani: Quantized Push-sum for Gossip and Decentralized Optimization over Directed Graphs. CoRR abs/2002.09964 (2020)
- [i38] Qiujiang Jin, Aryan Mokhtari: Non-asymptotic Superlinear Convergence of Standard Quasi-Newton Methods. CoRR abs/2003.13607 (2020)
- [i37] Mohammad Fereydounian, Zebang Shen, Aryan Mokhtari, Amin Karbasi, Hamed Hassani: Safe Learning under Uncertain Objectives and Constraints. CoRR abs/2006.13326 (2020)
- [i36] Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, Mehrdad Mahdavi: Federated Learning with Compression: Unified Analysis and Sharp Guarantees. CoRR abs/2007.01154 (2020)
- [i35] Arman Adibi, Aryan Mokhtari, Hamed Hassani: Submodular Meta-Learning. CoRR abs/2007.05852 (2020)
- [i34] Liam Collins, Aryan Mokhtari, Sanjay Shakkottai: Why Does MAML Outperform ERM? An Optimization Perspective. CoRR abs/2010.14672 (2020)
- [i33] Amirhossein Reisizadeh, Isidoros Tziotis, Hamed Hassani, Aryan Mokhtari, Ramtin Pedarsani: Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity. CoRR abs/2012.14453 (2020)
2010 – 2019
2019

- [j15] Santiago Paternain, Aryan Mokhtari, Alejandro Ribeiro: A Newton-Based Method for Nonconvex Optimization with Fast Evasion of Saddle Points. SIAM J. Optim. 29(1): 343-368 (2019)
- [j14] Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani: An Exact Quantized Decentralized Gradient Descent Algorithm. IEEE Trans. Signal Process. 67(19): 4934-4947 (2019)
- [j13] Mark Eisen, Aryan Mokhtari, Alejandro Ribeiro: A Primal-Dual Quasi-Newton Method for Exact Consensus Optimization. IEEE Trans. Signal Process. 67(23): 5983-5997 (2019)
- [c40] Aryan Mokhtari, Asuman E. Ozdaglar, Ali Jadbabaie: Efficient Nonconvex Empirical Risk Minimization via Adaptive Sample Size Methods. AISTATS 2019: 2485-2494
- [c39] Jingzhao Zhang, César A. Uribe, Aryan Mokhtari, Ali Jadbabaie: Achieving Acceleration in Distributed Optimization via Direct Discretization of the Heavy-Ball ODE. ACC 2019: 3408-3413
- [c38] Amirhossein Reisizadeh, Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani: Robust and Communication-Efficient Collaborative Learning. NeurIPS 2019: 8386-8397
- [c37] Amin Karbasi, Hamed Hassani, Aryan Mokhtari, Zebang Shen: Stochastic Continuous Greedy ++: When Upper and Lower Bounds Match. NeurIPS 2019: 13066-13076
- [i32] Aryan Mokhtari, Asuman E. Ozdaglar, Sarath Pattathil: A Unified Analysis of Extra-gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach. CoRR abs/1901.08511 (2019)
- [i31] Mingrui Zhang, Lin Chen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi: Quantized Frank-Wolfe: Communication-Efficient Distributed Optimization. CoRR abs/1902.06332 (2019)
- [i30] Hamed Hassani, Amin Karbasi, Aryan Mokhtari, Zebang Shen: Stochastic Conditional Gradient++. CoRR abs/1902.06992 (2019)
- [i29] Aryan Mokhtari, Asuman E. Ozdaglar, Sarath Pattathil: Proximal Point Approximations Achieving a Convergence Rate of O(1/k) for Smooth Convex-Concave Saddle Point Problems: Optimistic Gradient and Extra-gradient Methods. CoRR abs/1906.01115 (2019)
- [i28] Amirhossein Reisizadeh, Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani: Robust and Communication-Efficient Collaborative Learning. CoRR abs/1907.10595 (2019)
- [i27] Alireza Fallah, Aryan Mokhtari, Asuman E. Ozdaglar: On the Convergence Theory of Gradient-Based Model-Agnostic Meta-Learning Algorithms. CoRR abs/1908.10400 (2019)
- [i26] Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ali Jadbabaie, Ramtin Pedarsani: FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization. CoRR abs/1909.13014 (2019)
- [i25] Mingrui Zhang, Zebang Shen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi: One Sample Stochastic Frank-Wolfe. CoRR abs/1910.04322 (2019)
- [i24] Weijie Liu, Aryan Mokhtari, Asuman E. Ozdaglar, Sarath Pattathil, Zebang Shen, Nenggan Zheng: A Decentralized Proximal Point-type Method for Saddle Point Problems. CoRR abs/1910.14380 (2019)

2018
- [j12] Aryan Mokhtari, Mert Gürbüzbalaban, Alejandro Ribeiro: Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate. SIAM J. Optim. 28(2): 1420-1447 (2018)
- [j11] Aryan Mokhtari, Mark Eisen, Alejandro Ribeiro: IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate. SIAM J. Optim. 28(2): 1670-1698 (2018)
- [c36] Mark Eisen, Aryan Mokhtari, Alejandro Ribeiro: Large Scale Empirical Risk Minimization via Truncated Adaptive Newton Method. AISTATS 2018: 1447-1455
- [c35] Aryan Mokhtari, Hamed Hassani, Amin Karbasi: Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap. AISTATS 2018: 1886-1895
- [c34] Santiago Paternain, Aryan Mokhtari, Alejandro Ribeiro: A Newton Method for Faster Navigation in Cluttered Environments. CDC 2018: 4084-4090
- [c33] Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani: Quantized Decentralized Consensus Optimization. CDC 2018: 5838-5843
- [c32] Alec Koppel, Aryan Mokhtari, Alejandro Ribeiro: Parallel Stochastic Successive Convex Approximation Method for Large-Scale Dictionary Learning. ICASSP 2018: 2771-2775
- [c31] Aryan Mokhtari, Hamed Hassani, Amin Karbasi: Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings. ICML 2018: 3613-3622
- [c30] Zebang Shen, Aryan Mokhtari, Tengfei Zhou, Peilin Zhao, Hui Qian: Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication. ICML 2018: 4631-4640
- [c29] Aryan Mokhtari, Asuman E. Ozdaglar, Ali Jadbabaie: Escaping Saddle Points in Constrained Optimization. NeurIPS 2018: 3633-3643
- [c28] Jingzhao Zhang, Aryan Mokhtari, Suvrit Sra, Ali Jadbabaie: Direct Runge-Kutta Discretization Achieves Acceleration. NeurIPS 2018: 3904-3913
- [i23] Aryan Mokhtari, Hamed Hassani, Amin Karbasi: Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization. CoRR abs/1804.09554 (2018)
- [i22] Jingzhao Zhang, Aryan Mokhtari, Suvrit Sra, Ali Jadbabaie: Direct Runge-Kutta Discretization Achieves Acceleration. CoRR abs/1805.00521 (2018)
- [i21] Zebang Shen, Aryan Mokhtari, Tengfei Zhou, Peilin Zhao, Hui Qian: Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication. CoRR abs/1805.09969 (2018)
- [i20] Amirhossein Reisizadeh, Aryan Mokhtari, S. Hamed Hassani, Ramtin Pedarsani: Quantized Decentralized Consensus Optimization. CoRR abs/1806.11536 (2018)
- [i19] Aryan Mokhtari, Asuman E. Ozdaglar, Ali Jadbabaie: Escaping Saddle Points in Constrained Optimization. CoRR abs/1809.02162 (2018)
- [i18] Majid Jahani, Xi He, Chenxin Ma, Aryan Mokhtari, Dheevatsa Mudigere, Alejandro Ribeiro, Martin Takác: Efficient Distributed Hessian Free Algorithm for Large-scale Empirical Risk Minimization via Accumulating Sample Strategy. CoRR abs/1810.11507 (2018)

2017
- [j10] Andrea Simonetto, Alec Koppel, Aryan Mokhtari, Geert Leus, Alejandro Ribeiro: Decentralized Prediction-Correction Methods for Networked Time-Varying Convex Optimization. IEEE Trans. Autom. Control. 62(11): 5724-5738 (2017)
- [j9] Aryan Mokhtari, Qing Ling, Alejandro Ribeiro: Network Newton Distributed Optimization Methods. IEEE Trans. Signal Process. 65(1): 146-161 (2017)
- [j8] Mark Eisen, Aryan Mokhtari, Alejandro Ribeiro: Decentralized Quasi-Newton Methods. IEEE Trans. Signal Process. 65(10): 2613-2628 (2017)
- [j7] Tianyi Chen, Aryan Mokhtari, Xin Wang, Alejandro Ribeiro, Georgios B. Giannakis: Stochastic Averaging for Constrained Optimization With Application to Online Resource Allocation. IEEE Trans. Signal Process. 65(12): 3078-3093 (2017)
- [c27] Mark Eisen, Aryan Mokhtari, Alejandro Ribeiro: A primal-dual Quasi-Newton method for consensus optimization. ACSSC 2017: 298-302
- [c26] Aryan Mokhtari, Amir Ingber: A Diagonal-Augmented quasi-Newton method with application to factorization machines. ICASSP 2017: 2671-2675
- [c25] Aryan Mokhtari, Mark Eisen, Alejandro Ribeiro: An incremental quasi-Newton method with a local superlinear convergence rate. ICASSP 2017: 4039-4043
- [c24] Aryan Mokhtari, Mert Gürbüzbalaban, Alejandro Ribeiro: A double incremental aggregated gradient method with linear convergence rate for large-scale optimization. ICASSP 2017: 4696-4700
- [c23] Aryan Mokhtari, Alec Koppel, Gesualdo Scutari, Alejandro Ribeiro: Large-scale nonconvex stochastic optimization by Doubly Stochastic Successive Convex approximation. ICASSP 2017: 4701-4705
- [c22] Aryan Mokhtari, Alejandro Ribeiro: First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization. NIPS 2017: 2060-2068
- [i17] Aryan Mokhtari, Mark Eisen, Alejandro Ribeiro: IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate. CoRR abs/1702.00709 (2017)
- [i16] Mark Eisen, Aryan Mokhtari, Alejandro Ribeiro: Large Scale Empirical Risk Minimization via Truncated Adaptive Newton Method. CoRR abs/1705.07957 (2017)
- [i15] Aryan Mokhtari, Alejandro Ribeiro: First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization. CoRR abs/1709.00599 (2017)
- [i14] Aryan Mokhtari, S. Hamed Hassani, Amin Karbasi: Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap. CoRR abs/1711.01660 (2017)

2016
- [j6] Aryan Mokhtari, Alejandro Ribeiro: DSA: Decentralized Double Stochastic Averaging Gradient Algorithm. J. Mach. Learn. Res. 17: 61:1-61:35 (2016)
- [j5] Aryan Mokhtari, Wei Shi, Qing Ling, Alejandro Ribeiro: A Decentralized Second-Order Method with Exact Linear Convergence Rate for Consensus Optimization. IEEE Trans. Signal Inf. Process. over Networks 2(4): 507-522 (2016)
- [j4] Andrea Simonetto, Aryan Mokhtari, Alec Koppel, Geert Leus, Alejandro Ribeiro: A Class of Prediction-Correction Methods for Time-Varying Convex Optimization. IEEE Trans. Signal Process. 64(17): 4576-4591 (2016)
- [j3] Aryan Mokhtari, Wei Shi, Qing Ling, Alejandro Ribeiro: DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers. IEEE Trans. Signal Process. 64(19): 5158-5173 (2016)
- [c21] Aryan Mokhtari, Wei Shi, Qing Ling: ESOM: Exact second-order method for consensus optimization. ACSSC 2016: 783-787
- [c20] Alec Koppel, Aryan Mokhtari, Alejandro Ribeiro: Doubly stochastic algorithms for large-scale optimization. ACSSC 2016: 1705-1709
- [c19] Aryan Mokhtari, Alec Koppel, Alejandro Ribeiro: Doubly random parallel stochastic methods for large scale learning. ACC 2016: 4847-4852
- [c18] Mark Eisen, Aryan Mokhtari, Alejandro Ribeiro: A decentralized quasi-Newton method for dual formulations of consensus optimization. CDC 2016: 1951-1958
- [c17] Aryan Mokhtari, Wei Shi, Qing Ling, Alejandro Ribeiro: A decentralized Second-Order Method for Dynamic Optimization. CDC 2016: 6036-6043
- [c16] Aryan Mokhtari, Shahin Shahrampour, Ali Jadbabaie, Alejandro Ribeiro: Online optimization in dynamic environments: Improved regret rates for strongly convex problems. CDC 2016: 7195-7201
- [c15] Andrea Simonetto, Alec Koppel, Aryan Mokhtari, Geert Leus, Alejandro Ribeiro: A Quasi-newton prediction-correction method for decentralized dynamic convex optimization. ECC 2016: 1934-1939
- [c14] Tianyi Chen, Aryan Mokhtari, Xin Wang, Alejandro Ribeiro, Georgios B. Giannakis: A data-driven approach to stochastic network optimization. GlobalSIP 2016: 510-514
- [c13] Han Zhang, Wei Shi, Aryan Mokhtari, Alejandro Ribeiro, Qing Ling: Decentralized constrained consensus optimization with primal dual splitting projection. GlobalSIP 2016: 565-569
- [c12] Mark Eisen, Aryan Mokhtari, Alejandro Ribeiro: An asynchronous Quasi-Newton method for consensus optimization. GlobalSIP 2016: 570-574
- [c11] Aryan Mokhtari, Hadi Daneshmand, Aurélien Lucchi, Thomas Hofmann, Alejandro Ribeiro: Adaptive Newton Method for Empirical Risk Minimization to Statistical Accuracy. NIPS 2016: 4062-4070
- [i13] Aryan Mokhtari, Wei Shi, Qing Ling, Alejandro Ribeiro: A Decentralized Second-Order Method with Exact Linear Convergence Rate for Consensus Optimization. CoRR abs/1602.00596 (2016)
- [i12] Andrea Simonetto, Alec Koppel, Aryan Mokhtari, Geert Leus, Alejandro Ribeiro: Decentralized Prediction-Correction Methods for Networked Time-Varying Convex Optimization. CoRR abs/1602.01716 (2016)
- [i11] Aryan Mokhtari, Shahin Shahrampour, Ali Jadbabaie, Alejandro Ribeiro: Online Optimization in Dynamic Environments: Improved Regret Rates for Strongly Convex Problems. CoRR abs/1603.04954 (2016)
- [i10] Aryan Mokhtari, Alec Koppel, Alejandro Ribeiro: Doubly Random Parallel Stochastic Methods for Large Scale Learning. CoRR abs/1603.06782 (2016)
- [i9] Mark Eisen, Aryan Mokhtari, Alejandro Ribeiro: A Decentralized Quasi-Newton Method for Dual Formulations of Consensus Optimization. CoRR abs/1603.07195 (2016)
- [i8] Aryan Mokhtari, Alejandro Ribeiro: Adaptive Newton Method for Empirical Risk Minimization to Statistical Accuracy. CoRR abs/1605.07659 (2016)
- [i7] Aryan Mokhtari, Alec Koppel, Alejandro Ribeiro: A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning. CoRR abs/1606.04991 (2016)
- [i6] Tianyi Chen, Aryan Mokhtari, Xin Wang, Alejandro Ribeiro, Georgios B. Giannakis: Stochastic Averaging for Constrained Optimization with Application to Online Resource Allocation. CoRR abs/1610.02143 (2016)
- [i5] Aryan Mokhtari, Mert Gürbüzbalaban, Alejandro Ribeiro: Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate. CoRR abs/1611.00347 (2016)

2015
- [j2] Aryan Mokhtari, Alejandro Ribeiro: Global convergence of online limited memory BFGS. J. Mach. Learn. Res. 16: 3151-3181 (2015)
- [c10] Aryan Mokhtari, Alejandro Ribeiro: Decentralized double stochastic averaging gradient. ACSSC 2015: 406-410
- [c9] Andrea Simonetto, Alec Koppel, Aryan Mokhtari, Geert Leus, Alejandro Ribeiro: Prediction-correction methods for time-varying convex optimization. ACSSC 2015: 666-670
- [c8] Andrea Simonetto, Aryan Mokhtari, Alec Koppel, Geert Leus, Alejandro Ribeiro: A decentralized prediction-correction method for networked time-varying convex optimization. CAMSAP 2015: 509-512
- [c7] Aryan Mokhtari, Wei Shi, Qing Ling, Alejandro Ribeiro: Decentralized quadratically approximated alternating direction method of multipliers. GlobalSIP 2015: 795-799
- [c6] Alec Koppel, Andrea Simonetto, Aryan Mokhtari, Geert Leus, Alejandro Ribeiro: Target tracking with dynamic convex optimization. GlobalSIP 2015: 1210-1214
- [c5] Aryan Mokhtari, Qing Ling, Alejandro Ribeiro: An approximate Newton method for distributed optimization. ICASSP 2015: 2959-2963
- [i4] Andrea Simonetto, Aryan Mokhtari, Alec Koppel, Geert Leus, Alejandro Ribeiro: A Class of Prediction-Correction Methods for Time-Varying Convex Optimization. CoRR abs/1509.05196 (2015)

2014
- [j1] Aryan Mokhtari, Alejandro Ribeiro: RES: Regularized Stochastic BFGS Algorithm. IEEE Trans. Signal Process. 62(23): 6089-6104 (2014)
- [c4] Aryan Mokhtari, Qing Ling, Alejandro Ribeiro: Network Newton. ACSSC 2014: 1621-1625
- [c3] Aryan Mokhtari, Alejandro Ribeiro: A quasi-Newton method for large scale support vector machines. ICASSP 2014: 8302-8306
- [i3] Aryan Mokhtari, Alejandro Ribeiro: RES: Regularized Stochastic BFGS Algorithm. CoRR abs/1401.7625 (2014)
- [i2] Aryan Mokhtari, Alejandro Ribeiro: A Quasi-Newton Method for Large Scale Support Vector Machines. CoRR abs/1402.4861 (2014)
- [i1] Aryan Mokhtari, Alejandro Ribeiro: Global Convergence of Online Limited Memory BFGS. CoRR abs/1409.2045 (2014)

2013
- [c2] Aryan Mokhtari, Alejandro Ribeiro: Regularized stochastic BFGS algorithm. GlobalSIP 2013: 1109-1112
- [c1] Aryan Mokhtari, Alejandro Ribeiro: A dual stochastic DFP algorithm for optimal resource allocation in wireless systems. SPAWC 2013: 21-25
last updated on 2024-10-07 21:25 CEST by the dblp team
all metadata released as open data under CC0 1.0 license