Mikhail Belkin
2020 – today
- 2024
- [c67] Amirhesam Abedsoltan, Parthe Pandit, Luis Rademacher, Mikhail Belkin: On the Nyström Approximation for Preconditioning in Kernel Machines. AISTATS 2024: 3718-3726
- [c66] James B. Simon, Dhruva Karkada, Nikhil Ghosh, Mikhail Belkin: More is Better: when Infinite Overparameterization is Optimal and Overfitting is Obligatory. ICLR 2024
- [c65] Libin Zhu, Chaoyue Liu, Adityanarayanan Radhakrishnan, Mikhail Belkin: Quadratic models for understanding catapult dynamics of neural networks. ICLR 2024
- [c64] Libin Zhu, Chaoyue Liu, Adityanarayanan Radhakrishnan, Mikhail Belkin: Catapults in SGD: spikes in the training loss and their impact on generalization through feature learning. ICML 2024
- [i66] Adityanarayanan Radhakrishnan, Mikhail Belkin, Dmitriy Drusvyatskiy: Linear Recursive Feature Machines provably recover low-rank matrices. CoRR abs/2401.04553 (2024)
- [i65] Yijiang River Dong, Hongzhou Lin, Mikhail Belkin, Ramón Huerta, Ivan Vulic: Unmemorization in Large Language Models via Self-Distillation and Deliberate Imagination. CoRR abs/2402.10052 (2024)
- [i64] Daniel Beaglehole, Peter Súkeník, Marco Mondelli, Mikhail Belkin: Average gradient outer product as a mechanism for deep neural collapse. CoRR abs/2402.13728 (2024)
- [i63] Neil Mallinar, Daniel Beaglehole, Libin Zhu, Adityanarayanan Radhakrishnan, Parthe Pandit, Mikhail Belkin: Emergence in non-neural models: grokking modular arithmetic via average gradient outer product. CoRR abs/2407.20199 (2024)
- 2023
- [j16] Daniel Beaglehole, Mikhail Belkin, Parthe Pandit: On the Inconsistency of Kernel Ridgeless Regression in Fixed Dimensions. SIAM J. Math. Data Sci. 5(4): 854-872 (2023)
- [j15] Nikhil Ghosh, Mikhail Belkin: A Universal Trade-off Between the Model Size, Test Loss, and Training Loss of Linear Predictors. SIAM J. Math. Data Sci. 5(4): 977-1004 (2023)
- [c63] Amirhesam Abedsoltan, Mikhail Belkin, Parthe Pandit: Toward Large Kernel Models. ICML 2023: 61-78
- [c62] Like Hui, Mikhail Belkin, Stephen Wright: Cut your Losses with Squentropy. ICML 2023: 14114-14131
- [c61] Arindam Banerjee, Pedro Cisneros-Velarde, Libin Zhu, Mikhail Belkin: Neural tangent kernel at initialization: linear width suffices. UAI 2023: 110-118
- [i62] Amirhesam Abedsoltan, Mikhail Belkin, Parthe Pandit: Toward Large Kernel Models. CoRR abs/2302.02605 (2023)
- [i61] Like Hui, Mikhail Belkin, Stephen Wright: Cut your Losses with Squentropy. CoRR abs/2302.03952 (2023)
- [i60] Chaoyue Liu, Amirhesam Abedsoltan, Mikhail Belkin: On Emergence of Clean-Priority Learning in Early Stopped Neural Networks. CoRR abs/2306.02533 (2023)
- [i59] Chaoyue Liu, Dmitriy Drusvyatskiy, Mikhail Belkin, Damek Davis, Yi-An Ma: Aiming towards the minimizers: fast convergence of SGD for overparametrized problems. CoRR abs/2306.02601 (2023)
- [i58] Libin Zhu, Chaoyue Liu, Adityanarayanan Radhakrishnan, Mikhail Belkin: Catapults in SGD: spikes in the training loss and their impact on generalization through feature learning. CoRR abs/2306.04815 (2023)
- [i57] Daniel Beaglehole, Adityanarayanan Radhakrishnan, Parthe Pandit, Mikhail Belkin: Mechanism of feature learning in convolutional neural networks. CoRR abs/2309.00570 (2023)
- [i56] James B. Simon, Dhruva Karkada, Nikhil Ghosh, Mikhail Belkin: More is Better in Modern Machine Learning: when Infinite Overparameterization is Optimal and Overfitting is Obligatory. CoRR abs/2311.14646 (2023)
- [i55] Amirhesam Abedsoltan, Mikhail Belkin, Parthe Pandit, Luis Rademacher: On the Nystrom Approximation for Preconditioning in Kernel Machines. CoRR abs/2312.03311 (2023)
- 2022
- [i54] Yuan Cao, Zixiang Chen, Mikhail Belkin, Quanquan Gu: Benign Overfitting in Two-layer Convolutional Neural Networks. CoRR abs/2202.06526 (2022)
- [i53] Like Hui, Mikhail Belkin, Preetum Nakkiran: Limitations of Neural Collapse for Understanding Generalization in Deep Learning. CoRR abs/2202.08384 (2022)
- [i52] Chaoyue Liu, Libin Zhu, Mikhail Belkin: Transition to Linearity of Wide Neural Networks is an Emerging Property of Assembling Weak Models. CoRR abs/2203.05104 (2022)
- [i51] Adityanarayanan Radhakrishnan, Mikhail Belkin, Caroline Uhler: Wide and Deep Neural Networks Achieve Optimality for Classification. CoRR abs/2204.14126 (2022)
- [i50] Libin Zhu, Chaoyue Liu, Mikhail Belkin: Transition to Linearity of General Neural Networks with Directed Acyclic Graph Architecture. CoRR abs/2205.11786 (2022)
- [i49] Libin Zhu, Chaoyue Liu, Adityanarayanan Radhakrishnan, Mikhail Belkin: Quadratic models for understanding neural network dynamics. CoRR abs/2205.11787 (2022)
- [i48] Daniel Beaglehole, Mikhail Belkin, Parthe Pandit: Kernel Ridgeless Regression is Inconsistent for Low Dimensions. CoRR abs/2205.13525 (2022)
- [i47] Libin Zhu, Parthe Pandit, Mikhail Belkin: A note on Linear Bottleneck networks and their Transition to Multilinearity. CoRR abs/2206.15058 (2022)
- [i46] Neil Mallinar, James B. Simon, Amirhesam Abedsoltan, Parthe Pandit, Mikhail Belkin, Preetum Nakkiran: Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting. CoRR abs/2207.06569 (2022)
- [i45] Nikhil Ghosh, Mikhail Belkin: A Universal Trade-off Between the Model Size, Test Loss, and Training Loss of Linear Predictors. CoRR abs/2207.11621 (2022)
- [i44] Arindam Banerjee, Pedro Cisneros-Velarde, Libin Zhu, Mikhail Belkin: Restricted Strong Convexity of Deep Learning Models with Smooth Activations. CoRR abs/2209.15106 (2022)
- [i43] Adityanarayanan Radhakrishnan, Daniel Beaglehole, Parthe Pandit, Mikhail Belkin: Feature learning in neural networks and kernel machines that recursively learn features. CoRR abs/2212.13881 (2022)
- 2021
- [j14] Mikhail Belkin: Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation. Acta Numer. 30: 203-248 (2021)
- [j13] Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, Mikhail Belkin, Daniel Hsu, Anant Sahai: Classification vs regression in overparameterized regimes: Does the loss function matter? J. Mach. Learn. Res. 22: 222:1-222:69 (2021)
- [c60] Like Hui, Mikhail Belkin: Evaluation of Neural Architectures trained with square Loss vs Cross-Entropy in Classification Tasks. ICLR 2021
- [c59] Yuan Cao, Quanquan Gu, Mikhail Belkin: Risk Bounds for Over-parameterized Maximum Margin Classification on Sub-Gaussian Mixtures. NeurIPS 2021: 8407-8418
- [c58] Lin Chen, Yifei Min, Mikhail Belkin, Amin Karbasi: Multiple Descent: Design Your Own Generalization Curve. NeurIPS 2021: 8898-8912
- [e1] Mikhail Belkin, Samory Kpotufe: Conference on Learning Theory, COLT 2021, 15-19 August 2021, Boulder, Colorado, USA. Proceedings of Machine Learning Research 134, PMLR 2021 [contents]
- [i42] Yuan Cao, Quanquan Gu, Mikhail Belkin: Risk Bounds for Over-parameterized Maximum Margin Classification on Sub-Gaussian Mixtures. CoRR abs/2104.13628 (2021)
- [i41] Mikhail Belkin: Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation. CoRR abs/2105.14368 (2021)
- [i40] Adityanarayanan Radhakrishnan, George Stefanakis, Mikhail Belkin, Caroline Uhler: Simple, Fast, and Flexible Framework for Matrix Completion with Infinite Width Neural Networks. CoRR abs/2108.00131 (2021)
- [i39] Adityanarayanan Radhakrishnan, Mikhail Belkin, Caroline Uhler: Local Quadratic Convergence of Stochastic Gradient Descent with Adaptive Step Size. CoRR abs/2112.14872 (2021)
- 2020
- [j12] Qichao Que, Mikhail Belkin: Back to the Future: Radial Basis Function Network Revisited. IEEE Trans. Pattern Anal. Mach. Intell. 42(8): 1856-1867 (2020)
- [j11] Adityanarayanan Radhakrishnan, Mikhail Belkin, Caroline Uhler: Overparameterized neural networks implement associative memory. Proc. Natl. Acad. Sci. USA 117(44): 27162-27170 (2020)
- [j10] Mikhail Belkin, Daniel Hsu, Ji Xu: Two Models of Double Descent for Weak Features. SIAM J. Math. Data Sci. 2(4): 1167-1180 (2020)
- [c57] Chaoyue Liu, Mikhail Belkin: Accelerating SGD with momentum for over-parameterized learning. ICLR 2020
- [c56] Chaoyue Liu, Libin Zhu, Mikhail Belkin: On the linearity of large non-linear models: when and why the tangent kernel is constant. NeurIPS 2020
- [i38] Chaoyue Liu, Libin Zhu, Mikhail Belkin: Toward a theory of optimization for over-parameterized systems of non-linear equations: the lessons of deep learning. CoRR abs/2003.00307 (2020)
- [i37] Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, Mikhail Belkin, Daniel J. Hsu, Anant Sahai: Classification vs regression in overparameterized regimes: Does the loss function matter? CoRR abs/2005.08054 (2020)
- [i36] Like Hui, Mikhail Belkin: Evaluation of Neural Architectures Trained with Square Loss vs Cross-Entropy in Classification Tasks. CoRR abs/2006.07322 (2020)
- [i35] Lin Chen, Yifei Min, Mikhail Belkin, Amin Karbasi: Multiple Descent: Design Your Own Generalization Curve. CoRR abs/2008.01036 (2020)
- [i34] Adityanarayanan Radhakrishnan, Mikhail Belkin, Caroline Uhler: Linear Convergence and Implicit Regularization of Generalized Mirror Descent with Time-Dependent Mirrors. CoRR abs/2009.08574 (2020)
- [i33] Chaoyue Liu, Libin Zhu, Mikhail Belkin: On the linearity of large non-linear models: when and why the tangent kernel is constant. CoRR abs/2010.01092 (2020)
2010 – 2019
- 2019
- [c55] Mikhail Belkin, Alexander Rakhlin, Alexandre B. Tsybakov: Does data interpolation contradict statistical optimality? AISTATS 2019: 1611-1619
- [c54] Like Hui, Siyuan Ma, Mikhail Belkin: Kernel Machines Beat Deep Neural Networks on Mask-Based Single-Channel Speech Enhancement. INTERSPEECH 2019: 2748-2752
- [c53] Siyuan Ma, Mikhail Belkin: Kernel Machines That Adapt To Gpus For Effective Large Batch Training. SysML 2019
- [i32] Mikhail Belkin, Daniel Hsu, Ji Xu: Two models of double descent for weak features. CoRR abs/1903.07571 (2019)
- [i31] Adityanarayanan Radhakrishnan, Mikhail Belkin, Caroline Uhler: Overparameterized Neural Networks Can Implement Associative Memory. CoRR abs/1909.12362 (2019)
- 2018
- [j9] Mikhail Belkin, Luis Rademacher, James R. Voss: Eigenvectors of Orthogonally Decomposable Functions. SIAM J. Comput. 47(2): 547-615 (2018)
- [c52] Justin Eldridge, Mikhail Belkin, Yusu Wang: Unperturbed: spectral analysis beyond Davis-Kahan. ALT 2018: 321-358
- [c51] Mikhail Belkin: Approximation beats concentration? An approximation view on inference with smooth radial kernels. COLT 2018: 1348-1361
- [c50] Mikhail Belkin, Siyuan Ma, Soumik Mandal: To Understand Deep Learning We Need to Understand Kernel Learning. ICML 2018: 540-548
- [c49] Siyuan Ma, Raef Bassily, Mikhail Belkin: The Power of Interpolation: Understanding the Effectiveness of SGD in Modern Over-parametrized Learning. ICML 2018: 3331-3340
- [c48] Mikhail Belkin, Daniel J. Hsu, Partha Mitra: Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate. NeurIPS 2018: 2306-2317
- [i30] Mikhail Belkin: Approximation beats concentration? An approximation view on inference with smooth radial kernels. CoRR abs/1801.03437 (2018)
- [i29] Mikhail Belkin, Siyuan Ma, Soumik Mandal: To understand deep learning we need to understand kernel learning. CoRR abs/1802.01396 (2018)
- [i28] Akshay Mehra, Jihun Hamm, Mikhail Belkin: Fast Interactive Image Retrieval using large-scale unlabeled data. CoRR abs/1802.04204 (2018)
- [i27] Chaoyue Liu, Mikhail Belkin: Parametrized Accelerated Methods Free of Condition Number. CoRR abs/1802.10235 (2018)
- [i26] Mikhail Belkin, Daniel Hsu, Partha Mitra: Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate. CoRR abs/1806.05161 (2018)
- [i25] Siyuan Ma, Mikhail Belkin: Learning kernels that adapt to GPU. CoRR abs/1806.06144 (2018)
- [i24] Mikhail Belkin, Alexander Rakhlin, Alexandre B. Tsybakov: Does data interpolation contradict statistical optimality? CoRR abs/1806.09471 (2018)
- [i23] Adityanarayanan Radhakrishnan, Mikhail Belkin, Caroline Uhler: Downsampling leads to Image Memorization in Convolutional Autoencoders. CoRR abs/1810.10333 (2018)
- [i22] Chaoyue Liu, Mikhail Belkin: MaSS: an Accelerated Stochastic Method for Over-parametrized Learning. CoRR abs/1810.13395 (2018)
- [i21] Like Hui, Siyuan Ma, Mikhail Belkin: Kernel Machines Beat Deep Neural Networks on Mask-based Single-channel Speech Enhancement. CoRR abs/1811.02095 (2018)
- [i20] Raef Bassily, Mikhail Belkin, Siyuan Ma: On exponential convergence of SGD in non-convex over-parametrized learning. CoRR abs/1811.02564 (2018)
- [i19] Mikhail Belkin, Daniel Hsu, Siyuan Ma, Soumik Mandal: Reconciling modern machine learning and the bias-variance trade-off. CoRR abs/1812.11118 (2018)
- 2017
- [c47] Siyuan Ma, Mikhail Belkin: Diving into the shallows: a computational perspective on large-scale shallow learning. NIPS 2017: 3778-3787
- [i18] Siyuan Ma, Mikhail Belkin: Diving into the shallows: a computational perspective on large-scale shallow learning. CoRR abs/1703.10622 (2017)
- [i17] Justin Eldridge, Mikhail Belkin, Yusu Wang: Unperturbed: spectral analysis beyond Davis-Kahan. CoRR abs/1706.06516 (2017)
- [i16] Siyuan Ma, Raef Bassily, Mikhail Belkin: The Power of Interpolation: Understanding the Effectiveness of SGD in Modern Over-parametrized Learning. CoRR abs/1712.06559 (2017)
- 2016
- [c46] James R. Voss, Mikhail Belkin, Luis Rademacher: The Hidden Convexity of Spectral Clustering. AAAI 2016: 2108-2114
- [c45] Qichao Que, Mikhail Belkin: Back to the Future: Radial Basis Function Networks Revisited. AISTATS 2016: 1375-1383
- [c44] Mikhail Belkin, Luis Rademacher, James R. Voss: Basis Learning as an Algorithmic Primitive. COLT 2016: 446-487
- [c43] Jihun Hamm, Yingjun Cao, Mikhail Belkin: Learning privately from multiparty data. ICML 2016: 555-563
- [c42] Justin Eldridge, Mikhail Belkin, Yusu Wang: Graphons, mergeons, and so on! NIPS 2016: 2307-2315
- [c41] Chaoyue Liu, Mikhail Belkin: Clustering with Bregman Divergences: an Asymptotic Analysis. NIPS 2016: 2343-2351
- [i15] Jihun Hamm, Paul Cao, Mikhail Belkin: Learning Privately from Multiparty Data. CoRR abs/1602.03552 (2016)
- [i14] Justin Eldridge, Mikhail Belkin, Yusu Wang: Graphons, mergeons, and so on! CoRR abs/1607.01718 (2016)
- 2015
- [j8] Mikhail Belkin, Kaushik Sinha: Polynomial Learning of Distribution Families. SIAM J. Comput. 44(4): 889-911 (2015)
- [c40] Justin Eldridge, Mikhail Belkin, Yusu Wang: Beyond Hartigan Consistency: Merge Distortion Metric for Hierarchical Clustering. COLT 2015: 588-606
- [c39] Mikhail Belkin, Vladimir Iakovlev: Microwave-Band Circuit-Level Semiconductor Laser Modeling. EMS 2015: 443-445
- [c38] Jihun Hamm, Adam C. Champion, Guoxing Chen, Mikhail Belkin, Dong Xuan: Crowd-ML: A Privacy-Preserving Learning Framework for a Crowd of Smart Devices. ICDCS 2015: 11-20
- [c37] James R. Voss, Mikhail Belkin, Luis Rademacher: A Pseudo-Euclidean Iteration for Optimal Recovery in Noisy ICA. NIPS 2015: 2872-2880
- [i13] Jihun Hamm, Adam C. Champion, Guoxing Chen, Mikhail Belkin, Dong Xuan: Crowd-ML: A Privacy-Preserving Learning Framework for a Crowd of Smart Devices. CoRR abs/1501.02484 (2015)
- [i12] James R. Voss, Mikhail Belkin, Luis Rademacher: Optimal Recovery in Noisy ICA. CoRR abs/1502.04148 (2015)
- [i11] Jihun Hamm, Mikhail Belkin: Probabilistic Zero-shot Classification with Semantic Rankings. CoRR abs/1502.08039 (2015)
- 2014
- [c36] Joseph Anderson, Mikhail Belkin, Navin Goyal, Luis Rademacher, James R. Voss: The More, the Merrier: the Blessing of Dimensionality for Learning Large Gaussian Mixtures. COLT 2014: 1135-1164
- [c35] Qichao Que, Mikhail Belkin, Yusu Wang: Learning with Fredholm Kernels. NIPS 2014: 2951-2959
- [i10] Mikhail Belkin, Luis Rademacher, James R. Voss: The Hidden Convexity of Spectral Clustering. CoRR abs/1403.0667 (2014)
- [i9] Mikhail Belkin, Luis Rademacher, James R. Voss: Learning a Hidden Basis Through Imperfect Measurements: An Algorithmic Primitive. CoRR abs/1411.1420 (2014)
- 2013
- [j7] Mikhail Belkin, Hariharan Narayanan, Partha Niyogi: Heat flow and a faster algorithm to compute the surface area of a convex body. Random Struct. Algorithms 43(4): 407-428 (2013)
- [c34] Mikhail Belkin, Luis Rademacher, James R. Voss: Blind Signal Separation in the Presence of Gaussian Noise. COLT 2013: 270-287
- [c33] Qichao Que, Mikhail Belkin: Inverse Density as an Inverse Problem: the Fredholm Equation Approach. NIPS 2013: 1484-1492
- [c32] James R. Voss, Luis Rademacher, Mikhail Belkin: Fast Algorithms for Gaussian Noise Invariant Independent Component Analysis. NIPS 2013: 2544-2552
- [i8] Qichao Que, Mikhail Belkin: Inverse Density as an Inverse Problem: The Fredholm Equation Approach. CoRR abs/1304.5575 (2013)
- [i7] Joseph Anderson, Mikhail Belkin, Navin Goyal, Luis Rademacher, James R. Voss: The More, the Merrier: the Blessing of Dimensionality for Learning Large Gaussian Mixtures. CoRR abs/1311.2891 (2013)
- 2012
- [c31] Yuwen Zhuang, Mikhail Belkin, Simon Dennis: Metric Based Automatic Event Segmentation. MobiCASE 2012: 129-148
- [c30] Jihun Hamm, Benjamin Stone, Mikhail Belkin, Simon Dennis: Automatic Annotation of Daily Activity from Smartphone-Based Multisensory Streams. MobiCASE 2012: 328-342
- [c29] Mikhail Belkin, Qichao Que, Yusu Wang, Xueyuan Zhou: Toward Understanding Complex Spaces: Graph Laplacians on Manifolds with Singularities and Boundaries. COLT 2012: 36.1-36.26
- [i6] Mikhail Belkin, Luis Rademacher, James R. Voss: Blind Signal Separation in the Presence of Gaussian Noise. CoRR abs/1211.1716 (2012)
- [i5] Mikhail Belkin, Qichao Que, Yusu Wang, Xueyuan Zhou: Graph Laplacians on Singular Manifolds: Toward understanding complex spaces: graph Laplacians on manifolds with singularities and boundaries. CoRR abs/1211.6727 (2012)
- 2011
- [j6] Stefano Melacci, Mikhail Belkin: Laplacian Support Vector Machines Trained in the Primal. J. Mach. Learn. Res. 12: 1149-1184 (2011)
- [c28] Xueyuan Zhou, Mikhail Belkin, Nathan Srebro: An iterated graph laplacian approach for ranking on manifolds. KDD 2011: 877-885
- [c27] Xiaoyin Ge, Issam Safa, Mikhail Belkin, Yusu Wang: Data Skeletonization via Reeb Graphs. NIPS 2011: 837-845
- [c26] Xueyuan Zhou, Mikhail Belkin: Semi-supervised Learning by Higher Order Regularization. AISTATS 2011: 892-900
- [i4] Xueyuan Zhou, Mikhail Belkin: Behavior of Graph Laplacians on Manifolds with Boundary. CoRR abs/1105.3931 (2011)
- 2010
- [j5] Lorenzo Rosasco, Mikhail Belkin, Ernesto De Vito: On Learning with Integral Operators. J. Mach. Learn. Res. 11: 905-934 (2010)
- [c25] Mikhail Belkin, Kaushik Sinha: Toward Learning Gaussian Mixtures with Arbitrary Separation. COLT 2010: 407-419
- [c24] Mikhail Belkin, Kaushik Sinha: Polynomial Learning of Distribution Families. FOCS 2010: 103-112
- [c23] Andrew R. Plummer, Mary E. Beckman, Mikhail Belkin, Eric Fosler-Lussier, Benjamin Munson: Learning speaker normalization using semisupervised manifold alignment. INTERSPEECH 2010: 2918-2921
- [i3] Mikhail Belkin, Kaushik Sinha: Polynomial Learning of Distribution Families. CoRR abs/1004.4864 (2010)
2000 – 2009
- 2009
- [c22] Kaushik Sinha, Mikhail Belkin: Semi-Supervised Learning Using Sparse Eigenfunction Bases. AAAI Fall Symposium: Manifold Learning and Its Applications 2009
- [c21] Lorenzo Rosasco, Mikhail Belkin, Ernesto De Vito: A Note on Learning with Integral Operators. COLT 2009
- [c20] Kaushik Sinha, Mikhail Belkin: Semi-supervised Learning using Sparse Eigenfunction Bases. NIPS 2009: 1687-1695
- [c19] Mikhail Belkin, Jian Sun, Yusu Wang: Constructing Laplace operator from point clouds in R^d. SODA 2009: 1031-1040
- [i2] Mikhail Belkin, Kaushik Sinha: Learning Gaussian Mixtures with Arbitrary Separation. CoRR abs/0907.1054 (2009)
- 2008
- [j4] Mikhail Belkin, Partha Niyogi: Towards a theoretical foundation for Laplacian-based manifold methods. J. Comput. Syst. Sci. 74(8): 1289-1308 (2008)
- [c18] Mikhail Belkin, Jian Sun, Yusu Wang: Discrete laplace operator on meshed surfaces. SCG 2008: 278-287
- [c17] Tao Shi, Mikhail Belkin, Bin Yu: Data spectroscopy: learning mixture models using eigenspaces of convolution operators. ICML 2008: 936-943
- [c16] Lei Ding, Mikhail Belkin: Probabilistic mixtures of differential profiles for shape recognition. ICPR 2008: 1-4
- [c15] Lei Ding, Mikhail Belkin: Component based shape retrieval using differential profiles. Multimedia Information Retrieval 2008: 216-222
- 2007
- [c14] Kaushik Sinha, Mikhail Belkin: The Value of Labeled and Unlabeled Examples when the Model is Imperfect. NIPS 2007: 1361-1368
- 2006
- [j3] Mikhail Belkin, Partha Niyogi, Vikas Sindhwani: Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples. J. Mach. Learn. Res. 7: 2399-2434 (2006)
- [c13] Mikhail Belkin, Hariharan Narayanan, Partha Niyogi: Heat Flow and a Faster Algorithm to Compute the Surface Area of a Convex Body. FOCS 2006: 47-56
- [c12] Mikhail Belkin, Partha Niyogi: Convergence of Laplacian Eigenmaps. NIPS 2006: 129-136
- [c11] Hariharan Narayanan, Mikhail Belkin, Partha Niyogi: On the Relation Between Low Density Separation, Spectral Clustering and Graph Cuts. NIPS 2006: 1025-1032
- 2005
- [c10] Mikhail Belkin, Partha Niyogi: Towards a Theoretical Foundation for Laplacian-Based Manifold Methods. COLT 2005: 486-500
- [c9] Vikas Sindhwani, Partha Niyogi, Mikhail Belkin: Beyond the point cloud: from transductive to semi-supervised learning. ICML 2005: 824-831
- [c8] Yasemin Altun, David A. McAllester, Mikhail Belkin: Margin Semi-Supervised Learning for Structured Variables. NIPS 2005: 33-40
- 2004
- [j2] Mikhail Belkin, Partha Niyogi: Semi-Supervised Learning on Riemannian Manifolds. Mach. Learn. 56(1-3): 209-239 (2004)
- [c7] Ulrike von Luxburg, Olivier Bousquet, Mikhail Belkin: On the Convergence of Spectral Clustering on Random Samples: The Normalized Case. COLT 2004: 457-471
- [c6] Mikhail Belkin, Irina Matveeva, Partha Niyogi: Regularization and Semi-supervised Learning on Large Graphs. COLT 2004: 624-638
- [c5] Mikhail Belkin, Irina Matveeva, Partha Niyogi: Tikhonov regularization and semi-supervised learning on large graphs. ICASSP (3) 2004: 1000-1003
- [c4] Ulrike von Luxburg, Olivier Bousquet, Mikhail Belkin: Limits of Spectral Clustering. NIPS 2004: 857-864
- 2003
- [j1] Mikhail Belkin, Partha Niyogi: Laplacian Eigenmaps for Dimensionality Reduction and Data Representation. Neural Comput. 15(6): 1373-1396 (2003)
- 2002
- [c3] Mikhail Belkin, Partha Niyogi: Using Manifold Stucture for Partially Labeled Classification. NIPS 2002: 929-936
- [c2] Mikhail Belkin, John A. Goldsmith: Using eigenvectors of the bigram graph to infer morpheme identity. SIGMORPHON 2002: 41-47
- [i1] Mikhail Belkin, John A. Goldsmith: Using eigenvectors of the bigram graph to infer morpheme identity. CoRR cs.CL/0207002 (2002)
- 2001
- [c1] Mikhail Belkin, Partha Niyogi: Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering. NIPS 2001: 585-591
last updated on 2024-10-07 22:11 CEST by the dblp team
all metadata released as open data under CC0 1.0 license