Jordan, Michael I.

313 publications

ICML 2025 AutoEval Done Right: Using Synthetic Data for Model Evaluation Pierre Boyeau, Anastasios Nikolas Angelopoulos, Tianle Li, Nir Yosef, Jitendra Malik, Michael I. Jordan
NeurIPS 2025 Backward Conformal Prediction Etienne Gauthier, Francis Bach, Michael I. Jordan
NeurIPS 2025 Conformal Prediction Under Lévy-Prokhorov Distribution Shifts: Robustness to Local and Global Perturbations Liviu Aolaritei, Qianyu Julie Zhu, Zheyu Oliver Wang, Michael I. Jordan, Youssef Marzouk
NeurIPS 2025 Generalization or Hallucination? Understanding Out-of-Context Reasoning in Transformers Yixiao Huang, Hanlin Zhu, Tianyu Guo, Jiantao Jiao, Somayeh Sojoudi, Michael I. Jordan, Stuart Russell, Song Mei
JMLR 2025 Instability, Computational Efficiency and Statistical Accuracy Nhat Ho, Koulik Khamaru, Raaz Dwivedi, Martin J. Wainwright, Michael I. Jordan, Bin Yu
ICML 2025 Prediction-Aware Learning in Multi-Agent Systems Aymeric Capitaine, Etienne Boursier, Eric Moulines, Michael I. Jordan, Alain Oliviero Durmus
ICML 2025 Statistical Collusion by Collectives on Learning Platforms Etienne Gauthier, Francis Bach, Michael I. Jordan
JMLR 2025 Two-Timescale Gradient Descent Ascent Algorithms for Nonconvex Minimax Optimization Tianyi Lin, Chi Jin, Michael I. Jordan
NeurIPS 2025 Valid Selection Among Conformal Sets Mahmoud Hegazy, Liviu Aolaritei, Michael I. Jordan, Aymeric Dieuleveut
NeurIPS 2024 Data Acquisition via Experimental Design for Data Markets Charles Lu, Baihe Huang, Sai Praneeth Karimireddy, Praneeth Vepakomma, Michael I. Jordan, Ramesh Raskar
JMLR 2024 Desiderata for Representation Learning: A Causal Perspective Yixin Wang, Michael I. Jordan
NeurIPS 2024 Dimension-Free Private Mean Estimation for Anisotropic Distributions Yuval Dagan, Michael I. Jordan, Xuelin Yang, Lydia Zakynthinou, Nikita Zhivotovskiy
NeurIPS 2024 Fair Allocation in Dynamic Mechanism Design Alireza Fallah, Michael I. Jordan, Annie Ulichney
NeurIPS 2024 Fairness-Aware Meta-Learning via Nash Bargaining Yi Zeng, Xuelin Yang, Li Chen, Cristian Canton Ferrer, Ming Jin, Michael I. Jordan, Ruoxi Jia
JMLR 2024 Learning Dynamic Mechanisms in Unknown Environments: A Reinforcement Learning Approach Shuang Qiu, Boxiang Lyu, Qinglin Meng, Zhaoran Wang, Zhuoran Yang, Michael I. Jordan
NeurIPS 2024 Learning to Mitigate Externalities: The Coase Theorem with Hindsight Rationality Antoine Scheid, Aymeric Capitaine, Etienne Boursier, Eric Moulines, Michael I. Jordan, Alain Durmus
NeurIPSW 2024 Statistical Inference in Latent Convex Objectives with Stream Data Rohan Chauhan, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Michael I. Jordan
NeurIPS 2024 Unravelling in Collaborative Learning Aymeric Capitaine, Etienne Boursier, Antoine Scheid, Eric Moulines, Michael I. Jordan, El-Mahdi El-Mhamdi, Alain Durmus
AISTATS 2023 A Statistical Analysis of Polyak-Ruppert Averaged Q-Learning Xiang Li, Wenhao Yang, Jiadong Liang, Zhihua Zhang, Michael I. Jordan
NeurIPS 2023 A Unifying Perspective on Multi-Calibration: Game Dynamics for Multi-Objective Learning Nika Haghtalab, Michael I. Jordan, Eric Zhao
AISTATS 2023 Byzantine-Robust Federated Learning with Optimal Statistical Rates Banghua Zhu, Lun Wang, Qi Pang, Shuai Wang, Jiantao Jiao, Dawn Song, Michael I. Jordan
JMLR 2023 Can Reinforcement Learning Find Stackelberg-Nash Equilibria in General-Sum Markov Games with Myopically Rational Followers? Han Zhong, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan
NeurIPS 2023 Class-Conditional Conformal Prediction with Many Classes Tiffany Ding, Anastasios Angelopoulos, Stephen Bates, Michael I. Jordan, Ryan J Tibshirani
AAAI 2023 Competition, Alignment, and Equilibria in Digital Marketplaces Meena Jagadeesan, Michael I. Jordan, Nika Haghtalab
NeurIPS 2023 Doubly-Robust Self-Training Banghua Zhu, Mingyu Ding, Philip Jacobson, Ming Wu, Wei Zhan, Michael I. Jordan, Jiantao Jiao
AISTATS 2023 Finding Regularized Competitive Equilibria of Heterogeneous Agent Macroeconomic Models via Reinforcement Learning Ruitu Xu, Yifei Min, Tianhao Wang, Michael I. Jordan, Zhaoran Wang, Zhuoran Yang
JMLR 2023 First-Order Algorithms for Nonlinear Generalized Nash Equilibrium Problems Michael I. Jordan, Tianyi Lin, Manolis Zampetakis
NeurIPS 2023 Improved Bayes Risk Can Yield Reduced Social Welfare Under Competition Meena Jagadeesan, Michael I. Jordan, Jacob Steinhardt, Nika Haghtalab
JMLR 2023 Instance-Dependent Confidence and Early Stopping for Reinforcement Learning Eric Xia, Koulik Khamaru, Martin J. Wainwright, Michael I. Jordan
UAI 2023 Nonconvex Stochastic Scaled Gradient Descent and Generalized Eigenvector Problems Chris Junchi Li, Michael I Jordan
NeurIPS 2023 On Learning Necessary and Sufficient Causal Graphs Hengrui Cai, Yixin Wang, Michael I. Jordan, Rui Song
JMLR 2023 On Learning Rates and Schrödinger Operators Bin Shi, Weijie Su, Michael I. Jordan
NeurIPS 2023 Optimal Extragradient-Based Algorithms for Stochastic Variational Inequalities with Separable Structure Angela Yuan, Chris Junchi Li, Gauthier Gidel, Michael I. Jordan, Quanquan Gu, Simon S Du
NeurIPS 2023 Towards Optimal Caching and Model Selection for Large Model Inference Banghua Zhu, Ying Sheng, Lianmin Zheng, Clark Barrett, Michael I. Jordan, Jiantao Jiao
JMLR 2023 VCG Mechanism Design with Unknown Agent Values Under Stochastic Bandit Feedback Kirthevasan Kandasamy, Joseph E Gonzalez, Michael I Jordan, Ion Stoica
JMLR 2022 Active Learning for Nonlinear System Identification with Guarantees Horia Mania, Michael I. Jordan, Benjamin Recht
JMLR 2022 Convergence Rates for Gaussian Mixtures of Experts Nhat Ho, Chiao-Yu Yang, Michael I. Jordan
NeurIPS 2022 Empirical Gateaux Derivatives for Causal Inference Michael I. Jordan, Yixin Wang, Angela Zhou
NeurIPS 2022 First-Order Algorithms for Min-Max Optimization in Geodesic Metric Spaces Michael I. Jordan, Tianyi Lin, Emmanouil-Vasileios Vlatakis-Gkaragkounis
NeurIPS 2022 Gradient-Free Methods for Deterministic and Stochastic Nonsmooth Nonconvex Optimization Tianyi Lin, Zeyu Zheng, Michael I. Jordan
NeurIPS 2022 Learn to Match with No Regret: Reinforcement Learning in Markov Matching Markets Yifei Min, Tianhao Wang, Ruitu Xu, Zhaoran Wang, Michael I. Jordan, Zhuoran Yang
NeurIPS 2022 Learning Two-Player Markov Games: Neural Function Approximation and Correlated Equilibrium Chris Junchi Li, Dongruo Zhou, Quanquan Gu, Michael I. Jordan
NeurIPS 2022 Off-Policy Evaluation with Policy-Dependent Optimization Response Wenshuo Guo, Michael I. Jordan, Angela Zhou
JMLR 2022 On Constraints in First-Order Optimization: A View from Non-Smooth Dynamical Systems Michael Muehlebach, Michael I. Jordan
JMLR 2022 On the Complexity of Approximating Multimarginal Optimal Transport Tianyi Lin, Nhat Ho, Marco Cuturi, Michael I. Jordan
JMLR 2022 On the Efficiency of Entropic Regularized Algorithms for Optimal Transport Tianyi Lin, Nhat Ho, Michael I. Jordan
NeurIPS 2022 On-Demand Sampling: Learning Optimally from Multiple Distributions Nika Haghtalab, Michael I. Jordan, Eric Zhao
NeurIPS 2022 Rank Diminishing in Deep Neural Networks Ruili Feng, Kecheng Zheng, Yukun Huang, Deli Zhao, Michael I. Jordan, Zheng-Jun Zha
JMLR 2022 Ranking and Tuning Pre-Trained Models: A New Paradigm for Exploiting Model Hubs Kaichao You, Yong Liu, Ziyang Zhang, Jianmin Wang, Michael I. Jordan, Mingsheng Long
NeurIPS 2022 Robust Calibration with Multi-Domain Temperature Scaling Yaodong Yu, Stephen Bates, Yi Ma, Michael I. Jordan
NeurIPS 2022 TCT: Convexifying Federated Learning Using Bootstrapped Neural Tangent Kernels Yaodong Yu, Alexander Wei, Sai Praneeth Karimireddy, Yi Ma, Michael I. Jordan
AISTATS 2021 Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization Jelena Diakonikolas, Constantinos Daskalakis, Michael I. Jordan
AISTATS 2021 On Projection Robust Optimal Transport: Sample Complexity and Model Misspecification Tianyi Lin, Zeyu Zheng, Elynn Chen, Marco Cuturi, Michael I. Jordan
JMLR 2021 A Lyapunov Analysis of Accelerated Methods in Optimization Ashia C. Wilson, Ben Recht, Michael I. Jordan
JMLR 2021 Asynchronous Online Testing of Multiple Hypotheses Tijana Zrnic, Aaditya Ramdas, Michael I. Jordan
JMLR 2021 Bandit Learning in Decentralized Matching Markets Lydia T. Liu, Feng Ruan, Horia Mania, Michael I. Jordan
JMLR 2021 High-Order Langevin Diffusion Yields an Accelerated MCMC Algorithm Wenlong Mou, Yi-An Ma, Martin J. Wainwright, Peter L. Bartlett, Michael I. Jordan
NeurIPS 2021 Learning Equilibria in Matching Markets from Bandit Feedback Meena Jagadeesan, Alexander Wei, Yixin Wang, Michael I. Jordan, Jacob Steinhardt
JMLR 2021 Learning Strategies in Decentralized Matching Markets Under Uncertain Preferences Xiaowu Dai, Michael I. Jordan
AAAI 2021 Learning from eXtreme Bandit Feedback Romain Lopez, Inderjit S. Dhillon, Michael I. Jordan
NeurIPS 2021 Learning in Multi-Stage Decentralized Matching Markets Xiaowu Dai, Michael I. Jordan
NeurIPS 2021 On Component Interactions in Two-Stage Recommender Systems Jiri Hron, Karl Krauth, Michael I. Jordan, Niki Kilbertus
NeurIPS 2021 On the Theory of Reinforcement Learning with Once-per-Episode Feedback Niladri Chatterji, Aldo Pacchiano, Peter L. Bartlett, Michael I. Jordan
JMLR 2021 Optimization with Momentum: Dynamical, Control-Theoretic, and Symplectic Perspectives Michael Muehlebach, Michael I. Jordan
NeurIPS 2021 Robust Learning of Optimal Auctions Wenshuo Guo, Michael I. Jordan, Emmanouil Zampetakis
AAAI 2021 Robustness Guarantees for Mode Estimation with an Application to Bandits Aldo Pacchiano, Heinrich Jiang, Michael I. Jordan
NeurIPS 2021 Tactical Optimism and Pessimism for Deep Reinforcement Learning Ted Moskovitz, Jack Parker-Holder, Aldo Pacchiano, Michael Arbel, Michael I. Jordan
NeurIPS 2021 Test-Time Collective Prediction Celestine Mendler-Dünner, Wenshuo Guo, Stephen Bates, Michael I. Jordan
UAI 2021 Variational Refinement for Importance Sampling Using the Forward Kullback-Leibler Divergence Ghassen Jerfel, Serena Wang, Clara Wong-Fannjiang, Katherine A. Heller, Yian Ma, Michael I. Jordan
NeurIPS 2021 Wasserstein Flow Meets Replicator Dynamics: A Mean-Field Analysis of Representation Learning in Actor-Critic Yufeng Zhang, Siyu Chen, Zhuoran Yang, Michael I. Jordan, Zhaoran Wang
NeurIPS 2021 Who Leads and Who Follows in Strategic Classification? Tijana Zrnic, Eric Mazumdar, Shankar Sastry, Michael I. Jordan
AAAI 2020 Cost-Effective Incentive Allocation via Structured Counterfactual Inference Romain Lopez, Chenchen Li, Xiang Yan, Junwu Xiong, Michael I. Jordan, Yuan Qi, Le Song
NeurIPS 2020 Decision-Making with Auto-Encoding Variational Bayes Romain Lopez, Pierre Boyeau, Nir Yosef, Michael I. Jordan, Jeffrey Regier
NeurIPS 2020 Fixed-Support Wasserstein Barycenters: Computational Hardness and Fast Algorithm Tianyi Lin, Nhat Ho, Xi Chen, Marco Cuturi, Michael I. Jordan
JMLR 2020 Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-Ling Wang, Michael I. Jordan
ICLR 2020 How Does Learning Rate Decay Help Modern Neural Networks? Kaichao You, Mingsheng Long, Jianmin Wang, Michael I. Jordan
AAAI 2020 LS-Tree: Model Interpretation When the Data Are Linguistic Jianbo Chen, Michael I. Jordan
AISTATS 2020 Langevin Monte Carlo Without Smoothness Niladri Chatterji, Jelena Diakonikolas, Michael I. Jordan, Peter Bartlett
AAAI 2020 ML-LOO: Detecting Adversarial Examples with Feature Attribution Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-Ling Wang, Michael I. Jordan
COLT 2020 Near-Optimal Algorithms for Minimax Optimization Tianyi Lin, Chi Jin, Michael I. Jordan
COLT 2020 On Linear Stochastic Approximation: Fine-Grained Polyak-Ruppert and Non-Asymptotic Concentration Wenlong Mou, Chris Junchi Li, Martin J Wainwright, Peter L Bartlett, Michael I Jordan
NeurIPS 2020 On the Theory of Transfer Learning: The Importance of Task Diversity Nilesh Tripuraneni, Michael I. Jordan, Chi Jin
AISTATS 2020 Post-Estimation Smoothing: A Simple Baseline for Learning with Side Information Esther Rolf, Michael I. Jordan, Benjamin Recht
NeurIPS 2020 Projection Robust Wasserstein Distance and Riemannian Optimization Tianyi Lin, Chenyou Fan, Nhat Ho, Marco Cuturi, Michael I. Jordan
NeurIPS 2020 Provably Efficient Reinforcement Learning with Kernel and Neural Function Approximations Zhuoran Yang, Chi Jin, Zhaoran Wang, Mengdi Wang, Michael I. Jordan
COLT 2020 Provably Efficient Reinforcement Learning with Linear Function Approximation Chi Jin, Zhuoran Yang, Zhaoran Wang, Michael I Jordan
NeurIPS 2020 Robust Optimization for Fairness with Noisy Protected Groups Serena Wang, Wenshuo Guo, Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, Michael I. Jordan
NeurIPS 2020 Transferable Calibration with Lower Bias and Variance in Domain Adaptation Ximei Wang, Mingsheng Long, Jianmin Wang, Michael I. Jordan
ICLR 2020 Variance Reduction with Sparse Gradients Melih Elibol, Lihua Lei, Michael I. Jordan
NeurIPS 2019 Acceleration via Symplectic Discretization of High-Resolution Differential Equations Bin Shi, Simon S Du, Weijie Su, Michael I Jordan
ICLR 2019 L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan
NeurIPS 2019 Transferable Normalization: Towards Improving Transferability of Deep Neural Networks Ximei Wang, Ying Jin, Mingsheng Long, Jianmin Wang, Michael I Jordan
COLT 2018 Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent Chi Jin, Praneeth Netrapalli, Michael I. Jordan
COLT 2018 Averaging Stochastic Gradient Descent on Riemannian Manifolds Nilesh Tripuraneni, Nicolas Flammarion, Francis R. Bach, Michael I. Jordan
NeurIPS 2018 Conditional Adversarial Domain Adaptation Mingsheng Long, Zhangjie Cao, Jianmin Wang, Michael I Jordan
JMLR 2018 Covariances, Robustness, and Variational Bayes Ryan Giordano, Tamara Broderick, Michael I. Jordan
COLT 2018 Detection Limits in the High-Dimensional Spiked Rectangular Model Ahmed El Alaoui, Michael I. Jordan
NeurIPS 2018 Gen-Oja: Simple & Efficient Algorithm for Streaming Generalized Eigenvector Computation Kush Bhatia, Aldo Pacchiano, Nicolas Flammarion, Peter L Bartlett, Michael I Jordan
NeurIPS 2018 Generalized Zero-Shot Learning with Deep Calibration Network Shichen Liu, Mingsheng Long, Jianmin Wang, Michael I Jordan
NeurIPS 2018 Information Constraints on Auto-Encoding Variational Bayes Romain Lopez, Jeffrey Regier, Michael I Jordan, Nir Yosef
NeurIPS 2018 Is Q-Learning Provably Efficient? Chi Jin, Zeyuan Allen-Zhu, Sebastien Bubeck, Michael I Jordan
COLT 2018 Learning Without Mixing: Towards a Sharp Analysis of Linear System Identification Max Simchowitz, Horia Mania, Stephen Tu, Michael I. Jordan, Benjamin Recht
NeurIPS 2018 On the Local Minima of the Empirical Risk Chi Jin, Lydia T. Liu, Rong Ge, Michael I Jordan
NeurIPS 2018 Stochastic Cubic Regularization for Fast Nonconvex Optimization Nilesh Tripuraneni, Mitchell Stern, Chi Jin, Jeffrey Regier, Michael I Jordan
NeurIPS 2018 Theoretical Guarantees for EM Under Misspecified Gaussian Mixture Models Raaz Dwivedi, Nhật Hồ, Koulik Khamaru, Martin J. Wainwright, Michael I Jordan
COLT 2018 Underdamped Langevin MCMC: A Non-Asymptotic Analysis Xiang Cheng, Niladri S. Chatterji, Peter L. Bartlett, Michael I. Jordan
ICML 2017 Breaking Locality Accelerates Block Gauss-Seidel Stephen Tu, Shivaram Venkataraman, Ashia C. Wilson, Alex Gittens, Michael I. Jordan, Benjamin Recht
ICML 2017 Deep Transfer Learning with Joint Adaptation Networks Mingsheng Long, Han Zhu, Jianmin Wang, Michael I. Jordan
NeurIPS 2017 Fast Black-Box Variational Inference Through Stochastic Trust-Region Optimization Jeffrey Regier, Michael I Jordan, Jon McAuliffe
NeurIPS 2017 Gradient Descent Can Take Exponential Time to Escape Saddle Points Simon S Du, Chi Jin, Jason Lee, Michael I Jordan, Aarti Singh, Barnabas Poczos
ICML 2017 How to Escape Saddle Points Efficiently Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, Michael I. Jordan
NeurIPS 2017 Kernel Feature Selection via Conditional Covariance Minimization Jianbo Chen, Mitchell Stern, Martin J. Wainwright, Michael I Jordan
AISTATS 2017 Less than a Single Pass: Stochastically Controlled Stochastic Gradient Lihua Lei, Michael I. Jordan
NeurIPS 2017 Non-Convex Finite-Sum Optimization via SCSG Methods Lihua Lei, Cheng Ju, Jianbo Chen, Michael I Jordan
AISTATS 2017 On the Learnability of Fully-Connected Neural Networks Yuchen Zhang, Jason D. Lee, Martin J. Wainwright, Michael I. Jordan
NeurIPS 2017 Online Control of the False Discovery Rate with Decaying Memory Aaditya Ramdas, Fanny Yang, Martin J. Wainwright, Michael I Jordan
AISTATS 2016 A Linearly-Convergent Stochastic L-BFGS Algorithm Philipp Moritz, Robert Nishihara, Michael I. Jordan
NeurIPS 2016 Cyclades: Conflict-Free Asynchronous Machine Learning Xinghao Pan, Maximilian Lam, Stephen Tu, Dimitris Papailiopoulos, Ce Zhang, Michael I Jordan, Kannan Ramchandran, Christopher Ré
COLT 2016 Gradient Descent Only Converges to Minimizers Jason D. Lee, Max Simchowitz, Michael I. Jordan, Benjamin Recht
ICLR 2016 High-Dimensional Continuous Control Using Generalized Advantage Estimation John Schulman, Philipp Moritz, Sergey Levine, Michael I. Jordan, Pieter Abbeel
ICML 2016 L1-Regularized Neural Networks Are Improperly Learnable in Polynomial Time Yuchen Zhang, Jason D. Lee, Michael I. Jordan
NeurIPS 2016 Local Maxima in the Likelihood of Gaussian Mixture Models: Structural Results and Algorithmic Consequences Chi Jin, Yuchen Zhang, Sivaraman Balakrishnan, Martin J. Wainwright, Michael I Jordan
ICLR 2016 SparkNet: Training Deep Networks in Spark Philipp Moritz, Robert Nishihara, Ion Stoica, Michael I. Jordan
JMLR 2016 Spectral Methods Meet EM: A Provably Optimal Algorithm for Crowdsourcing Yuchen Zhang, Xi Chen, Dengyong Zhou, Michael I. Jordan
AAAI 2016 The Constrained Laplacian Rank Algorithm for Graph-Based Clustering Feiping Nie, Xiaoqian Wang, Michael I. Jordan, Heng Huang
NeurIPS 2016 Unsupervised Domain Adaptation with Residual Transfer Networks Mingsheng Long, Han Zhu, Jianmin Wang, Michael I Jordan
JMLR 2015 Distributed Matrix Completion and Robust Factorization Lester Mackey, Ameet Talwalkar, Michael I. Jordan
NeurIPS 2015 Linear Response Methods for Accurate Covariance Estimates from Mean Field Variational Bayes Ryan J Giordano, Tamara Broderick, Michael I Jordan
NeurIPS 2015 On the Accuracy of Self-Normalized Log-Linear Models Jacob Andreas, Maxim Rabinovich, Michael I Jordan, Dan Klein
NeurIPS 2015 Parallel Correlation Clustering on Big Graphs Xinghao Pan, Dimitris Papailiopoulos, Samet Oymak, Benjamin Recht, Kannan Ramchandran, Michael I Jordan
NeurIPS 2015 Variational Consensus Monte Carlo Maxim Rabinovich, Elaine Angelino, Michael I Jordan
NeurIPS 2014 Communication-Efficient Distributed Dual Coordinate Ascent Martin Jaggi, Virginia Smith, Martin Takac, Jonathan Terhorst, Sanjay Krishnan, Thomas Hofmann, Michael I Jordan
COLT 2014 Lower Bounds on the Performance of Polynomial-Time Algorithms for Sparse Linear Regression Yuchen Zhang, Martin J. Wainwright, Michael I. Jordan
NeurIPS 2014 On the Convergence Rate of Decomposable Submodular Function Minimization Robert Nishihara, Stefanie Jegelka, Michael I Jordan
NeurIPS 2014 Parallel Double Greedy Submodular Maximization Xinghao Pan, Stefanie Jegelka, Joseph E Gonzalez, Joseph K. Bradley, Michael I Jordan
JMLR 2014 Particle Gibbs with Ancestor Sampling Fredrik Lindsten, Michael I. Jordan, Thomas B. Schön
NeurIPS 2014 Spectral Methods Meet EM: A Provably Optimal Algorithm for Crowdsourcing Yuchen Zhang, Xi Chen, Dengyong Zhou, Michael I Jordan
NeurIPS 2013 A Comparative Framework for Preconditioned Lasso Algorithms Fabian L. Wauthier, Nebojsa Jojic, Michael I Jordan
ICLR 2013 A Nested HDP for Hierarchical Topic Models John W. Paisley, Chong Wang, David M. Blei, Michael I. Jordan
ICCV 2013 Distributed Low-Rank Subspace Segmentation Ameet Talwalkar, Lester Mackey, Yadong Mu, Shih-Fu Chang, Michael I. Jordan
NeurIPS 2013 Estimation, Optimization, and Parallelism When Data Is Sparse John Duchi, Michael I Jordan, Brendan McMahan
NeurIPS 2013 Information-Theoretic Lower Bounds for Distributed Statistical Estimation with Communication Constraints Yuchen Zhang, John Duchi, Michael I Jordan, Martin J. Wainwright
NeurIPS 2013 Local Privacy and Minimax Bounds: Sharp Rates for Probability Estimation John Duchi, Martin J. Wainwright, Michael I Jordan
NeurIPS 2013 Optimistic Concurrency Control for Distributed Unsupervised Learning Xinghao Pan, Joseph E Gonzalez, Stefanie Jegelka, Tamara Broderick, Michael I Jordan
NeurIPS 2013 Streaming Variational Bayes Tamara Broderick, Nicholas Boyd, Andre Wibisono, Ashia C Wilson, Michael I Jordan
NeurIPS 2012 Ancestor Sampling for Particle Gibbs Fredrik Lindsten, Thomas Schön, Michael I. Jordan
JMLR 2012 Coherence Functions with Applications in Large-Margin Classification Methods Zhihua Zhang, Dehua Liu, Guang Dai, Michael I. Jordan
JMLR 2012 EP-GIG Priors and Applications in Bayesian Sparse Learning Zhihua Zhang, Shusen Wang, Dehua Liu, Michael I. Jordan
NeurIPS 2012 Finite Sample Convergence Rates of Zero-Order Stochastic Optimization Methods Andre Wibisono, Martin J. Wainwright, Michael I. Jordan, John C. Duchi
ICML 2012 Nonparametric Link Prediction in Dynamic Networks Purnamrita Sarkar, Deepayan Chakrabarti, Michael I. Jordan
NeurIPS 2012 Privacy Aware Learning Martin J. Wainwright, Michael I. Jordan, John C. Duchi
ICML 2012 Revisiting K-Means: New Algorithms via Bayesian Nonparametrics Brian Kulis, Michael I. Jordan
NeurIPS 2012 Small-Variance Asymptotics for Exponential Family Dirichlet Process Mixture Models Ke Jiang, Brian Kulis, Michael I. Jordan
ICML 2012 The Big Data Bootstrap Ariel Kleiner, Ameet Talwalkar, Purnamrita Sarkar, Michael I. Jordan
ICML 2012 Variational Bayesian Inference with Stochastic Search John W. Paisley, David M. Blei, Michael I. Jordan
ICML 2011 A Unified Probabilistic Model for Global and Local Unsupervised Feature Selection Yue Guan, Jennifer G. Dy, Michael I. Jordan
NeurIPS 2011 Bayesian Bias Mitigation for Crowdsourcing Fabian L. Wauthier, Michael I. Jordan
JMLR 2011 Bayesian Generalized Kernel Mixed Models Zhihua Zhang, Guang Dai, Michael I. Jordan
AISTATS 2011 Dimensionality Reduction for Spectral Clustering Donglin Niu, Jennifer Dy, Michael I. Jordan
NeurIPS 2011 Divide-and-Conquer Matrix Factorization Lester W. Mackey, Michael I. Jordan, Ameet Talwalkar
CVPR 2011 Supervised Hierarchical Pitman-Yor Process for Natural Scene Segmentation Alex Shyr, Trevor Darrell, Michael I. Jordan, Raquel Urtasun
ICML 2010 An Analysis of the Convergence of Graph Laplacians Daniel Ting, Ling Huang, Michael I. Jordan
AISTATS 2010 Bayesian Generalized Kernel Models Zhihua Zhang, Guang Dai, Donghui Wang, Michael I. Jordan
ICML 2010 Detecting Large-Scale System Problems by Mining Console Logs Wei Xu, Ling Huang, Armando Fox, David A. Patterson, Michael I. Jordan
NeurIPS 2010 Heavy-Tailed Process Priors for Selective Shrinkage Fabian L. Wauthier, Michael I. Jordan
AISTATS 2010 Inference and Learning in Networks of Queues Charles Sutton, Michael I. Jordan
ICML 2010 Learning Programs: A Hierarchical Bayesian Approach Percy Liang, Michael I. Jordan, Dan Klein
AISTATS 2010 Matrix-Variate Dirichlet Process Mixture Models Zhihua Zhang, Guang Dai, Michael I. Jordan
ICML 2010 Mixed Membership Matrix Factorization Lester W. Mackey, David J. Weiss, Michael I. Jordan
UAI 2010 Modeling Events with Cascades of Poisson Processes Aleksandr Simma, Michael I. Jordan
ICML 2010 Multiple Non-Redundant Spectral Clustering Views Donglin Niu, Jennifer G. Dy, Michael I. Jordan
ICML 2010 On the Consistency of Ranking Algorithms John C. Duchi, Lester W. Mackey, Michael I. Jordan
NeurIPS 2010 Random Conic Pursuit for Semidefinite Programming Ariel Kleiner, Ali Rahimi, Michael I. Jordan
JMLR 2010 Regularized Discriminant Analysis, Ridge Regression and Beyond Zhihua Zhang, Guang Dai, Congfu Xu, Michael I. Jordan
CVPR 2010 Sufficient Dimension Reduction for Visual Sequence Classification Alex Shyr, Raquel Urtasun, Michael I. Jordan
NeurIPS 2010 Tree-Structured Stick Breaking for Hierarchical Data Zoubin Ghahramani, Michael I. Jordan, Ryan P. Adams
NeurIPS 2010 Unsupervised Kernel Dimension Reduction Meihong Wang, Fei Sha, Michael I. Jordan
NeurIPS 2010 Variational Inference over Combinatorial Spaces Alexandre Bouchard-Côté, Michael I. Jordan
ECML-PKDD 2009 A Flexible and Efficient Algorithm for Regularized Fisher Discriminant Analysis Zhihua Zhang, Guang Dai, Michael I. Jordan
NeurIPS 2009 Asymptotically Optimal Regularization in Smooth Parametric Models Percy Liang, Guillaume Bouchard, Francis R. Bach, Michael I. Jordan
AISTATS 2009 Latent Variable Models for Dimensionality Reduction Zhihua Zhang, Michael I. Jordan
ICML 2009 Learning from Measurements in Exponential Families Percy Liang, Michael I. Jordan, Dan Klein
NeurIPS 2009 Nonparametric Latent Feature Models for Link Prediction Kurt Miller, Michael I. Jordan, Thomas L. Griffiths
UAI 2009 Optimization of Structured Mean Field Objectives Alexandre Bouchard-Côté, Michael I. Jordan
NeurIPS 2009 Sharing Features Among Dynamical Systems with Beta Processes Emily B. Fox, Michael I. Jordan, Erik B. Sudderth, Alan S. Willsky
ICML 2008 An Asymptotic Analysis of Generative, Discriminative, and Pseudolikelihood Estimators Percy Liang, Michael I. Jordan
ICML 2008 An HDP-HMM for Systems with State Persistence Emily B. Fox, Erik B. Sudderth, Michael I. Jordan, Alan S. Willsky
NeurIPS 2008 DiscLDA: Discriminative Learning for Dimensionality Reduction and Classification Simon Lacoste-Julien, Fei Sha, Michael I. Jordan
NeurIPS 2008 Efficient Inference in Phylogenetic InDel Trees Alexandre Bouchard-Côté, Dan Klein, Michael I. Jordan
FnTML 2008 Graphical Models, Exponential Families, and Variational Inference Martin J. Wainwright, Michael I. Jordan
NeurIPS 2008 High-Dimensional Support Union Recovery in Multivariate Regression Guillaume R. Obozinski, Martin J. Wainwright, Michael I. Jordan
NeurIPS 2008 Nonparametric Bayesian Learning of Switching Linear Dynamical Systems Emily B. Fox, Erik B. Sudderth, Michael I. Jordan, Alan S. Willsky
NeurIPS 2008 Posterior Consistency of the Silverman G-Prior in Bayesian Model Choice Zhihua Zhang, Michael I. Jordan, Dit-Yan Yeung
NeurIPS 2008 Shared Segmentation of Natural Scenes Using Dependent Pitman-Yor Processes Erik B. Sudderth, Michael I. Jordan
NeurIPS 2008 Spectral Clustering with Perturbed Data Ling Huang, Donghui Yan, Nina Taft, Michael I. Jordan
UAI 2008 The Phylogenetic Indian Buffet Process: A Non-Exchangeable Nonparametric Prior for Latent Features Kurt T. Miller, Thomas L. Griffiths, Michael I. Jordan
ICML 2007 A Permutation-Augmented Sampler for DP Mixture Models Percy Liang, Michael I. Jordan, Benjamin Taskar
NeurIPS 2007 Agreement-Based Learning Percy Liang, Dan Klein, Michael I. Jordan
NeurIPS 2007 Estimating Divergence Functionals and the Likelihood Ratio by Penalized Convex Risk Minimization Xuanlong Nguyen, Martin J. Wainwright, Michael I. Jordan
NeurIPS 2007 Feature Selection Methods for Improving Protein Structure Prediction with Rosetta Ben Blum, David Baker, Michael I. Jordan, Philip Bradley, Rhiju Das, David E Kim
AISTATS 2007 Hierarchical Beta Processes and the Indian Buffet Process Romain Thibaux, Michael I. Jordan
ICCV 2007 Learning Multiscale Representations of Natural Scenes Using Dirichlet Processes Jyri J. Kivinen, Erik B. Sudderth, Michael I. Jordan
ICML 2007 Regression on Manifolds Using Kernel Dimension Reduction Jens Nilsson, Fei Sha, Michael I. Jordan
ICML 2006 A Graphical Model for Predicting Protein Molecular Function Barbara E. Engelhardt, Michael I. Jordan, Steven E. Brenner
ICML 2006 Bayesian Multi-Population Haplotype Inference via a Hierarchical Dirichlet Process Mixture Eric P. Xing, Kyung-Ah Sohn, Michael I. Jordan, Yee Whye Teh
UAI 2006 Bayesian Multicategory Support Vector Machines Zhihua Zhang, Michael I. Jordan
NeurIPS 2006 In-Network PCA and Anomaly Detection Ling Huang, Xuanlong Nguyen, Minos Garofalakis, Michael I. Jordan, Anthony Joseph, Nina Taft
JMLR 2006 Learning Spectral Clustering, with Application to Speech Separation Francis R. Bach, Michael I. Jordan
ICML 2006 Statistical Debugging: Simultaneous Identification of Multiple Bugs Alice X. Zheng, Michael I. Jordan, Ben Liblit, Mayur Naik, Alex Aiken
JMLR 2006 Structured Prediction, Dual Extragradient and Bregman Projections Ben Taskar, Simon Lacoste-Julien, Michael I. Jordan
NeurIPS 2005 Divergences, Surrogate Loss Functions and Experimental Design Xuanlong Nguyen, Martin J. Wainwright, Michael I. Jordan
ICML 2005 Predictive Low-Rank Decomposition for Kernel Methods Francis R. Bach, Michael I. Jordan
NeurIPS 2005 Robust Design of Biological Experiments Patrick Flaherty, Adam Arkin, Michael I. Jordan
AISTATS 2005 Semiparametric Latent Factor Models Yee Whye Teh, Matthias Seeger, Michael I. Jordan
NeurIPS 2005 Structured Prediction via the Extragradient Method Ben Taskar, Simon Lacoste-Julien, Michael I. Jordan
UAI 2005 The DLR Hierarchy of Approximate Inference Michal Rosen-Zvi, Michael I. Jordan, Alan L. Yuille
NeurIPS 2004 A Direct Formulation for Sparse PCA Using Semidefinite Programming Alexandre D'aspremont, Laurent E. Ghaoui, Michael I. Jordan, Gert R. Lanckriet
ICML 2004 Bayesian Haplotype Inference via the Dirichlet Process Eric P. Xing, Roded Sharan, Michael I. Jordan
NeurIPS 2004 Blind One-Microphone Speech Separation: A Spectral Learning Approach Francis R. Bach, Michael I. Jordan
NeurIPS 2004 Computing Regularization Paths for Learning Multiple Kernels Francis R. Bach, Romain Thibaux, Michael I. Jordan
ICML 2004 Decentralized Detection and Classification Using Kernel Methods XuanLong Nguyen, Martin J. Wainwright, Michael I. Jordan
JMLR 2004 Dimensionality Reduction for Supervised Learning with Reproducing Kernel Hilbert Spaces Kenji Fukumizu, Francis R. Bach, Michael I. Jordan
UAI 2004 Graph Partition Strategies for Generalized Mean Field Inference Eric P. Xing, Michael I. Jordan
JMLR 2004 Learning the Kernel Matrix with Semidefinite Programming Gert R.G. Lanckriet, Nello Cristianini, Peter Bartlett, Laurent El Ghaoui, Michael I. Jordan
ICML 2004 Multiple Kernel Learning, Conic Duality, and the SMO Algorithm Francis R. Bach, Gert R. G. Lanckriet, Michael I. Jordan
NeurIPS 2004 Semi-Supervised Learning via Gaussian Processes Neil D. Lawrence, Michael I. Jordan
NeurIPS 2004 Sharing Clusters Among Related Groups: Hierarchical Dirichlet Processes Yee W. Teh, Michael I. Jordan, Matthew J. Beal, David M. Blei
ICML 2004 Variational Methods for the Dirichlet Process David M. Blei, Michael I. Jordan
UAI 2003 A Generalized Mean Field Algorithm for Variational Inference in Exponential Families Eric P. Xing, Michael I. Jordan, Stuart Russell
MLJ 2003 An Introduction to MCMC for Machine Learning Christophe Andrieu, Nando de Freitas, Arnaud Doucet, Michael I. Jordan
NeurIPS 2003 Autonomous Helicopter Flight via Reinforcement Learning H. J. Kim, Michael I. Jordan, Shankar Sastry, Andrew Y. Ng
JMLR 2003 Beyond Independent Components: Trees and Clusters Francis R. Bach, Michael I. Jordan
NeurIPS 2003 Hierarchical Topic Models and the Nested Chinese Restaurant Process Thomas L. Griffiths, Michael I. Jordan, Joshua B. Tenenbaum, David M. Blei
NeurIPS 2003 Kernel Dimensionality Reduction for Supervised Learning Kenji Fukumizu, Francis R. Bach, Michael I. Jordan
NeurIPS 2003 Large Margin Classifiers: Convex Loss, Low Noise, and Convergence Rates Peter L. Bartlett, Michael I. Jordan, Jon D. Mcauliffe
NeurIPS 2003 Learning Spectral Clustering Francis R. Bach, Michael I. Jordan
NeurIPS 2003 On the Concentration of Expectation and Approximate Inference in Layered Networks Xuanlong Nguyen, Michael I. Jordan
NeurIPS 2003 Semidefinite Relaxations for Approximate Inference on Graphs with Cycles Michael I. Jordan, Martin J. Wainwright
NeurIPS 2003 Statistical Debugging of Sampled Programs Alice X. Zheng, Michael I. Jordan, Ben Liblit, Alex Aiken
NeurIPS 2002 A Hierarchical Bayesian Markovian Model for Motifs in Biopolymer Sequences Eric P. Xing, Michael I. Jordan, Richard M. Karp, Stuart Russell
NeurIPS 2002 A Minimal Intervention Principle for Coordinated Movement Emanuel Todorov, Michael I. Jordan
JMLR 2002 A Robust Minimax Approach to Classification Gert R.G. Lanckriet, Laurent El Ghaoui, Chiranjib Bhattacharyya, Michael I. Jordan
NeurIPS 2002 Distance Metric Learning with Application to Clustering with Side-Information Eric P. Xing, Michael I. Jordan, Stuart Russell, Andrew Y. Ng
JMLR 2002 Kernel Independent Component Analysis Francis R. Bach, Michael I. Jordan
NeurIPS 2002 Learning Graphical Models with Mercer Kernels Francis R. Bach, Michael I. Jordan
ICML 2002 Learning the Kernel Matrix with Semi-Definite Programming Gert R. G. Lanckriet, Nello Cristianini, Peter L. Bartlett, Laurent El Ghaoui, Michael I. Jordan
UAI 2002 Loopy Belief Propagation and Gibbs Measures Sekhar Tatikonda, Michael I. Jordan
NeurIPS 2002 Robust Novelty Detection with Single-Class MPM Laurent E. Ghaoui, Michael I. Jordan, Gert R. Lanckriet
UAI 2002 Tree-Dependent Component Analysis Francis R. Bach, Michael I. Jordan
NeCo 2001 Asymptotic Convergence Rate of the EM Algorithm for Gaussian Mixtures Jinwen Ma, Lei Xu, Michael I. Jordan
ICML 2001 Convergence Rates of the Voting Gibbs Classifier, with Application to Bayesian Feature Selection Andrew Y. Ng, Michael I. Jordan
UAI 2001 Efficient Stepwise Selection in Decomposable Models Amol Deshpande, Minos N. Garofalakis, Michael I. Jordan
ICML 2001 Feature Selection for High-Dimensional Genomic Microarray Data Eric P. Xing, Michael I. Jordan, Richard M. Karp
NeurIPS 2001 Latent Dirichlet Allocation David M. Blei, Andrew Y. Ng, Michael I. Jordan
IJCAI 2001 Link Analysis, Eigenvectors and Stability Andrew Y. Ng, Alice X. Zheng, Michael I. Jordan
NeurIPS 2001 Minimax Probability Machine Gert Lanckriet, Laurent E. Ghaoui, Chiranjib Bhattacharyya, Michael I. Jordan
NeurIPS 2001 On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes Andrew Y. Ng, Michael I. Jordan
NeurIPS 2001 On Spectral Clustering: Analysis and an Algorithm Andrew Y. Ng, Michael I. Jordan, Yair Weiss
NeurIPS 2001 Thin Junction Trees Francis R. Bach, Michael I. Jordan
NeCo 2000 Attractor Dynamics in Feedforward Neural Networks Lawrence K. Saul, Michael I. Jordan
JMLR 2000 Learning with Mixtures of Trees Marina Meila, Michael I. Jordan
UAI 2000 PEGASUS: A Policy Search Method for Large MDPs and POMDPs Andrew Y. Ng, Michael I. Jordan
MLJ 1999 An Introduction to Variational Methods for Graphical Models Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, Lawrence K. Saul
NeurIPS 1999 Approximate Inference Algorithms for Two-Layer Bayesian Networks Andrew Y. Ng, Michael I. Jordan
UAI 1999 Loopy Belief Propagation for Approximate Inference: An Empirical Study Kevin P. Murphy, Yair Weiss, Michael I. Jordan
MLJ 1999 Mixed Memory Markov Models: Decomposing Complex Stochastic Processes as Mixtures of Simpler Ones Lawrence K. Saul, Michael I. Jordan
JAIR 1999 Variational Probabilistic Inference and the QMR-DT Network Tommi S. Jaakkola, Michael I. Jordan
NeurIPS 1998 Learning from Dyadic Data Thomas Hofmann, Jan Puzicha, Michael I. Jordan
UAI 1998 Mixture Representations for Inference and Learning in Boltzmann Machines Neil D. Lawrence, Christopher M. Bishop, Michael I. Jordan
AISTATS 1997 A Variational Approach to Bayesian Logistic Regression Models and Their Extensions Tommi S. Jaakkola, Michael I. Jordan
NeurIPS 1997 Adaptation in Speech Motor Control John F. Houde, Michael I. Jordan
AISTATS 1997 An Objective Function for Belief Net Triangulation Marina Meilă, Michael I. Jordan
NeurIPS 1997 Approximating Posterior Distributions in Belief Networks Using Mixtures Christopher M. Bishop, Neil D. Lawrence, Tommi Jaakkola, Michael I. Jordan
NeurIPS 1997 Estimating Dependency Structure as a Hidden Variable Marina Meila, Michael I. Jordan
MLJ 1997 Factorial Hidden Markov Models Zoubin Ghahramani, Michael I. Jordan
AISTATS 1997 Mixed Memory Markov Models Lawrence K. Saul, Michael I. Jordan
NeCo 1997 Probabilistic Independence Networks for Hidden Markov Probability Models Padhraic Smyth, David Heckerman, Michael I. Jordan
NeurIPS 1996 A Variational Principle for Model-Based Morphing Lawrence K. Saul, Michael I. Jordan
JAIR 1996 Active Learning with Statistical Models David A. Cohn, Zoubin Ghahramani, Michael I. Jordan
UAI 1996 Computing Upper and Lower Bounds on Likelihoods in Intractable Networks Tommi S. Jaakkola, Michael I. Jordan
NeurIPS 1996 Hidden Markov Decision Trees Michael I. Jordan, Zoubin Ghahramani, Lawrence K. Saul
JAIR 1996 Mean Field Theory for Sigmoid Belief Networks Lawrence K. Saul, Tommi S. Jaakkola, Michael I. Jordan
NeCo 1996 On Convergence Properties of the EM Algorithm for Gaussian Mixtures Lei Xu, Michael I. Jordan
NeurIPS 1996 Recursive Algorithms for Approximating Probabilities in Graphical Models Tommi Jaakkola, Michael I. Jordan
NeurIPS 1996 Triangulation by Continuous Embedding Marina Meila, Michael I. Jordan
NeurIPS 1995 Exploiting Tractable Substructures in Intractable Networks Lawrence K. Saul, Michael I. Jordan
NeurIPS 1995 Factorial Hidden Markov Models Zoubin Ghahramani, Michael I. Jordan
NeurIPS 1995 Fast Learning by Bounding Likelihoods in Sigmoid Type Belief Networks Tommi Jaakkola, Lawrence K. Saul, Michael I. Jordan
NeurIPS 1995 Learning Fine Motion by Markov Mixtures of Experts Marina Meila, Michael I. Jordan
NeurIPS 1995 Reinforcement Learning by Probability Matching Philip N. Sabes, Michael I. Jordan
COLT 1994 A Statistical Approach to Decision Tree Modeling Michael I. Jordan
ICML 1994 A Statistical Approach to Decision Tree Modeling Michael I. Jordan
NeurIPS 1994 Active Learning with Statistical Models David A. Cohn, Zoubin Ghahramani, Michael I. Jordan
NeurIPS 1994 An Alternative Model for Mixtures of Experts Lei Xu, Michael I. Jordan, Geoffrey E. Hinton
NeurIPS 1994 Boltzmann Chains and Hidden Markov Models Lawrence K. Saul, Michael I. Jordan
NeurIPS 1994 Computational Structure of Coordinate Transformations: A Generalization Study Zoubin Ghahramani, Daniel M. Wolpert, Michael I. Jordan
NeurIPS 1994 Forward Dynamic Models in Human Motor Control: Psychophysical Evidence Daniel M. Wolpert, Zoubin Ghahramani, Michael I. Jordan
NeCo 1994 Hierarchical Mixtures of Experts and the EM Algorithm Michael I. Jordan, Robert A. Jacobs
ICML 1994 Learning Without State-Estimation in Partially Observable Markovian Decision Processes Satinder P. Singh, Tommi S. Jaakkola, Michael I. Jordan
NeCo 1994 Learning in Boltzmann Trees Lawrence K. Saul, Michael I. Jordan
NeCo 1994 On the Convergence of Stochastic Iterative Dynamic Programming Algorithms Tommi S. Jaakkola, Michael I. Jordan, Satinder P. Singh
NeurIPS 1994 Reinforcement Learning Algorithm for Partially Observable Markov Decision Problems Tommi Jaakkola, Satinder P. Singh, Michael I. Jordan
NeurIPS 1994 Reinforcement Learning with Soft State Aggregation Satinder P. Singh, Tommi Jaakkola, Michael I. Jordan
NeurIPS 1993 Convergence of Stochastic Iterative Dynamic Programming Algorithms Tommi Jaakkola, Michael I. Jordan, Satinder P. Singh
ICML 1993 Supervised Learning and Divide-and-Conquer: A Statistical Approach Michael I. Jordan, Robert A. Jacobs
NeurIPS 1993 Supervised Learning from Incomplete Data via an EM Approach Zoubin Ghahramani, Michael I. Jordan
NeurIPS 1992 A Dynamical Model of Priming and Repetition Blindness Daphne Bavelier, Michael I. Jordan
NeCo 1991 Adaptive Mixtures of Local Experts Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, Geoffrey E. Hinton
NeurIPS 1991 Forward Dynamics Modeling of Speech Motor Control Using Physiological Data Makoto Hirayama, Eric Vatikiotis-Bateson, Mitsuo Kawato, Michael I. Jordan
NeurIPS 1991 Hierarchies of Adaptive Experts Michael I. Jordan, Robert A. Jacobs
ICML 1991 Internal World Models and Supervised Learning Michael I. Jordan, David E. Rumelhart
NeurIPS 1990 A Competitive Modular Connectionist Architecture Robert A. Jacobs, Michael I. Jordan
NeurIPS 1989 Learning to Control an Unstable System with Forward Modeling Michael I. Jordan, Robert A. Jacobs