Oymak, Samet

57 publications

CPAL 2025 A Case Study of Low Ranked Self-Expressive Structures in Neural Network Representations. Uday Singh Saini, William Shiao, Yahya Sattar, Yogesh Dahiya, Samet Oymak, Evangelos E. Papalexakis
CVPR 2025 AdMiT: Adaptive Multi-Source Tuning in Dynamic Environments. Xiangyu Chang, Fahim Faisal Niloy, Sk Miraj Ahmed, Srikanth V. Krishnamurthy, Basak Guler, Ananthram Swami, Samet Oymak, Amit Roy-Chowdhury
NeurIPS 2025 Attention with Trained Embeddings Provably Selects Important Tokens. Diyuan Wu, Aleksandr Shevchenko, Samet Oymak, Marco Mondelli
NeurIPS 2025 BREAD: Branched Rollouts from Expert Anchors Bridge SFT & RL for Reasoning. Xuechen Zhang, Zijian Huang, Yingcong Li, Chenshun Ni, Jiasi Chen, Samet Oymak
ICML 2025 Everything Everywhere All at Once: LLMs Can In-Context Learn Multiple Tasks in Superposition. Zheyang Xiong, Ziyang Cai, John Cooper, Albert Ge, Vasilis Papageorgiou, Zack Sifakis, Angeliki Giannou, Ziqian Lin, Liu Yang, Saurabh Agarwal, Grigorios Chrysos, Samet Oymak, Kangwook Lee, Dimitris Papailiopoulos
NeurIPS 2025 Extrapolation by Association: Length Generalization Transfer in Transformers. Ziyang Cai, Nayoung Lee, Avi Schwarzschild, Samet Oymak, Dimitris Papailiopoulos
ICLR 2025 High-Dimensional Analysis of Knowledge Distillation: Weak-to-Strong Generalization and Scaling Laws. Muhammed Emrullah Ildiz, Halil Alperen Gozeten, Ege Onur Taga, Marco Mondelli, Samet Oymak
AAAI 2025 On the Power of Convolution-Augmented Transformer. Mingchen Li, Xuechen Zhang, Yixiao Huang, Samet Oymak
AISTATS 2025 Provable Benefits of Task-Specific Prompts for In-Context Learning. Xiangyu Chang, Yingcong Li, Muti Kara, Samet Oymak, Amit Roy-Chowdhury
ICML 2025 Test-Time Training Provably Improves Transformers as In-Context Learners. Halil Alperen Gozeten, Muhammed Emrullah Ildiz, Xuechen Zhang, Mahdi Soltanolkotabi, Marco Mondelli, Samet Oymak
AAAI 2025 TimePFN: Effective Multivariate Time Series Forecasting with Synthetic Data. Ege Onur Taga, Muhammed Emrullah Ildiz, Samet Oymak
NeurIPS 2025 When and How Unlabeled Data Provably Improve In-Context Learning. Yingcong Li, Xiangyu Chang, Muti Kara, Xiaofeng Liu, Amit Roy-Chowdhury, Samet Oymak
AAAI 2024 A Score-Based Deterministic Diffusion Algorithm with Smooth Scores for General Distributions. Karthik Elamvazhuthi, Xuechen Zhang, Matthew Jacobs, Samet Oymak, Fabio Pasqualetti
NeurIPSW 2024 Algorithmic Oversight for Deceptive Reasoning. Ege Onur Taga, Mingchen Li, Yongqi Chen, Samet Oymak
NeurIPS 2024 CONTRAST: Continual Multi-Source Adaptation to Dynamic Distributions. Sk Miraj Ahmed, Fahim Faisal Niloy, Xiangyu Chang, Dripta S. Raychaudhuri, Samet Oymak, Amit K. Roy-Chowdhury
ICML 2024 Can Mamba Learn How to Learn? A Comparative Study on In-Context Learning Tasks. Jongho Park, Jaeseung Park, Zheyang Xiong, Nayoung Lee, Jaewoong Cho, Samet Oymak, Kangwook Lee, Dimitris Papailiopoulos
AAAI 2024 Class-Attribute Priors: Adapting Optimization to Heterogeneity and Fairness Objective. Xuechen Zhang, Mingchen Li, Jiasi Chen, Christos Thrampoulidis, Samet Oymak
WACV 2024 Effective Restoration of Source Knowledge in Continual Test Time Adaptation. Fahim Faisal Niloy, Sk Miraj Ahmed, Dripta S. Raychaudhuri, Samet Oymak, Amit K. Roy-Chowdhury
NeurIPS 2024 Efficient Contextual LLM Cascades Through Budget-Constrained Policy Learning. Xuechen Zhang, Zijian Huang, Ege Onur Taga, Carlee Joe-Wong, Samet Oymak, Jiasi Chen
ICMLW 2024 Fine-Grained Analysis of In-Context Linear Estimation. Yingcong Li, Ankit Singh Rawat, Samet Oymak
NeurIPS 2024 Fine-Grained Analysis of In-Context Linear Estimation: Data, Architecture, and Beyond. Yingcong Li, Ankit Singh Rawat, Samet Oymak
ICMLW 2024 Fine-Grained Analysis of In-Context Linear Estimation: Data, Architecture, and Beyond. Yingcong Li, Ankit Singh Rawat, Samet Oymak
ICML 2024 From Self-Attention to Markov Models: Unveiling the Dynamics of Generative Transformers. Muhammed Emrullah Ildiz, Yixiao Huang, Yingcong Li, Ankit Singh Rawat, Samet Oymak
AISTATS 2024 Mechanics of Next Token Prediction with Self-Attention. Yingcong Li, Yixiao Huang, Muhammed E. Ildiz, Ankit Singh Rawat, Samet Oymak
ICMLW 2024 On the Power of Convolution Augmented Transformer. Mingchen Li, Xuechen Zhang, Yixiao Huang, Samet Oymak
NeurIPS 2024 Selective Attention: Enhancing Transformer Through Principled Context Control. Xuechen Zhang, Xiangyu Chang, Mingchen Li, Amit Roy-Chowdhury, Jiasi Chen, Samet Oymak
NeurIPSW 2024 TimePFN: Effective Multivariate Time Series Forecasting with Synthetic Data. Ege Onur Taga, Muhammed Emrullah Ildiz, Samet Oymak
AISTATS 2024 Understanding Inverse Scaling and Emergence in Multitask Representation Learning. Muhammed E. Ildiz, Zhe Zhao, Samet Oymak
NeurIPSW 2023 Augmenting Federated Learning with Pretrained Transformers. Xuechen Zhang, Mingchen Li, Xiangyu Chang, Jiasi Chen, Amit Roy-Chowdhury, Ananda Suresh, Samet Oymak
NeurIPS 2023 Dissecting Chain-of-Thought: Compositionality Through In-Context Filtering and Learning. Yingcong Li, Kartik Sreenivasan, Angeliki Giannou, Dimitris Papailiopoulos, Samet Oymak
L4DC 2023 Learning on Manifolds: Universal Approximations Properties Using Geometric Controllability Conditions for Neural ODEs. Karthik Elamvazhuthi, Xuechen Zhang, Samet Oymak, Fabio Pasqualetti
NeurIPS 2023 Max-Margin Token Selection in Attention Mechanism. Davoud Ataee Tarzanagh, Yingcong Li, Xuechen Zhang, Samet Oymak
ICML 2023 On the Role of Attention in Prompt-Tuning. Samet Oymak, Ankit Singh Rawat, Mahdi Soltanolkotabi, Christos Thrampoulidis
ICLRW 2023 On the Role of Attention in Prompt-Tuning. Samet Oymak, Ankit Singh Rawat, Mahdi Soltanolkotabi, Christos Thrampoulidis
AAAI 2023 Provable Pathways: Learning Multiple Tasks over Multiple Paths. Yingcong Li, Samet Oymak
AAAI 2023 Stochastic Contextual Bandits with Long Horizon Rewards. Yuzhen Qin, Yingcong Li, Fabio Pasqualetti, Maryam Fazel, Samet Oymak
ICML 2023 Transformers as Algorithms: Generalization and Stability in In-Context Learning. Yingcong Li, Muhammed Emrullah Ildiz, Dimitris Papailiopoulos, Samet Oymak
NeurIPSW 2023 Transformers as Support Vector Machines. Davoud Ataee Tarzanagh, Yingcong Li, Christos Thrampoulidis, Samet Oymak
ICML 2022 FedNest: Federated Bilevel, Minimax, and Compositional Optimization. Davoud Ataee Tarzanagh, Mingchen Li, Christos Thrampoulidis, Samet Oymak
JMLR 2022 Non-Asymptotic and Accurate Learning of Nonlinear Dynamical Systems. Yahya Sattar, Samet Oymak
AISTATS 2021 A Theoretical Characterization of Semi-Supervised Learning with Self-Training for Gaussian Mixture Models. Samet Oymak, Talha Cihad Gulcu
NeurIPS 2021 AutoBalance: Optimized Loss Functions for Imbalanced Data. Mingchen Li, Xuechen Zhang, Christos Thrampoulidis, Jiasi Chen, Samet Oymak
ICML 2021 Generalization Guarantees for Neural Architecture Search with Train-Validation Split. Samet Oymak, Mingchen Li, Mahdi Soltanolkotabi
NeurIPS 2021 Label-Imbalanced and Group-Sensitive Classification Under Overparameterization. Ganesh Ramachandra Kini, Orestis Paraskevas, Samet Oymak, Christos Thrampoulidis
AAAI 2021 Provable Benefits of Overparameterization in Model Compression: From Double Descent to Pruning Neural Networks. Xiangyu Chang, Yingcong Li, Samet Oymak, Christos Thrampoulidis
NeurIPS 2021 Towards Sample-Efficient Overparameterized Meta-Learning. Yue Sun, Adhyyan Narang, Ibrahim Gulluk, Samet Oymak, Maryam Fazel
CVPR 2021 Unsupervised Multi-Source Domain Adaptation Without Access to Source Data. Sk Miraj Ahmed, Dripta S. Raychaudhuri, Sujoy Paul, Samet Oymak, Amit K. Roy-Chowdhury
L4DC 2020 Finite Sample System Identification: Optimal Rates and the Role of Regularization. Yue Sun, Samet Oymak, Maryam Fazel
AISTATS 2020 Gradient Descent with Early Stopping Is Provably Robust to Label Noise for Overparameterized Neural Networks. Mingchen Li, Mahdi Soltanolkotabi, Samet Oymak
NeurIPS 2020 Theoretical Insights into Multiclass Classification: A High-Dimensional Asymptotic View. Christos Thrampoulidis, Samet Oymak, Mahdi Soltanolkotabi
ICMLW 2019 Data Enrichment: Multi-Task Learning in High Dimension with Theoretical Guarantees. Amir Asiaee, Samet Oymak, Kevin R. Coombes, Arindam Banerjee
ICML 2019 Overparameterized Nonlinear Learning: Gradient Descent Takes the Shortest Path? Samet Oymak, Mahdi Soltanolkotabi
COLT 2019 Stochastic Gradient Descent Learns State Equations with Nonlinear Activations. Samet Oymak
ICML 2018 Learning Compact Neural Networks with Regularization. Samet Oymak
NeurIPS 2015 Parallel Correlation Clustering on Big Graphs. Xinghao Pan, Dimitris Papailiopoulos, Samet Oymak, Benjamin Recht, Kannan Ramchandran, Michael I. Jordan
COLT 2015 Regularized Linear Regression: A Precise Analysis of the Estimation Error. Christos Thrampoulidis, Samet Oymak, Babak Hassibi
NeurIPS 2014 Graph Clustering with Missing Data: Convex Algorithms and Analysis. Ramya Korlakai Vinayak, Samet Oymak, Babak Hassibi