ICML 1998

66 papers

A Case Study in the Use of Theory Revision in Requirements Validation Thomas Leo McCluskey, Margaret Mary West
A Fast, Bottom-up Decision Tree Pruning Algorithm with Near-Optimal Generalization Michael J. Kearns, Yishay Mansour
A Learning Rate Analysis of Reinforcement Learning Algorithms in Finite-Horizon Frédérick Garçia, Seydina M. Ndiaye
A Neural Network Model for Prognostic Prediction W. Nick Street
A Process-Oriented Heuristic for Model Selection Pedro M. Domingos
A Randomized ANOVA Procedure for Comparing Performance Curves Justus H. Piater, Paul R. Cohen, Xiaoqin Zhang, Michael Atighetchi
A Supra-Classifier Architecture for Scalable Knowledge Reuse Kurt D. Bollacker, Joydeep Ghosh
An Analysis of Actor/Critic Algorithms Using Eligibility Traces: Reinforcement Learning with Imperfect Value Function Hajime Kimura, Shigenobu Kobayashi
An Analysis of Direct Reinforcement Learning in Non-Markovian Domains Mark D. Pendrith, Michael McGarity
An Efficient Boosting Algorithm for Combining Preferences Yoav Freund, Raj D. Iyer, Robert E. Schapire, Yoram Singer
An Experimental Evaluation of Coevolutive Concept Learning Cosimo Anglano, Attilio Giordana, Giuseppe Lo Bello, Lorenza Saitta
An Information-Theoretic Definition of Similarity Dekang Lin
An Investigation of Transformation-Based Learning in Discourse Ken Samuel, Sandra Carberry, K. Vijay-Shanker
Automatic Segmentation of Continuous Trajectories with Invariance to Nonlinear Warpings of Time Lawrence K. Saul
Bayesian Classifiers Are Large Margin Hyperplanes in a Hilbert Space Nello Cristianini, John Shawe-Taylor, Peter Sykacek
Bayesian Network Classification with Continuous Attributes: Getting the Best of Both Discretization and Parametric Fitting Nir Friedman, Moisés Goldszmidt, Thomas J. Lee
Classification Using Phi-Machines and Constructive Function Approximation Doina Precup, Paul E. Utgoff
Coevolutionary Learning: A Case Study Hugues Juillé, Jordan B. Pollack
Collaborative Filtering Using Weighted Majority Prediction Algorithms Atsuyoshi Nakamura, Naoki Abe
Combining Nearest Neighbor Classifiers Through Multiple Feature Subsets Stephen D. Bay
Employing EM and Pool-Based Active Learning for Text Classification Andrew Kachites McCallum, Kamal Nigam
Evolving Structured Programs with Hierarchical Instructions and Skip Nodes Rafal Salustowicz, Jürgen Schmidhuber
Feature Selection via Concave Minimization and Support Vector Machines Paul S. Bradley, Olvi L. Mangasarian
Finite-Time Regret Bounds for the Multiarmed Bandit Problem Nicolò Cesa-Bianchi, Paul Fischer
Generating Accurate Rule Sets Without Global Optimization Eibe Frank, Ian H. Witten
Genetic Programming and Deductive-Inductive Learning: A Multi-Strategy Approach Ricardo Aler, Daniel Borrajo, Pedro Isasi
Heading in the Right Direction Hagit Shatkay, Leslie Pack Kaelbling
Improving Text Classification by Shrinkage in a Hierarchy of Classes Andrew McCallum, Ronald Rosenfeld, Tom M. Mitchell, Andrew Y. Ng
Intra-Option Learning About Temporally Abstract Actions Richard S. Sutton, Doina Precup, Satinder Singh
KnightCap: A Chess Program That Learns by Combining TD(lambda) with Game-Tree Search Jonathan Baxter, Andrew Tridgell, Lex Weaver
Learning a Language-Independent Representation for Terms from a Partially Aligned Corpus Michael L. Littman, Fan Jiang, Greg A. Keim
Learning Collaborative Information Filters Daniel Billsus, Michael J. Pazzani
Learning First-Order Acyclic Horn Programs from Entailment Chandra Reddy, Prasad Tadepalli
Learning Sorting and Decision Trees with POMDPs Blai Bonet, Hector Geffner
Learning the Grammar of Dance Joshua M. Stuart, Elizabeth Bradley
Learning to Drive a Bicycle Using Reinforcement Learning and Shaping Jette Randløv, Preben Alstrøm
Learning to Locate an Object in 3D Space from a Sequence of Camera Images Dimitris Margaritis, Sebastian Thrun
Local Cascade Generalization João Gama
Multi-Criteria Reinforcement Learning Zoltán Gábor, Zsolt Kalmár, Csaba Szepesvári
Multiagent Reinforcement Learning: Theoretical Framework and an Algorithm Junling Hu, Michael P. Wellman
Multiple-Instance Learning for Natural Scene Classification Oded Maron, Aparna Lakshmi Ratan
Multistrategy Learning for Information Extraction Dayne Freitag
Near-Optimal Reinforcement Learning in Polynomial Time Michael J. Kearns, Satinder Singh
On Feature Selection: Learning with Exponentially Many Irrelevant Features as Training Examples Andrew Y. Ng
On the Power of Decision Lists Richard Nock, Pascal Jappy
Q2: Memory-Based Active Learning for Optimizing Noisy Continuous Functions Andrew W. Moore, Jeff G. Schneider, Justin A. Boyan, Mary S. Lee
Query Learning Strategies Using Boosting and Bagging Naoki Abe, Hiroshi Mamitsuka
Refining Initial Points for K-Means Clustering Paul S. Bradley, Usama M. Fayyad
Relational Reinforcement Learning Saso Dzeroski, Luc De Raedt, Hendrik Blockeel
Ridge Regression Learning Algorithm in Dual Variables Craig Saunders, Alexander Gammerman, Volodya Vovk
RL-TOPS: An Architecture for Modularity and Re-Use in Reinforcement Learning Malcolm R. K. Ryan, Mark D. Pendrith
Solving a Huge Number of Similar Tasks: A Combination of Multi-Task Learning and a Hierarchical Bayesian Approach Tom Heskes
Stochastic Resonance with Adaptive Fuzzy Systems Sanya Mitaim, Bart Kosko
Structural Machine Learning with Galois Lattice and Graphs Michel Liquiere, Jean Sallantin
Teaching an Agent to Test Students Gheorghe Tecuci, Harry Keeling
The Case Against Accuracy Estimation for Comparing Induction Algorithms Foster J. Provost, Tom Fawcett, Ron Kohavi
The Kernel-Adatron Algorithm: A Fast and Simple Learning Procedure for Support Vector Machines Thilo-Thomas Frieß, Nello Cristianini, Colin Campbell
The MAXQ Method for Hierarchical Reinforcement Learning Thomas G. Dietterich
The Problem with Noise and Small Disjuncts Gary M. Weiss, Haym Hirsh
Theory Refinement of Bayesian Networks with Hidden Variables Sowmya Ramachandran, Raymond J. Mooney
Top-Down Induction of Clustering Trees Hendrik Blockeel, Luc De Raedt, Jan Ramon
Using a Permutation Test for Attribute Selection in Decision Trees Eibe Frank, Ian H. Witten
Using Eligibility Traces to Find the Best Memoryless Policy in Partially Observable Markov Decision Processes John Loch, Satinder Singh
Using Learning for Approximation in Stochastic Processes Daphne Koller, Raya Fratkina
Value Function Based Production Scheduling Jeff G. Schneider, Justin A. Boyan, Andrew W. Moore
Well-Behaved Borgs, Bolos, and Berserkers Diana F. Gordon