Semi-Supervised Learning with Explicit Misclassification Modeling
Abstract
To harness modern multi-core processors, it is imperative to develop parallel versions of fundamental algorithms. In this paper, we present a general approach to best-first heuristic search in a shared-memory setting. Each thread attempts to expand the most promising open nodes. By using abstraction to partition the state space, we detect duplicate states without requiring frequent locking. We allow speculative expansions when necessary to keep threads busy. We identify and fix potential livelock conditions in our approach, verifying its correctness using temporal logic. In an empirical comparison on STRIPS planning, grid pathfinding, and sliding tile puzzle problems using an 8-core machine, we show that A* implemented in our framework yields faster search than improved versions of previous parallel search proposals. Our approach extends easily to other best-first searches, such as Anytime weighted A*.
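The abstraction-based partitioning described above can be illustrated with a toy sketch. This is my own construction under stated assumptions, not the authors' implementation: states of a small unit-cost grid pathfinding problem are hashed into blocks by a coarsening function, and each block carries its own lock, open heap, and duplicate table, so worker threads synchronize per block rather than on one global open list.

```python
import heapq
import threading

# Toy sketch (not the paper's code) of abstraction-based partitioning
# for parallel best-first search: each abstract block owns a lock, an
# open heap, and a duplicate table, so threads rarely contend on a
# single global structure. Problem: 8x8 grid, unit move costs.

GOAL = (7, 7)

def neighbors(s):
    x, y = s
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx <= 7 and 0 <= ny <= 7:
            yield (nx, ny)

def heuristic(s):
    # Manhattan distance: admissible and consistent for unit-cost grids.
    return abs(s[0] - GOAL[0]) + abs(s[1] - GOAL[1])

def abstraction(s):
    # Map each cell to a coarse 4x4 region: four blocks in total.
    return (s[0] // 4, s[1] // 4)

class Block:
    def __init__(self):
        self.lock = threading.Lock()
        self.open = []     # (f, g, state) min-heap
        self.best_g = {}   # per-block duplicate detection

class Search:
    def __init__(self, start):
        self.blocks = {}
        self.meta = threading.Lock()   # guards blocks dict and counters
        self.in_flight = 0             # nodes popped but not yet expanded
        self.incumbent = None          # best goal cost found so far
        self.push(start, 0)

    def block_for(self, s):
        with self.meta:
            return self.blocks.setdefault(abstraction(s), Block())

    def push(self, s, g):
        b = self.block_for(s)
        with b.lock:
            if g >= b.best_g.get(s, float("inf")):
                return                 # duplicate with no improvement
            b.best_g[s] = g
            heapq.heappush(b.open, (g + heuristic(s), g, s))

    def worker(self):
        while True:
            popped = None
            with self.meta:
                snapshot = list(self.blocks.values())
            for b in snapshot:
                with b.lock:
                    if b.open:
                        with self.meta:
                            self.in_flight += 1
                        popped = heapq.heappop(b.open)
                        break
            if popped is None:
                with self.meta:
                    if self.in_flight == 0:
                        return         # no open nodes anywhere: quiesced
                continue
            _, g, s = popped
            if s == GOAL:
                with self.meta:
                    if self.incumbent is None or g < self.incumbent:
                        self.incumbent = g
            else:
                for n in neighbors(s):
                    self.push(n, g + 1)
            with self.meta:
                self.in_flight -= 1    # decrement only after all pushes

search = Search((0, 0))
threads = [threading.Thread(target=search.worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(search.incumbent)   # optimal cost 14 on the empty 8x8 grid
```

Because duplicate detection only admits strictly improving g-values and the in-flight counter is decremented only after a node's successors are pushed, the search drains every open list before any thread can conclude the system is quiescent; the incumbent at join time is therefore the optimal cost.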
Cite
Text
Amini and Gallinari. "Semi-Supervised Learning with Explicit Misclassification Modeling." International Joint Conference on Artificial Intelligence, 2003.
Markdown
[Amini and Gallinari. "Semi-Supervised Learning with Explicit Misclassification Modeling." International Joint Conference on Artificial Intelligence, 2003.](https://mlanthology.org/ijcai/2003/amini2003ijcai-semi/)
BibTeX
@inproceedings{amini2003ijcai-semi,
title = {{Semi-Supervised Learning with Explicit Misclassification Modeling}},
author = {Amini, Massih-Reza and Gallinari, Patrick},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2003},
pages = {555-560},
url = {https://mlanthology.org/ijcai/2003/amini2003ijcai-semi/}
}