Markov Network Structure Learning: A Randomized Feature Generation Approach
Abstract
The structure of a Markov network is typically learned in one of two ways. The first approach is to treat this task as a global search problem. However, these algorithms are slow as they require running the expensive operation of weight (i.e., parameter) learning many times. The second approach involves learning a set of local models and then combining them into a global model. However, it can be computationally expensive to learn the local models for datasets that contain a large number of variables and/or examples. This paper pursues a third approach that views Markov network structure learning as a feature generation problem. The algorithm combines a data-driven, specific-to-general search strategy with randomization to quickly generate a large set of candidate features that all have support in the data. It uses weight learning, with L1 regularization, to select a subset of the generated features to include in the model. In a large empirical study, we find that our algorithm is as accurate as other state-of-the-art methods while exhibiting a much faster run time.
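The abstract outlines a two-step recipe: generate many candidate features by randomized, specific-to-general search seeded from training examples, then let L1-regularized weight learning prune the set. The sketch below is a minimal, hypothetical illustration of that general idea, not the paper's algorithm: all function names are invented for this example, and an L1-penalized logistic regression over a single target variable stands in for the paper's L1-regularized Markov network weight learning.

```python
"""Hypothetical sketch of randomized feature generation + L1-based selection.

Not the authors' method: a simplified stand-in to illustrate the two steps
described in the abstract.
"""
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)


def generate_candidates(X, n_candidates=200, drop_prob=0.6):
    """Randomized specific-to-general generation of conjunctive features.

    Each candidate starts from a concrete training example (most specific)
    and is generalized by randomly dropping variables, leaving a conjunction
    over the remaining variable/value pairs. By construction, every candidate
    is satisfied by at least one training example, i.e., it has support in
    the data.
    """
    n, d = X.shape
    candidates = []
    for _ in range(n_candidates):
        row = X[rng.integers(n)]           # seed example
        keep = rng.random(d) > drop_prob   # randomly generalize (drop variables)
        if not keep.any():                 # keep at least one variable
            keep[rng.integers(d)] = True
        candidates.append([(j, row[j]) for j in np.flatnonzero(keep)])
    return candidates


def featurize(X, candidates):
    """Binary indicator matrix: does each example satisfy each conjunction?"""
    F = np.ones((X.shape[0], len(candidates)), dtype=int)
    for k, conj in enumerate(candidates):
        for j, v in conj:
            F[:, k] &= (X[:, j] == v)
    return F


def select_features(F, y, C=0.1):
    """Stand-in for L1-regularized weight learning: keep nonzero-weight features."""
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    clf.fit(F, y)
    return np.flatnonzero(clf.coef_[0] != 0.0)


if __name__ == "__main__":
    # Toy binary data: 500 examples over 10 variables; predict variable 0
    # from conjunctive features over the remaining variables.
    X = rng.integers(0, 2, size=(500, 10))
    y, X_rest = X[:, 0], X[:, 1:]
    candidates = generate_candidates(X_rest)
    F = featurize(X_rest, candidates)
    kept = select_features(F, y)
    print(f"kept {len(kept)} of {len(candidates)} candidate features")
```

In this simplified setting, the L1 penalty drives most candidate weights to exactly zero, so the surviving features play the role of the selected model structure; the paper's actual weight learning operates jointly over the Markov network rather than on a single target variable.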
Cite
Text
Van Haaren and Davis. "Markov Network Structure Learning: A Randomized Feature Generation Approach." AAAI Conference on Artificial Intelligence, 2012. doi:10.1609/AAAI.V26I1.8315
Markdown
[Van Haaren and Davis. "Markov Network Structure Learning: A Randomized Feature Generation Approach." AAAI Conference on Artificial Intelligence, 2012.](https://mlanthology.org/aaai/2012/haaren2012aaai-markov/) doi:10.1609/AAAI.V26I1.8315
BibTeX
@inproceedings{haaren2012aaai-markov,
  title     = {{Markov Network Structure Learning: A Randomized Feature Generation Approach}},
  author    = {Van Haaren, Jan and Davis, Jesse},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2012},
  pages     = {1148-1154},
  doi       = {10.1609/AAAI.V26I1.8315},
  url       = {https://mlanthology.org/aaai/2012/haaren2012aaai-markov/}
}