An Efficient Approximation Algorithm for Finding a Maximum Clique Using Hopfield Network Learning

Abstract

In this article, we present a solution to the maximum clique problem using a gradient-ascent learning algorithm for the Hopfield neural network. This method provides a near-optimum parallel algorithm for finding a maximum clique. We first use the Hopfield neural network to generate a near-maximum clique, and then modify the network weights in a gradient-ascent direction so that the network can escape from the near-maximum clique toward a maximum clique or a better solution. The proposed parallel algorithm is tested on two types of random graphs and on benchmark graphs from the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS). The simulation results show that the proposed learning algorithm can find good solutions in reasonable computation time.
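The two-phase procedure described in the abstract (Hopfield relaxation to a near-maximum clique, followed by a weight adjustment that lets the network escape that state) can be sketched as follows. This is a minimal illustrative stand-in, not the paper's actual formulation: the energy function, the non-edge penalty weight `B`, the per-neuron bias with its penalty/decay schedule, and the random restarts are all assumptions introduced here for the sketch. At a fixed point of the update rule, the set of active neurons is a clique, and the bias penalty on the vertices of the current clique plays the role of the paper's gradient-ascent weight modification.

```python
import random

def find_clique_hopfield(neighbors, sweeps=50, rounds=30, seed=0):
    """Hopfield-style maximum-clique search with a simple weight-learning
    escape step (illustrative sketch, not the paper's exact algorithm).

    neighbors[i] is the set of vertices adjacent to vertex i.
    Returns the largest clique (as a set of vertices) found.
    """
    rng = random.Random(seed)
    n = len(neighbors)
    B = 2.0            # penalty weight on active non-edges (assumed, B > 1)
    bias = [1.0] * n   # per-neuron reward; the "learning" step lowers it
    v = [0] * n        # binary neuron states
    best = set()

    for _ in range(rounds):
        # Asynchronous Hopfield updates: neuron i switches on only if its
        # bias outweighs the penalty from active non-neighbors. With B > 1
        # and bias <= 1, any fixed point's active set is a clique.
        for _ in range(sweeps):
            changed = False
            for i in rng.sample(range(n), n):
                conflicts = sum(1 for j in range(n)
                                if v[j] and j != i and j not in neighbors[i])
                new = 1 if bias[i] - B * conflicts > 0 else 0
                if new != v[i]:
                    v[i] = new
                    changed = True
            if not changed:      # reached a fixed point
                break

        active = [i for i in range(n) if v[i]]
        is_clique = all(b in neighbors[a]
                        for a in active for b in active if a != b)
        if is_clique and len(active) > len(best):
            best = set(active)

        # "Learning" step: penalize the neurons of the current fixed point
        # so the network can escape it, while earlier penalties decay back.
        for j in range(n):
            bias[j] = min(1.0, bias[j] + 0.05)
        for i in active:
            bias[i] -= 0.3

        # Restart the dynamics from a random state.
        v = [rng.randint(0, 1) for _ in range(n)]

    return best
```

For example, on a graph consisting of a 4-clique {0, 1, 2, 3} with a pendant path 3-4-5, the search settles on the 4-clique rather than one of the smaller maximal cliques {3, 4} or {4, 5}, because the penalty on a small clique's vertices pushes the network toward other fixed points on later rounds.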

Cite

Text

Wang et al. "An Efficient Approximation Algorithm for Finding a Maximum Clique Using Hopfield Network Learning." Neural Computation, 2003. doi:10.1162/089976603321891828

Markdown

[Wang et al. "An Efficient Approximation Algorithm for Finding a Maximum Clique Using Hopfield Network Learning." Neural Computation, 2003.](https://mlanthology.org/neco/2003/wang2003neco-efficient/) doi:10.1162/089976603321891828

BibTeX

@article{wang2003neco-efficient,
  title     = {{An Efficient Approximation Algorithm for Finding a Maximum Clique Using Hopfield Network Learning}},
  author    = {Wang, Rong Long and Tang, Zheng and Cao, Qi Ping},
  journal   = {Neural Computation},
  year      = {2003},
  pages     = {1605--1619},
  doi       = {10.1162/089976603321891828},
  volume    = {15},
  url       = {https://mlanthology.org/neco/2003/wang2003neco-efficient/}
}