Approximating Maximum-Entropy Ratings for Evidential Parsing and Semantic Interpretation

Abstract

We consider the problem of assigning probabilistic ratings to hypotheses in a natural language interpretation system. To facilitate integrating syntactic, semantic, and conceptual constraints, we allow a fully compositional frame representation, which permits co-indexed syntactic constituents and/or semantic entities filling multiple roles. In addition, the knowledge base contains probabilistic information encoded by marginal probabilities on frames. These probabilities are used to specify typicality of real-world scenarios on the one hand, and conventionality of linguistic usage patterns on the other. Because the theoretical maximum-entropy solution is infeasible in the general case, we propose an approximate method. This method's strengths are (1) its ability to rate compositional structures, and (2) its flexibility with respect to the inputs chosen by the system it is embedded in. Arbitrary sets of hypotheses from the front-end processor can be accepted, as well as arbitrary subsets of co...
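To illustrate the underlying principle (this is a generic sketch, not the paper's approximation method): given only marginal probabilities as constraints, the maximum-entropy joint distribution can be approached by iterative proportional fitting (IPF), which alternately rescales a joint table to match each marginal. The two binary "frame" variables and their marginals below are hypothetical.

```python
def ipf_max_entropy(p_a, p_b, iters=50):
    """Approximate the maximum-entropy joint over two binary variables A, B,
    subject only to the marginal constraints P(A=1)=p_a and P(B=1)=p_b."""
    # Start from the uniform (unconstrained maximum-entropy) distribution.
    joint = [[0.25, 0.25], [0.25, 0.25]]
    targets_a = [1.0 - p_a, p_a]
    targets_b = [1.0 - p_b, p_b]
    for _ in range(iters):
        # Rescale rows so the marginal on A matches its target.
        for a in range(2):
            row_sum = joint[a][0] + joint[a][1]
            for b in range(2):
                joint[a][b] *= targets_a[a] / row_sum
        # Rescale columns so the marginal on B matches its target.
        for b in range(2):
            col_sum = joint[0][b] + joint[1][b]
            for a in range(2):
                joint[a][b] *= targets_b[b] / col_sum
    return joint

joint = ipf_max_entropy(0.3, 0.6)
# With only unary marginal constraints, the max-entropy joint factorizes,
# so P(A=1, B=1) converges to 0.3 * 0.6 = 0.18.
```

With richer constraints (e.g., marginals on frame conjunctions), the same scaling loop applies but the solution no longer factorizes, which is where exact computation becomes infeasible in general.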

Cite

Text

Wu. "Approximating Maximum-Entropy Ratings for Evidential Parsing and Semantic Interpretation." International Joint Conference on Artificial Intelligence, 1993.

Markdown

[Wu. "Approximating Maximum-Entropy Ratings for Evidential Parsing and Semantic Interpretation." International Joint Conference on Artificial Intelligence, 1993.](https://mlanthology.org/ijcai/1993/wu1993ijcai-approximating/)

BibTeX

@inproceedings{wu1993ijcai-approximating,
  title     = {{Approximating Maximum-Entropy Ratings for Evidential Parsing and Semantic Interpretation}},
  author    = {Wu, Dekai},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {1993},
  pages     = {1290--1296},
  url       = {https://mlanthology.org/ijcai/1993/wu1993ijcai-approximating/}
}