Harnessing Code Switching to Transcend the Linguistic Barrier

Abstract

Accurate local-level poverty measurement is essential for governments and humanitarian organizations to track progress toward improving livelihoods and to distribute scarce resources. Recent computer vision advances in using satellite imagery to predict poverty have shown increasing accuracy, but they do not generate features that are interpretable to policymakers, inhibiting adoption by practitioners. Here we demonstrate an interpretable computational framework that accurately predicts poverty at a local level by applying object detectors to high-resolution (30 cm) satellite images. Using the weighted counts of objects as features, we achieve 0.539 Pearson's r^2 in predicting village-level poverty in Uganda, a 31% improvement over existing (and less interpretable) benchmarks. Feature importance and ablation analyses reveal intuitive relationships between object counts and poverty predictions. Our results suggest that interpretability does not have to come at the cost of performance, at least in this important domain.

Cite

Text

KhudaBukhsh et al. "Harnessing Code Switching to Transcend the Linguistic Barrier." International Joint Conference on Artificial Intelligence, 2020. doi:10.24963/ijcai.2020/602

Markdown

[KhudaBukhsh et al. "Harnessing Code Switching to Transcend the Linguistic Barrier." International Joint Conference on Artificial Intelligence, 2020.](https://mlanthology.org/ijcai/2020/khudabukhsh2020ijcai-harnessing/) doi:10.24963/ijcai.2020/602

BibTeX

@inproceedings{khudabukhsh2020ijcai-harnessing,
  title     = {{Harnessing Code Switching to Transcend the Linguistic Barrier}},
  author    = {KhudaBukhsh, Ashiqur R. and Palakodety, Shriphani and Carbonell, Jaime G.},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {4366--4374},
  doi       = {10.24963/ijcai.2020/602},
  url       = {https://mlanthology.org/ijcai/2020/khudabukhsh2020ijcai-harnessing/}
}