Aduah, Wisdom and Meyer, Francois (2025) Designing and Contextualising Probes for African Languages, Proceedings of the Sixth Workshop on African Natural Language Processing (AfricaNLP 2025).
Text: 2025.africanlp-1.7.pdf (466 kB)
Abstract
Pretrained language models (PLMs) for African languages are continually improving, but the reasons behind these advances remain unclear. This paper presents the first systematic investigation into how knowledge about African languages is encoded in PLMs. We train layer-wise probes for six typologically diverse African languages to analyse how linguistic features are distributed. We also design control tasks, a method for interpreting probe performance, for the MasakhaPOS dataset. We find that PLMs adapted for African languages encode more linguistic information about target languages than massively multilingual PLMs. Our results reaffirm previous findings that token-level syntactic information concentrates in middle-to-last layers, while sentence-level semantic information is distributed across all layers. Through control tasks and probing baselines, we confirm that performance reflects the internal knowledge of PLMs rather than probe memorisation. Our study applies established interpretability techniques to African-language PLMs. In doing so, we highlight the internal mechanisms underlying the success of strategies like active learning and multilingual adaptation.
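For readers unfamiliar with probing, the sketch below illustrates the general recipe the abstract describes: a small linear classifier is trained on frozen layer-wise hidden states, and a control task with randomised word-to-tag mappings is used to separate genuine linguistic encoding from probe memorisation. The model name, toy tokens, tags, and hyperparameters are illustrative assumptions, not the paper's actual setup or MasakhaPOS data.

```python
# Minimal layer-wise POS probing sketch with a control task.
# Assumptions: xlm-roberta-base as the PLM, toy isiXhosa tokens, made-up tags.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

MODEL = "xlm-roberta-base"  # assumed multilingual PLM; adapted PLMs in the paper differ
tokenizer = AutoTokenizer.from_pretrained(MODEL)
plm = AutoModel.from_pretrained(MODEL, output_hidden_states=True).eval()

# Toy annotated (token, POS) pairs -- placeholders, not MasakhaPOS data.
sentence = [("Molo", "INTJ"), ("mhlobo", "NOUN"), ("wam", "PRON")]
tags = sorted({t for _, t in sentence})
tag2id = {t: i for i, t in enumerate(tags)}

# Control task: each word type gets a fixed random tag, so a probe can only
# succeed by memorising word identity, not by reading linguistic structure.
g = torch.Generator().manual_seed(0)
control2id = {w: torch.randint(len(tags), (1,), generator=g).item() for w, _ in sentence}

def token_representations(words, layer):
    """Frozen hidden states for each word (first sub-token) at a given layer."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = plm(**enc).hidden_states[layer][0]   # (seq_len, dim)
    first_subtok = [enc.word_ids().index(i) for i in range(len(words))]
    return hidden[first_subtok]                       # (n_words, dim)

def train_probe(reps, labels, epochs=200, lr=0.1):
    """Linear probe: the only trained parameters sit on top of frozen features."""
    probe = nn.Linear(reps.size(1), len(tags))
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    y = torch.tensor(labels)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(probe(reps), y).backward()
        opt.step()
    # NOTE: a real probing study evaluates on held-out data; this toy fits and
    # scores its own three tokens purely to show the mechanics.
    return (probe(reps).argmax(-1) == y).float().mean().item()

words = [w for w, _ in sentence]
for layer in range(plm.config.num_hidden_layers + 1):  # layer 0 = embeddings
    reps = token_representations(words, layer)
    task_acc = train_probe(reps, [tag2id[t] for _, t in sentence])
    ctrl_acc = train_probe(reps, [control2id[w] for w in words])
    # Selectivity = task accuracy minus control accuracy: high selectivity means
    # the probe reads structure from the layer rather than memorising tokens.
    print(f"layer {layer:2d}  POS acc {task_acc:.2f}  "
          f"control acc {ctrl_acc:.2f}  selectivity {task_acc - ctrl_acc:+.2f}")
```

With held-out evaluation data, comparing per-layer task accuracy against control accuracy is what lets the paper attribute probe performance to the PLM's internal knowledge rather than to memorisation by the probe itself.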
| Item Type: | Conference paper |
|---|---|
| Subjects: | Computing methodologies > Artificial intelligence > Natural language processing |
| Date Deposited: | 28 Jul 2025 12:38 |
| Last Modified: | 28 Jul 2025 12:38 |
| URI: | https://pubs.cs.uct.ac.za/id/eprint/1737 |