We compare manual and automatic approaches to the problem of extracting bitexts from the Web, in the framework of a case study on building a Russian-Kazakh parallel corpus. Our findings suggest that targeted, site-specific crawling results in cleaner bitexts with a higher ratio of parallel sentences. We also find that general crawlers combined with boilerplate removal tools tend to retrieve shorter texts, as some content gets cleaned out along with the markup. For sentence splitting and alignment, we show that investing some effort in data pre- and post-processing, as well as tuning off-the-shelf solutions, pays noticeable dividends. Overall we observe that, depending on the source, automatic bitext extraction methods may suffer severely in coverage (retrieving fewer sentence pairs) and on average are less precise (fewer of the retrieved pairs are actually parallel). We conclude that if one aims at extracting high-quality bitexts for a small number of language pairs, automatic methods are best avoided, or at least used with caution.
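The abstract does not name the specific tools or filters used, so the snippet below is only an illustrative sketch of the kind of post-processing it alludes to: a character-length-ratio sanity check on aligned sentence pairs, a standard heuristic for cleaning bitexts. The function name, thresholds, and toy Russian-Kazakh pairs are assumptions for illustration, not details from the paper.

```python
def filter_pairs(pairs, min_len=3, max_ratio=2.5):
    """Keep (src, tgt) pairs that pass simple length-based sanity checks.

    min_len and max_ratio are illustrative values, not from the paper.
    """
    kept = []
    for src, tgt in pairs:
        if len(src) < min_len or len(tgt) < min_len:
            continue  # drop fragments left behind by boilerplate removal
        ratio = max(len(src), len(tgt)) / min(len(src), len(tgt))
        if ratio <= max_ratio:  # translations rarely differ this much in length
            kept.append((src, tgt))
    return kept

# Toy examples: a plausible translation pair and a likely misalignment.
pairs = [
    ("Это пример предложения.", "Бұл сөйлем мысалы."),
    ("Меню", "Главная страница сайта и все её разделы"),
]
print(filter_pairs(pairs))  # only the first pair survives
```

Filters of this kind are typically applied after an off-the-shelf aligner has produced candidate pairs, which is one inexpensive way the "post-processing" mentioned above can raise the ratio of truly parallel sentences.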
Title of host publication: 11th International Conference on Language Resources and Evaluation
Place of Publication: Miyazaki, Japan
Publication status: Published - 2018