
Hybrid as in a combination of different LLMs. I recommend trying the demo on the site; it should give you an idea of what it's doing. The code is also pretty short.

So those numbers are from an older version of the benchmark.

Coherence is done by:

- Translating the English text to the target language and back to English

- Repeating that round trip three times

- Having three LLMs score how close the original English is to the final English

I like it because it's robust against LLM bias, but it obviously isn't exact, and I found that past a certain point it's actually negatively correlated with quality, because it incentivises literal, word-by-word translations.
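The round-trip procedure above can be sketched roughly like this. Note that `translate` and the judge functions are hypothetical stand-ins for real LLM calls; the names, signatures, and 0-1 score scale here are my assumptions, not the benchmark's actual code.

```python
# Sketch of the round-trip coherence metric (assumed interfaces, not the real code).

def coherence_score(original_en, target_lang, translate, judges, rounds=3):
    """Round-trip English -> target -> English `rounds` times,
    then average the judges' similarity scores (0-1)."""
    text = original_en
    for _ in range(rounds):
        text = translate(text, src="en", dst=target_lang)
        text = translate(text, src=target_lang, dst="en")
    # Each judge rates how close the round-tripped English is to the original.
    scores = [judge(original_en, text) for judge in judges]
    return sum(scores) / len(scores)

# Toy demo with an identity "translator" and trivial exact-match judges:
identity = lambda text, src, dst: text
exact = lambda a, b: 1.0 if a == b else 0.0
print(coherence_score("hello world", "de", identity, [exact] * 3))  # → 1.0
```

A perfectly rigid, literal translator maximises this score, which is exactly the failure mode described above: beyond a point, round-trip fidelity rewards word-by-word output over idiomatic phrasing.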

Accuracy and idiomaticity are based on asking the judge LLMs to rate how accurate / idiomatic the translations are. I mostly focused on idiomaticity, as it was the differentiator at the upper end.
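A judge prompt for those ratings might look something like the following. The wording and the 0-100 scale are assumptions on my part (the comment mentions the newer benchmark uses 0-100 scoring), not the benchmark's actual prompt.

```python
# Hypothetical judge prompt for the accuracy / idiomaticity ratings.

def judge_prompt(source, translation, dimension):
    """Build a rating prompt for one judge LLM (assumed format)."""
    assert dimension in ("accuracy", "idiomaticity")
    return (
        f"Rate the {dimension} of this translation on a scale of 0-100.\n"
        f"Source: {source}\n"
        f"Translation: {translation}\n"
        "Reply with a single integer."
    )

print(judge_prompt("Good morning", "Guten Morgen", "idiomaticity"))
```

Averaging several judges' integer replies per dimension would then give the per-system scores.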

The new benchmark has gone through a few iterations, and I'm still not super happy with it. It's now based purely on LLM scoring (this time 0-100), but with better stats, prompting, etc. I've also done some small-scale coherence tests, including more today that I haven't published yet, and again they have DeepL and Lingvanex doing well, because they tend towards quite rigid translations over idiomatic ones. Interestingly, Claude 4 is also doing quite well on those metrics.

I need to sleep, but I can discuss it more tomorrow, if you'd like.


