Evan Martin (evan) wrote in evan_tech,

NIST results

NIST 2005 Machine Translation Evaluation Official Results. As I mentioned before, Franz & co. (congrats again, hawk!) totally rocked it.

But it's worth noting that (at least according to Franz's papers from before Google; I don't know much about what they're actually doing here) part of his approach is to use the BLEU score as the objective function in their learning. This does make sense: the BLEU score was designed to correlate with human judgments of translation quality, so it's a reasonable function to optimize. And the sentences they were given to translate must have been entirely separate from all available training data. But it still feels a little weird to me that you'd optimize on the metric used to judge; it means you can make "simple" translation mistakes (at least to a human observer) and still get a good score, as long as the scoring function doesn't account for that sort of mistake.
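For anyone who hasn't seen the metric up close: BLEU is essentially modified n-gram precision combined with a brevity penalty. Here's a minimal single-reference, sentence-level sketch (real evaluations like NIST's use corpus-level statistics and multiple references, and typically add smoothing for short sentences; none of this reflects Google's actual system):

```python
from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU against a single reference (a simplification:
    standard BLEU is corpus-level and supports multiple references)."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ng, ref_ng = ngrams(cand, n), ngrams(ref, n)
        # "Modified" precision: clip each n-gram's count by its count
        # in the reference, so repeating a word can't inflate the score.
        overlap = sum(min(c, ref_ng[g]) for g, c in cand_ng.items())
        total = max(sum(cand_ng.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # unsmoothed: any zero precision zeroes the score
    # Brevity penalty punishes candidates shorter than the reference,
    # since precision alone would reward dropping words.
    bp = exp(1 - len(ref) / len(cand)) if len(cand) < len(ref) else 1.0
    return bp * exp(sum(log(p) for p in precisions) / max_n)
```

Nothing in the metric looks at whether the mistakes it forgives are the ones a human reader would notice, which is exactly the weirdness of optimizing against it.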
Tags: google, linguistics, papers
