Code of Thrones

Japanese Tokenization with Java and Lucene

I was trying to write a Japanese text analysis program with Java and Lucene 4.4. After trying Lucene's CJKAnalyzer and Lucene-gosen, I ended up writing my own Tokenizer, Filter, and Analyzer.

Lucene CJKAnalyzer

Lucene 4.4 comes with a built-in analyzer for Chinese, Japanese, and Korean. The demo results for Chinese in Lucene's documentation seem quite good, so I gave it a try on Japanese:

final String s = "バカです。よろしくお願いいたします";
final CJKAnalyzer cjkAnalyzer = new CJKAnalyzer(Version.LUCENE_44);
final TokenStream tokenStream = cjkAnalyzer.tokenStream("", new StringReader(s));
final CharTermAttribute charTermAttribute = tokenStream.addAttribute(CharTermAttribute.class);
tokenStream.reset();
while (tokenStream.incrementToken()) {
    System.out.print(charTermAttribute.toString() + " ");
}
tokenStream.end();
tokenStream.close();

And here’s what I got:
バカ カで です よろ ろし しく くお お願 願い いい いた たし しま ます
Boo - it's pure bigrams of the sentence. Most of the bigrams actually make no sense in Japanese. Very バカ :)
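For intuition, what CJKAnalyzer is doing to Japanese here is essentially overlapping character bigramming. A minimal sketch of the same idea using only the JDK (no Lucene involved; the class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class BigramDemo {
    // Split a string into overlapping character bigrams,
    // which is roughly what CJKAnalyzer does for CJK text.
    static List<String> bigrams(final String s) {
        final List<String> result = new ArrayList<>();
        for (int i = 0; i + 1 < s.length(); i++) {
            result.add(s.substring(i, i + 2));
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(bigrams("バカです"));
        // prints [バカ, カで, です]
    }
}
```

Bigramming needs no dictionary, which is why it works "out of the box" - but it also knows nothing about actual word boundaries, which is exactly the problem above.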


I tried another library that works with Lucene called Lucene-gosen, but after taking a look at the source code, it apparently doesn't work with Lucene 4.4.


Sen seems to be the original project that Lucene-gosen is based on, so I figured we could wrap our own Tokenizer around Sen's components:

import java.io.IOException;
import java.io.Reader;

import net.java.sen.StreamTagger;
import net.java.sen.Token;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public final class JapaneseTokenizer extends Tokenizer {
    private final StreamTagger tagger;
    private final CharTermAttribute termAttr;

    public JapaneseTokenizer(final Reader in, final String senConfPath) throws IOException {
        super(in);
        tagger = new StreamTagger(in, senConfPath);
        termAttr = addAttribute(CharTermAttribute.class);
    }

    @Override
    public boolean incrementToken() throws IOException {
        if (!tagger.hasNext()) {
            return false;
        }
        clearAttributes();
        final Token token = tagger.next();
        termAttr.append(token.getSurface(), 0, token.length());
        return true;
    }
}

Apart from the Tokenizer, we should also provide some filters for common Japanese processing tricks, like removing punctuation, normalizing half-width characters, and ruling out stopwords. And here's the result I got using the Sen-based Tokenizer:
バカ です よろしく お願い いたし ます
Looks good! Cheers!
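As a sketch of two of those filter tricks using only the JDK (no Lucene types - the class and method names here are mine, not part of any library): NFKC normalization folds half-width katakana into their full-width forms, and Character.getType can flag punctuation-only tokens for removal:

```java
import java.text.Normalizer;

public class JapaneseFilterDemo {
    // Fold half-width katakana (and other compatibility forms)
    // into their canonical full-width equivalents via NFKC.
    static String normalizeWidth(final String token) {
        return Normalizer.normalize(token, Normalizer.Form.NFKC);
    }

    // True if every character in the token is punctuation,
    // e.g. the ideographic full stop 。 or comma 、.
    static boolean isPunctuation(final String token) {
        return !token.isEmpty() && token.chars().allMatch(c -> {
            final int type = Character.getType(c);
            return type == Character.OTHER_PUNCTUATION
                || type == Character.START_PUNCTUATION
                || type == Character.END_PUNCTUATION
                || type == Character.DASH_PUNCTUATION
                || type == Character.CONNECTOR_PUNCTUATION
                || type == Character.INITIAL_QUOTE_PUNCTUATION
                || type == Character.FINAL_QUOTE_PUNCTUATION;
        });
    }

    public static void main(String[] args) {
        System.out.println(normalizeWidth("ﾊﾞｶ"));  // prints バカ
        System.out.println(isPunctuation("。"));     // prints true
        System.out.println(isPunctuation("バカ"));   // prints false
    }
}
```

In a real Lucene filter chain, the same logic would live inside TokenFilter subclasses operating on the CharTermAttribute, stacked on top of the Tokenizer above inside a custom Analyzer.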

I was told that MeCab is more popular in the Japanese IT industry. I recommend trying it out if Sen cannot meet your needs.

Published: February 05 2015
