
Lucene Study Notes: A Deep Dive into Analyzer (1)


Analyzer is a crucial link in Lucene's processing chain. Victor, in his article, compares it to the human gut, which breaks food down into small pieces that are easy to absorb. The analogy is apt: an Analyzer's job is to break text into tokens that Lucene can handle. Lucene ships with four built-in Analyzers: WhitespaceAnalyzer, SimpleAnalyzer, StopAnalyzer, and StandardAnalyzer. Let us take the two sentences "The quick brown fox jumped over the lazy dogs" and "XY&Z Corporation - xyz@example.com" as examples and see how each of these four Analyzers actually breaks the text apart. (The example is taken from Lucene in Action.)

Analyzing "The quick brown fox jumped over the lazy dogs"

WhitespaceAnalyzer:

[The] [quick] [brown] [fox] [jumped] [over] [the] [lazy] [dogs]

SimpleAnalyzer:

[the] [quick] [brown] [fox] [jumped] [over] [the] [lazy] [dogs]

StopAnalyzer:

[quick] [brown] [fox] [jumped] [over] [lazy] [dogs]

StandardAnalyzer:

[quick] [brown] [fox] [jumped] [over] [lazy] [dogs]

Analyzing "XY&Z Corporation - xyz@example.com"

WhitespaceAnalyzer:

[XY&Z] [Corporation] [-] [xyz@example.com]

SimpleAnalyzer:

[xy] [z] [corporation] [xyz] [example] [com]

StopAnalyzer:

[xy] [z] [corporation] [xyz] [example] [com]

StandardAnalyzer:

[xy&z] [corporation] [xyz@example.com]

(The code that produces the output above is listed in the appendix.)

Looking at how the four Analyzers handle the two sentences, we can see that WhitespaceAnalyzer only splits the text on whitespace. SimpleAnalyzer splits on punctuation as well as whitespace, and also lower-cases every letter. StopAnalyzer builds on SimpleAnalyzer by additionally removing stop words such as "the" and "a". StandardAnalyzer is the most powerful: on the surface it appears to split on whitespace and drop a few stop words, but it actually does real token recognition; a string like "xyz@example.com", for instance, is recognized as an email address and kept as a single token.

Now that we know what the four Analyzers do, how are they implemented? Let us start with the Analyzer class hierarchy:

[See Figure 1: the Analyzer class hierarchy]

As the figure shows, all four Analyzers extend Analyzer. Analyzer itself is an abstract class that declares just one abstract method, tokenStream. Here is the Analyzer class:

public abstract class Analyzer {
    public abstract TokenStream tokenStream(String fieldName, Reader reader);
    …
}

Turning to the four subclasses, their code is surprisingly short; Lucene hides the implementation details in a handful of other classes.

public final class WhitespaceAnalyzer extends Analyzer {
    public TokenStream tokenStream(String fieldName, Reader reader) {
        return new WhitespaceTokenizer(reader);
    }
}

public final class SimpleAnalyzer extends Analyzer {
    public TokenStream tokenStream(String fieldName, Reader reader) {
        return new LowerCaseTokenizer(reader);
    }
}

WhitespaceAnalyzer and SimpleAnalyzer could hardly be simpler: they just hand the work over to WhitespaceTokenizer and LowerCaseTokenizer, respectively.
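Since the real work happens inside those tokenizer classes, a rough sketch of what such a class does internally may help. The following is only an illustration written against the old next()/Token API used throughout this article; the class name is made up, and the real WhitespaceTokenizer extends Lucene's CharTokenizer and works character by character instead of reading everything up front.

import java.io.IOException;
import java.io.Reader;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;

// Hypothetical, heavily simplified whitespace tokenizer, for illustration only.
public class SimpleWhitespaceTokenizer extends TokenStream {
    private final String[] words;
    private int index = 0;
    private int offset = 0;

    public SimpleWhitespaceTokenizer(Reader reader) throws IOException {
        // Read the whole input, then split it on runs of whitespace.
        StringBuffer sb = new StringBuffer();
        int c;
        while ((c = reader.read()) != -1) {
            sb.append((char) c);
        }
        words = sb.toString().split("\\s+");
    }

    public Token next() throws IOException {
        while (index < words.length) {
            String word = words[index++];
            if (word.length() == 0) {
                continue;
            }
            // A real tokenizer records exact start/end offsets into the
            // original text; here they are only approximated.
            Token token = new Token(word, offset, offset + word.length());
            offset += word.length() + 1;
            return token;
        }
        return null;   // end of the stream
    }
}

Whatever the internals look like, the contract is the same: every call to next() hands back one more token until the input is exhausted. The remaining two built-in Analyzers, shown next, are a bit more involved.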

public final class StopAnalyzer extends Analyzer {
    private Set stopWords;

    public static final String[] ENGLISH_STOP_WORDS = {
        "a", "an", "and", "are", "as", "at", "be", "but", "by",
        "for", "if", "in", "into", "is", "it",
        "no", "not", "of", "on", "or", "s", "such",
        "t", "that", "the", "their", "then", "there", "these",
        "they", "this", "to", "was", "will", "with"
    };

    // The constructors (omitted here) fill stopWords, by default from ENGLISH_STOP_WORDS.
    public TokenStream tokenStream(String fieldName, Reader reader) {
        return new StopFilter(new LowerCaseTokenizer(reader), stopWords);
    }
}

public class StandardAnalyzer extends Analyzer {
    private Set stopSet;

    public static final String[] STOP_WORDS = StopAnalyzer.ENGLISH_STOP_WORDS;

    // As with StopAnalyzer, the constructors (omitted here) initialize stopSet.
    public TokenStream tokenStream(String fieldName, Reader reader) {
        TokenStream result = new StandardTokenizer(reader);
        result = new StandardFilter(result);
        result = new LowerCaseFilter(result);
        result = new StopFilter(result, stopSet);
        return result;
    }
}

StopAnalyzer is a little more involved, but it too delegates the real work, in this case to StopFilter. StandardAnalyzer is the most elaborate of the four: it chains three filters, StandardFilter, LowerCaseFilter, and StopFilter, on top of a StandardTokenizer.
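The chaining in StandardAnalyzer.tokenStream is worth pausing on, because it is exactly the pattern you would use to assemble your own Analyzer. As a rough sketch (the class name and stop list are invented, and the old StopFilter/makeStopSet API shown in this article is assumed), a custom analyzer that lower-cases its tokens and removes a project-specific stop list could look like this:

import java.io.Reader;
import java.util.Set;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseTokenizer;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;

// Hypothetical analyzer that chains a Tokenizer and a Filter,
// in the same style as StandardAnalyzer above.
public final class MyStopAnalyzer extends Analyzer {
    private static final String[] MY_STOP_WORDS = { "foo", "bar", "baz" };
    private final Set stopWords = StopFilter.makeStopSet(MY_STOP_WORDS);

    public TokenStream tokenStream(String fieldName, Reader reader) {
        TokenStream result = new LowerCaseTokenizer(reader);   // split and lower-case
        result = new StopFilter(result, stopWords);            // then drop stop words
        return result;
    }
}

Note that the order of the chain matters: the tokens must already be lower-cased before the StopFilter sees them, otherwise a token like "The" would slip past a lower-case stop list.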

From the code above it is easy to see that the four Analyzers only ever deal with two kinds of classes: the various Tokenizers and the various Filters. Let us look at their class hierarchy: [See Figure 2]

It turns out that all the Filter classes extend TokenFilter, and all the Tokenizer classes extend Tokenizer. Both of these abstract classes in turn extend TokenStream, and TokenFilter, besides extending TokenStream, also holds a reference to another TokenStream instance.

So what exactly is TokenStream? As the name suggests, a TokenStream is simply a stream of tokens, in other words a token sequence. Here is the TokenStream class:

public abstract class TokenStream {
    public abstract Token next() throws IOException;

    public void close() throws IOException {}
}

TokenStream declares only one abstract method, next. Each call to next returns one token from the stream, so calling it repeatedly walks through every token in the TokenStream; this is exactly how the AnalyzerUtils helper in the appendix consumes a stream.

So what exactly is a Tokenizer, and what is a TokenFilter?

Lucene's own documentation puts it this way:

A Tokenizer is a TokenStream whose input is a Reader.

A TokenFilter is a TokenStream whose input is another token stream.

These two sentences tell us that a Tokenizer is a TokenStream whose input is a Reader, while a TokenFilter is a TokenStream whose input is another TokenStream. On the surface the only difference is the input, but that difference is exactly what determines their roles. A Tokenizer does the first-stage processing: it takes the raw text read from the Reader and, using fairly simple rules, turns it into an initial sequence of tokens. A TokenFilter then takes a TokenStream (typically a Tokenizer, since Tokenizer extends TokenStream) as its input and applies rules to weed out tokens that should not survive, such as the stop words removed by StopFilter, producing the final token stream. Recall from above that WhitespaceAnalyzer and SimpleAnalyzer each return a bare Tokenizer, whereas StopAnalyzer and StandardAnalyzer return TokenFilters. That is because the first two have rules simple enough that a single Tokenizer pass over the Reader is all they need, while the latter two require the extra filtering that TokenFilters layer on top of a Tokenizer's output; a small example of such a custom filter is sketched below. (To be continued)
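To make the division concrete, here is a minimal sketch of a custom filter in the same decorator style as StopFilter. Everything about it is hypothetical, the class name as well as the rule it applies, and it is written against the old next()/termText() API that appears in this article; it simply wraps another TokenStream and discards every token shorter than a given length.

import java.io.IOException;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

// Hypothetical filter: keeps only tokens of at least minLength characters.
public class MinLengthFilter extends TokenFilter {
    private final int minLength;

    public MinLengthFilter(TokenStream input, int minLength) {
        super(input);   // TokenFilter keeps the wrapped stream in its "input" field
        this.minLength = minLength;
    }

    public Token next() throws IOException {
        // Pull tokens from the wrapped stream until one passes the test.
        for (Token token = input.next(); token != null; token = input.next()) {
            if (token.termText().length() >= minLength) {
                return token;
            }
        }
        return null;   // the wrapped stream is exhausted
    }
}

Using it is just one more link in the chain, for example new MinLengthFilter(new LowerCaseTokenizer(reader), 3).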

Appendix: the code used to test the four Analyzers

import java.io.IOException;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.SimpleAnalyzer;
import org.apache.lucene.analysis.StopAnalyzer;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

/**
 * Adapted from code which first appeared in a java.net article
 * written by Erik
 */
public class AnalyzerDemo {
    private static final String[] examples = {
        "The quick brown fox jumped over the lazy dogs",
        "XY&Z Corporation - xyz@example.com"
    };

    private static final Analyzer[] analyzers = new Analyzer[] {
        new WhitespaceAnalyzer(),
        new SimpleAnalyzer(),
        new StopAnalyzer(),
        new StandardAnalyzer()
    };

    public static void main(String[] args) throws IOException {
        // Use the embedded example strings, unless
        // command line arguments are specified, then use those.
        String[] strings = examples;
        if (args.length > 0) {
            strings = args;
        }
        for (int i = 0; i < strings.length; i++) {
            analyze(strings[i]);
        }
    }

    private static void analyze(String text) throws IOException {
        System.out.println("Analyzing \"" + text + "\"");
        for (int i = 0; i < analyzers.length; i++) {
            Analyzer analyzer = analyzers[i];
            String name = analyzer.getClass().getName();
            name = name.substring(name.lastIndexOf(".") + 1);
            System.out.println("  " + name + ":");
            System.out.print("    ");
            AnalyzerUtils.displayTokens(analyzer, text);
            System.out.println("\n");
        }
    }
}

import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;

public class AnalyzerUtils {
    public static Token[] tokensFromAnalysis(Analyzer analyzer, String text)
            throws IOException {
        TokenStream stream =
            analyzer.tokenStream("contents", new StringReader(text));
        ArrayList tokenList = new ArrayList();
        while (true) {
            Token token = stream.next();
            if (token == null) break;
            tokenList.add(token);
        }
        return (Token[]) tokenList.toArray(new Token[0]);
    }

    public static void displayTokens(Analyzer analyzer, String text)
            throws IOException {
        Token[] tokens = tokensFromAnalysis(analyzer, text);
        for (int i = 0; i < tokens.length; i++) {
            Token token = tokens[i];
            System.out.print("[" + token.termText() + "] ");
        }
    }
}
