Class CharTokenizer

    • Method Detail

      • isTokenChar

        @Deprecated
        protected boolean isTokenChar(char c)
        Deprecated.
        use isTokenChar(int) instead. This method will be removed in Lucene 4.0.
        Returns true iff a UTF-16 code unit should be included in a token. This tokenizer generates as tokens adjacent sequences of characters which satisfy this predicate. Characters for which this is false are used to define token boundaries and are not included in tokens.

        Note: This method cannot handle supplementary characters. To support all Unicode characters, including supplementary characters, use the isTokenChar(int) method.
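
        For illustration, the following standalone JDK-only sketch (the class name is invented for this example) shows why a char-based predicate cannot classify supplementary characters: such a character occupies two UTF-16 code units, and each surrogate seen in isolation carries no useful character category.

        public class SupplementaryCharDemo {
          public static void main(String[] args) {
            // U+10400 (DESERET CAPITAL LETTER LONG I) lies outside the BMP and
            // is encoded as a surrogate pair in UTF-16.
            int codepoint = 0x10400;
            char[] units = Character.toChars(codepoint);

            System.out.println(units.length);                  // 2
            System.out.println(Character.isLetter(units[0]));  // false: a lone surrogate is not a letter
            System.out.println(Character.isLetter(codepoint)); // true: the full codepoint is a letter
          }
        }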

      • normalize

        @Deprecated
        protected char normalize(char c)
        Deprecated.
        use normalize(int) instead. This method will be removed in Lucene 4.0.
        Called on each token UTF-16 code unit to normalize it before it is added to the token. The default implementation does nothing. Subclasses may use this to, e.g., lowercase tokens.

        Note: This method cannot handle supplementary characters. To support all Unicode characters, including supplementary characters, use the normalize(int) method.

      • isTokenChar

        protected boolean isTokenChar(int c)
        Returns true iff a codepoint should be included in a token. This tokenizer generates as tokens adjacent sequences of codepoints which satisfy this predicate. Codepoints for which this is false are used to define token boundaries and are not included in tokens.

        As of Lucene 3.1, the char-based API (isTokenChar(char) and normalize(char)) has been deprecated in favor of a Unicode 4.0 compatible int-based API that operates on codepoints instead of UTF-16 code units. Subclasses of CharTokenizer must not override the char-based methods if a Version >= 3.1 is passed to the constructor.

        NOTE: This method will be marked abstract in Lucene 4.0.
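
        A minimal sketch of the int-based API, assuming the Lucene 3.1+ (Version, Reader) constructor; the class name is hypothetical. Adjacent letters and digits are kept together as one token, and everything else becomes a token boundary.

        import java.io.Reader;
        import org.apache.lucene.analysis.CharTokenizer;
        import org.apache.lucene.util.Version;

        // Hypothetical example class, not part of Lucene.
        public class LetterOrDigitTokenizer extends CharTokenizer {

          public LetterOrDigitTokenizer(Version matchVersion, Reader input) {
            super(matchVersion, input); // Version >= 3.1 selects the int-based API
          }

          @Override
          protected boolean isTokenChar(int c) {
            // Receives full codepoints, so supplementary characters are classified correctly.
            return Character.isLetterOrDigit(c);
          }
        }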

      • normalize

        protected int normalize(int c)
        Called on each token character to normalize it before it is added to the token. The default implementation does nothing. Subclasses may use this to, e.g., lowercase tokens.

        As of Lucene 3.1, the char-based API (isTokenChar(char) and normalize(char)) has been deprecated in favor of a Unicode 4.0 compatible int-based API that operates on codepoints instead of UTF-16 code units. Subclasses of CharTokenizer must not override the char-based methods if a Version >= 3.1 is passed to the constructor.

        NOTE: This method will be marked abstract in Lucene 4.0.
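
        A sketch of normalize(int) in use, again assuming the Lucene 3.1+ (Version, Reader) constructor and a hypothetical class name: each accepted letter is lowercased before it is appended to the token, similar in spirit to LowerCaseTokenizer.

        import java.io.Reader;
        import org.apache.lucene.analysis.CharTokenizer;
        import org.apache.lucene.util.Version;

        // Hypothetical example class, not part of Lucene.
        public class LowercasingLetterTokenizer extends CharTokenizer {

          public LowercasingLetterTokenizer(Version matchVersion, Reader input) {
            super(matchVersion, input);
          }

          @Override
          protected boolean isTokenChar(int c) {
            return Character.isLetter(c);
          }

          @Override
          protected int normalize(int c) {
            // Invoked once per accepted codepoint, before it is added to the token.
            return Character.toLowerCase(c);
          }
        }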

      • incrementToken

        public final boolean incrementToken()
                                     throws IOException
        Description copied from class: TokenStream
        Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate AttributeImpls with the attributes of the next token.

        The producer must make no assumptions about the attributes after the method has returned: the caller may arbitrarily change them. If the producer needs to preserve the state for subsequent calls, it can use AttributeSource.captureState() to create a copy of the current attribute state.

        This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class), references to all AttributeImpls that this stream uses should be retrieved during instantiation.

        To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in TokenStream.incrementToken().

        Specified by:
        incrementToken in class TokenStream
        Returns:
        false for end of stream; true otherwise
        Throws:
        IOException
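
        A minimal consumer sketch of the workflow described above, assuming Lucene 3.1+ and the WhitespaceTokenizer subclass of CharTokenizer (the class name ConsumeTokens is invented): the attribute reference is retrieved once, up front, and reused on every call to incrementToken().

        import java.io.IOException;
        import java.io.StringReader;
        import org.apache.lucene.analysis.TokenStream;
        import org.apache.lucene.analysis.WhitespaceTokenizer;
        import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
        import org.apache.lucene.util.Version;

        public class ConsumeTokens {
          public static void main(String[] args) throws IOException {
            TokenStream stream =
                new WhitespaceTokenizer(Version.LUCENE_31, new StringReader("hello world"));
            // Fetch the attribute once, not inside the loop.
            CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);

            stream.reset();
            while (stream.incrementToken()) { // false signals end of stream
              System.out.println(term.toString());
            }
            stream.end();
            stream.close();
          }
        }
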
      • end

        public final void end()
        Description copied from class: TokenStream
        This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API). Streams implementing the old API should upgrade to use this feature.

        This method can be used to perform any end-of-stream operations, such as setting the final offset of a stream. The final offset of a stream might differ from the offset of the last token, e.g., when one or more whitespace characters follow the last token and a WhitespaceTokenizer was used.

        Overrides:
        end in class TokenStream
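
        A sketch of the whitespace case mentioned above, assuming Lucene 3.1+ and WhitespaceTokenizer (the class name FinalOffsetExample is invented): after end(), the OffsetAttribute holds the final offset, which here includes the trailing spaces and therefore lies past the end offset of the last token.

        import java.io.IOException;
        import java.io.StringReader;
        import org.apache.lucene.analysis.TokenStream;
        import org.apache.lucene.analysis.WhitespaceTokenizer;
        import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
        import org.apache.lucene.util.Version;

        public class FinalOffsetExample {
          public static void main(String[] args) throws IOException {
            // Two trailing spaces follow the last token.
            TokenStream stream =
                new WhitespaceTokenizer(Version.LUCENE_31, new StringReader("hello world  "));
            OffsetAttribute offsets = stream.addAttribute(OffsetAttribute.class);

            stream.reset();
            while (stream.incrementToken()) {
              // the last iteration leaves the offsets of the final token ("world": 6..11)
            }
            stream.end(); // sets the final offset
            System.out.println(offsets.endOffset()); // 13, the full input length
            stream.close();
          }
        }
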
      • reset

        public void reset(Reader input)
                   throws IOException
        Description copied from class: Tokenizer
        Expert: Reset the tokenizer to a new reader. Typically, an analyzer (in its reusableTokenStream method) will use this to re-use a previously created tokenizer.
        Overrides:
        reset in class Tokenizer
        Throws:
        IOException
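
        A reuse sketch, assuming Lucene 3.1+ and WhitespaceTokenizer, which extends CharTokenizer (the class name ReuseTokenizer is invented): the same tokenizer instance is pointed at a new Reader via reset(Reader), the way an analyzer's reusableTokenStream method would.

        import java.io.IOException;
        import java.io.StringReader;
        import org.apache.lucene.analysis.WhitespaceTokenizer;
        import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
        import org.apache.lucene.util.Version;

        public class ReuseTokenizer {
          public static void main(String[] args) throws IOException {
            WhitespaceTokenizer tokenizer =
                new WhitespaceTokenizer(Version.LUCENE_31, new StringReader("first document"));
            CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);

            consume(tokenizer, term);

            // Point the existing instance at new input instead of creating a new tokenizer.
            tokenizer.reset(new StringReader("second document"));
            consume(tokenizer, term);
          }

          private static void consume(WhitespaceTokenizer tokenizer, CharTermAttribute term)
              throws IOException {
            tokenizer.reset(); // standard consumer workflow
            while (tokenizer.incrementToken()) {
              System.out.println(term.toString());
            }
            tokenizer.end();
          }
        }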