Class CompoundWordTokenFilterBase
- java.lang.Object
  - org.apache.lucene.util.AttributeSource
    - org.apache.lucene.analysis.TokenStream
      - org.apache.lucene.analysis.TokenFilter
        - org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase

- All Implemented Interfaces:
  Closeable, AutoCloseable
- Direct Known Subclasses:
  DictionaryCompoundWordTokenFilter, HyphenationCompoundWordTokenFilter
public abstract class CompoundWordTokenFilterBase extends TokenFilter
Base class for decomposition token filters.

You must specify the required Version compatibility when creating CompoundWordTokenFilterBase:
- As of 3.1, CompoundWordTokenFilterBase correctly handles Unicode 4.0 supplementary characters in strings and char arrays provided as compound word dictionaries.

If you pass in a CharArraySet as dictionary, it should be case-insensitive unless it contains only lowercased entries and you have a LowerCaseFilter before this filter in your analysis chain. For optimal performance (this filter does many dictionary lookups), you should use the latter analysis chain/CharArraySet, as in the sketch below. Be aware: if you supply arbitrary Sets to the ctors or String[] dictionaries, they will automatically be made case-insensitive!
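For illustration only (not part of the original Javadoc), a minimal sketch of the recommended setup: a lowercased, case-sensitive dictionary placed behind a LowerCaseFilter, feeding the concrete DictionaryCompoundWordTokenFilter subclass. The sample dictionary entries, the Version.LUCENE_31 constant, and the WhitespaceTokenizer front end are assumptions chosen for the example.

  // Sketch only: assumes Lucene 3.1-era analysis classes and constructors.
  import java.io.StringReader;
  import java.util.Arrays;

  import org.apache.lucene.analysis.CharArraySet;
  import org.apache.lucene.analysis.LowerCaseFilter;
  import org.apache.lucene.analysis.TokenStream;
  import org.apache.lucene.analysis.WhitespaceTokenizer;
  import org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter;
  import org.apache.lucene.util.Version;

  public class DecompoundSetupSketch {
    public static TokenStream buildChain(String text) {
      // Dictionary entries are already lowercased, so the set itself can be
      // case-sensitive (ignoreCase = false), avoiding per-lookup case folding.
      CharArraySet dict = new CharArraySet(Version.LUCENE_31,
          Arrays.asList("soft", "ball", "team"), false);

      TokenStream ts = new WhitespaceTokenizer(Version.LUCENE_31, new StringReader(text));
      ts = new LowerCaseFilter(Version.LUCENE_31, ts);   // lowercase before decompounding
      return new DictionaryCompoundWordTokenFilter(Version.LUCENE_31, ts, dict);
    }
  }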
-
-
Nested Class Summary

Nested Classes
protected class CompoundWordTokenFilterBase.CompoundToken
    Helper class to hold decompounded token information
- Nested classes/interfaces inherited from class org.apache.lucene.util.AttributeSource
  AttributeSource.AttributeFactory, AttributeSource.State
-
-
Field Summary

Fields
static int DEFAULT_MAX_SUBWORD_SIZE
    The default for maximal length of subwords that get propagated to the output of this filter
static int DEFAULT_MIN_SUBWORD_SIZE
    The default for minimal length of subwords that get propagated to the output of this filter
static int DEFAULT_MIN_WORD_SIZE
    The default for minimal word length that gets decomposed
protected CharArraySet dictionary
protected int maxSubwordSize
protected int minSubwordSize
protected int minWordSize
protected OffsetAttribute offsetAtt
protected boolean onlyLongestMatch
protected CharTermAttribute termAtt
protected LinkedList<CompoundWordTokenFilterBase.CompoundToken> tokens
-
Fields inherited from class org.apache.lucene.analysis.TokenFilter
input
-
-
Constructor Summary

Constructors
protected CompoundWordTokenFilterBase(TokenStream input, String[] dictionary)
    Deprecated.
protected CompoundWordTokenFilterBase(TokenStream input, String[] dictionary, boolean onlyLongestMatch)
    Deprecated.
protected CompoundWordTokenFilterBase(TokenStream input, String[] dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
    Deprecated.
protected CompoundWordTokenFilterBase(TokenStream input, Set<?> dictionary)
    Deprecated.
protected CompoundWordTokenFilterBase(TokenStream input, Set<?> dictionary, boolean onlyLongestMatch)
    Deprecated.
protected CompoundWordTokenFilterBase(TokenStream input, Set<?> dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
    Deprecated.
protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, String[] dictionary)
protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, String[] dictionary, boolean onlyLongestMatch)
protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, String[] dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, Set<?> dictionary)
protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, Set<?> dictionary, boolean onlyLongestMatch)
protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, Set<?> dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
-
Method Summary

protected abstract void decompose()
    Decomposes the current termAtt and places CompoundWordTokenFilterBase.CompoundToken instances in the tokens list.
boolean incrementToken()
    Consumers (i.e., IndexWriter) use this method to advance the stream to the next token.
static CharArraySet makeDictionary(Version matchVersion, String[] dictionary)
    Deprecated. Only available for backwards compatibility.
void reset()
    Reset the filter as well as the input TokenStream.
-
Methods inherited from class org.apache.lucene.analysis.TokenFilter
close, end
-
Methods inherited from class org.apache.lucene.util.AttributeSource
addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, restoreState, toString
-
Field Detail
-
DEFAULT_MIN_WORD_SIZE
public static final int DEFAULT_MIN_WORD_SIZE
The default for minimal word length that gets decomposed
- See Also:
- Constant Field Values
-
DEFAULT_MIN_SUBWORD_SIZE
public static final int DEFAULT_MIN_SUBWORD_SIZE
The default for minimal length of subwords that get propagated to the output of this filter
- See Also:
- Constant Field Values
-
DEFAULT_MAX_SUBWORD_SIZE
public static final int DEFAULT_MAX_SUBWORD_SIZE
The default for maximal length of subwords that get propagated to the output of this filter
- See Also:
- Constant Field Values
-
dictionary
protected final CharArraySet dictionary
-
tokens
protected final LinkedList<CompoundWordTokenFilterBase.CompoundToken> tokens
-
minWordSize
protected final int minWordSize
-
minSubwordSize
protected final int minSubwordSize
-
maxSubwordSize
protected final int maxSubwordSize
-
onlyLongestMatch
protected final boolean onlyLongestMatch
-
termAtt
protected final CharTermAttribute termAtt
-
offsetAtt
protected final OffsetAttribute offsetAtt
-
-
Constructor Detail
-
CompoundWordTokenFilterBase
@Deprecated protected CompoundWordTokenFilterBase(TokenStream input, String[] dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
Deprecated.
-
CompoundWordTokenFilterBase
@Deprecated protected CompoundWordTokenFilterBase(TokenStream input, String[] dictionary, boolean onlyLongestMatch)
Deprecated.
-
CompoundWordTokenFilterBase
@Deprecated protected CompoundWordTokenFilterBase(TokenStream input, Set<?> dictionary, boolean onlyLongestMatch)
Deprecated.
-
CompoundWordTokenFilterBase
@Deprecated protected CompoundWordTokenFilterBase(TokenStream input, String[] dictionary)
Deprecated.
-
CompoundWordTokenFilterBase
@Deprecated protected CompoundWordTokenFilterBase(TokenStream input, Set<?> dictionary)
Deprecated.
-
CompoundWordTokenFilterBase
@Deprecated protected CompoundWordTokenFilterBase(TokenStream input, Set<?> dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
Deprecated.
-
CompoundWordTokenFilterBase
protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, String[] dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
-
CompoundWordTokenFilterBase
protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, String[] dictionary, boolean onlyLongestMatch)
-
CompoundWordTokenFilterBase
protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, Set<?> dictionary, boolean onlyLongestMatch)
-
CompoundWordTokenFilterBase
protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, String[] dictionary)
-
CompoundWordTokenFilterBase
protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, Set<?> dictionary)
-
CompoundWordTokenFilterBase
protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, Set<?> dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
-
-
Method Detail
-
makeDictionary
@Deprecated public static CharArraySet makeDictionary(Version matchVersion, String[] dictionary)
Deprecated. Only available for backwards compatibility.
-
incrementToken
public final boolean incrementToken() throws IOException
Description copied from class: TokenStream

Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate AttributeImpls with the attributes of the next token.

The producer must make no assumptions about the attributes after the method has returned: the caller may arbitrarily change it. If the producer needs to preserve the state for subsequent calls, it can use AttributeSource.captureState() to create a copy of the current attribute state.

This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class), references to all AttributeImpls that this stream uses should be retrieved during instantiation.

To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in TokenStream.incrementToken(). A consumer sketch follows this entry.

- Specified by:
  incrementToken in class TokenStream
- Returns:
  false for end of stream; true otherwise
- Throws:
  IOException
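A minimal consumer sketch, not part of the original Javadoc, of the contract described above: attribute references are obtained once, during setup, and reused inside the incrementToken() loop. The TokenStream argument is assumed to come from an analysis chain such as the one shown in the class overview.

  // Sketch only: standard consume loop over any TokenStream.
  import java.io.IOException;

  import org.apache.lucene.analysis.TokenStream;
  import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
  import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;

  public class ConsumeTokensSketch {
    public static void consume(TokenStream ts) throws IOException {
      // Retrieve attribute references once, not on every token.
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      OffsetAttribute offset = ts.addAttribute(OffsetAttribute.class);

      ts.reset();
      while (ts.incrementToken()) {   // false signals end of stream
        System.out.println(term.toString()
            + " [" + offset.startOffset() + "," + offset.endOffset() + "]");
      }
      ts.end();
      ts.close();
    }
  }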
-
decompose
protected abstract void decompose()
Decomposes the current termAtt and places CompoundWordTokenFilterBase.CompoundToken instances in the tokens list. The original token should not be placed in the list, as it is automatically passed through this filter. A subclass sketch follows this entry.
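A sketch, not from the original documentation, of how a subclass might implement decompose() using the protected fields listed above (termAtt, dictionary, tokens, minWordSize, minSubwordSize, maxSubwordSize). The brute-force matching strategy, the class name SimpleDecompoundingFilter, and the CompoundToken(offset, length) constructor usage are illustrative assumptions, not the approach taken by the shipped subclasses.

  // Sketch only: enumerate dictionary subwords of the current term.
  import java.util.Set;

  import org.apache.lucene.analysis.TokenStream;
  import org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase;
  import org.apache.lucene.util.Version;

  public class SimpleDecompoundingFilter extends CompoundWordTokenFilterBase {

    public SimpleDecompoundingFilter(Version matchVersion, TokenStream input, Set<?> dict) {
      super(matchVersion, input, dict);
    }

    @Override
    protected void decompose() {
      final int len = termAtt.length();
      if (len < minWordSize) {
        return;   // too short to decompose
      }
      // Emit every dictionary subword within the configured size bounds.
      for (int start = 0; start <= len - minSubwordSize; start++) {
        int maxEnd = Math.min(len, start + maxSubwordSize);
        for (int end = start + minSubwordSize; end <= maxEnd; end++) {
          if (end - start == len) {
            continue;   // skip the original token; the base class passes it through
          }
          if (dictionary.contains(termAtt.buffer(), start, end - start)) {
            tokens.add(new CompoundToken(start, end - start));
          }
        }
      }
    }
  }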
-
reset
public void reset() throws IOException
Description copied from class: TokenFilter

Reset the filter as well as the input TokenStream.
- Overrides:
  reset in class TokenFilter
- Throws:
IOException
-
-