Package | Description
---|---
org.antlr.v4.runtime |
org.antlr.v4.runtime.tree |
org.antlr.v4.runtime.tree.pattern |
org.antlr.v4.runtime.tree.xpath |
org.antlr.v4.tool |

Modifier and Type | Class | Description
---|---|---
interface | TokenFactory<Symbol extends Token> | The default mechanism for creating tokens.
class | UnbufferedTokenStream<T extends Token> |

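Both of these types are parameterized by the kind of token they produce or buffer. As a minimal sketch of how the factory hook is used, assuming only a lexer generated from some grammar, the stock CommonTokenFactory can be swapped in with copy-text mode enabled before lexing starts:

```java
import org.antlr.v4.runtime.CommonTokenFactory;
import org.antlr.v4.runtime.Lexer;

public class TokenFactorySetup {
    /** Installs the stock factory in copy-text mode on any generated lexer.
     *  With copyText=true each token stores its own text instead of pointing
     *  back into the character stream, which matters for unbuffered streams. */
    static void useCopyingFactory(Lexer lexer) {
        lexer.setTokenFactory(new CommonTokenFactory(true));
    }
}
```
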
Modifier and Type | Interface | Description
---|---|---
interface | WritableToken |

Modifier and Type | Class | Description
---|---|---
class | CommonToken |

Modifier and Type | Field | Description
---|---|---
Token | Lexer._token | The goal of all lexer rules/methods is to create a token object.
protected Token | ListTokenSource.eofToken | This field caches the EOF token for the token source.
protected Token | UnbufferedTokenStream.lastToken | This is the LT(-1) token for the current position.
protected Token | UnbufferedTokenStream.lastTokenBufferStart |
Token | ParserRuleContext.start | For debugging/tracing purposes, we want to track all of the nodes in the ATN traversed by the parser for a particular rule.
Token | ParserRuleContext.stop | For debugging/tracing purposes, we want to track all of the nodes in the ATN traversed by the parser for a particular rule.
protected Token[] | UnbufferedTokenStream.tokens | A moving window buffer of the data being scanned.

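The ParserRuleContext.start and ParserRuleContext.stop fields listed above bracket the token range a rule invocation consumed. The sketch below, which assumes a context and token stream obtained from any generated parser, combines them with TokenStream.getText(Token, Token) to recover the original source text of the rule, including any hidden-channel tokens that fall inside the range:

```java
import org.antlr.v4.runtime.ParserRuleContext;
import org.antlr.v4.runtime.Token;
import org.antlr.v4.runtime.TokenStream;

public final class ContextText {
    /** Returns the source text covered by a rule context. Unlike ctx.getText(),
     *  which concatenates only the matched children, this goes back to the
     *  token stream and so also picks up hidden-channel tokens in the range. */
    public static String sourceText(TokenStream tokens, ParserRuleContext ctx) {
        Token start = ctx.getStart();  // same token as the public ctx.start field
        Token stop = ctx.getStop();    // same token as the public ctx.stop field
        if (start == null || stop == null) {
            return "";
        }
        return tokens.getText(start, stop);
    }
}
```
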
Modifier and Type | Field | Description
---|---|---
protected List<Token> | BufferedTokenStream.tokens | A collection of all tokens fetched from the token source.
protected List<? extends Token> | ListTokenSource.tokens | The wrapped collection of Token objects to return.

Modifier and Type | Method | Description
---|---|---
Token | Parser.consume() | Consume and return the current symbol.
Token | Lexer.emit() | The standard method called to automatically emit a token at the outermost lexical rule.
Token | Lexer.emitEOF() |
Token | BufferedTokenStream.get(int i) |
Token | TokenStream.get(int index) | Gets the Token at the specified index in the stream.
Token | UnbufferedTokenStream.get(int i) |
Token | Parser.getCurrentToken() | Match needs to return the current input symbol, which gets put into the label for the associated token ref; e.g., x=ID.
protected Token | DefaultErrorStrategy.getMissingSymbol(Parser recognizer) | Conjure up a missing token during error recovery.
Token | RecognitionException.getOffendingToken() |
Token | ParserRuleContext.getStart() | Get the initial token in this context.
Token | NoViableAltException.getStartToken() |
Token | ParserRuleContext.getStop() | Get the final token in this context.
Token | Lexer.getToken() | Override if emitting multiple tokens.
protected Token | BufferedTokenStream.LB(int k) |
protected Token | CommonTokenStream.LB(int k) |
Token | BufferedTokenStream.LT(int k) |
Token | CommonTokenStream.LT(int k) |
Token | TokenStream.LT(int k) |
Token | UnbufferedTokenStream.LT(int i) |
Token | Parser.match(int ttype) | Match current input symbol against ttype.
Token | Parser.matchWildcard() | Match current input symbol as a wildcard.
Token | Lexer.nextToken() | Return a token from this source; i.e., match a token on the char stream.
Token | ListTokenSource.nextToken() | Return a Token object from your input stream (usually a CharStream).
Token | TokenSource.nextToken() | Return a Token object from your input stream (usually a CharStream).
Token | ANTLRErrorStrategy.recoverInline(Parser recognizer) | This method is called when an unexpected symbol is encountered during an inline match operation, such as Parser.match(int).
Token | BailErrorStrategy.recoverInline(Parser recognizer) | Make sure we don't attempt to recover inline; if the parser successfully recovers, it won't throw an exception.
Token | DefaultErrorStrategy.recoverInline(Parser recognizer) | This method is called when an unexpected symbol is encountered during an inline match operation, such as Parser.match(int).
protected Token | ParserInterpreter.recoverInline() |
protected Token | DefaultErrorStrategy.singleTokenDeletion(Parser recognizer) | This method implements the single-token deletion inline error recovery strategy.

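Most of these methods boil down to two ways of obtaining tokens: pulling them one at a time from a TokenSource with nextToken(), or looking at them through a TokenStream with LT(k) and get(i). A rough illustration, assuming only a lexer generated from some grammar:

```java
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.Lexer;
import org.antlr.v4.runtime.Token;

public class TokenDump {
    /** Pulls tokens straight from a lexer until EOF and prints them,
     *  then shows one-token lookahead through a CommonTokenStream. */
    static void dump(Lexer lexer) {
        for (Token t = lexer.nextToken(); t.getType() != Token.EOF; t = lexer.nextToken()) {
            System.out.printf("%-12s line %d:%d '%s'%n",
                    lexer.getVocabulary().getDisplayName(t.getType()),
                    t.getLine(), t.getCharPositionInLine(), t.getText());
        }

        lexer.reset();  // rewind the underlying CharStream to position 0
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        Token lookahead = tokens.LT(1);  // the next token a parser would consume
        System.out.println("lookahead: " + lookahead.getText());
    }
}
```
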
Modifier and Type | Method | Description
---|---|---
protected List<Token> | BufferedTokenStream.filterForChannel(int from, int to, int channel) |
List<Token> | BufferedTokenStream.get(int start, int stop) | Get all tokens from start..stop inclusively.
List<? extends Token> | Lexer.getAllTokens() | Return a list of all Token objects in input char stream.
List<Token> | BufferedTokenStream.getHiddenTokensToLeft(int tokenIndex) | Collect all hidden tokens (any off-default channel) to the left of the current token up until we see a token on DEFAULT_TOKEN_CHANNEL.
List<Token> | BufferedTokenStream.getHiddenTokensToLeft(int tokenIndex, int channel) | Collect all tokens on specified channel to the left of the current token up until we see a token on DEFAULT_TOKEN_CHANNEL.
List<Token> | BufferedTokenStream.getHiddenTokensToRight(int tokenIndex) | Collect all hidden tokens (any off-default channel) to the right of the current token up until we see a token on DEFAULT_TOKEN_CHANNEL or EOF.
List<Token> | BufferedTokenStream.getHiddenTokensToRight(int tokenIndex, int channel) | Collect all tokens on specified channel to the right of the current token up until we see a token on DEFAULT_TOKEN_CHANNEL or EOF.
TokenFactory<? extends Token> | Lexer.getTokenFactory() |
List<Token> | BufferedTokenStream.getTokens() |
List<Token> | BufferedTokenStream.getTokens(int start, int stop) |
List<Token> | BufferedTokenStream.getTokens(int start, int stop, int ttype) |
List<Token> | BufferedTokenStream.getTokens(int start, int stop, Set<Integer> types) | Given a start and stop index, return a List of all tokens in the token type BitSet.

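The getHiddenTokensToLeft/Right methods are the usual way to recover comments and other off-channel tokens that the parser never sees. A small sketch, assuming the grammar routes comments to a hidden channel rather than skipping them:

```java
import java.util.List;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.Token;

public class HiddenTokens {
    /** Prints any hidden-channel tokens (e.g. comments) that appear
     *  immediately before the given token in the buffered stream. */
    static void printLeadingHidden(CommonTokenStream tokens, Token t) {
        tokens.fill();  // make sure the buffer is fully populated
        List<Token> hidden = tokens.getHiddenTokensToLeft(t.getTokenIndex());
        if (hidden != null) {
            for (Token h : hidden) {
                System.out.println("hidden before '" + t.getText() + "': " + h.getText());
            }
        }
    }
}
```
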
Modifier and Type | Method | Description |
---|---|---|
protected void |
UnbufferedTokenStream.add(Token t) |
|
TerminalNode |
ParserRuleContext.addChild(Token matchedToken) |
|
ErrorNode |
ParserRuleContext.addErrorNode(Token badToken) |
|
void |
TokenStreamRewriter.delete(String programName,
Token from,
Token to) |
|
void |
TokenStreamRewriter.delete(Token indexT) |
|
void |
TokenStreamRewriter.delete(Token from,
Token to) |
|
void |
Lexer.emit(Token token) |
By default does not support multiple emits per nextToken invocation
for efficiency reasons.
|
protected String |
DefaultErrorStrategy.getSymbolText(Token symbol) |
|
protected int |
DefaultErrorStrategy.getSymbolType(Token symbol) |
|
String |
BufferedTokenStream.getText(Token start,
Token stop) |
|
String |
TokenStream.getText(Token start,
Token stop) |
Return the text of all tokens in this stream between
start and
stop (inclusive). |
String |
UnbufferedTokenStream.getText(Token start,
Token stop) |
|
protected String |
DefaultErrorStrategy.getTokenErrorDisplay(Token t) |
How should a token be displayed in an error message? The default
is to display just the text, but during development you might
want to have a lot of information spit out.
|
String |
Recognizer.getTokenErrorDisplay(Token t) |
Deprecated.
This method is not called by the ANTLR 4 Runtime.
|
void |
TokenStreamRewriter.insertAfter(String programName,
Token t,
Object text) |
|
void |
TokenStreamRewriter.insertAfter(Token t,
Object text) |
|
void |
TokenStreamRewriter.insertBefore(String programName,
Token t,
Object text) |
|
void |
TokenStreamRewriter.insertBefore(Token t,
Object text) |
|
void |
Parser.notifyErrorListeners(Token offendingToken,
String msg,
RecognitionException e) |
|
void |
TokenStreamRewriter.replace(String programName,
Token from,
Token to,
Object text) |
|
void |
TokenStreamRewriter.replace(Token indexT,
Object text) |
|
void |
TokenStreamRewriter.replace(Token from,
Token to,
Object text) |
|
protected void |
RecognitionException.setOffendingToken(Token offendingToken) |
|
void |
Lexer.setToken(Token _token) |
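The TokenStreamRewriter overloads above queue edits keyed by Token positions; the underlying stream is never modified, and the edits are only applied when the rewritten text is requested. A brief sketch:

```java
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.Token;
import org.antlr.v4.runtime.TokenStreamRewriter;

public class RewriteSketch {
    /** Wraps one token in parentheses and deletes another, without
     *  touching the original token stream. */
    static String rewrite(CommonTokenStream tokens, Token wrapMe, Token dropMe) {
        TokenStreamRewriter rewriter = new TokenStreamRewriter(tokens);
        rewriter.insertBefore(wrapMe, "(");
        rewriter.insertAfter(wrapMe, ")");
        rewriter.delete(dropMe);
        // getText() renders the original stream with the queued edits applied.
        return rewriter.getText();
    }
}
```
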
Constructor | Description
---|---
CommonToken(Token oldToken) | Constructs a new CommonToken as a copy of another Token.
NoViableAltException(Parser recognizer, TokenStream input, Token startToken, Token offendingToken, ATNConfigSet deadEndConfigs, ParserRuleContext ctx) |

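The CommonToken(Token) copy constructor is handy when a token needs to be altered without disturbing the original, for example before handing a token list back to a parser. A small sketch:

```java
import org.antlr.v4.runtime.CommonToken;
import org.antlr.v4.runtime.Token;

public class TokenCopy {
    /** Copies a token and overrides its text, leaving the original untouched. */
    static Token withText(Token original, String newText) {
        CommonToken copy = new CommonToken(original);  // copies type, channel, positions, ...
        copy.setText(newText);
        return copy;
    }
}
```
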
Constructor | Description
---|---
ListTokenSource(List<? extends Token> tokens) | Constructs a new ListTokenSource instance from the specified collection of Token objects.
ListTokenSource(List<? extends Token> tokens, String sourceName) | Constructs a new ListTokenSource instance from the specified collection of Token objects and source name.

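A ListTokenSource makes such a saved token list behave like a live TokenSource again, so it can be wrapped in a CommonTokenStream and re-parsed, for instance after filtering or rewriting individual tokens:

```java
import java.util.List;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.ListTokenSource;
import org.antlr.v4.runtime.Token;

public class ReplayTokens {
    /** Wraps an already-collected token list as a TokenSource so a parser
     *  can consume it through a fresh CommonTokenStream. */
    static CommonTokenStream asStream(List<? extends Token> saved) {
        ListTokenSource source = new ListTokenSource(saved, "in-memory tokens");
        return new CommonTokenStream(source);
    }
}
```
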
Modifier and Type | Field | Description
---|---|---
Token | TerminalNodeImpl.symbol |

Modifier and Type | Method | Description
---|---|---
Token | TerminalNodeImpl.getPayload() |
Token | TerminalNode.getSymbol() |
Token | TerminalNodeImpl.getSymbol() |

Constructor | Description
---|---
ErrorNodeImpl(Token token) |
TerminalNodeImpl(Token symbol) |

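Every leaf of a parse tree is a TerminalNode (or ErrorNode) wrapping one of these tokens, and getSymbol() is how listener and visitor code gets back to the underlying Token. A minimal listener fragment:

```java
import org.antlr.v4.runtime.Token;
import org.antlr.v4.runtime.tree.ParseTreeListener;
import org.antlr.v4.runtime.tree.TerminalNode;

/** Logs every leaf of the tree; the rule enter/exit methods are left
 *  to a concrete subclass (typically a generated base listener). */
public abstract class TokenLoggingListener implements ParseTreeListener {
    @Override
    public void visitTerminal(TerminalNode node) {
        Token symbol = node.getSymbol();
        System.out.printf("leaf '%s' at %d:%d%n",
                symbol.getText(), symbol.getLine(), symbol.getCharPositionInLine());
    }
}
```
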
Modifier and Type | Class | Description
---|---|---
class | RuleTagToken | A Token object representing an entire subtree matched by a parser rule; e.g., <expr>.
class | TokenTagToken | A Token object representing a token of a particular type; e.g., <ID>.

Modifier and Type | Method | Description
---|---|---
List<? extends Token> | ParseTreePatternMatcher.tokenize(String pattern) |

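RuleTagToken and TokenTagToken are what ParseTreePatternMatcher.tokenize(String) produces for tags such as <expr> and <ID> inside a pattern string. A sketch of how such a pattern is typically compiled and matched; the rule name "statement" and the ID/expr tags are placeholders that must exist in the actual grammar:

```java
import org.antlr.v4.runtime.Parser;
import org.antlr.v4.runtime.tree.ParseTree;
import org.antlr.v4.runtime.tree.pattern.ParseTreeMatch;
import org.antlr.v4.runtime.tree.pattern.ParseTreePattern;

public class PatternSketch {
    /** Checks whether a subtree looks like an assignment of the form ID = expr;
     *  "statement", ID, and expr stand in for names from the real grammar. */
    static boolean isAssignment(Parser parser, ParseTree subtree) {
        ParseTreePattern pattern = parser.compileParseTreePattern(
                "<ID> = <expr>;", parser.getRuleIndex("statement"));
        ParseTreeMatch match = pattern.match(subtree);
        return match.succeeded();
    }
}
```
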
Modifier and Type | Method | Description
---|---|---
protected XPathElement | XPath.getXPathElement(Token wordToken, boolean anywhere) | Convert word like * or ID or expr to a path element.

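XPath.getXPathElement(Token, boolean) is internal plumbing for the public XPath.findAll entry point, which evaluates a path expression over a parse tree. A sketch, assuming the grammar behind the parser defines a token named ID:

```java
import java.util.Collection;
import org.antlr.v4.runtime.Parser;
import org.antlr.v4.runtime.tree.ParseTree;
import org.antlr.v4.runtime.tree.TerminalNode;
import org.antlr.v4.runtime.tree.xpath.XPath;

public class XPathSketch {
    /** Prints the text of every ID token anywhere under the given tree. */
    static void printAllIds(ParseTree tree, Parser parser) {
        Collection<ParseTree> ids = XPath.findAll(tree, "//ID", parser);
        for (ParseTree id : ids) {
            // Matches for a token name are terminal nodes wrapping the Token.
            System.out.println(((TerminalNode) id).getSymbol().getText());
        }
    }
}
```
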
Modifier and Type | Method | Description
---|---|---
Token | GrammarParserInterpreter.BailButConsumeErrorStrategy.recoverInline(Parser recognizer) |