Java Code Examples for org.apache.lucene.analysis.tokenattributes.CharTermAttribute#copyBuffer()
The following examples show how to use org.apache.lucene.analysis.tokenattributes.CharTermAttribute#copyBuffer().
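copyBuffer(char[] buffer, int offset, int length) replaces the attribute's internal term buffer with length characters read from buffer starting at offset, and sets the term length to match. As a minimal sketch of a typical call site (the ReplaceTermFilter class below is illustrative and does not appear in any of the projects referenced on this page), a custom TokenFilter might use it like this:

import java.io.IOException;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

/** Hypothetical filter that overwrites every term with a fixed replacement via copyBuffer(). */
public final class ReplaceTermFilter extends TokenFilter {

  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private final char[] replacement;

  public ReplaceTermFilter(TokenStream input, String replacement) {
    super(input);
    this.replacement = replacement.toCharArray();
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false;
    }
    // copyBuffer() copies the given char range into the attribute's reused
    // internal buffer and sets the term length to 'length'.
    termAtt.copyBuffer(replacement, 0, replacement.length);
    return true;
  }
}

Because copyBuffer() writes directly into the attribute's reused char[] without building an intermediate String, it is the usual way to set a term from an existing character buffer, as the tokenizer examples below do with their zzBuffer scan buffers.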
Example 1
Source File: WikipediaTokenizerImpl.java From lucene-solr with Apache License 2.0
/**
 * Fills Lucene token with the current token text.
 */
final void getText(CharTermAttribute t) {
  t.copyBuffer(zzBuffer, zzStartRead, zzMarkedPos - zzStartRead);
}
Example 2
Source File: ClassicTokenizerImpl.java From lucene-solr with Apache License 2.0
/**
 * Fills CharTermAttribute with the current token text.
 */
public final void getText(CharTermAttribute t) {
  t.copyBuffer(zzBuffer, zzStartRead, zzMarkedPos - zzStartRead);
}
Example 3
Source File: StandardTokenizerImpl.java From lucene-solr with Apache License 2.0
/**
 * Fills CharTermAttribute with the current token text.
 */
public final void getText(CharTermAttribute t) {
  t.copyBuffer(zzBuffer, zzStartRead, zzMarkedPos - zzStartRead);
}
Example 4
Source File: SimplePreAnalyzedParser.java From lucene-solr with Apache License 2.0
private static AttributeSource.State createState(AttributeSource a, Tok state, int tokenEnd) {
  a.clearAttributes();
  CharTermAttribute termAtt = a.addAttribute(CharTermAttribute.class);
  char[] tokChars = state.token.toString().toCharArray();
  termAtt.copyBuffer(tokChars, 0, tokChars.length);
  int tokenStart = tokenEnd - state.token.length();
  for (Entry<String, String> e : state.attr.entrySet()) {
    String k = e.getKey();
    if (k.equals("i")) {
      // position increment
      int incr = Integer.parseInt(e.getValue());
      PositionIncrementAttribute posIncr = a.addAttribute(PositionIncrementAttribute.class);
      posIncr.setPositionIncrement(incr);
    } else if (k.equals("s")) {
      tokenStart = Integer.parseInt(e.getValue());
    } else if (k.equals("e")) {
      tokenEnd = Integer.parseInt(e.getValue());
    } else if (k.equals("y")) {
      TypeAttribute type = a.addAttribute(TypeAttribute.class);
      type.setType(e.getValue());
    } else if (k.equals("f")) {
      FlagsAttribute flags = a.addAttribute(FlagsAttribute.class);
      int f = Integer.parseInt(e.getValue(), 16);
      flags.setFlags(f);
    } else if (k.equals("p")) {
      PayloadAttribute p = a.addAttribute(PayloadAttribute.class);
      byte[] data = hexToBytes(e.getValue());
      if (data != null && data.length > 0) {
        p.setPayload(new BytesRef(data));
      }
    } else {
      // unknown attribute
    }
  }
  // handle offset attr
  OffsetAttribute offset = a.addAttribute(OffsetAttribute.class);
  offset.setOffset(tokenStart, tokenEnd);
  State resState = a.captureState();
  a.clearAttributes();
  return resState;
}
Example 5
Source File: StandardTokenizerImpl.java From projectforge-webapp with GNU General Public License v3.0
/**
 * Fills CharTermAttribute with the current token text.
 */
public final void getText(final CharTermAttribute t) {
  t.copyBuffer(zzBuffer, zzStartRead, zzMarkedPos - zzStartRead);
}
Example 6
Source File: ClassicTokenizerImpl.java From projectforge-webapp with GNU General Public License v3.0
/**
 * Fills CharTermAttribute with the current token text.
 */
public final void getText(CharTermAttribute t) {
  t.copyBuffer(zzBuffer, zzStartRead, zzMarkedPos - zzStartRead);
}