edu.stanford.nlp.ling.SentenceUtils Java Examples

The following examples show how to use edu.stanford.nlp.ling.SentenceUtils. They are drawn from open source projects; follow the link above each example to see the original project or source file.
Example #1
Source File: StanfordLexicalDemo.java    From Natural-Language-Processing-with-Java-Second-Edition with MIT License
public static void main(String[] args) {
    // Load the English PCFG parser model from the project's resource directory.
    String parseModel = getResourcePath() + "englishPCFG.ser.gz";
    LexicalizedParser lexicalizedParser = LexicalizedParser.loadModel(parseModel);

    // Parse a pre-tokenized sentence: convert the raw tokens to CoreLabels first.
    String[] sentenceArray = {"The", "cow", "jumped", "over", "the", "moon", "."};
    List<CoreLabel> words = SentenceUtils.toCoreLabelList(sentenceArray);
    Tree parseTree = lexicalizedParser.apply(words);
    parseTree.pennPrint();

    // Print the same parse as collapsed typed dependencies.
    TreePrint treePrint = new TreePrint("typedDependenciesCollapsed");
    treePrint.printTree(parseTree);

    // Parse a raw string: tokenize it with the PTB tokenizer first.
    String sentence = "The cow jumped over the moon.";
    TokenizerFactory<CoreLabel> tokenizerFactory = PTBTokenizer.factory(new CoreLabelTokenFactory(), "");
    Tokenizer<CoreLabel> tokenizer = tokenizerFactory.getTokenizer(new StringReader(sentence));
    List<CoreLabel> wordList = tokenizer.tokenize();
    parseTree = lexicalizedParser.apply(wordList);

    // Extract CC-processed typed dependencies from the parse tree.
    TreebankLanguagePack tlp = lexicalizedParser.treebankLanguagePack();
    GrammaticalStructureFactory gsf = tlp.grammaticalStructureFactory();
    GrammaticalStructure gs = gsf.newGrammaticalStructure(parseTree);
    List<TypedDependency> tdl = gs.typedDependenciesCCprocessed();
    System.out.println(tdl);

    // Print each dependency as a governor/relation/dependent triple.
    for (TypedDependency dependency : tdl) {
        System.out.println("Governor Word: [" + dependency.gov()
            + "] Relation: [" + dependency.reln().getLongName()
            + "] Dependent Word: [" + dependency.dep() + "]");
    }
}
 
Example #2
Source File: CoreNLP.java    From Criteria2Query with Apache License 2.0
public List<String> splitParagraph(String paragraph) {
	Reader reader = new StringReader(paragraph);
	// DocumentPreprocessor iterates over the paragraph one tokenized sentence at a time.
	DocumentPreprocessor dp = new DocumentPreprocessor(reader);
	List<String> sentenceList = new ArrayList<String>();
	for (List<HasWord> sentence : dp) {
		// Rejoin each sentence's tokens into a plain string.
		String sentenceString = SentenceUtils.listToString(sentence);
		sentenceList.add(sentenceString);
	}
	return sentenceList;
}
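
For a list of plain word tokens, `SentenceUtils.listToString` essentially rejoins the tokens with single spaces. A minimal plain-Java sketch of that behavior (no CoreNLP dependency required; `joinTokens` is a hypothetical helper used only for illustration, not a CoreNLP API):

```java
import java.util.Arrays;
import java.util.List;

public class ListToStringSketch {
    // Illustrative stand-in for SentenceUtils.listToString on plain word tokens:
    // join the token strings with single spaces.
    static String joinTokens(List<String> tokens) {
        return String.join(" ", tokens);
    }

    public static void main(String[] args) {
        List<String> sentence = Arrays.asList("The", "cow", "jumped", "over", "the", "moon", ".");
        // prints: The cow jumped over the moon .
        System.out.println(joinTokens(sentence));
    }
}
```

Note that the tokenizer treats punctuation as its own token, so the rejoined string has a space before the final period.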
 
Example #3
Source File: TaggerDemo.java    From blog-codes with Apache License 2.0
public static void main(String[] args) throws Exception {
	// Load the default POS tagger model bundled in the models jar.
	InputStream input = TaggerDemo.class.getResourceAsStream("/" + MaxentTagger.DEFAULT_JAR_PATH);
	MaxentTagger tagger = new MaxentTagger(input);

	// Tokenize the raw text into sentences of HasWord tokens.
	List<List<HasWord>> sentences = MaxentTagger.tokenizeText(new StringReader("Karma of humans is AI"));
	for (List<HasWord> sentence : sentences) {
		// Tag each sentence; passing "false" keeps the tags in the printed output.
		List<TaggedWord> tSentence = tagger.tagSentence(sentence);
		System.out.println(SentenceUtils.listToString(tSentence, false));
	}
}
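
When the second argument to `listToString` is `false`, each token is printed with its tag rather than as the bare word, so tagged tokens typically render in `word/tag` form. A plain-Java sketch of that output shape (hedged: `formatTagged` is an illustrative helper, not a CoreNLP API, and the tags shown are example part-of-speech tags, not necessarily the tagger's actual output):

```java
public class TaggedOutputSketch {
    // Illustrative stand-in for SentenceUtils.listToString(taggedSentence, false):
    // each (word, tag) pair renders as "word/tag", joined by single spaces.
    static String formatTagged(String[][] taggedWords) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < taggedWords.length; i++) {
            if (i > 0) sb.append(' ');
            sb.append(taggedWords[i][0]).append('/').append(taggedWords[i][1]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[][] tagged = {
            {"Karma", "NN"}, {"of", "IN"}, {"humans", "NNS"}, {"is", "VBZ"}, {"AI", "NNP"}
        };
        // prints: Karma/NN of/IN humans/NNS is/VBZ AI/NNP
        System.out.println(formatTagged(tagged));
    }
}
```

Passing `true` instead would drop the tags and print only the words, which is the behavior shown in Example #2.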