org.openrdf.query.QueryInterruptedException Java Examples

The following examples show how to use org.openrdf.query.QueryInterruptedException. You can go to the original project or source file by following the links above each example.
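For orientation, the snippet below is a minimal, hypothetical sketch of the usual pattern: set a maximum query time on the prepared query and catch QueryInterruptedException (a subclass of QueryEvaluationException) while iterating the result. The connection con, the method name, the query string, and the 30-second limit are placeholders and do not come from the examples that follow.

import org.openrdf.query.BindingSet;
import org.openrdf.query.QueryInterruptedException;
import org.openrdf.query.QueryLanguage;
import org.openrdf.query.TupleQuery;
import org.openrdf.query.TupleQueryResult;
import org.openrdf.repository.RepositoryConnection;

void queryWithTimeout(final RepositoryConnection con) throws Exception {
	final TupleQuery query = con.prepareTupleQuery(QueryLanguage.SPARQL,
			"SELECT * WHERE { ?s ?p ?o }");
	// Give up on evaluation after at most 30 seconds.
	query.setMaxQueryTime(30);

	final TupleQueryResult result = query.evaluate();
	try {
		while (result.hasNext()) {
			final BindingSet bindings = result.next();
			// process bindings ...
		}
	}
	catch (QueryInterruptedException e) {
		// The query exceeded its maximum execution time.
	}
	finally {
		result.close();
	}
}
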
Example #1
Source File: RepositoryConnectionTest.java    From database with GNU General Public License v2.0
@Test
public void testOrderByQueriesAreInterruptable()
	throws Exception
{
	testCon.begin();
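	// Load enough statements that the four-pattern ORDER BY query below cannot complete quickly.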
	for (int index = 0; index < 512; index++) {
		testCon.add(RDFS.CLASS, RDFS.COMMENT, testCon.getValueFactory().createBNode());
	}
	testCon.commit();

	final TupleQuery query = testCon.prepareTupleQuery(QueryLanguage.SPARQL,
			"SELECT * WHERE { ?s ?p ?o . ?s1 ?p1 ?o1 . ?s2 ?p2 ?o2 . ?s3 ?p3 ?o3 } ORDER BY ?s1 ?p1 ?o1 LIMIT 1000");
	// Limit evaluation to two seconds; exceeding it should raise QueryInterruptedException.
	query.setMaxQueryTime(2);

	final TupleQueryResult result = query.evaluate();
	final long startTime = System.currentTimeMillis();
	try {
		result.hasNext();
		fail("Query should have been interrupted");
	}
	catch (QueryInterruptedException e) {
		// Expected
		final long duration = System.currentTimeMillis() - startTime;

		assertTrue("Query not interrupted quickly enough, should have been ~2s, but was "
				+ (duration / 1000) + "s", duration < 5000);
	}
}
 
Example #2
Source File: Bigdata2Sesame2BindingSetIterator.java    From database with GNU General Public License v2.0
/**
 * {@inheritDoc}
 * 
 * @see <a href="https://sourceforge.net/apps/trac/bigdata/ticket/503">
 *      Bigdata2Sesame2BindingSetIterator throws QueryEvaluationException
 *      where it should throw NoSuchElementException. </a>
 */
public boolean hasNext() throws QueryEvaluationException {

    try {

        if (!open) {
            return false;
        }
        if (next != null) {
            // already fetched.
            return true;
        }
        if (!src.hasNext()) {
            // Exhausted.
            close();
            return false;
        }
        final IBindingSet bset = src.next(); // fetch
        next = getBindingSet(bset); // resolve.
        return true;

    } catch (Throwable t) {

        if (!open) {
            /**
             * The iterator was concurrently closed. This often means that
             * the connection guarding the query was concurrently closed, in
             * which case it is possible for a concurrent writer to have
             * triggered recycling (on the RWStore). Therefore, we want to
             * ignore any thrown exception after the iterator was closed
             * since a wide variety of problems could be triggered by
             * reading against a commit point that had since been recycled.
             * <p>
             * Note: The logic to fetch the next result was moved into
             * hasNext() in order to avoid doing any work in next(). Thus,
             * if there is any problem resolving the next chunk of
             * solutions, hasNext() will report [false] if the iterator was
             * concurrently closed and otherwise will throw out the
             * exception.
             * 
             * @see <a
             *      href="https://sourceforge.net/apps/trac/bigdata/ticket/644"
             *      > Bigdata2Sesame2BindingSetIterator can fail to notice
             *      asynchronous close() </a>
             */
            return false;
        }
        
        // Ensure closed.
        try {
            close();
        } catch (Throwable t2) {
            // Ignore.
        }
        
        // Wrap and rethrow.
        if (InnerCause.isInnerCause(t, QueryTimeoutException.class)) {
        
            /*
             * Align with the openrdf API.
             */
        
            throw new QueryInterruptedException(t);
            
        } else {
            
            throw new QueryEvaluationException(t);
            
        }
        
    }

}
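
The long comment inside hasNext() above describes the design: all fetching and resolving happens in hasNext(), so next() does no work of its own and, per ticket 503 cited in the javadoc, an exhausted iterator should signal NoSuchElementException rather than QueryEvaluationException. As a rough, hypothetical sketch (not the actual bigdata implementation), the paired next() could look like the following, assuming the field next holds the prefetched openrdf BindingSet and NoSuchElementException is java.util.NoSuchElementException:

public BindingSet next() throws QueryEvaluationException {

    if (!hasNext()) {
        // Exhausted or concurrently closed: per ticket 503, report
        // NoSuchElementException rather than QueryEvaluationException.
        throw new NoSuchElementException();
    }

    // hasNext() already fetched and resolved the solution, so simply
    // hand it out and clear the one-element look-ahead buffer.
    final BindingSet current = next;
    next = null;
    return current;

}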