Java Code Examples for org.pentaho.di.trans.Trans#monitorClusteredTransformation()

The following examples show how to use org.pentaho.di.trans.Trans#monitorClusteredTransformation() .
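Every example below follows the same core pattern: split the transformation across the cluster with Trans.executeClustered(), then block on Trans.monitorClusteredTransformation(), which polls the master and slave servers until all clustered transformations finish and returns the total error count. A minimal sketch of that pattern follows; it assumes a `transMeta` describing a clustered transformation is already loaded and the slave servers it references are running (the configuration setters shown are the standard TransExecutionConfiguration cluster flags; the final argument `1` is the polling interval in seconds, and `null` means there is no parent job):

```java
// Configure a clustered execution: post, prepare and start the
// generated transformations on the master and slave servers.
TransExecutionConfiguration config = new TransExecutionConfiguration();
config.setExecutingClustered( true );
config.setClusterPosting( true );
config.setClusterPreparing( true );
config.setClusterStarting( true );

// Split the transformation according to its cluster schema and
// kick it off on the participating servers.
TransSplitter transSplitter = Trans.executeClustered( transMeta, config );

// Poll every second until all clustered transformations are done.
// The return value is the total number of errors across the cluster.
LogChannelInterface log = new LogChannel( "clustered run" );
long nrErrors = Trans.monitorClusteredTransformation( log, transSplitter, null, 1 );
if ( nrErrors != 0 ) {
  throw new KettleException( "Clustered run finished with " + nrErrors + " errors" );
}
```

This is a sketch rather than a runnable unit, since it depends on a live Carte cluster; the tests below wrap the same calls in helper methods that stand up a test cluster and load the .ktr files.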
Example 1
Source File: MasterSlaveIT.java    From pentaho-kettle with Apache License 2.0
/**
 * This test checks passing rows to a sub-transformation executed on a cluster.
 * See PDI-10704 for details
 * @throws Exception
 */
public void runSubtransformationClustered() throws Exception {
  TransMeta transMeta =
    loadTransMetaReplaceSlavesInCluster(
      clusterGenerator, "test/org/pentaho/di/cluster/test-subtrans-clustered.ktr" );
  TransExecutionConfiguration config = createClusteredTransExecutionConfiguration();
  Result prevResult = new Result();
  prevResult.setRows( getSampleRows() );
  config.setPreviousResult( prevResult );

  TransSplitter transSplitter = Trans.executeClustered( transMeta, config );
  LogChannel logChannel = createLogChannel( "cluster unit test <runSubtransformationClustered>" );
  long nrErrors = Trans.monitorClusteredTransformation( logChannel, transSplitter, null, 1 );
  assertEquals( 0L, nrErrors );

  String result = loadFileContent( transMeta, "${java.io.tmpdir}/test-subtrans-clustered.txt" );
  assertEqualsIgnoreWhitespacesAndCase( "10", result );
}
 
Example 2
Source File: MasterSlaveIT.java    From pentaho-kettle with Apache License 2.0
/**
 * This test reads a CSV file in parallel on the master in 1 copy.<br>
 * It then passes the data over to a dummy step on the slaves.<br>
 * We want to make sure that only 1 copy is considered.<br>
 */
public void runParallelFileReadOnMaster() throws Exception {
  TransMeta transMeta =
    loadTransMetaReplaceSlavesInCluster(
      clusterGenerator, "test/org/pentaho/di/cluster/test-parallel-file-read-on-master.ktr" );
  TransExecutionConfiguration config = createClusteredTransExecutionConfiguration();
  TransSplitter transSplitter = Trans.executeClustered( transMeta, config );
  LogChannel logChannel = createLogChannel( "cluster unit test <testParallelFileReadOnMaster>" );
  long nrErrors = Trans.monitorClusteredTransformation( logChannel, transSplitter, null, 1 );
  assertEquals( 0L, nrErrors );
  String result = loadFileContent( transMeta, "${java.io.tmpdir}/test-parallel-file-read-on-master-result.txt" );
  assertEqualsIgnoreWhitespacesAndCase( "100", result );
}
 
Example 3
Source File: MasterSlaveIT.java    From pentaho-kettle with Apache License 2.0
/**
 * This test reads a CSV file in parallel on the master in 3 copies.<br>
 * It then passes the data over to a dummy step on the slaves.<br>
 */
public void runParallelFileReadOnMasterWithCopies() throws Exception {
  TransMeta transMeta =
    loadTransMetaReplaceSlavesInCluster(
      clusterGenerator, "test/org/pentaho/di/cluster/test-parallel-file-read-on-master-with-copies.ktr" );
  TransExecutionConfiguration config = createClusteredTransExecutionConfiguration();
  TransSplitter transSplitter = Trans.executeClustered( transMeta, config );
  LogChannel logChannel = createLogChannel( "cluster unit test <runParallelFileReadOnMasterWithCopies>" );
  long nrErrors = Trans.monitorClusteredTransformation( logChannel, transSplitter, null, 1 );
  assertEquals( 0L, nrErrors );
  String result =
    loadFileContent( transMeta, "${java.io.tmpdir}/test-parallel-file-read-on-master-result-with-copies.txt" );
  assertEqualsIgnoreWhitespacesAndCase( "100", result );
}
 
Example 4
Source File: MasterSlaveIT.java    From pentaho-kettle with Apache License 2.0
/**
 * This test reads a CSV file in parallel on all 3 slaves, each with 1 copy.<br>
 * It then passes the data over to a dummy step on the slaves.<br>
 */
public void runParallelFileReadOnSlaves() throws Exception {
  TransMeta transMeta =
    loadTransMetaReplaceSlavesInCluster(
      clusterGenerator, "test/org/pentaho/di/cluster/test-parallel-file-read-on-slaves.ktr" );
  TransExecutionConfiguration config = createClusteredTransExecutionConfiguration();
  TransSplitter transSplitter = Trans.executeClustered( transMeta, config );
  LogChannel logChannel = createLogChannel( "cluster unit test <runParallelFileReadOnSlaves>" );
  long nrErrors = Trans.monitorClusteredTransformation( logChannel, transSplitter, null, 1 );
  assertEquals( 0L, nrErrors );
  String result = loadFileContent( transMeta, "${java.io.tmpdir}/test-parallel-file-read-on-slaves.txt" );
  assertEqualsIgnoreWhitespacesAndCase( "100", result );
}
 
Example 5
Source File: MasterSlaveIT.java    From pentaho-kettle with Apache License 2.0
/**
 * This test reads a CSV file in parallel on all 3 slaves, each with 4 partitions.<br>
 * It then passes the data over to a dummy step on the slaves.<br>
 */
public void runParallelFileReadOnSlavesWithPartitioning() throws Exception {
  TransMeta transMeta =
    loadTransMetaReplaceSlavesInCluster(
      clusterGenerator,
      "test/org/pentaho/di/cluster/test-parallel-file-read-on-slaves-with-partitioning.ktr" );
  TransExecutionConfiguration config = createClusteredTransExecutionConfiguration();
  TransSplitter transSplitter = Trans.executeClustered( transMeta, config );
  LogChannel logChannel = createLogChannel( "cluster unit test <runParallelFileReadOnSlavesWithPartitioning>" );
  long nrErrors = Trans.monitorClusteredTransformation( logChannel, transSplitter, null, 1 );
  assertEquals( 0L, nrErrors );
  String result =
    loadFileContent( transMeta, "${java.io.tmpdir}/test-parallel-file-read-on-slaves-with-partitioning.txt" );
  assertEqualsIgnoreWhitespacesAndCase( "100", result );
}
 
Example 6
Source File: MasterSlaveIT.java    From pentaho-kettle with Apache License 2.0
/**
 * This test reads a CSV file in parallel on all 3 slaves, each with 4 partitions.<br>
 * This is a variation on the test right above, with 2 steps in sequence that are both clustered and partitioned.<br>
 * It then passes the data over to a dummy step on the slaves.<br>
 */
public void runParallelFileReadOnSlavesWithPartitioning2() throws Exception {
  TransMeta transMeta =
    loadTransMetaReplaceSlavesInCluster(
      clusterGenerator,
      "test/org/pentaho/di/cluster/test-parallel-file-read-on-slaves-with-partitioning2.ktr" );
  TransExecutionConfiguration config = createClusteredTransExecutionConfiguration();
  TransSplitter transSplitter = Trans.executeClustered( transMeta, config );
  LogChannel logChannel = createLogChannel( "cluster unit test <runParallelFileReadOnSlavesWithPartitioning2>" );
  long nrErrors = Trans.monitorClusteredTransformation( logChannel, transSplitter, null, 1 );
  assertEquals( 0L, nrErrors );
  String result =
    loadFileContent( transMeta, "${java.io.tmpdir}/test-parallel-file-read-on-slaves-with-partitioning2.txt" );
  assertEqualsIgnoreWhitespacesAndCase( "100", result );
}
 
Example 7
Source File: MasterSlaveIT.java    From pentaho-kettle with Apache License 2.0
/**
 * This test reads a CSV file and sends the data to 3 copies on 3 slave servers.<br>
 */
public void runMultipleCopiesOnMultipleSlaves2() throws Exception {
  TransMeta transMeta =
    loadTransMetaReplaceSlavesInCluster(
      clusterGenerator, "test/org/pentaho/di/cluster/test-hops-between-multiple-copies-steps-on-cluster.ktr" );
  TransExecutionConfiguration config = createClusteredTransExecutionConfiguration();
  TransSplitter transSplitter = Trans.executeClustered( transMeta, config );
  LogChannel logChannel = createLogChannel( "cluster unit test <runMultipleCopiesOnMultipleSlaves2>" );
  long nrErrors = Trans.monitorClusteredTransformation( logChannel, transSplitter, null, 1 );
  assertEquals( 0L, nrErrors );
  String result = loadFileContent( transMeta, "${java.io.tmpdir}/test-multiple-copies-on-multiple-slaves2.txt" );
  assertEqualsIgnoreWhitespacesAndCase( "90000", result );
}
 
Example 8
Source File: MasterSlaveIT.java    From pentaho-kettle with Apache License 2.0
/**
 * This test reads a CSV file and sends the data to 3 copies on 3 slave servers.<br>
 */
public void runMultipleCopiesOnMultipleSlaves() throws Exception {
  TransMeta transMeta =
    loadTransMetaReplaceSlavesInCluster(
      clusterGenerator, "test/org/pentaho/di/cluster/test-multiple-copies-on-multiple-slaves.ktr" );
  TransExecutionConfiguration config = createClusteredTransExecutionConfiguration();
  TransSplitter transSplitter = Trans.executeClustered( transMeta, config );
  LogChannel logChannel = createLogChannel( "cluster unit test <testMultipleCopiesOnMultipleSlaves>" );
  long nrErrors = Trans.monitorClusteredTransformation( logChannel, transSplitter, null, 1 );
  assertEquals( 0L, nrErrors );
  String result = loadFileContent( transMeta, "${java.io.tmpdir}/test-multiple-copies-on-multiple-slaves.txt" );
  assertEqualsIgnoreWhitespacesAndCase( "100", result );
}
 
Example 9
Source File: MasterSlaveIT.java    From pentaho-kettle with Apache License 2.0
/**
 * This test generates rows on the master, generates random values clustered, and brings them back to the master.<br>
 * See also: PDI-6324 : Generate Rows to a clustered step ceases to work
 */
public void runOneStepClustered() throws Exception {
  TransMeta transMeta =
    loadTransMetaReplaceSlavesInCluster(
      clusterGenerator, "test/org/pentaho/di/cluster/one-step-clustered.ktr" );
  TransExecutionConfiguration config = createClusteredTransExecutionConfiguration();
  TransSplitter transSplitter = Trans.executeClustered( transMeta, config );
  LogChannel logChannel = createLogChannel( "cluster unit test <runOneStepClustered>" );
  long nrErrors = Trans.monitorClusteredTransformation( logChannel, transSplitter, null, 1 );
  assertEquals( 0L, nrErrors );
  String result = loadFileContent( transMeta, "${java.io.tmpdir}/one-step-clustered.txt" );
  assertEqualsIgnoreWhitespacesAndCase( "10000", result );
}