Java Code Examples for org.apache.flink.api.java.ClosureCleaner#clean()

The following examples show how to use org.apache.flink.api.java.ClosureCleaner#clean(). Each example notes the source file, the project it was taken from, and its license.
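Before the project-specific examples, here is a minimal, self-contained sketch (not taken from any of the listed projects) of the typical call pattern: clean a user function so that an unused reference to a non-serializable enclosing instance is nulled out, then verify that the result is serializable. The MyJob class and the trimmer function are made up for illustration; the types used come from org.apache.flink.api.java.ClosureCleaner, org.apache.flink.api.common.ExecutionConfig, and org.apache.flink.api.common.functions.MapFunction.
// Hypothetical usage sketch: MyJob is a made-up, non-serializable outer class.
public class MyJob {

	private final Object nonSerializableHelper = new Object();

	public MapFunction<String, String> buildTrimmer() {
		// The anonymous inner class implicitly captures a reference to the enclosing MyJob instance.
		MapFunction<String, String> trimmer = new MapFunction<String, String>() {
			@Override
			public String map(String value) {
				return value.trim();
			}
		};

		// Null out the unused reference to the enclosing instance; with the last argument set to
		// true, clean() also verifies that the cleaned function is serializable.
		ClosureCleaner.clean(trimmer, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, true);
		ClosureCleaner.ensureSerializable(trimmer);
		return trimmer;
	}
}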
Example 1
Source File: KeyedTwoInputStreamOperatorTestHarness.java    From flink with Apache License 2.0
public KeyedTwoInputStreamOperatorTestHarness(
		TwoInputStreamOperator<IN1, IN2, OUT> operator,
		KeySelector<IN1, K> keySelector1,
		KeySelector<IN2, K> keySelector2,
		TypeInformation<K> keyType,
		int maxParallelism,
		int numSubtasks,
		int subtaskIndex) throws Exception {
	super(operator, maxParallelism, numSubtasks, subtaskIndex);

	ClosureCleaner.clean(keySelector1, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, false);
	ClosureCleaner.clean(keySelector2, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, false);
	config.setStatePartitioner(0, keySelector1);
	config.setStatePartitioner(1, keySelector2);
	config.setStateKeySerializer(keyType.createSerializer(executionConfig));
}
 
Example 2
Source File: Pattern.java    From flink with Apache License 2.0
/**
 * Applies a stop condition for a looping state. It allows cleaning the underlying state.
 *
 * @param untilCondition a condition an event has to satisfy to stop collecting events into looping state
 * @return The same pattern with applied untilCondition
 */
public Pattern<T, F> until(IterativeCondition<F> untilCondition) {
	Preconditions.checkNotNull(untilCondition, "The condition cannot be null");

	if (this.untilCondition != null) {
		throw new MalformedPatternException("Only one until condition can be applied.");
	}

	if (!quantifier.hasProperty(Quantifier.QuantifierProperty.LOOPING)) {
		throw new MalformedPatternException("The until condition is only applicable to looping states.");
	}

	ClosureCleaner.clean(untilCondition, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, true);
	this.untilCondition = untilCondition;

	return this;
}
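As a caller-side illustration of the method above, the following sketch (hypothetical Event type and condition bodies) applies an until condition to a looping pattern built with oneOrMore(), which satisfies the LOOPING quantifier check:
Pattern<Event, Event> updatesUntilTerminal = Pattern.<Event>begin("updates")
	.where(new SimpleCondition<Event>() {
		@Override
		public boolean filter(Event event) {
			return event.isUpdate();
		}
	})
	.oneOrMore()
	.until(new SimpleCondition<Event>() {
		@Override
		public boolean filter(Event event) {
			// Stop collecting events into the looping state once a terminal event arrives.
			return event.isTerminal();
		}
	});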
 
Example 3
Source File: Pravega.java    From flink-connectors with Apache License 2.0
/**
 * Configures the timestamp and watermark assigner.
 *
 * @param assignerWithTimeWindows the timestamp and watermark assigner.
 * @return TableSourceReaderBuilder instance.
 */
// TODO: Due to the serialization validation for `connectorProperties`, only a `public` static inner or outer class
// implementing `AssignerWithTimeWindows` is supported as a parameter of `withTimestampAssigner` in the Table API stream table source.
public TableSourceReaderBuilder withTimestampAssigner(AssignerWithTimeWindows<Row> assignerWithTimeWindows) {
    try {
        ClosureCleaner.clean(assignerWithTimeWindows, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, true);
        this.assignerWithTimeWindows = new SerializedValue<>(assignerWithTimeWindows);
    } catch (IOException e) {
        throw new IllegalArgumentException("The given assigner is not serializable", e);
    }
    return this;
}
 
Example 4
Source File: ClosureCleanerTest.java    From flink with Apache License 2.0
@Test
public void testNestedSerializable() throws Exception  {
	MapCreator creator = new NestedSerializableMapCreator(1);
	MapFunction<Integer, Integer> map = creator.getMap();

	ClosureCleaner.clean(map, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, true);

	ClosureCleaner.ensureSerializable(map);

	int result = map.map(3);
	Assert.assertEquals(result, 4);
}
 
Example 5
Source File: RpcGlobalAggregateManager.java    From flink with Apache License 2.0
@Override
public <IN, ACC, OUT> OUT updateGlobalAggregate(String aggregateName, Object aggregand, AggregateFunction<IN, ACC, OUT> aggregateFunction)
	throws IOException {
	ClosureCleaner.clean(aggregateFunction, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, true);
	byte[] serializedAggregateFunction = InstantiationUtil.serializeObject(aggregateFunction);
	Object result = null;
	try {
		result = jobMasterGateway.updateGlobalAggregate(aggregateName, aggregand, serializedAggregateFunction).get();
	} catch (Exception e) {
		throw new IOException("Error updating global aggregate.", e);
	}
	return (OUT) result;
}
 
Example 6
Source File: KeyedOneInputStreamOperatorTestHarness.java    From Flink-CEPplus with Apache License 2.0
public KeyedOneInputStreamOperatorTestHarness(
		OneInputStreamOperator<IN, OUT> operator,
		final KeySelector<IN, K> keySelector,
		TypeInformation<K> keyType,
		int maxParallelism,
		int numSubtasks,
		int subtaskIndex) throws Exception {
	super(operator, maxParallelism, numSubtasks, subtaskIndex);

	ClosureCleaner.clean(keySelector, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, false);
	config.setStatePartitioner(0, keySelector);
	config.setStateKeySerializer(keyType.createSerializer(executionConfig));
}
 
Example 7
Source File: ClosureCleanerTest.java    From flink with Apache License 2.0
@Test
public void testCleanedNonSerializable() throws Exception  {
	MapCreator creator = new NonSerializableMapCreator();
	MapFunction<Integer, Integer> map = creator.getMap();

	ClosureCleaner.clean(map, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, true);

	int result = map.map(3);
	Assert.assertEquals(result, 4);
}
 
Example 8
Source File: KeyedOneInputStreamOperatorTestHarness.java    From Flink-CEPplus with Apache License 2.0
public KeyedOneInputStreamOperatorTestHarness(
		final OneInputStreamOperator<IN, OUT> operator,
		final KeySelector<IN, K> keySelector,
		final TypeInformation<K> keyType,
		final MockEnvironment environment) throws Exception {

	super(operator, environment);

	ClosureCleaner.clean(keySelector, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, false);
	config.setStatePartitioner(0, keySelector);
	config.setStateKeySerializer(keyType.createSerializer(executionConfig));
}
 
Example 9
Source File: FlinkKafkaProducerBase.java    From flink with Apache License 2.0
/**
 * The main constructor for creating a FlinkKafkaProducer.
 *
 * @param defaultTopicId The default topic to write data to
 * @param serializationSchema A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
 * @param producerConfig Configuration properties for the KafkaProducer. 'bootstrap.servers.' is the only required argument.
 * @param customPartitioner A serializable partitioner for assigning messages to Kafka partitions. Passing null will use Kafka's partitioner.
 */
public FlinkKafkaProducerBase(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaPartitioner<IN> customPartitioner) {
	requireNonNull(defaultTopicId, "TopicID not set");
	requireNonNull(serializationSchema, "serializationSchema not set");
	requireNonNull(producerConfig, "producerConfig not set");
	ClosureCleaner.clean(customPartitioner, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, true);
	ClosureCleaner.ensureSerializable(serializationSchema);

	this.defaultTopicId = defaultTopicId;
	this.schema = serializationSchema;
	this.producerConfig = producerConfig;
	this.flinkKafkaPartitioner = customPartitioner;

	// set the producer configuration properties for kafka record key value serializers.
	if (!producerConfig.containsKey(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG)) {
		this.producerConfig.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
	} else {
		LOG.warn("Overwriting the '{}' is not recommended", ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG);
	}

	if (!producerConfig.containsKey(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG)) {
		this.producerConfig.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
	} else {
		LOG.warn("Overwriting the '{}' is not recommended", ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG);
	}

	// eagerly ensure that bootstrap servers are set.
	if (!this.producerConfig.containsKey(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG)) {
		throw new IllegalArgumentException(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG + " must be supplied in the producer config properties.");
	}

	this.topicPartitionsMap = new HashMap<>();
}
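On the caller side, the constructor above only insists on bootstrap.servers being present in the producer configuration; a minimal sketch of assembling such a configuration (the broker address is a placeholder) could look like this:
Properties producerConfig = new Properties();
// The only mandatory setting checked by the constructor; key and value serializers
// default to ByteArraySerializer unless explicitly overridden.
producerConfig.setProperty("bootstrap.servers", "localhost:9092");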
 
Example 10
Source File: ClosureCleanerTest.java    From Flink-CEPplus with Apache License 2.0
@Test(expected = InvalidProgramException.class)
public void testNestedNonSerializable() throws Exception  {
	MapCreator creator = new NestedNonSerializableMapCreator(1);
	MapFunction<Integer, Integer> map = creator.getMap();

	ClosureCleaner.clean(map, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, true);

	ClosureCleaner.ensureSerializable(map);

	int result = map.map(3);
	Assert.assertEquals(result, 4);
}
 
Example 11
Source File: ClosureCleanerTest.java    From Flink-CEPplus with Apache License 2.0
@Test
public void testNestedSerializable() throws Exception  {
	MapCreator creator = new NestedSerializableMapCreator(1);
	MapFunction<Integer, Integer> map = creator.getMap();

	ClosureCleaner.clean(map, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, true);

	ClosureCleaner.ensureSerializable(map);

	int result = map.map(3);
	Assert.assertEquals(result, 4);
}
 
Example 12
Source File: StreamExecutionEnvironment.java    From flink with Apache License 2.0
/**
 * Returns a "closure-cleaned" version of the given function. Cleans only if closure cleaning
 * is not disabled in the {@link org.apache.flink.api.common.ExecutionConfig}
 */
@Internal
public <F> F clean(F f) {
	if (getConfig().isClosureCleanerEnabled()) {
		ClosureCleaner.clean(f, getConfig().getClosureCleanerLevel(), true);
	}
	ClosureCleaner.ensureSerializable(f);
	return f;
}
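This helper's behaviour is controlled through the ExecutionConfig; the short sketch below shows how an application might tune or disable closure cleaning before handing functions to the API (standard environment setup assumed):
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Clean only the top-level function object instead of recursing into its fields ...
env.getConfig().setClosureCleanerLevel(ExecutionConfig.ClosureCleanerLevel.TOP_LEVEL);

// ... or switch the cleaner off entirely; clean(f) then only checks serializability.
// env.getConfig().disableClosureCleaner();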
 
Example 13
Source File: ClosureCleanerTest.java    From Flink-CEPplus with Apache License 2.0
@Test
public void testSerializable() throws Exception  {
	MapCreator creator = new SerializableMapCreator(1);
	MapFunction<Integer, Integer> map = creator.getMap();

	ClosureCleaner.clean(map, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, true);

	int result = map.map(3);
	Assert.assertEquals(result, 4);
}
 
Example 14
Source File: ClosureCleanerTest.java    From flink with Apache License 2.0
@Test
public void testSelfReferencingClean() {
	final NestedSelfReferencing selfReferencing = new NestedSelfReferencing();
	ClosureCleaner.clean(selfReferencing, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, true);
}
 
Example 15
Source File: JobMasterTest.java    From Flink-CEPplus with Apache License 2.0
/**
 * Tests the updateGlobalAggregate functionality
 */
@Test
public void testJobMasterAggregatesValuesCorrectly() throws Exception {
	final JobMaster jobMaster = createJobMaster(
		configuration,
		jobGraph,
		haServices,
		new TestingJobManagerSharedServicesBuilder().build(),
		heartbeatServices);

	CompletableFuture<Acknowledge> startFuture = jobMaster.start(jobMasterId);
	final JobMasterGateway jobMasterGateway = jobMaster.getSelfGateway(JobMasterGateway.class);

	try {
		// wait for the start to complete
		startFuture.get(testingTimeout.toMilliseconds(), TimeUnit.MILLISECONDS);

		CompletableFuture<Object> updateAggregateFuture;

		AggregateFunction<Integer, Integer, Integer> aggregateFunction = createAggregateFunction();

		ClosureCleaner.clean(aggregateFunction, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, true);
		byte[] serializedAggregateFunction = InstantiationUtil.serializeObject(aggregateFunction);

		updateAggregateFuture = jobMasterGateway.updateGlobalAggregate("agg1", 1, serializedAggregateFunction);
		assertThat(updateAggregateFuture.get(), equalTo(1));

		updateAggregateFuture = jobMasterGateway.updateGlobalAggregate("agg1", 2, serializedAggregateFunction);
		assertThat(updateAggregateFuture.get(), equalTo(3));

		updateAggregateFuture = jobMasterGateway.updateGlobalAggregate("agg1", 3, serializedAggregateFunction);
		assertThat(updateAggregateFuture.get(), equalTo(6));

		updateAggregateFuture = jobMasterGateway.updateGlobalAggregate("agg1", 4, serializedAggregateFunction);
		assertThat(updateAggregateFuture.get(), equalTo(10));

		updateAggregateFuture = jobMasterGateway.updateGlobalAggregate("agg2", 10, serializedAggregateFunction);
		assertThat(updateAggregateFuture.get(), equalTo(10));

		updateAggregateFuture = jobMasterGateway.updateGlobalAggregate("agg2", 23, serializedAggregateFunction);
		assertThat(updateAggregateFuture.get(), equalTo(33));

	} finally {
		RpcUtils.terminateRpcEndpoint(jobMaster, testingTimeout);
	}
}
 
Example 16
Source File: CassandraTupleWriteAheadSink.java    From Flink-CEPplus with Apache License 2.0
protected CassandraTupleWriteAheadSink(String insertQuery, TypeSerializer<IN> serializer, ClusterBuilder builder, CheckpointCommitter committer) throws Exception {
	super(committer, serializer, UUID.randomUUID().toString().replace("-", "_"));
	this.insertQuery = insertQuery;
	this.builder = builder;
	ClosureCleaner.clean(builder, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, true);
}
 
Example 17
Source File: ClosureCleanerTest.java    From Flink-CEPplus with Apache License 2.0
@Test
public void testRealOuterStaticClassInnerStaticClassInnerAnonymousOrLocalClass() {
	MapFunction<Integer, Integer> nestedMap = new OuterMapCreator().getMap();

	MapFunction<Integer, Integer> wrappedMap = new WrapperMapFunction(nestedMap);

	Tuple1<MapFunction<Integer, Integer>> tuple = new Tuple1<>(wrappedMap);

	ClosureCleaner.clean(tuple, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, true);

	ClosureCleaner.ensureSerializable(tuple);
}
 
Example 18
Source File: FlinkPulsarSource.java    From pulsar-flink with Apache License 2.0
/**
 * Specifies an {@link AssignerWithPunctuatedWatermarks} to emit watermarks
 * in a punctuated manner. The watermark extractor will run per Pulsar partition,
 * watermarks will be merged across partitions in the same way as in the Flink runtime,
 * when streams are merged.
 *
 * <p>When a subtask of a FlinkPulsarSource source reads multiple Pulsar partitions,
 * the streams from the partitions are unioned in a "first come first serve" fashion.
 * Per-partition characteristics are usually lost that way.
 * For example, if the timestamps are strictly ascending per Pulsar partition,
 * they will not be strictly ascending in the resulting Flink DataStream,
 * if the parallel source subtask reads more than one partition.
 *
 * <p>Running timestamp extractors / watermark generators directly inside the Pulsar source,
 * per Pulsar partition, allows users to let them exploit the per-partition characteristics.
 *
 * <p>Note: One can use either an {@link AssignerWithPunctuatedWatermarks} or an
 * {@link AssignerWithPeriodicWatermarks}, not both at the same time.
 *
 * @param assigner The timestamp assigner / watermark generator to use.
 * @return The reader object, to allow function chaining.
 */
public FlinkPulsarSource<T> assignTimestampsAndWatermarks(AssignerWithPeriodicWatermarks<T> assigner) {
    checkNotNull(assigner);

    if (this.punctuatedWatermarkAssigner != null) {
        throw new IllegalStateException("A punctuated watermark emitter has already been set.");
    }
    try {
        ClosureCleaner.clean(assigner, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, true);
        this.periodicWatermarkAssigner = new SerializedValue<>(assigner);
        return this;
    } catch (Exception e) {
        throw new IllegalArgumentException("The given assigner is not serializable", e);
    }
}
 
Example 19
Source File: ClosureCleanerTest.java    From flink with Apache License 2.0
@Test
public void testRealOuterStaticClassInnerStaticClassInnerAnonymousOrLocalClass() {
	MapFunction<Integer, Integer> nestedMap = new OuterMapCreator().getMap();

	MapFunction<Integer, Integer> wrappedMap = new WrapperMapFunction(nestedMap);

	Tuple1<MapFunction<Integer, Integer>> tuple = new Tuple1<>(wrappedMap);

	ClosureCleaner.clean(tuple, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, true);

	ClosureCleaner.ensureSerializable(tuple);
}
 
Example 20
Source File: FlinkKafkaConsumerBase.java    From Flink-CEPplus with Apache License 2.0
/**
 * Specifies an {@link AssignerWithPunctuatedWatermarks} to emit watermarks in a punctuated manner.
 * The watermark extractor will run per Kafka partition, watermarks will be merged across partitions
 * in the same way as in the Flink runtime, when streams are merged.
 *
 * <p>When a subtask of a FlinkKafkaConsumer source reads multiple Kafka partitions,
 * the streams from the partitions are unioned in a "first come first serve" fashion. Per-partition
 * characteristics are usually lost that way. For example, if the timestamps are strictly ascending
 * per Kafka partition, they will not be strictly ascending in the resulting Flink DataStream, if the
 * parallel source subtask reads more than one partition.
 *
 * <p>Running timestamp extractors / watermark generators directly inside the Kafka source, per Kafka
 * partition, allows users to let them exploit the per-partition characteristics.
 *
 * <p>Note: One can use either an {@link AssignerWithPunctuatedWatermarks} or an
 * {@link AssignerWithPeriodicWatermarks}, not both at the same time.
 *
 * @param assigner The timestamp assigner / watermark generator to use.
 * @return The consumer object, to allow function chaining.
 */
public FlinkKafkaConsumerBase<T> assignTimestampsAndWatermarks(AssignerWithPeriodicWatermarks<T> assigner) {
	checkNotNull(assigner);

	if (this.punctuatedWatermarkAssigner != null) {
		throw new IllegalStateException("A punctuated watermark emitter has already been set.");
	}
	try {
		ClosureCleaner.clean(assigner, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, true);
		this.periodicWatermarkAssigner = new SerializedValue<>(assigner);
		return this;
	} catch (Exception e) {
		throw new IllegalArgumentException("The given assigner is not serializable", e);
	}
}
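To round off, here is a caller-side sketch of wiring a periodic assigner into a consumer; the topic name, broker address, and the way the timestamp is derived are placeholders, and the universal FlinkKafkaConsumer class is assumed purely for illustration:
Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092");
props.setProperty("group.id", "demo");

FlinkKafkaConsumer<String> consumer = new FlinkKafkaConsumer<>("events", new SimpleStringSchema(), props);

consumer.assignTimestampsAndWatermarks(new AssignerWithPeriodicWatermarks<String>() {

	private long currentMaxTimestamp = Long.MIN_VALUE;

	@Override
	public long extractTimestamp(String element, long previousElementTimestamp) {
		// Placeholder: a real assigner would parse the timestamp out of the record.
		long timestamp = System.currentTimeMillis();
		currentMaxTimestamp = Math.max(currentMaxTimestamp, timestamp);
		return timestamp;
	}

	@Override
	public Watermark getCurrentWatermark() {
		// Allow five seconds of out-of-orderness per Kafka partition.
		return new Watermark(currentMaxTimestamp == Long.MIN_VALUE ? Long.MIN_VALUE : currentMaxTimestamp - 5000);
	}
});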