com.google.cloud.dataflow.sdk.options.Description Java Examples
The following examples show how to use
com.google.cloud.dataflow.sdk.options.Description.
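Each example annotates a getter on a PipelineOptions-style interface; the SDK reads these annotations reflectively at runtime (for instance to build --help text). The sketch below shows that pattern in plain Java. Note it uses a stand-in @Description annotation and an illustrative WordCountOptions interface, not the actual SDK classes, so it runs self-contained:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class DescriptionDemo {

    // Stand-in for com.google.cloud.dataflow.sdk.options.Description:
    // a runtime-retained annotation whose value documents an option getter.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Description {
        String value();
    }

    // Options are declared as annotated getters on an interface,
    // mirroring the PipelineOptions pattern in the examples below.
    interface WordCountOptions {
        @Description("Path of the file to write to")
        String getOutput();
    }

    // Look up the annotation reflectively, as an options framework would.
    static String describe() {
        try {
            Method m = WordCountOptions.class.getMethod("getOutput");
            Description d = m.getAnnotation(Description.class);
            return "getOutput: " + d.value();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(describe());
    }
}
```

The real SDK combines this with @Default.* and @Validation.Required annotations, as the examples below show.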
Example #1
Source File: WordCount.java, from flink-dataflow (Apache License 2.0)
@Description("Path of the file to write to")
String getOutput();
Example #2
Source File: JoinExamples.java, from flink-dataflow (Apache License 2.0)
@Description("Path of the file to write to")
@Validation.Required
String getOutput();
Example #3
Source File: FlinkPipelineOptions.java, from flink-dataflow (Apache License 2.0)
@Description("Sets the delay between executions. A value of {@code -1} indicates that the default value should be used.")
@Default.Long(-1L)
Long getExecutionRetryDelay();
Example #4
Source File: FlinkPipelineOptions.java, from flink-dataflow (Apache License 2.0)
@Description("Sets the number of times that failed tasks are re-executed. "
    + "A value of zero effectively disables fault tolerance. A value of -1 indicates "
    + "that the system default value (as defined in the configuration) should be used.")
@Default.Integer(-1)
Integer getNumberOfExecutionRetries();
Example #5
Source File: FlinkPipelineOptions.java, from flink-dataflow (Apache License 2.0)
@Description("The interval between consecutive checkpoints (i.e. snapshots of the current pipeline state used for "
    + "fault tolerance).")
@Default.Long(-1L)
Long getCheckpointingInterval();
Example #6
Source File: FlinkPipelineOptions.java, from flink-dataflow (Apache License 2.0)
@Description("The degree of parallelism to be used when distributing operations onto workers.")
@Default.Integer(-1)
Integer getParallelism();
Example #7
Source File: FlinkPipelineOptions.java, from flink-dataflow (Apache License 2.0)
/**
 * The job name is used to identify jobs running on a Flink cluster.
 */
@Description("Dataflow job name, to uniquely identify active jobs. "
    + "Defaults to using the ApplicationName-UserName-Date.")
@Default.InstanceFactory(DataflowPipelineOptions.JobNameFactory.class)
String getJobName();
Example #8
Source File: AutoComplete.java, from flink-dataflow (Apache License 2.0)
@Description("Whether to use the recursive algorithm")
@Default.Boolean(true)
Boolean getRecursive();
Example #9
Source File: WindowedWordCount.java, from flink-dataflow (Apache License 2.0)
@Description("Window slide, in seconds")
@Default.Long(SLIDE_SIZE)
Long getSlide();
Example #10
Source File: WindowedWordCount.java, from flink-dataflow (Apache License 2.0)
@Description("Sliding window duration, in seconds")
@Default.Long(WINDOW_SIZE)
Long getWindowSize();
Example #11
Source File: KafkaWindowedWordCountExample.java, from flink-dataflow (Apache License 2.0)
@Description("The groupId")
@Default.String(GROUP_ID)
String getGroup();
Example #12
Source File: KafkaWindowedWordCountExample.java, from flink-dataflow (Apache License 2.0)
@Description("The Zookeeper server to connect to")
@Default.String(ZOOKEEPER)
String getZookeeper();
Example #13
Source File: KafkaWindowedWordCountExample.java, from flink-dataflow (Apache License 2.0)
@Description("The Kafka Broker to read from")
@Default.String(KAFKA_BROKER)
String getBroker();
Example #14
Source File: KafkaWindowedWordCountExample.java, from flink-dataflow (Apache License 2.0)
@Description("The Kafka topic to read from")
@Default.String(KAFKA_TOPIC)
String getKafkaTopic();
Example #15
Source File: CustomPipelineOptions.java, from cloud-dataflow-nyc-taxi-tycoon (Apache License 2.0)
@Description("ProjectId where data source topic lives")
@Default.String("pubsub-public-data")
@Validation.Required
String getSourceProject();
Example #16
Source File: WordCount.java, from flink-dataflow (Apache License 2.0)
@Description("Path of the file to read from")
@Default.String("gs://dataflow-samples/shakespeare/kinglear.txt")
String getInput();
Example #17
Source File: TFIDF.java, from flink-dataflow (Apache License 2.0)
@Description("Prefix of output URI to write to")
@Validation.Required
String getOutput();
Example #18
Source File: TFIDF.java, from flink-dataflow (Apache License 2.0)
@Description("Path to the directory or GCS prefix containing files to read from")
@Default.String("gs://dataflow-samples/shakespeare/")
String getInput();
Example #19
Source File: FXTimeSeriesPipelineOptions.java, from data-timeseries-java (Apache License 2.0)
@Description("Enable logging output to see results being output from the example")
@Default.Boolean(false)
boolean getEnableSampleLogging();
Example #20
Source File: FXTimeSeriesPipelineOptions.java, from data-timeseries-java (Apache License 2.0)
@Description("Sliding window period")
@Default.Integer(300)
int getCorrelationWindowPeriod();
Example #21
Source File: FXTimeSeriesPipelineOptions.java, from data-timeseries-java (Apache License 2.0)
@Description("Correlation sliding window size")
@Default.Integer(600)
int getCorrelationWindowSize();
Example #22
Source File: FXTimeSeriesPipelineOptions.java, from data-timeseries-java (Apache License 2.0)
@Description("Should NaN values be propagated forward")
@Validation.Required
boolean getPropogateNAN();
Example #23
Source File: FXTimeSeriesPipelineOptions.java, from data-timeseries-java (Apache License 2.0)
@Description("The lowest ABS(correlation value) to emit. If there are 1000 FX instruments there will be "
    + "(N^2-N)/2 correlations every slide of the window. If you are consuming this in a UI downstream you need "
    + "to be aware of not swamping the downstream systems with values that have little importance. 0.5 is a good "
    + "starting point. It is important to note that 0.5 is not the median; there is normally a heavy skew towards 0.")
@Validation.Required
double getMinCorrelationValue();
Example #24
Source File: FXTimeSeriesPipelineOptions.java, from data-timeseries-java (Apache License 2.0)
@Description("When outputting correlations, should the underlying values be included. "
    + "Warning: this will generate a lot of extra work and should only be used for testing / demos")
@Validation.Required
boolean getIncludeUnderlying();
Example #25
Source File: FXTimeSeriesPipelineOptions.java, from data-timeseries-java (Apache License 2.0)
@Description("Resolution of the candles in seconds")
@Default.Integer(120)
int getCandleResolution();
Example #26
Source File: FXTimeSeriesPipelineOptions.java, from data-timeseries-java (Apache License 2.0)
@Description("Number of shards to create for the correlation calculations")
@Validation.Required
int getShards();
Example #27
Source File: CustomPipelineOptions.java, from cloud-dataflow-nyc-taxi-tycoon (Apache License 2.0)
@Description("TopicId of sink topic")
@Default.String("visualizer")
@Validation.Required
String getSinkTopic();
Example #28
Source File: CustomPipelineOptions.java, from cloud-dataflow-nyc-taxi-tycoon (Apache License 2.0)
@Description("ProjectId where data sink topic lives")
@Validation.Required
String getSinkProject();
Example #29
Source File: CustomPipelineOptions.java, from cloud-dataflow-nyc-taxi-tycoon (Apache License 2.0)
@Description("TopicId of source topic")
@Default.String("taxirides-realtime")
@Validation.Required
String getSourceTopic();
Example #30
Source File: FlinkPipelineOptions.java, from flink-dataflow (Apache License 2.0)
/**
 * List of local files to make available to workers.
 * <p>
 * Jars are placed on the worker's classpath.
 * <p>
 * The default value is the list of jars from the main program's classpath.
 */
@Description("Jar-Files to send to all workers and put on the classpath. "
    + "The default value is all files from the classpath.")
@JsonIgnore
List<String> getFilesToStage();