Java Code Examples for org.elasticsearch.common.io.stream.StreamOutput#write()

The following examples show how to use org.elasticsearch.common.io.stream.StreamOutput#write(). Each example lists the source file, the project it was taken from, and that project's license.
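StreamOutput extends java.io.OutputStream, so write() is available in the usual three overloads: a single byte (the low 8 bits of an int argument), a whole byte array, and an array slice given by an offset and a length. The short sketch below is not taken from any of the projects listed here; it simply exercises the three overloads against an in-memory BytesStreamOutput (the class name WriteOverloadsSketch is illustrative only).

import java.io.IOException;

import org.elasticsearch.common.io.stream.BytesStreamOutput;

public class WriteOverloadsSketch {
    public static void main(String[] args) throws IOException {
        // BytesStreamOutput is used here only as a convenient in-memory StreamOutput.
        try (BytesStreamOutput out = new BytesStreamOutput()) {
            byte[] payload = {1, 2, 3, 4};
            out.write(payload);           // whole array: 4 bytes
            out.write(payload, 1, 2);     // slice: offset 1, length 2
            out.write(0x7F);              // single byte: the low 8 bits of the int
            System.out.println(out.bytes().length());  // 7
        }
    }
}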
Example 1
Source File: IntegerTermsSet.java    From siren-join with GNU Affero General Public License v3.0
/**
 * Serialize the list of terms to the {@link StreamOutput}.
 * <br>
 * Given the low performance of {@link org.elasticsearch.common.io.stream.BytesStreamOutput} when writing a large number
 * of longs (5 to 10 times slower than writing directly to a byte[]), we use a small buffer of 8kb
 * to optimise the throughput. 8kb seems to be the optimal buffer size; a larger buffer did not improve
 * the throughput.
 *
 * @param out the output
 */
@Override
public void writeTo(StreamOutput out) throws IOException {
  // Encode flag
  out.writeBoolean(this.isPruned());

  // Encode size of list
  out.writeInt(set.size());

  // Encode ints
  BytesRef buffer = new BytesRef(new byte[1024 * 8]);
  Iterator<IntCursor> it = set.iterator();
  while (it.hasNext()) {
    Bytes.writeVInt(buffer, it.next().value);
    if (buffer.offset > buffer.bytes.length - 5) {
      out.write(buffer.bytes, 0, buffer.offset);
      buffer.offset = 0;
    }
  }

  // flush the remaining bytes from the buffer
  out.write(buffer.bytes, 0, buffer.offset);
}
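A note on the flush check above: a variable-length int occupies at most 5 bytes, so the buffer is flushed via write(buffer.bytes, 0, buffer.offset) as soon as fewer than 5 bytes of headroom remain for the next value.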
 
Example 2
Source File: BloomFilterTermsSet.java    From siren-join with GNU Affero General Public License v3.0
/**
 * Serialize the list of terms to the {@link StreamOutput}.
 * <br>
 * Given the low performance of {@link org.elasticsearch.common.io.stream.BytesStreamOutput} when writing a large number
 * of longs (5 to 10 times slower than writing directly to a byte[]), we use a small buffer of 8kb
 * to optimise the throughput. 8kb seems to be the optimal buffer size; a larger buffer did not improve
 * the throughput.
 */
@Override
public void writeTo(StreamOutput out) throws IOException {
  // Encode flag
  out.writeBoolean(this.isPruned());

  // Encode bloom filter
  out.writeVInt(set.numHashFunctions);
  out.writeVInt(set.hashing.type()); // hashType
  out.writeVInt(set.bits.data.length);
  BytesRef buffer = new BytesRef(new byte[1024 * 8]);
  for (long l : set.bits.data) {
    Bytes.writeLong(buffer, l);
    if (buffer.offset == buffer.length) {
      out.write(buffer.bytes, 0, buffer.offset);
      buffer.offset = 0;
    }
  }
  // flush the remaining bytes from the buffer
  out.write(buffer.bytes, 0, buffer.offset);
}
 
Example 3
Source File: LongTermsSet.java    From siren-join with GNU Affero General Public License v3.0
/**
 * Serialize the list of terms to the {@link StreamOutput}.
 * <br>
 * Given the low performance of {@link org.elasticsearch.common.io.stream.BytesStreamOutput} when writing a large number
 * of longs (5 to 10 times slower than writing directly to a byte[]), we use a small buffer of 8kb
 * to optimise the throughput. 8kb seems to be the optimal buffer size; a larger buffer did not improve
 * the throughput.
 *
 * @param out the output
 */
@Override
public void writeTo(StreamOutput out) throws IOException {
  // Encode flag
  out.writeBoolean(this.isPruned());

  // Encode size of list
  out.writeInt(set.size());

  // Encode longs
  BytesRef buffer = new BytesRef(new byte[1024 * 8]);
  Iterator<LongCursor> it = set.iterator();
  while (it.hasNext()) {
    Bytes.writeLong(buffer, it.next().value);
    if (buffer.offset == buffer.length) {
      out.write(buffer.bytes, 0, buffer.offset);
      buffer.offset = 0;
    }
  }

  // flush the remaining bytes from the buffer
  out.write(buffer.bytes, 0, buffer.offset);
}
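Examples 2 and 3 replace Example 1's headroom check with an exact-fullness check (buffer.offset == buffer.length). This works because the 8kb buffer is a multiple of the fixed 8-byte encoding that Bytes.writeLong appears to use, so the write cursor always lands exactly on the buffer boundary before it can overflow.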
 
Example 4
Source File: InternalTopK.java    From elasticsearch-topk-plugin with Apache License 2.0
@Override
public void writeTo(StreamOutput out) throws IOException {
    out.writeString(this.name);
    out.writeInt(this.size.intValue());
    out.writeBoolean(this.summary != null);
    if (this.summary != null) {
        out.writeInt(this.summary.getCapacity());
        byte[] bytes = this.summary.toBytes();
        out.writeInt(bytes.length);
        out.write(bytes);
    }
    out.writeInt(getBuckets().size());
    for (TopK.Bucket bucket : getBuckets()) {
        bucket.writeTo(out);
    }
}
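Example 4 stores the summary as a length-prefixed byte array: an int length followed by the raw bytes written with write(bytes). A read-side counterpart would mirror those two calls with StreamInput; the sketch below is mine, not taken from the plugin, and readLengthPrefixedBytes is an illustrative name.

import java.io.IOException;

import org.elasticsearch.common.io.stream.StreamInput;

static byte[] readLengthPrefixedBytes(StreamInput in) throws IOException {
    byte[] bytes = new byte[in.readInt()];  // length written by out.writeInt(bytes.length)
    in.readBytes(bytes, 0, bytes.length);   // raw bytes written by out.write(bytes)
    return bytes;
}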
 
Example 5
Source File: BlobStartPrefixResponse.java    From Elasticsearch with Apache License 2.0
@Override
public void writeTo(StreamOutput out) throws IOException {
    super.writeTo(out);
    out.writeInt(existingDigests.length);
    for (byte[] digest: existingDigests){
        out.write(digest);
    }
}
 
Example 6
Source File: InetSocketTransportAddress.java    From Elasticsearch with Apache License 2.0
@Override
public void writeTo(StreamOutput out) throws IOException {
    byte[] bytes = address().getAddress().getAddress();  // 4 bytes (IPv4) or 16 bytes (IPv6)
    out.writeByte((byte) bytes.length); // 1 byte
    out.write(bytes, 0, bytes.length);
    // don't serialize scope ids over the network!!!!
    // these only make sense with respect to the local machine, and will only formulate
    // the address incorrectly remotely.
    out.writeInt(address.getPort());
}
 
Example 7
Source File: BlobStartPrefixResponse.java    From crate with Apache License 2.0
@Override
public void writeTo(StreamOutput out) throws IOException {
    out.writeInt(existingDigests.length);
    for (byte[] digest : existingDigests) {
        out.write(digest);
    }
}
 
Example 8
Source File: TransportAddress.java    From crate with Apache License 2.0
@Override
public void writeTo(StreamOutput out) throws IOException {
    byte[] bytes = address.getAddress().getAddress();  // 4 bytes (IPv4) or 16 bytes (IPv6)
    out.writeByte((byte) bytes.length); // 1 byte
    out.write(bytes, 0, bytes.length);
    out.writeString(address.getHostString());
    // don't serialize scope ids over the network!!!!
    // these only make sense with respect to the local machine, and will only formulate
    // the address incorrectly remotely.
    out.writeInt(address.getPort());
}
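Examples 6 and 8 prefix the raw address bytes with a single length byte and then emit them with write(bytes, 0, bytes.length). A matching read side for Example 8's field order could look like the following sketch; it is mine rather than crate's, and readAddress is an illustrative name.

import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;

import org.elasticsearch.common.io.stream.StreamInput;

static InetSocketAddress readAddress(StreamInput in) throws IOException {
    byte[] addressBytes = new byte[in.readByte()];   // 4 bytes (IPv4) or 16 bytes (IPv6)
    in.readBytes(addressBytes, 0, addressBytes.length);
    String host = in.readString();                   // host string written after the raw bytes
    int port = in.readInt();
    return new InetSocketAddress(InetAddress.getByAddress(host, addressBytes), port);
}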
 
Example 9
Source File: DeleteBlobRequest.java    From Elasticsearch with Apache License 2.0
@Override
public void writeTo(StreamOutput out) throws IOException {
    super.writeTo(out);
    out.write(digest);
}
 
Example 10
Source File: StartBlobRequest.java    From Elasticsearch with Apache License 2.0
@Override
public void writeTo(StreamOutput out) throws IOException {
    super.writeTo(out);
    out.write(digest);
}
 
Example 11
Source File: PutChunkRequest.java    From Elasticsearch with Apache License 2.0
@Override
public void writeTo(StreamOutput out) throws IOException {
    super.writeTo(out);
    out.write(digest);
    out.writeVLong(currentPos);
}
 
Example 12
Source File: IndicesOptions.java    From Elasticsearch with Apache License 2.0
public void writeIndicesOptions(StreamOutput out) throws IOException {
    out.write(id);
}
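Note that write(int) writes only the low-order byte of its argument, unlike writeInt(int), which writes four bytes; the id here is therefore serialized as a single byte.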
 
Example 13
Source File: ColumnIndexWriterProjection.java    From crate with Apache License 2.0
@Override
public void writeTo(StreamOutput out) throws IOException {
    super.writeTo(out);

    if (targetColsSymbolsExclPartition == null) {
        out.writeBoolean(false);
    } else {
        out.writeBoolean(true);
        Symbols.toStream(targetColsSymbolsExclPartition, out);
    }
    if (targetColsExclPartitionCols == null) {
        out.writeBoolean(false);
    } else {
        out.writeBoolean(true);
        out.writeVInt(targetColsExclPartitionCols.size());
        for (Reference columnIdent : targetColsExclPartitionCols) {
            Reference.toStream(columnIdent, out);
        }
    }

    out.writeBoolean(ignoreDuplicateKeys);
    if (onDuplicateKeyAssignments == null) {
        out.writeBoolean(false);
    } else {
        out.writeBoolean(true);
        out.writeVInt(onDuplicateKeyAssignments.size());
        for (Map.Entry<Reference, Symbol> entry : onDuplicateKeyAssignments.entrySet()) {
            Reference.toStream(entry.getKey(), out);
            Symbols.toStream(entry.getValue(), out);
        }
    }

    if (out.getVersion().onOrAfter(Version.V_4_2_0)) {
        out.write(allTargetColumns.size());
        for (var ref : allTargetColumns) {
            Symbols.toStream(ref, out);
        }
        if (outputs != null) {
            out.writeVInt(outputs.size());
            for (var output : outputs) {
                Symbols.toStream(output, out);
            }
        } else {
            out.writeVInt(0);
        }
        out.writeVInt(returnValues.size());
        for (var returnValue : returnValues) {
            Symbols.toStream(returnValue, out);
        }
    }
}
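In the version-gated block above, out.write(allTargetColumns.size()) likewise uses the single-byte write(int) overload, so the column count is serialized as one byte, in contrast to the surrounding writeVInt calls.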
 
Example 14
Source File: DeleteBlobRequest.java    From crate with Apache License 2.0
@Override
public void writeTo(StreamOutput out) throws IOException {
    super.writeTo(out);
    out.write(digest);
}
 
Example 15
Source File: StartBlobRequest.java    From crate with Apache License 2.0
@Override
public void writeTo(StreamOutput out) throws IOException {
    super.writeTo(out);
    out.write(digest);
}
 
Example 16
Source File: PutChunkRequest.java    From crate with Apache License 2.0
@Override
public void writeTo(StreamOutput out) throws IOException {
    super.writeTo(out);
    out.write(digest);
    out.writeVLong(currentPos);
}