org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.PipelineAckProto Java Examples
The following examples show how to use
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.PipelineAckProto.
Example #1
Source File: PipelineAck.java From hadoop with Apache License 2.0
/**
 * Constructor
 * @param seqno sequence number
 * @param replies an array of replies
 * @param downstreamAckTimeNanos ack RTT in nanoseconds, 0 if no next DN in pipeline
 */
public PipelineAck(long seqno, int[] replies, long downstreamAckTimeNanos) {
  ArrayList<Status> statusList = Lists.newArrayList();
  ArrayList<Integer> flagList = Lists.newArrayList();
  for (int r : replies) {
    statusList.add(StatusFormat.getStatus(r));
    flagList.add(r);
  }
  proto = PipelineAckProto.newBuilder()
    .setSeqno(seqno)
    .addAllReply(statusList)
    .addAllFlag(flagList)
    .setDownstreamAckTimeNanos(downstreamAckTimeNanos)
    .build();
}
Example #2
Source File: PipelineAck.java From big-c with Apache License 2.0
/**
 * Constructor
 * @param seqno sequence number
 * @param replies an array of replies
 * @param downstreamAckTimeNanos ack RTT in nanoseconds, 0 if no next DN in pipeline
 */
public PipelineAck(long seqno, int[] replies, long downstreamAckTimeNanos) {
  ArrayList<Status> statusList = Lists.newArrayList();
  ArrayList<Integer> flagList = Lists.newArrayList();
  for (int r : replies) {
    statusList.add(StatusFormat.getStatus(r));
    flagList.add(r);
  }
  proto = PipelineAckProto.newBuilder()
    .setSeqno(seqno)
    .addAllReply(statusList)
    .addAllFlag(flagList)
    .setDownstreamAckTimeNanos(downstreamAckTimeNanos)
    .build();
}
Example #3
Source File: FanOutOneBlockAsyncDFSOutput.java From hbase with Apache License 2.0
@Override
protected void channelRead0(ChannelHandlerContext ctx, PipelineAckProto ack) throws Exception {
  Status reply = getStatus(ack);
  if (reply != Status.SUCCESS) {
    failed(ctx.channel(), () -> new IOException("Bad response " + reply + " for block " +
      block + " from datanode " + ctx.channel().remoteAddress()));
    return;
  }
  if (PipelineAck.isRestartOOBStatus(reply)) {
    failed(ctx.channel(), () -> new IOException("Restart response " + reply + " for block " +
      block + " from datanode " + ctx.channel().remoteAddress()));
    return;
  }
  // Heartbeat acks do not acknowledge a packet; ignore them.
  if (ack.getSeqno() == HEART_BEAT_SEQNO) {
    return;
  }
  completed(ctx.channel());
}
Example #4
Source File: FanOutOneBlockAsyncDFSOutputHelper.java From hbase with Apache License 2.0
static Status getStatus(PipelineAckProto ack) {
  List<Integer> flagList = ack.getFlagList();
  Integer headerFlag;
  if (flagList.isEmpty()) {
    // Older datanodes send no flags: synthesize a header from the first
    // reply with ECN disabled.
    Status reply = ack.getReply(0);
    headerFlag = PipelineAck.combineHeader(ECN.DISABLED, reply);
  } else {
    headerFlag = flagList.get(0);
  }
  return PipelineAck.getStatusFromHeader(headerFlag);
}
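The helper above treats each flag as a small integer that packs both an ECN value and a reply status. As a self-contained illustration of that combine/extract pattern, here is a minimal sketch. The bit positions chosen here (status in the low 5 bits, ECN in the bits above it) are assumptions for demonstration only, not the exact layout used by Hadoop's StatusFormat:

```java
public class HeaderFlagSketch {
    // Assumed illustrative layout: status ordinal in the low 5 bits,
    // ECN value in the bits above it. The real Hadoop bit layout may differ.
    private static final int STATUS_BITS = 5;
    private static final int STATUS_MASK = (1 << STATUS_BITS) - 1;

    /** Pack an ECN value and a status ordinal into one header flag. */
    static int combineHeader(int ecn, int status) {
        return (ecn << STATUS_BITS) | (status & STATUS_MASK);
    }

    /** Recover the status ordinal from a combined header flag. */
    static int getStatusFromHeader(int headerFlag) {
        return headerFlag & STATUS_MASK;
    }

    /** Recover the ECN value from a combined header flag. */
    static int getEcnFromHeader(int headerFlag) {
        return headerFlag >>> STATUS_BITS;
    }

    public static void main(String[] args) {
        int flag = combineHeader(1, 3);  // ECN=1, status=3
        System.out.println(getStatusFromHeader(flag)); // 3
        System.out.println(getEcnFromHeader(flag));    // 1
    }
}
```

The point of the pattern is backward compatibility: a receiver can always extract the status, whether or not the sender populated the ECN bits.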
Example #5
Source File: FanOutOneBlockAsyncDFSOutput.java From hbase with Apache License 2.0
private void setupReceiver(int timeoutMs) {
  AckHandler ackHandler = new AckHandler(timeoutMs);
  for (Channel ch : datanodeList) {
    // Decode varint32-length-prefixed PipelineAckProto messages off each
    // datanode channel, with an idle-state watchdog for timeouts.
    ch.pipeline().addLast(
      new IdleStateHandler(timeoutMs, timeoutMs / 2, 0, TimeUnit.MILLISECONDS),
      new ProtobufVarint32FrameDecoder(),
      new ProtobufDecoder(PipelineAckProto.getDefaultInstance()),
      ackHandler);
    ch.config().setAutoRead(true);
  }
}
Example #6
Source File: PipelineAck.java From hadoop with Apache License 2.0
/**** Writable interface ****/
public void readFields(InputStream in) throws IOException {
  proto = PipelineAckProto.parseFrom(vintPrefixed(in));
}
Example #7
Source File: PipelineAck.java From big-c with Apache License 2.0
/**** Writable interface ****/
public void readFields(InputStream in) throws IOException {
  proto = PipelineAckProto.parseFrom(vintPrefixed(in));
}
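The readFields examples rely on Hadoop's vintPrefixed helper, which strips a protobuf varint32 length prefix from the stream so that parseFrom reads exactly one message body. A self-contained sketch of that framing convention (helper names here are my own; this is not Hadoop's implementation):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class VarintFramingSketch {
    /** Write a protobuf-style unsigned varint32: 7 bits per byte, high bit = "more". */
    static void writeVarint32(OutputStream out, int value) throws IOException {
        while ((value & ~0x7F) != 0) {
            out.write((value & 0x7F) | 0x80);
            value >>>= 7;
        }
        out.write(value);
    }

    /** Read a protobuf-style unsigned varint32. */
    static int readVarint32(InputStream in) throws IOException {
        int result = 0;
        for (int shift = 0; shift < 32; shift += 7) {
            int b = in.read();
            if (b < 0) throw new IOException("EOF while reading varint");
            result |= (b & 0x7F) << shift;
            if ((b & 0x80) == 0) return result;
        }
        throw new IOException("varint32 too long");
    }

    /** Read one varint32-length-prefixed frame, as parseFrom(vintPrefixed(in)) would. */
    static byte[] readFrame(InputStream in) throws IOException {
        int len = readVarint32(in);
        byte[] buf = new byte[len];
        int off = 0;
        while (off < len) {
            int n = in.read(buf, off, len - off);
            if (n < 0) throw new IOException("EOF in frame body");
            off += n;
        }
        return buf;
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = "ack".getBytes(StandardCharsets.UTF_8);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeVarint32(out, payload.length);
        out.write(payload);
        byte[] frame = readFrame(new ByteArrayInputStream(out.toByteArray()));
        System.out.println(Arrays.equals(frame, payload)); // true
    }
}
```

This is the same convention Netty's ProtobufVarint32FrameDecoder implements on the receiving side in Example #5: without the length prefix, a stream of back-to-back protobuf messages has no self-delimiting boundary.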