While running an HBase count, a Hadoop DataNode's NIC appears to hang


While running a count in HBase, the NIC on one Hadoop DataNode appeared to hang; after the count timed out, the NIC recovered on its own. The DataNode's log reports the following errors:

org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1965446308-192.168.15.35-1498918077927:blk_1073743652_2952, type=HAS_DOWNSTREAM_IN_PIPELINE
java.io.EOFException: Premature EOF: no length prefix available
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2282)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1285)
        at java.lang.Thread.run(Thread.java:745)

2017-07-12 22:51:48,354 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1965446308-192.168.15.35-1498918077927:blk_1073743652_2952
java.io.IOException: Premature EOF from inputStream
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:897)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:802)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
        at java.lang.Thread.run(Thread.java:745)

2017-07-12 22:51:48,356 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in BlockReceiver.run():
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/192.168.15.31:50010 remote=/192.168.15.32:43398]. 480000 millis timeout left.


美国队长, R&D Engineer, Alibaba

This problem can have many causes; in my view the most likely one is a network connection timeout between nodes. As for counting data with HBase, you can use an HBase coprocessor to maintain a running total incrementally, which will be faster.
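As a sketch of the coprocessor route (and of the bundled distributed counter, another common alternative to the shell's client-side `count`): the table name `my_table` below is a placeholder, and both commands assume a running HBase cluster.

```shell
# Option 1: distributed row count with HBase's bundled MapReduce job,
# which scans each region in parallel instead of one client-side scan.
hbase org.apache.hadoop.hbase.mapreduce.RowCounter 'my_table'

# Option 2: attach HBase's stock aggregation coprocessor to the table.
# The count then runs server-side per region, invoked from client code
# via org.apache.hadoop.hbase.client.coprocessor.AggregationClient.rowCount().
# Coprocessor spec format is 'jar-path|class|priority|args'; the empty
# jar path means the class is already on the region servers' classpath.
hbase shell <<'EOF'
disable 'my_table'
alter 'my_table', METHOD => 'table_att',
  'coprocessor' => '|org.apache.hadoop.hbase.coprocessor.AggregateImplementation||'
enable 'my_table'
EOF
```

Either approach avoids the single long-running client scan that the shell's `count` performs, which is what tends to hit RPC/scanner timeouts on large tables.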

Internet services · 2017-07-26
Views: 3230

Answered by

美国队长
R&D Engineer, Alibaba
Areas of expertise: big data, big data platforms, databases
