Frame errors on new hosts on Hadoop cluster

0 votes
asked Nov 6, 2017 in Hadoop by anonymous
Hi All,

After expanding the Hadoop cluster by adding more nodes to the existing capacity, the cluster is up and running, but performance is very slow.

It also reports frame errors even though the servers are up.

1 Answer

0 votes
answered Nov 6, 2017 by admin (4,410 points)

Please check the MTU (Maximum Transmission Unit) value for the uplink and downlink on your hosts/nodes. The MTU defines the maximum frame size that can be transported at the data link layer, e.g. in an Ethernet frame.
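One way to check this is to list the MTU configured on each interface of a node and compare across the cluster. A minimal sketch follows; the `ip -o link show` output is inlined as a sample here so the snippet is self-contained, and the interface names (`eth0`, `eth1`) are hypothetical:

```shell
# Sample output of `ip -o link show`; on a real node, pipe the
# command's output into awk instead of this inlined sample.
sample='2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state UP
3: eth1: <BROADCAST,MULTICAST,UP> mtu 9000 qdisc mq state UP'

# Print "<interface>: <mtu>" for each link so mismatches stand out.
echo "$sample" | awk '{for (i = 1; i <= NF; i++) if ($i == "mtu") print $2, $(i+1)}'
```

Run this on every node (old and new) and confirm the values match; a node whose MTU differs from its peers is a likely source of the frame errors.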

Verify that the same MTU is configured on the switches connecting the nodes of the cluster; a mismatch anywhere along the path causes dropped or fragmented frames.

The standard Ethernet frame size is 1500 bytes. To maximize data transfer, jumbo frames can be enabled (where the hardware configuration supports them), raising the frame size to 9000 bytes and reducing per-frame overhead for bulk transfers such as HDFS block replication.
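As a rough illustration of why jumbo frames help, the arithmetic below (the 1 GiB payload size is just an example) counts how many frames are needed to move the same data at each MTU; fewer frames means fewer headers and fewer per-packet interrupts:

```shell
# Example payload: 1 GiB of data to transfer.
payload=$((1024 * 1024 * 1024))

# Ceiling division: frames needed at the standard vs. jumbo MTU.
echo "frames @1500: $(( (payload + 1499) / 1500 ))"
echo "frames @9000: $(( (payload + 8999) / 9000 ))"
```

The jumbo MTU needs roughly one sixth as many frames for the same payload, which is where the throughput gain comes from.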