How to manually kill HDFS DataNodes?

The HDFS stop scripts report that there are no DataNodes running on some nodes, like:

hdfs-node-000208: no datanode to stop

However, DataNode processes are still running on those nodes. How can these processes be cleaned up across many (hundreds of) nodes?

asked Dec 6, 2015 by anonymous

1 Answer

You may use this Bash snippet:

for i in `cat hadoop/etc/hadoop/slaves`; do \
  echo $i; \
  ssh $i \
    'jps | grep DataNode | cut -d" " -f1 \
    | xargs --no-run-if-empty -I@ bash -c "echo -- killing @; kill @"'; \
done

What it does: for each node listed in the slaves file, it SSHes in and runs jps to find DataNode Java processes, extracts their process IDs with cut, and passes those IDs to kill via xargs. The --no-run-if-empty flag makes xargs do nothing on nodes where no DataNode process is found.
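The filtering pipeline inside the ssh command can be tried out locally before running it against the whole cluster; here the jps output is mocked with a here-string (the PIDs are hypothetical):

```shell
# Mock jps output: one "<pid> <main class>" pair per line (hypothetical PIDs)
jps_output='12345 DataNode
23456 NodeManager
34567 DataNode'

# Same filter as in the answer: keep only DataNode lines,
# then take field 1 (the PID) using a space as the delimiter
pids=$(printf '%s\n' "$jps_output" | grep DataNode | cut -d" " -f1)

echo "$pids"
# prints:
# 12345
# 34567
```

Note that the NodeManager process is left untouched; only PIDs of DataNode processes would be handed to kill.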

answered Dec 6, 2015 by Eric Z Ma (44,280 points)
edited Dec 9, 2015 by Eric Z Ma


Copyright © SysTutorials. User contributions licensed under cc-wiki with attribution required.