Hadoop installation on Windows without Cygwin in 10 minutes

By Tushar Sarde | August 10, 2015

Hadoop installation on Windows without Cygwin in 10 minutes – Hadoop installation on Windows 7 or 8

Download
Before starting, make sure you have these two pieces of software: the Java JDK and the Hadoop 2.7.1 tar file.

Extract the downloaded tar file.

Configuration
Step 1 – Windows path configuration
Set the HADOOP_HOME path in the Windows environment variables.
Right-click on My Computer > Properties > Advanced system settings > Advanced tab > Environment Variables > click New

Set the Hadoop bin directory path.
Find the Path variable under System variables > click Edit > insert a semicolon (;) at the end and paste the path up to the Hadoop bin directory. In my case it is:

F:/Hortanwork/1gbhadoopram/Software/hadoop-2.7/hadoop-2.7.1/bin
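
If you prefer the command line, the same variables can be set with setx. A minimal batch sketch, assuming the extract location above (substitute your own paths):

@rem Persists HADOOP_HOME for the current user; takes effect in newly opened cmd windows.
setx HADOOP_HOME "F:\Hortanwork\1gbhadoopram\Software\hadoop-2.7\hadoop-2.7.1"
@rem Appends the Hadoop bin directory to the user PATH.
@rem Caution: setx truncates values over 1024 characters, so the GUI route above is safer for a long PATH.
setx PATH "%PATH%;F:\Hortanwork\1gbhadoopram\Software\hadoop-2.7\hadoop-2.7.1\bin"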


Step 2 – Hadoop configuration
Edit hadoop-2.7.1/etc/hadoop/core-site.xml, paste the following lines and save it.

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
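
Once saved, you can sanity-check that Hadoop is reading the file. Assuming HADOOP_HOME and PATH are set as in Step 1, open a new cmd window and run:

hdfs getconf -confKey fs.defaultFS

It should print hdfs://localhost:9000; if it prints file:/// instead, the edited file is not being picked up.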

Edit hadoop-2.7.1/etc/hadoop/mapred-site.xml, paste the following lines and save it.

<configuration>
   <property>
       <name>mapreduce.framework.name</name>
       <value>yarn</value>
   </property>
</configuration>
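
Note: the Apache 2.7.1 download ships this file only as mapred-site.xml.template, so if etc/hadoop/mapred-site.xml does not exist yet, copy the template first and then edit the copy:

copy %HADOOP_HOME%\etc\hadoop\mapred-site.xml.template %HADOOP_HOME%\etc\hadoop\mapred-site.xml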

Edit hadoop-2.7.1/etc/hadoop/hdfs-site.xml, paste the following lines, and save it. Please create a data folder somewhere first; in my case I created it inside my HADOOP_HOME directory.
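
For example, the two folders used below can be created from cmd; the paths mirror the values in the XML, so adjust both together if you pick another location:

cd /d F:\Hortanwork\1gbhadoopram\Software\hadoop-2.7\hadoop-2.7.1
mkdir data\namenode
mkdir data\datanode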

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>F:/Hortanwork/1gbhadoopram/Software/hadoop-2.7/hadoop-2.7.1/data/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>F:/Hortanwork/1gbhadoopram/Software/hadoop-2.7/hadoop-2.7.1/data/datanode</value>
    </property>
</configuration>

Edit hadoop-2.7.1/etc/hadoop/yarn-site.xml, paste the following lines and save it.

<configuration>
   <property>
       <name>yarn.nodemanager.aux-services</name>
       <value>mapreduce_shuffle</value>
   </property>
   <property>
       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
   </property>
</configuration>

Edit hadoop-2.7.1/etc/hadoop/hadoop-env.cmd, comment out the existing %JAVA_HOME% line by putting @rem at the start, give the proper path, and save it. (My JDK is in Program Files; to avoid the space in the path I used PROGRA~1.)
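
The relevant lines end up looking like this. This is a sketch assuming a JDK under Program Files; the jdk1.8.0_91 folder name is an example, so substitute your own JDK folder:

@rem set JAVA_HOME=%JAVA_HOME%
@rem PROGRA~1 is the 8.3 short name for "Program Files", which avoids the space in the path
set JAVA_HOME=C:\PROGRA~1\Java\jdk1.8.0_91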



Demo

Step 3 – Start everything

Very Important step

Before starting anything you need to add some Windows .dll and .exe files. Please download the bin folder from my GitHub repository – sardetushar_gitrepo_download

bin folder – this contains the .dll and .exe files (winutils.exe for Hadoop 2.7.1)

Now delete your existing bin folder and replace it with the new one (downloaded from my repo).

(In my GitHub repo) the etc folder is given just for reference; you need to modify the configuration parameters according to your own environment paths.

Open cmd and type ‘hdfs namenode -format’ – after execution you will see logs like the ones below.
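
The tail of a successful format looks roughly like this (timestamps omitted; the directory printed will match your dfs.namenode.name.dir value):

INFO common.Storage: Storage directory F:\Hortanwork\1gbhadoopram\Software\hadoop-2.7\hadoop-2.7.1\data\namenode has been successfully formatted.
INFO util.ExitUtil: Exiting with status 0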

Open cmd, point to the sbin directory, and type ‘start-all.cmd’

F:\Hortanwork\1gbhadoopram\Software\hadoop-2.7\hadoop-2.7.1\sbin>start-all.cmd

It will start the following processes:

Namenode

Datanode

YARN resourcemanager

YARN nodemanager
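
When you are done, the matching stop script in the same sbin directory shuts all four daemons down again:

F:\Hortanwork\1gbhadoopram\Software\hadoop-2.7\hadoop-2.7.1\sbin>stop-all.cmd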


JPS – to check that the services are running
Open cmd and type ‘jps’ (for jps, make sure your Java path is set properly)
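
The output should look something like this; the process IDs are illustrative and will differ, the four daemon names are what matter:

C:\>jps
4856 NameNode
7980 DataNode
6436 ResourceManager
2248 NodeManager
9512 Jps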

GUI
Step 4 – Namenode GUI, ResourceManager GUI
ResourceManager GUI address – http://localhost:8088

Namenode GUI address – http://localhost:50070

In the next tutorial we will see how to run MapReduce programs on Windows using Eclipse and this Hadoop setup.





192 thoughts on “Hadoop installation on Windows without Cygwin in 10 minutes”

  1. house clearance islington

I’m not that much of an internet reader to be honest but your blog’s really nice, keep it up! I’ll go ahead and bookmark your site to come back down the road. All the best

    Reply
  2. Meera

Hi, when I run hadoop-env.cmd, there is no response.
What can I do now?

    Reply
  3. Vedant

I am getting this error after running the hdfs namenode -format command:
Error: could not find or load main class…
How do I solve this?

    Reply
  4. Parth

It’s working!! The GUI shows it’s working!!! However, when I tried the jps command it said the command doesn’t exist!! I’ve set the JAVA path properly!

    Reply
  5. Mallikarjun Ratala

    Great Article. I was able to set up Hadoop successfully, but not in 10 mins

    Reply
  6. Manish Kumar suthar

It will run and work with this code, but I want to run a word count program using this, so please help me with this.
    ………………..

    Reply
  7. Thajudheen

Sir, are all these steps applicable to a Windows 32-bit machine? All the nodes start when entering start-all.cmd, but then it automatically shuts down all the nodes in Hadoop. What should I do about this error?

    Reply
  8. prashant

    C:\WINDOWS\system32>hdfs namenode -format
    ‘Files’ is not recognized as an internal or external command,
    operable program or batch file.
    Error: JAVA_HOME is incorrectly set.
    Please update C:\hadoop\hadoop-2.7.1\conf\hadoop-env.cmd
‘-Dhadoop.security.logger’ is not recognized as an internal or external command,
operable program or batch file.

    Reply
  9. Chandra Sekhar

It’s working. Prior to this I tried many guides which were not helpful. Thank you so much.

    Reply
  10. Naveen

    17/01/23 04:41:29 INFO impl.MetricsSystemImpl: NodeManager metrics system started
    17/01/23 04:41:29 FATAL nodemanager.NodeManager: Error starting NodeManager
    java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:609)
    at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)
    at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:187)
    at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:174)
    at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:108)
    at org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.testDirs(DirectoryCollection.java:290)
    at org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.checkDirs(DirectoryCollection.java:229)
    at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.checkDirs(LocalDirsHandlerService.java:379)
    at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.serviceInit(LocalDirsHandlerService.java:160)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
    at org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService.serviceInit(NodeHealthCheckerService.java:48)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:260)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:485)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:533)
    17/01/23 04:41:29 INFO impl.MetricsSystemImpl: Stopping NodeManager metrics system…
    17/01/23 04:41:29 INFO impl.MetricsSystemImpl: NodeManager metrics system stopped.
    17/01/23 04:41:29 INFO impl.MetricsSystemImpl: NodeManager metrics system shutdown complete.
    17/01/23 04:41:29 INFO nodemanager.NodeManager: SHUTDOWN_MSG:

    Reply
  11. Naveen

    17/01/23 04:41:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
    17/01/23 04:41:27 FATAL datanode.DataNode: Exception in secureMain
    java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:609)
    at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)
    at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:187)
    at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:174)
    at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:157)
    at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2344)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2386)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2368)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2260)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2307)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2484)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2508)
    17/01/23 04:41:27 INFO util.ExitUtil: Exiting with status 1

    Reply
  12. Naveen

    17/01/23 04:41:29 FATAL nodemanager.NodeManager: Error starting NodeManager
    java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:609)
    at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)
    at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:187)
    at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:174)
    at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:108)
    at org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.testDirs(DirectoryCollection.java:290)
    at org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.checkDirs(DirectoryCollection.java:229)
    at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.checkDirs(LocalDirsHandlerService.java:379)
    at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.serviceInit(LocalDirsHandlerService.java:160)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
    at org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService.serviceInit(NodeHealthCheckerService.java:48)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:260)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:485)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:533)
    17/01/23 04:41:29 INFO impl.MetricsSystemImpl: Stopping NodeManager metrics system…
    17/01/23 04:41:29 INFO impl.MetricsSystemImpl: NodeManager metrics system stopped.
    17/01/23 04:41:29 INFO impl.MetricsSystemImpl: NodeManager metrics system shutdown complete.
    17/01/23 04:41:29 INFO nodemanager.NodeManager: SHUTDOWN_MSG:

    Reply
  13. Qazi

    Hi,

I have completed every step but am stuck on the Namenode GUI address – http://localhost:50070

It seems I cannot open the port; I got this message:
This site can’t be reached

    localhost refused to connect.
    Search Google for localhost 50070
    ERR_CONNECTION_REFUSED

All 4 of my windows are open and running fine. I can see the application page. I tried to open the port in the firewall settings but the port is still not listed in open ports. Can I open this page on any other port? And why does the configuration file ‘core-site.xml’ say localhost:9000 and not 50070?

Please help so I can move ahead and start my Hadoop.

    Thanks

    QF

    Reply
  14. Ganesh

    Hi Tushar Sarde,

As I am new to Hadoop, I tried to replicate the same settings on my Windows machine, and I was successfully able to bring up the following nodes:
    C:\Tools\hadoop-2.7.1\sbin>jps
    10448 DataNode
    800 Launcher
    7556
    10728 NodeManager
    10044 ResourceManager
    8508 Jps

But it failed to start the namenode, and I am getting the below error:

    16/12/27 15:52:01 ERROR namenode.NameNode: Failed to start namenode.
    java.io.IOException: All specified directories are not accessible or do not exist.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:208)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:68

    Please kindly share your response.

    Many Thanks,
    Ganeshbabu R

    Reply
  15. Akanksha

Thank you so much, sir, for the insightful tutorial. I am able to create a namenode. But after running start-dfs from sbin I am getting an error that the system can’t find the file hadoop. I am able to get the version by running hadoop version. I also checked the java_home set in hadoop-env.cmd and it is JAVA_HOME=C://Java//jdk1.8.0_25. Could you please suggest?

    Reply
  16. H2P

    Hi,
Thanks for this helpful post, but while configuring I am facing the following problem; please suggest a solution if possible!

Using the “hdfs namenode -format” cmd, it successfully formatted the namenode:

    16/11/07 08:22:08 INFO namenode.FSImage: Allocated new BlockPoolId: BP
    16/11/07 08:22:08 INFO common.Storage: Storage directory c:\Hadoop\hadoop-2.7.1\data\dfs\namenode has been successfully formatted.
    16/11/07 08:22:09 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
    16/11/07 08:22:09 INFO util.ExitUtil: Exiting with status 0
    16/11/07 08:22:09 INFO namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at ImMortaL-PC/

But when I use the “start-dfs” cmd from sbin, it gives the following error messages for the namenode & datanode:

    [ for datanode ]:————————–
    16/11/07 08:29:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
    16/11/07 08:29:39 FATAL datanode.DataNode: Exception in secureMain
    java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:609)
    at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)
    at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:187)
    at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:174)
    at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:157)
    at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2344)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2386)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2368)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2260)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2307)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2484)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2508)
    16/11/07 08:29:39 INFO util.ExitUtil: Exiting with status 1
    16/11/07 08:29:39 INFO datanode.DataNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down DataNode at ImMortaL-PC/

    [ for namenode] :—————————-
    16/11/07 08:29:48 ERROR namenode.NameNode: Failed to start namenode.
    java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:609)
    at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:490)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:322)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:215)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:811)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:795)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
    16/11/07 08:29:48 INFO util.ExitUtil: Exiting with status 1
    16/11/07 08:29:48 INFO namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at ImMortaL-PC/************************************************************/

I have the following configuration for hdfs-site.xml:

<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///c:/Hadoop/hadoop-2.7.1/data/dfs/namenode</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///c:/Hadoop/hadoop-2.7.1/data/dfs/datanode</value>
</property>

    Please suggest some needful….!!

    Reply
  17. Conrad S

    This is really a great instruction. It took me 10 minutes to set it up. Thanks for putting this together!

    Reply
  18. sekhar

    Hi tushar

My name is sekhar. I did the same installation you mentioned above, but when I enter C:\hadoop-2.7.1\sbin>hdfs name node -format it says “error could not find or load main class my”.
Also, when I enter “start-all.cmd” it shows only the YARN nodemanager and YARN resourcemanager, and when I enter the jps command it shows only the jps value plus nodemanager and resourcemanager.
So what is the problem, and how do I get the hdfs namenode started?

    Reply
  20. Nitin

I have successfully installed Hadoop, but when I integrate Hadoop with Eclipse the DFS server does not work.
Eclipse shows the error:
” An internal error occurred during: “Connecting to DFS localhost”.
org/apache/htrace/SamplerBuilder “.
How do I solve it? And how do I run the MapReduce “WordCount” program in Eclipse, passing the txt file path?
Please help me to solve this error.

    Reply
    1. vairaprakash

Please share the procedure for a fully distributed Hadoop installation using this.

      Reply
      1. vairaprakash

Is it possible to install Pig and Hive here? Is a fully distributed configuration possible on Windows? If it is possible, please share the procedure, sir.

        Reply
  21. Kirti P. B.

Hello Tushar. It’s a useful article. I have installed as per the instructions, but while running start-all.cmd from the sbin folder, initially all four manager windows come up, but after that I am getting the following exception. I am unable to interpret it. Please suggest a solution. Thanks in anticipation.

[startup log truncated: several hundred wrapped lines of classpath jar entries under H:\RESEARCH\Hadoop\hadoop-2.7.1 omitted]
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by ‘jenkins’ on 2015-06-29T06:04Z
STARTUP_MSG: java = 1.7.0_79
************************************************************/
[Fatal Error] core-site.xml:2:6: The processing instruction target matching “[xX][mM][lL]” is not allowed.
16/09/23 16:42:45 FATAL conf.Configuration: error parsing conf core-site.xml
org.xml.sax.SAXParseException; systemId: file:/H:/RESEARCH/Hadoop/hadoop-2.7.1/hadoop-2.7.1/etc/hadoop/core-site.xml; lineNumber: 2; columnNumber: 6; The processing instruction target matching “[xX][mM][lL]” is not allowed.
at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2480)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2468)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2539)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2492)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2405)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1143)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1115)
at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1451)
at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:321)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:487)
at org.apache.hadoop.util.GenericOptionsParser.(GenericOptionsParser.java:170)
at org.apache.hadoop.util.GenericOptionsParser.(GenericOptionsParser.java:153)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1197)
16/09/23 16:42:45 FATAL resourcemanager.ResourceManager: Error starting ResourceManager
java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: file:/H:/RESEARCH/Hadoop/hadoop-2.7.1/hadoop-2.7.1/etc/hadoop/core-site.xml; lineNumber: 2; columnNumber: 6; The processing instruction target matching “[xX][mM][lL]” is not allowed.
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2645)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2492)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2405)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1143)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1115)
at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1451)
at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:321)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:487)
at org.apache.hadoop.util.GenericOptionsParser.(GenericOptionsParser.java:170)
at org.apache.hadoop.util.GenericOptionsParser.(GenericOptionsParser.java:153)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1197)
Caused by: org.xml.sax.SAXParseException; systemId: file:/H:/RESEARCH/Hadoop/hadoop-2.7.1/hadoop-2.7.1/etc/hadoop/core-site.xml; lineNumber: 2; columnNumber: 6; The processing instruction target matching “[xX][mM][lL]” is not allowed.
at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2480)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2468)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2539)
… 10 more
16/09/23 16:42:45 INFO resourcemanager.ResourceManager: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down ResourceManager at SOEX-PC-122-KirtiP/10.84.1.176
************************************************************/

    H:\RESEARCH\Hadoop\hadoop-2.7.1\hadoop-2.7.1\sbin>

    Reply
  22. ravi

I have successfully been able to use the above tutorial to set up Hadoop. However, I see this error in one Hadoop distribution window; the Hadoop namenode, Hadoop datanode and YARN resourcemanager are starting up.

16/09/17 14:37:46 INFO service.AbstractService: Service org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService failed in state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.BindException: Problem binding to [0.0.0.0:8040] java.net.BindException: Address already in use: bind; For more details see: http://wiki.apache.org/hadoop/BindException
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.BindException: Problem binding to [0.0.0.0:8040] java.net.BindException: Address already in use: bind; For more details see: http://wiki.apache.org/hadoop/BindException

    Reply
  23. praKASH

Good tutorial… I got it installed successfully and I can open it in the browser also.
Can you post an example of how to submit a job and see the output?

    Thanks

    Reply
  24. Rahi Akela

    Hi,

I want to install HBase on Windows 10. Please share the steps for how I should proceed, because I followed the above steps for Hadoop and it is working perfectly on Windows 10.

    Thanks
    Rahi

    Reply
  25. Balaji

I got this while running an example; please suggest.

    Application application_1473316663235_0005 failed 2 times due to AM Container for appattempt_1473316663235_0005_000002 exited with exitCode: -1073741515
    For more detailed output, check application tracking page:http://Admin-PC:8088/cluster/app/application_1473316663235_0005Then, click on links to logs of each attempt.
    Diagnostics: Exception from container-launch.
    Container id: container_1473316663235_0005_02_000001
    Exit code: -1073741515
    Stack trace: ExitCodeException exitCode=-1073741515:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
    at org.apache.hadoop.util.Shell.run(Shell.java:456)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
    Container exited with a non-zero exit code -1073741515
    Failing this attempt. Failing the application.

    Reply
  26. student

Hi sir,
I am getting errors when I run the command start-all.cmd in all 4 cmd windows, like “unable to load native-hadoop library for your platform” and, at the end, a SHUTDOWN_MSG.
How do I resolve this error?
    errors:
16/09/06 18:22:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
16/09/06 18:22:05 FATAL datanode.DataNode: Exception in secureMain
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:609)
at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)
at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:187)
at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:174)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:157)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2344)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2386)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2368)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2260)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2307)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2484)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2508)
16/09/06 18:22:05 INFO util.ExitUtil: Exiting with status 1
16/09/06 18:22:05 INFO datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at AGGARWAL-pc/192.168.1.35
************************************************************/

    C:\hadoop-2.7.1\sbin>

    Reply
  27. mark

The jps command is throwing this error:
    ‘jps’ is not recognized as an internal or external command,
    operable program or batch file.

    Reply
  28. Harish

    Hi Tushar,

I have installed Hadoop as per the instructions in this article. But I am facing a few problems here.

1. The jps command is throwing this error:
    ‘jps’ is not recognized as an internal or external command,
    operable program or batch file.

2. I am not able to access the URL http://localhost:50070.

    Can you please help

    Reply
  29. Muhammad Mahagne

Great tutorial, it saved me a lot of time.

    Thank you so much

    Reply
  30. ramesh

    I followed the tutorial, but I am having problems with the ‘start-all.cmd’ command from sbin. It gives the following error:
    ‘C:/ “C:\” org.apache.hadoop.util.PlatformName’ is not recognized as an internal or external command, operable program or batch file.
    The system cannot execute the specified program.
    The system cannot execute the specified program.
    You replied : The first C:/ is C:/javapath and the second C:\ is C:/hadoop-share-common-lib-path
    Could you please tell me where to make this change. I mean which file and in which directory.
    Thanks

    Reply
  31. Tushar Sarde Post author

Is this on the C drive? Try installing this setup on the D drive!

    16/08/13 23:14:02 ERROR namenode.NameNode: Failed to start namenode.
    java.io.IOException: All specified directories are not accessible or do not exist.
    at

    Reply
  32. Sumeet Gupta

This is hdfs-site.xml:

<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>C:/Installs/hadoop-2.7.1/data/namenode</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>C:/Installs/hadoop-2.7.1/data/datanode</value>
</property>

    Reply
  33. shank

The winutils.exe given in the GitHub repo is not compatible with my system. I have a 32-bit system.

    Reply
    1. Tushar Sarde Post author

Yes, this works only on 64-bit! I need to build this solution for 32-bit also.

      Reply
  34. Adeoye Adewale

I can’t successfully run default MapReduce jobs such as wordcount on my dataset….

    “C:\hadoop-2.7.1.tar\hadoop-2.7.1\sbin>hadoop jar \hadoop-2.7.1.tar\hadoop-2.7.1\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.7.1.jar wordcount /adeoye/input/reviews.txt /adeoye/output/success
    ‘C:\Program’ is not recognized as an internal or external command,
    operable program or batch file.
    16/07/29 21:32:31 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
    16/07/29 21:32:35 INFO input.FileInputFormat: Total input paths to process : 1
    16/07/29 21:32:36 INFO mapreduce.JobSubmitter: number of splits:1
    16/07/29 21:32:38 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1469862632042_0003
    16/07/29 21:32:39 INFO impl.YarnClientImpl: Submitted application application_1469862632042_0003
    16/07/29 21:32:40 INFO mapreduce.Job: The url to track the job: http://AdeoyeAdewale-2016:8088/proxy/application_1469862632042_0003/
    16/07/29 21:32:40 INFO mapreduce.Job: Running job: job_1469862632042_0003
    16/07/29 21:32:52 INFO mapreduce.Job: Job job_1469862632042_0003 running in uber mode : false
    16/07/29 21:32:52 INFO mapreduce.Job: map 0% reduce 0%
    16/07/29 21:32:52 INFO mapreduce.Job: Job job_1469862632042_0003 failed with state FAILED due to: Application application_1469862632042_0003 failed 2 times due to AM Container for appattempt_1469862632042_0003_000002 exited with exitCode: 1
    For more detailed output, check application tracking page:http://AdeoyeAdewale-2016:8088/cluster/app/application_1469862632042_0003Then, click on links to logs of each attempt.
    Diagnostics: Exception from container-launch.
    Container id: container_1469862632042_0003_02_000001
    Exit code: 1
    Exception message: CreateSymbolicLink error (1314): A required privilege is not held by the client.

    Stack trace: ExitCodeException exitCode=1: CreateSymbolicLink error (1314): A required privilege is not held by the client.

    at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
    at org.apache.hadoop.util.Shell.run(Shell.java:456)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

    Shell output: 1 file(s) moved.

    Container exited with a non-zero exit code 1
    Failing this attempt. Failing the application.
    16/07/29 21:32:53 INFO mapreduce.Job: Counters: 0″

Steps taken to resolve the issue:

Full control permission was given to the user, and it was run as administrator as well, but the issue still persists….

    Reply
    1. Tushar Sarde Post author

      ‘C:\Program’ is not recognized as an internal or external command,
      operable program or batch file.
      ………….. Try to install it in D drive

      Reply
  35. Ilsa

Hi. Thank you for the helpful article.
I am a newbie to Hadoop and this helped me get started with things. However, when I enter the command ‘hdfs namenode format’ it gives me the error ‘Error: Could not find or load main class K’. I have configured all the files as you explained and checked the environment variables etc. Can you please help me resolve this?

    Reply
    1. Tushar Sarde Post author

What is K? Do you have a K drive? Check your configuration files (XML files) in the /etc folder again.

      Reply
  36. shylendran

I am getting the below warning and error while running the “hdfs namenode -format” command. What could be the reason? I followed all the above-mentioned steps. Am I missing any libraries?

16/07/19 15:50:27 WARN namenode.FSEditLog: No class configured for C, dfs.namenode.edits.journal-plugin.C is empty
16/07/19 15:50:27 ERROR namenode.NameNode: Failed to start namenode.
java.lang.IllegalArgumentException: No class configured for C
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.getJournalClass(FSEditLog.java:1615)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1629)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:282)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:247)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:985)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
16/07/19 15:50:27 INFO util.ExitUtil: Exiting with status 1
16/07/19 15:50:27 INFO namenode.NameNode: SHUTDOWN_MSG:

    Reply
    1. Tushar Sarde Post author

Check your configuration files (the XML files in the etc folder); something is wrong with the files.

      Reply
  37. Somasundaram

Thanks TooDey Team.
It worked perfectly for the installation on the first run itself. I ran it on a desktop machine. Maybe I have to learn the problem solving and deploying.
Thanks again TOODEY

    Reply
  38. Atul S

Thanks for the tutorial. I was able to set up Hadoop 2.7.2 with JDK 1.8 on my machine. I was able to access the Hadoop sites

    http://localhost:50070
    http://localhost:8088

I closed all console programs. Now when I try to run the start-all.cmd command again I get the following error

    ————————-
    This script is Deprecated. Instead use start-dfs.cmd and start-yarn.cmd
    Error: JAVA_HOME is incorrectly set.
    Please update D:\hadoop-2.7.2\conf\hadoop-env.cmd
    ——————————

    If I run the start-dfs.cmd command then it gives the following error.

    ——————————-
    Error: JAVA_HOME is incorrectly set.
    Please update D:\hadoop-2.7.2\conf\hadoop-env.cmd
    DEPRECATED: Use of this script to execute hdfs command is deprecated.
    Instead use the hdfs command for it.
    Error: JAVA_HOME is incorrectly set.
    Please update D:\hadoop-2.7.2\conf\hadoop-env.cmd
    ‘-server’ is not recognized as an internal or external command,
    operable program or batch file.
    ——————————-

    Everything worked initially; now it has stopped working completely. I tried checking the JAVA_HOME path in environment variables, and it is properly set. Not sure why it stopped working all of a sudden.

    Atul

    Reply
    1. Tushar Sarde Post author

      Have you set JAVA_HOME in your hadoop-env.cmd file? Show me the code here, your JAVA_HOME.

      Reply
    2. Tushar Sarde Post author

      Atul, this winutils.exe file is only applicable for the hadoop 2.7.1 version on a 64-bit machine.

      Reply
      1. Atul S

        I tried with 2.7.1, I get the same error.

        D:\hadoop-2.7.1\bin>hdfs namenode -format
        Error: JAVA_HOME is incorrectly set.
        Please update D:\hadoop-2.7.1\conf\hadoop-env.cmd
        ‘-Dhadoop.security.logger’ is not recognized as an internal or external command,
        operable program or batch file.

        ———————-

        Java_Home code

        @rem The java implementation to use. Required.
        set JAVA_HOME=D:\Softwares\Java\JDK1.8.0_91

        Reply
        1. Geeta

          Nice tutorial..

          In the beginning I got:
          Error: JAVA_HOME is incorrectly set.
          Please update D:\hadoop-2.7.2\conf\hadoop-env.cmd
          which was resolved later.

          I had made a mistake in hadoop-2.7.1/etc/hadoop/hadoop-env.cmd.
          You need to set the JAVA_HOME path as “C:\PROGRA~1\Java\jdk1.7.0_51” instead of “C:\Program Files\Java\jdk1.7.0_51”
          in hadoop-2.7.1/etc/hadoop/hadoop-env.cmd

          Reply
  39. nag

    I followed the setup process as you explained; I am able to start all the processes except the namenode. Could you please suggest how to get it running? Thanks for the video.
    getting the below exception:

    16/07/04 21:39:41 WARN namenode.FSNamesystem: Encountered exception loading fsimage
    java.io.IOException: All specified directories are not accessible or do not exist.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:208)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
    16/07/04 21:39:41 INFO mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
    16/07/04 21:39:41 INFO impl.MetricsSystemImpl: Stopping NameNode metrics system…
    16/07/04 21:39:41 INFO impl.MetricsSystemImpl: NameNode metrics system stopped.
    16/07/04 21:39:41 INFO impl.MetricsSystemImpl: NameNode metrics system shutdown complete.

    Reply
    1. nag

      I was able to resolve the issue. I changed hdfs-site.xml as below and it worked on Windows 10, thanks.

      <configuration>
        <property>
          <name>dfs.replication</name>
          <value>1</value>
        </property>
        <property>
          <name>dfs.namenode.name.dir</name>
          <value>/Hadoop/data/namenode</value>
        </property>
        <property>
          <name>dfs.datanode.data.dir</name>
          <value>C:/Hadoop/data/datanode</value>
        </property>
      </configuration>

      Reply
    2. Tushar Sarde Post author

      Is this under your C drive? If yes, change the installation to another drive like D or E.

      Reply
  40. PJ

    Hi,

    I was stuck at running jps, where it did not return any values after I executed it. Also, http://localhost:50070/ is not running. I’ve followed all the steps and the comment suggestions exactly, but no luck. Could you please assist?

    Reply
    1. Tushar Sarde Post author

      could you please show me some errors or logs ?

      Reply
  41. Saurav Sircar

    Doesn’t work for me!

    Here’s the error I’m getting:
    C:\Users\Saurav2512>hdfs namenode -format
    Error: JAVA_HOME is incorrectly set.
    Please update C:\Hadoop\hadoop-2.7.2\conf\hadoop-env.cmd
    ‘-Dhadoop.security.logger’ is not recognized as an internal or external command,
    operable program or batch file.

    I don’t even have a “conf” folder present. Am I supposed to create one?
    Also, I’m using Java 8 and Hadoop 2.7.2. Is that the issue?

    Reply
      1. Sangeetha

        Hi Tushar,

        same problem with 2.7.1
        Can you please suggest?

        Reply
        1. Tushar Sarde Post author

          Hi Sangeetha, have you resolved your problem? Could you please describe it once again here?

          Reply
    1. Ashrunil Roy

      Hi,
      All your problems will go away once you install Java (jdk1.7.0_79) in a path where there is no space anywhere in the path to java.exe, e.g. C:\jdk1.7.0_79\bin\java.exe (no spaces here). Now the “conf” thing: change directory to “hadoop-2.7.2\libexec” and look for the file hadoop-config.cmd. Open the file and search for the string –
      if not exist %JAVA_HOME%\bin\java
      and replace it with
      if not exist “%JAVA_HOME%\bin\java.exe”
      save the file, then try to execute the command
      > hadoop version
      If the command goes through without errors, you are safe to move to the next stage.
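      In context, the edit inside hadoop-config.cmd looks roughly like this (a sketch against the stock 2.7.x script; the exact line may sit elsewhere in your copy):

      @rem before: the unquoted check breaks when JAVA_HOME contains spaces
      if not exist %JAVA_HOME%\bin\java.exe (
      @rem after: quoting keeps the path together
      if not exist "%JAVA_HOME%\bin\java.exe" (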

      Reply
  42. nathan

    Hi..
    thanks for the tutorial. I succeeded with localhost:8088/cluster, but I have 2 problems:
    1. when I try to open the namenode at localhost:50070 it fails -> the error is:
    this site can’t be reached

    2. c:\hadoop-2.7.1\sbin\jps
    ‘jps’ is not recognized as an internal or external command, operable program or batch file.

    Reply
      1. Ajith

        Is there a way to install apache hive on top of the above mentioned steps for the hadoop installation on windows?

        Reply
  43. Kamta Mishra

    I want to install HBase on Windows and connect it with a MapReduce program, so could you help me regarding that please?

    Reply
  44. brian

    Thank you for this article; it is very helpful.

    Where is the next one for eclipse and mapreduce?

    Reply
  45. Tahir Imtiaz

    Thanks a lot Tushar… nice tutorial. Initially there were many errors, but then I replaced my “etc” folder with yours and changed the java home path in hadoop-env and the paths in the hdfs-site files, and it worked perfectly… I installed java in the C drive, not in Program Files, using Windows 10 64 bit…

    Reply
    1. ayesha

      A.o.a
      How are you, Tahir Imtiaz? I have installed it, but how do I run and integrate other software like Flume and Hive? Kindly give me a suggestion.

      Reply
  46. Adeoye Adewale

    Good job author.

    Please help me. Only the resourcemanager task was shown when I ran “jps”, while the namenode, datanode and nodemanager never run…

    Your swift response will be appreciated, as it will help me in my undergraduate project defense.

    or contact me: adeoyeadewale@outlook.com
    Many Thanks

    Reply
  47. Raju Sabbani

    I solved that previous error;
    when I ran “start-all.cmd” the four terminals were opened properly.
    But I have one new problem: when I ran the “jps” command from “sbin” I am not able to see the below services

    Namenode
    Datanode
    YARN resourcemanager
    YARN nodemanager

    I am using windows-10 (64-bit)
    ##### when I ran jps it showed the output below ####
    C:\Hadoop\hadoop-2.7.1\sbin>jps

    C:\Hadoop\hadoop-2.7.1\sbin>

    Reply
    1. Adeoye Adewale

      Ensure you include your java path “C:\Program Files (x86)\Java\jdk1.7.0_03\bin” in the PATH system variable and run “jps”

      best of luck
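      For example, for the current cmd session only (a sketch; use the Environment Variables dialog to make it permanent, and adjust the JDK folder to your install):

      set PATH=%PATH%;C:\Program Files (x86)\Java\jdk1.7.0_03\bin
      jps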

      Reply
  48. Raju Sabbani

    Hi….
    I am getting this error; can you explain in detail please…..
    I set my java path like below.

    ( ////// @rem The java implementation to use. Required.
    @rem set JAVA_HOME=%JAVA_HOME%
    set JAVA_HOME=C:\Program Files\Java\jdk1.8.0 /////)

    ####The error is####

    C:\Hadoop\hadoop-2.7.1\etc\hadoop>hdfs namenode -format
    Error: JAVA_HOME is incorrectly set.
    Please update C:\Hadoop\hadoop-2.7.1\conf\hadoop-env.cmd
    ‘-Dhadoop.security.logger’ is not recognized as an internal or external command,
    operable program or batch file.

    Reply
  49. Thomas

    Hello Tushar,

    I’m getting this error when starting the data node: any ideas?
    I’m running Hadoop 2.7.2 on Windows 7

    STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e4
    STARTUP_MSG: java = 1.8.0_65
    ************************************************************/
    16/05/25 17:05:57 FATAL datanode.DataNode: Exception in secureMain
    java.lang.RuntimeException: Error while running command to get file permissions : ExitCodeException exitCode=-107374
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
    at org.apache.hadoop.util.Shell.run(Shell.java:456)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:815)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:798)
    at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
    at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSyste
    at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.jav
    at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:139)
    at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
    at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2341)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2383)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2365)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2257)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2304)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2481)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2505)

    at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSyste
    at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.jav
    at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:139)
    at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
    at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2341)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2383)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2365)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2257)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2304)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2481)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2505)
    16/05/25 17:05:57 INFO util.ExitUtil: Exiting with status 1
    16/05/25 17:05:57 INFO datanode.DataNode: SHUTDOWN_MSG:

    Thomas

    Reply
    1. Tushar Sarde Post author

      It’s trying to fetch something from the git repo, but because of security it’s not able to access the URL.
      Are you running this from an office (secure) network?

      Reply
  50. Parani

    Hi Thushar,

    Thank you so much for your post. It helped me set up Hadoop on a Windows machine. I have everything working except jps. When I issue the command jps, I see no results, no errors, nothing. But I can see all 4 services are up and running, and the localhost urls are also fine.

    Could you please tell me how to resolve this?

    Reply
    1. Tushar Sarde Post author

      It may be a problem with your java; try to add java into the path, and after setting up everything try opening a new cmd prompt session!

      Reply
  51. Rahul Rjk

    I am getting this error when trying to install on a win10 machine.
    Please help with this.

    16/05/21 21:07:08 ERROR namenode.NameNode: Failed to start namenode.
    java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:609)
    at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:490)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:322)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:215)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
    16/05/21 21:07:08 INFO util.ExitUtil: Exiting with status 1
    16/05/21 21:07:08 INFO namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at Raul/192.168.1.16
    ************************************************************

    Reply
    1. Rahul Rjk

      Getting this error as well

      STARTUP_MSG: java = 1.8.0_92
      ************************************************************/
      16/05/21 21:12:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
      16/05/21 21:12:15 FATAL datanode.DataNode: Exception in secureMain
      java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
      at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
      at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:609)
      at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)
      at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:187)
      at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:174)
      at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:157)
      at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2344)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2386)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2368)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2260)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2307)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2484)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2508)
      16/05/21 21:12:15 INFO util.ExitUtil: Exiting with status 1
      16/05/21 21:12:15 INFO datanode.DataNode: SHUTDOWN_MSG:
      /************************************************************
      SHUTDOWN_MSG: Shutting down DataNode at Raul/192.168.1.16
      ************************************************************/

      C:\Hadoop\hadoop-2.7.1\sbin>

      Reply
      1. Rahul

        Please help on the below error:-

        hadoop-mapreduce-client-jobclient-2.7.1.jar;C:\Hadoop\hadoop-2.7.1\share\hadoop\mapreduce\hadoop-mapreduce-client-shuffle-2.7.1.jar;C:\Hadoop\hadoop-2.7.1\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.7.1.jar
        STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by ‘jenkins’ on 2015-06-29T06:04Z
        STARTUP_MSG: java = 1.8.0_92
        ************************************************************/
        16/05/22 21:59:12 INFO namenode.NameNode: createNameNode []
        16/05/22 21:59:13 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
        16/05/22 21:59:13 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
        16/05/22 21:59:13 INFO impl.MetricsSystemImpl: NameNode metrics system started
        16/05/22 21:59:14 INFO namenode.NameNode: fs.defaultFS is hdfs://localhost:9000
        16/05/22 21:59:14 INFO namenode.NameNode: Clients are to use localhost:9000 to access this namenode/service.
        16/05/22 21:59:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
        16/05/22 21:59:14 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
        16/05/22 21:59:15 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
        16/05/22 21:59:15 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
        16/05/22 21:59:15 INFO http.HttpRequestLog: Http request log for http.requests.namenode is not defined
        16/05/22 21:59:15 INFO http.HttpServer2: Added global filter ‘safety’ (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
        16/05/22 21:59:15 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
        16/05/22 21:59:15 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
        16/05/22 21:59:15 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
        16/05/22 21:59:15 INFO http.HttpServer2: Added filter ‘org.apache.hadoop.hdfs.web.AuthFilter’ (class=org.apache.hadoop.hdfs.web.AuthFilter)
        16/05/22 21:59:15 INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
        16/05/22 21:59:15 INFO http.HttpServer2: Jetty bound to port 50070
        16/05/22 21:59:15 INFO mortbay.log: jetty-6.1.26
        16/05/22 21:59:15 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
        16/05/22 21:59:15 ERROR common.Util: Syntax error in URI C:\Hadoop\data\namenode. Please check hdfs configuration.
        java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\Hadoop\data\namenode
        at java.net.URI$Parser.fail(URI.java:2848)
        at java.net.URI$Parser.checkChars(URI.java:3021)
        at java.net.URI$Parser.parse(URI.java:3058)
        at java.net.URI.<init>(URI.java:588)
        at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:48)
        at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:98)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1400)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceDirs(FSNamesystem.java:1355)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkConfiguration(FSNamesystem.java:615)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:669)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
        16/05/22 21:59:15 WARN common.Util: Path C:\Hadoop\data\namenode should be specified as a URI in configuration files. Please update hdfs configuration.
        16/05/22 21:59:15 ERROR common.Util: Syntax error in URI C:\Hadoop\data\namenode. Please check hdfs configuration.
        java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\Hadoop\data\namenode
        at java.net.URI$Parser.fail(URI.java:2848)
        at java.net.URI$Parser.checkChars(URI.java:3021)
        at java.net.URI$Parser.parse(URI.java:3058)
        at java.net.URI.<init>(URI.java:588)
        at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:48)
        at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:98)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1400)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1445)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1414)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkConfiguration(FSNamesystem.java:617)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:669)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
        16/05/22 21:59:15 WARN common.Util: Path C:\Hadoop\data\namenode should be specified as a URI in configuration files. Please update hdfs configuration.
        16/05/22 21:59:15 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
        16/05/22 21:59:15 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
        16/05/22 21:59:15 ERROR common.Util: Syntax error in URI C:\Hadoop\data\namenode. Please check hdfs configuration.
        java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\Hadoop\data\namenode
        at java.net.URI$Parser.fail(URI.java:2848)
        at java.net.URI$Parser.checkChars(URI.java:3021)
        at java.net.URI$Parser.parse(URI.java:3058)
        at java.net.URI.<init>(URI.java:588)
        at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:48)
        at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:98)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1400)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceDirs(FSNamesystem.java:1355)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:670)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
        16/05/22 21:59:15 WARN common.Util: Path C:\Hadoop\data\namenode should be specified as a URI in configuration files. Please update hdfs configuration.
        16/05/22 21:59:15 ERROR common.Util: Syntax error in URI C:\Hadoop\data\namenode. Please check hdfs configuration.
        java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\Hadoop\data\namenode
        at java.net.URI$Parser.fail(URI.java:2848)
        at java.net.URI$Parser.checkChars(URI.java:3021)
        at java.net.URI$Parser.parse(URI.java:3058)
        at java.net.URI.<init>(URI.java:588)
        at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:48)
        at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:98)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1400)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1445)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1414)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:670)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
        16/05/22 21:59:15 WARN common.Util: Path C:\Hadoop\data\namenode should be specified as a URI in configuration files. Please update hdfs configuration.
        16/05/22 21:59:16 INFO namenode.FSNamesystem: No KeyProvider found.
        16/05/22 21:59:16 INFO namenode.FSNamesystem: fsLock is fair:true
        16/05/22 21:59:16 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
        16/05/22 21:59:16 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
        16/05/22 21:59:16 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
        16/05/22 21:59:16 INFO blockmanagement.BlockManager: The block deletion will start around 2016 May 22 21:59:16
        16/05/22 21:59:16 INFO util.GSet: Computing capacity for map BlocksMap
        16/05/22 21:59:16 INFO util.GSet: VM type = 32-bit
        16/05/22 21:59:16 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
        16/05/22 21:59:16 INFO util.GSet: capacity = 2^22 = 4194304 entries
        16/05/22 21:59:16 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
        16/05/22 21:59:16 INFO blockmanagement.BlockManager: defaultReplication = 1
        16/05/22 21:59:16 INFO blockmanagement.BlockManager: maxReplication = 512
        16/05/22 21:59:16 INFO blockmanagement.BlockManager: minReplication = 1
        16/05/22 21:59:16 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
        16/05/22 21:59:16 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
        16/05/22 21:59:16 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
        16/05/22 21:59:16 INFO blockmanagement.BlockManager: encryptDataTransfer = false
        16/05/22 21:59:16 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
        16/05/22 21:59:16 INFO namenode.FSNamesystem: fsOwner = Rahul (auth:SIMPLE)
        16/05/22 21:59:16 INFO namenode.FSNamesystem: supergroup = supergroup
        16/05/22 21:59:16 INFO namenode.FSNamesystem: isPermissionEnabled = true
        16/05/22 21:59:16 INFO namenode.FSNamesystem: HA Enabled: false
        16/05/22 21:59:16 INFO namenode.FSNamesystem: Append Enabled: true
        16/05/22 21:59:16 INFO util.GSet: Computing capacity for map INodeMap
        16/05/22 21:59:16 INFO util.GSet: VM type = 32-bit
        16/05/22 21:59:16 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
        16/05/22 21:59:16 INFO util.GSet: capacity = 2^21 = 2097152 entries
        16/05/22 21:59:16 INFO namenode.FSDirectory: ACLs enabled? false
        16/05/22 21:59:16 INFO namenode.FSDirectory: XAttrs enabled? true
        16/05/22 21:59:16 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
        16/05/22 21:59:16 INFO namenode.NameNode: Caching file names occuring more than 10 times
        16/05/22 21:59:16 INFO util.GSet: Computing capacity for map cachedBlocks
        16/05/22 21:59:16 INFO util.GSet: VM type = 32-bit
        16/05/22 21:59:16 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
        16/05/22 21:59:16 INFO util.GSet: capacity = 2^19 = 524288 entries
        16/05/22 21:59:16 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
        16/05/22 21:59:16 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
        16/05/22 21:59:16 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
        16/05/22 21:59:16 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
        16/05/22 21:59:16 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
        16/05/22 21:59:16 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
        16/05/22 21:59:16 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
        16/05/22 21:59:16 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
        16/05/22 21:59:16 INFO util.GSet: Computing capacity for map NameNodeRetryCache
        16/05/22 21:59:16 INFO util.GSet: VM type = 32-bit
        16/05/22 21:59:16 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
        16/05/22 21:59:16 INFO util.GSet: capacity = 2^16 = 65536 entries
        16/05/22 21:59:16 ERROR namenode.NameNode: Failed to start namenode.
        java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
        at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:609)
        at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:490)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:322)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:215)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
        16/05/22 21:59:16 INFO util.ExitUtil: Exiting with status 1
        16/05/22 21:59:16 INFO namenode.NameNode: SHUTDOWN_MSG:
        /************************************************************
        SHUTDOWN_MSG: Shutting down NameNode at Raul/192.168.1.16
        ************************************************************/

        C:\Hadoop\hadoop-2.7.1\sbin>

        Reply
        1. Tushar Sarde Post author

          I see a configuration error; please check the configuration again . . .

          16/05/22 21:59:15 ERROR common.Util: Syntax error in URI C:\Hadoop\data\namenode. Please check hdfs configuration
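          That “Illegal character in opaque part” comes from the backslashes in the configured path. A sketch of the hdfs-site.xml property, reusing the path from your log in a form the URI parser accepts (a leading slash; a file: URI, as suggested elsewhere in these comments, also works):

          <property>
              <name>dfs.namenode.name.dir</name>
              <value>/C:/Hadoop/data/namenode</value>
          </property>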

          Reply
      2. Tushar Sarde Post author

        16/05/21 21:12:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
        16/05/21 21:12:15 FATAL datanode.DataNode: Exception in secureMain
        java.lang.UnsatisfiedLinkError:

        This error is because it’s not able to find the HADOOP_HOME path, that is, the winutils.exe file.
        Try to configure HADOOP_HOME and also the path up to “$HADOOP_HOME/bin/”
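        For example, for the current cmd session (a sketch; set the same values permanently through the Environment Variables dialog, and adjust the folder to your install):

        set HADOOP_HOME=C:\Hadoop\hadoop-2.7.1
        set PATH=%PATH%;%HADOOP_HOME%\bin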

        Reply
    2. Tushar Sarde Post author

      Not sure about win 10, but try to configure hadoop on another drive like D or E instead of C.

      Reply
  52. Shikha

    I am new to Hadoop and trying to set up Hadoop following your tutorial on Windows 7.
    After running the ‘hdfs namenode -format’ command, I am getting the below error. I googled to understand the issue; many people suggested editing the bashrc file, but I am not able to find this file within my hadoop home. Could you please suggest where/what to look for?

    ———————————————————————————————————————————————————-
    C:\Shikha\Hadoop\hadoop-2.7.1\bin>hdfs namenode -format
    Picked up JAVA_TOOL_OPTIONS: -Duser.home=C:\Users\Shikha
    Exception in thread “main” java.lang.UnsupportedClassVersionError: org/apache/hadoop/util/PlatformName : Unsupported major.minor version 51.0
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
    at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
    Could not find the main class: org.apache.hadoop.util.PlatformName. Program will exit.
    Picked up JAVA_TOOL_OPTIONS: -Duser.home=C:\Users\43279591
    Exception in thread “main” java.lang.UnsupportedClassVersionError: org/apache/hadoop/hdfs/server/namenode/NameNode : Unsupported major.minor version 51.0
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
    at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
    Could not find the main class: org.apache.hadoop.hdfs.server.namenode.NameNode. Program will exit.

    Reply
    1. Tushar Sarde Post author

      Are you using a 32-bit windows machine? This setup works on 64-bit windows machines.

      Reply
      1. Shikha

        Thanks for responding.. It’s 64 bit only.. I am using jdk 1.6.. Could it be the root cause?

        Reply
          1. Tushar Sarde Post author

            Are you using 32 bit windows machine ?

            Reply
  53. Sameen S.

    Hi Tushar,

    Hope you are doing well. Thanks a lot for this fantastic tutorial. I have three out of four processes/services running; the only problem I am facing is with the datanode, I guess. Following is my error code, please guide!
    ————————————————————————————————————————————————————–
    16/05/07 13:55:19 WARN datanode.DataNode: Invalid dfs.datanode.data.dir F:\hadoop-2.7.1\hadoop-2.7.1\etc\hadoop\data\namenode :
    0: Unknown error
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:241)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:724)
    at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:502)
    at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:140)
    at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
    at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2344)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2386)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2368)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2260)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2307)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2484)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2508)
    16/05/07 13:55:20 FATAL datanode.DataNode: Exception in secureMain
    java.io.IOException: All directories in dfs.datanode.data.dir are invalid: “/F:/hadoop-2.7.1/hadoop-2.7.1/etc/hadoop/data/namenode/”
    at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2395)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2368)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2260)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2307)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2484)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2508)
    16/05/07 13:55:20 INFO util.ExitUtil: Exiting with status 1
    16/05/07 13:55:20 INFO datanode.DataNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down DataNode at Sameen/192.168.192.1
    —————————————————————————————————————————————————————-

    P.S: I am still a newbie to the Hadoop environment!
    Thanks,

    Reply
    1. Sameen S.

      Ok, I have got it. Issue resolved: there was an error in a path in hdfs-site.xml. I was giving the namenode path two times and skipping the datanode; that’s why it wasn’t appearing.
      Now I am waiting for the next tutorial, “In next tutorial we will see how to run mapreduce programs in windows using eclipse and this hadoop setup”.

      Any tentative plan when it is coming?

      Thanks,

      Reply
  54. MIQTA

    Everything went perfectly right and I thought the installation was done, but I am unable to use the GUI. http://localhost:8088/ returns the error that localhost refused to connect. What could be the reason?

    Reply
  55. kamakshaiah

    Everything is fine, but I am not able to format the namenode (hdfs namenode -format). Will you please let me know how to do it?

    Thanks

    Reply
    1. Tushar Sarde Post author

      Please give me more details; what is the error when you run the “hdfs namenode -format” command?

      Reply
  56. Hiren

    Hi Tushar, I could do this. Thanks for the tutorial. The only problem I found in the tutorial was that the use of / should be replaced by \ in hdfs-site.xml.

    Reply
  57. Javed

    16/04/27 19:47:49 INFO common.Storage: Cannot lock storage C:\DDrive\Hadoop\hadoop-2.7.1.tar\hadoop-2.7.1\data\datanode. The directory is already locked

    after that I removed the in_use lock file… then again used hdfs namenode -format
    and started again… but again the datanode stopped.

    using the same java as suggested… C:\Program Files\Java\jdk1.7.0_79\bin

    previously the namenode was not working; then I made the changes (‘/’ before the C drive) like below, then the namenode started but the datanode started failing.

    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>/C:/DDrive/Hadoop/hadoop-2.7.1.tar/hadoop-2.7.1/data/datanode</value>
      </property>
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>/C:/DDrive/Hadoop/hadoop-2.7.1.tar/hadoop-2.7.1/data/datanode</value>
      </property>
    </configuration>

    Please suggest.

    Reply
  58. Sanjay T Patil

    HI Tushar,
    This article really helped me get hadoop running on windows 10; can you please share a tutorial for mapreduce on windows?

    Thanks and Regards,
    Sanjay

    Reply
  59. nikunj mange

    hi, thanks for the above tutorial. I followed the above steps and I could start Hadoop. But when I issued the jps command, the Namenode is not getting listed. Hence I am not able to access the http://localhost:50070 link. Could you please help me resolve this?

    I am new to this; can you please also help me with how to use it in eclipse, or share some link?

    Reply
    1. ubosm

      the use of / should be replaced by \ in hdfs-site.xml

      Reply
  60. Ga

    Hi, where is the next tutorial?
    “In next tutorial we will see how to run mapreduce programs in windows using eclipse and this hadoop setup”

    Reply
  61. Aniket singh

    Hi Tushar,

    thanks a lot for this post. I am finally able to install hadoop, which I could not do earlier using cygwin.

    Do you have any other post on running a mapreduce job in hadoop? If yes, please send me the link.

    Thanks,
    Aniket

    Reply
  62. judson gabriel

    I’m trying to install the hadoop 2.7.2 version, but I’m having doubts about step 3. Please help.

    Reply
  63. midhun

    sir, I am getting an “org.apache.hadoop.mapred.InvalidInputException: Input Pattern hdfs://localhost:9001/user/user54/dblp/raw-000/* matches 0 files” error while running some code; can you please help me solve the issue?

    Reply
  64. Ahmed Attia

    First of all, this tutorial is perfect (A+), but I need a bit of help.
    I’m trying to use hadoop (2.7.1) for the first time using cygwin on win 8. I followed the configuration and I think everything is well. However, when I issue sbin/start-all.sh, I get the following error

    WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
    Also, when I try to use the browser and visit http://localhost:50030/ or http://localhost:50070/ I get the following message

    This site can’t be reached
    localhost refused to connect.
    How can I get my computer to stop refusing the connection? I don’t know much about networking or servers. Thanks in advance

    Reply
  65. Naveen

    I am getting the below error. Please help

    C:\Users\Swaroopa\Downloads\Hadoop\hadoop-2.7.1\bin>
    C:\Users\Swaroopa\Downloads\Hadoop\hadoop-2.7.1\bin>hdfs
    The system cannot find the path specified.
    Error: JAVA_HOME is incorrectly set.
    Please update C:\Users\Swaroopa\Downloads\Hadoop\hadoop-2.7.1\conf\hadoop-env.cmd
    The system cannot find the path specified.
    Error: JAVA_HOME is incorrectly set.
    Please update C:\Users\Swaroopa\Downloads\Hadoop\hadoop-2.7.1\conf\hadoop-env.cmd
    Hadoop common not found.

    Reply
  66. Radha Phadte

    Hi

    Thank you for giving the procedure for the Hadoop installation.

    I tried installing hadoop 2.7.1 by following the procedure given by you, but it was not successful.

    My Java version is 1.8.0

    The Yarn resourcemanager and Yarn nodemanager are getting started.

    But the Data Node and Name Node are giving errors. It’s an error from java: “UnsatisfiedLinkError”.

    Please Help me through this…

    Thank You 🙂

    Reply
    1. Kunal Relia

      Hi. I am facing the same problem. Help on this matter will be highly appreciated.

      Reply
      1. Prabakaran

        You might be using a JDK older than 1.7.1; use a newer JDK like 1.8 and it should start fine.
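        For example, you can confirm which JDK the command line actually picks up with:

        java -version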

        Reply
  67. anu

    Hi Tushar,

    Thanks very much for the tutorial. It’s very crisp and easy to learn from.

    I followed the tutorial, but I am having problems with the ‘start-all.cmd’ command from sbin. It gives the following error:

    ‘C:/ “C:\” org.apache.hadoop.util.PlatformName’ is not recognized as an internal or external command, operable program or batch file.
    The system cannot execute the specified program.
    The system cannot execute the specified program.

    Can you please help? I have followed the tutorial exactly but I still can’t start all the hadoop components.

    Reply
    1. anu

      The first C:/ is C:/javapath and the second C:\ is C:/hadoop-share-common-lib-path.

      Reply
  68. Zem Wilbury

    Great tutorial. The only one among many I could use. Congrats!

    Reply
  69. Ricardo Merino

    Great tutorial!! thank you so much.

    As a new user of Hadoop I have a question:
    Do I need to format the namenode every time I want to start the daemons?

    Thanks a lot again!

    Reply
    1. Tushar Sarde Post author

      No, you don’t need to. If you format the namenode, all metadata-related information will be gone (metadata like file size, replication etc.).

      Reply
  70. Praveen Sahu

    Hi Tushar,

    Thanks a lot for summing up the process.
    I am using hadoop 2.7.1, jdk 1.7 and all the config provided by you. While running hadoop namenode -format I am getting this error:
    WARN namenode.FSEditLog: No class configured for F, dfs.namenode.edits.journal-plugin.F is empty
    16/03/12 13:40:36 ERROR namenode.NameNode: Failed to start namenode.
    java.lang.IllegalArgumentException: No class configured for F
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.getJournalClass(FSEditLog.java:1615)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1629)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:282)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:247)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:985)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
    16/03/12 13:40:36 INFO util.ExitUtil: Exiting with status 1
    16/03/12 13:40:36 INFO namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************

    Kindly help

    Reply
    1. Tushar Sarde Post author

      Looks like you don’t have an F drive; what path are you setting in your hdfs-site.xml file?

      Reply
      1. Praveen Sahu

        It is resolved. Actually, I was giving a space in the java location path.
        Now, if I want to run a hadoop program, please let me know if there is any tutorial.

        The installation tutorial is great, man. Thanks!!

        Reply
    2. Vipin Singh

      Change the property in hdfs-site.xml.

      Basically, you will have to replace the drive name with file:
      E.g. F:/somefolder/otherfolder becomes file:/somefolder/otherfolder

      <configuration>
        <property>
          <name>dfs.replication</name>
          <value>1</value>
        </property>
        <property>
          <name>dfs.namenode.name.dir</name>
          <value>file:/Hadoop/Hadoop2.7/hadoop-2.7.1/data/namenode</value>
        </property>
        <property>
          <name>dfs.datanode.data.dir</name>
          <value>file:/Hadoop/Hadoop2.7/hadoop-2.7.1/data/datanode</value>
        </property>
      </configuration>

      Reply
  71. Sameen S.

    Hi Tushar,
    Thank you for such a plain and vivid tutorial on this; it is really helpful even for beginners.
    Why am I getting the following message?
    “hdfs is not recognized as internal or external command”
    P.S: I am quite a newbie to Hadoop, please guide!

    Reply
    1. Tushar Sarde Post author

      You need to set your HADOOP_HOME and HADOOP_CMD paths properly.

      Your windows CMD is not able to find hadoop.cmd; that’s why you are getting “hdfs is not recognized as internal or external command”.
      Like in Java, we get “java is not recognized as internal or external command” if we have not set any path.

      Check your system path again and let me know
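      A quick sanity check in a fresh cmd window, assuming HADOOP_HOME and the bin path are set as described above:

      echo %HADOOP_HOME%
      where hadoop.cmd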

      Thx

      Reply
  72. Mayank

    Hi Tushar,

    I am using your settings, but I am using the Windows 32-bit version. Can you please give the winutils.exe and dll compiled for the windows 32-bit version, or please guide us on how you built it.

    Thanks for the tutorial, its really nice.

    Reply
    1. Tushar Sarde Post author

      Hi, I will build that for you; actually it’s a very long process. I will upload the tutorial, but it will take time 🙁

      Reply
      1. yash

        Hi Tushar,

        Any luck with the win 7 32-bit utils please? Thanks in advance.

        Reply
        1. Tushar Sarde Post author

          Not tried yet, Yash, sorry for this, but I will upload it in November 2016

          Reply
      2. Ritika

        Hi Tushar,
        Can you please build it for 32-bit and share? I am trying to set up hadoop on 32-bit Windows. Thanks in advance.

        Regards
        Ritika

        Reply
        1. Tushar Sarde Post author

          Hi Ritika,

          The 32-bit tutorial will be available from Oct. 2016; sorry for the delay

          Thanks
          Tushar

          Reply
  73. MR

    Thanks for this tutorial!!
    Though I have a problem in the last part

    In step 3, I replaced my bin folder with yours, but when I try executing hdfs namenode -format, I get an error:

    C:\>hdfs namenode -format
    ‘hdfs’ is not recognized as an internal or external command,
    operable program or batch file.

    (I have hadoop 2.7.2 and jdk 1.8, if that makes any difference. And I only had a template for the mapreduce config: mapred-site.xml.template)

    Reply
      1. MR

        Thank you for your reply.

        I redid the tutorial from scratch with hadoop 2.7.1, and I’m still getting the same error message.
        And I don’t have mapred-site.xml, just mapred-site.xml.template (yes, even in 2.7.1).
        So I added the (your code here) at the end of the template document.

        Reply
        1. Jay

          I’m getting the same issue, MR. If you found the solution, please tell me too. Thanks in advance.

          Reply
    1. Zem Wilbury

      Go to the bin directory and try

      ./hdfs namenode -format
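      Note: on a plain Windows cmd prompt the ./ prefix is not needed; from the bin directory the command is simply:

      cd %HADOOP_HOME%\bin
      hdfs namenode -format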

      Reply
      1. Sameen S.

        Hi Zem,

        I tried this too, but none of the above solutions by Tushar are working for me. I tried this one as well, but the same error appears, though I am using Hadoop 2.7.1. Please guide!

        Reply
    1. Tushar Sarde Post author

      Kindly explain your question to me? jps is just a command.

      Reply
    2. Malla

      Go to your Java 7/8 installation directory (for ex: C:\Program Files\Java\jdk1.8.0_91\bin) and try the below command:

      C:\Program Files\Java\jdk1.8.0_91\bin>jps
      8288 GradleDaemon
      8752
      7092 DataNode
      7556 Jps
      2328 NameNode
      9800 ResourceManager
      9864 NodeManager

      C:\Program Files\Java\jdk1.8.0_91\bin>

      Reply
  74. karhtick

    when I use ‘hdfs namenode -format’ in the command prompt it says hdfs is not recognized as an internal or external command

    Reply
    1. Tushar Sarde Post author

      Add this in hdfs-site.xml:

      <property>
          <name>fs.checkpoint.dir</name>
          <value>F:\secondarynamenode</value>
      </property>

      and start the services again:

      sbin/start-all.cmd

      Reply
  75. karhtick

    I can’t understand what you are trying to say in this following step

    Step 3 – Start everything

    Very Important step

    Before starting everything you need to add some [dot].dll and [dot].exe files of windows please download bin folder from my github repository – sardetushar_gitrepo_download

    bin folder – this contains .dll and .exe file (winutils.exe for hadoop 2.7.1)

    Now delete you existing bin folder and replace with new one (downloaded from my repo )

    Reply
    1. Tushar Sarde Post author

      I am saying: delete your existing bin folder and download the new bin folder from my repository, which contains the .dll and .exe files

      Reply
  76. raj

    Hi,

    Thanks for your post.
    I tried to follow many links to install it, but yours is very simple.

    I tried to install hadoop on windows 7 on a 32-bit machine. I followed your steps. The Java version is 1.7.0_95.
    I have a few doubts; please clear them up if possible.

    1. When I execute hdfs version, I get an error.
    When I execute hadoop version, it shows the version details. What is the difference and the issue?

    D:\Hadoop_TEST\Hadoop\hadoop-2.7.1\bin>hdfs version
    Error: Could not find or load main class version

    D:\Hadoop_TEST\Hadoop\hadoop-2.7.1\bin>hadoop version
    Hadoop 2.7.1
    Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a
    Compiled by jenkins on 2015-06-29T06:04Z
    Compiled with protoc 2.5.0
    From source with checksum fc0a1a23fc1868e4d5ee7fa2b28a58a
    This command was run using /D:/Hadoop_TEST/Hadoop/hadoop-2.7.1/share/hadoop/common/hadoop-common-2.7.1.jar

    2. I got status 0 when executing hdfs namenode -format. It created 4 files ( fsimage_0000000000000000000, fsimage_0000000000000000000.md5, seen_txid and VERSION ),
    but I got the below errors and warnings while executing the command.

    Log
    —————

    p_TEST\Hadoop\hadoop-2.7.1\share\hadoop\common\lib\hadoop-annotations-2.7.1.jar;D:\Hadoop_TEST\Hadoop\hadoop-2.7.1\share\hadoop\common\lib\hadoop-auth2.7.1.jar;D:\Hadoop_TEST\Hadoop\hadoop2.7.1\share\hadoop\common\lib\hamcrest-core-1.3.jar;D:\Hadoop_TEST\Hadoop\hadoop-2.7.1\share\hadoop\common\lib\htrace-core-3.1.0-i
    .
    .
    .

    STARTUP_MSG: java = 1.7.0_95
    ************************************************************/
    16/02/19 14:42:52 INFO namenode.NameNode: createNameNode [-format]
    16/02/19 14:42:53 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
    16/02/19 14:42:53 ERROR common.Util: Syntax error in URI D:\Hadoop_TEST\Hadoop\Data. Please check hdfs configuration.
    java.net.URISyntaxException: Illegal character in opaque part at index 2: D:\Hadoop_TEST\Hadoop\Data
    at java.net.URI$Parser.fail(URI.java:2829)
    at java.net.URI$Parser.checkChars(URI.java:3002)
    at java.net.URI$Parser.parse(URI.java:3039)
    at java.net.URI.<init>(URI.java:595)
    at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:48)
    at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:98)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1400)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceDirs(FSNamesystem.java:1355)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:966)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
    16/02/19 14:42:53 WARN common.Util: Path D:\Hadoop_TEST\Hadoop\Data should be specified as a URI in configuration files. Please update hdfs configuration.
    16/02/19 14:42:53 ERROR common.Util: Syntax error in URI D:\Hadoop_TEST\Hadoop\Data. Please check hdfs configuration.
    java.net.URISyntaxException: Illegal character in opaque part at index 2: D:\Hadoop_TEST\Hadoop\Data
    at java.net.URI$Parser.fail(URI.java:2829)
    at java.net.URI$Parser.checkChars(URI.java:3002)
    at java.net.URI$Parser.parse(URI.java:3039)
    at java.net.URI.<init>(URI.java:595)
    at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:48)
    at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:98)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FS
    .
    .
    .
    .
    16/02/19 14:42:56 INFO namenode.FSImage: Allocated new BlockPoolId: BP-507165786-10.219.149.100-1455873176446
    16/02/19 14:42:56 INFO common.Storage: Storage directory D:\Hadoop_TEST\Hadoop\Data has been successfully formatted.
    16/02/19 14:42:56 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
    16/02/19 14:42:56 INFO util.ExitUtil: Exiting with status 0
    16/02/19 14:42:56 INFO namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at PC195/10.219.149.100
    ************************************************************/

    D:\Hadoop_TEST\Hadoop\hadoop-2.7.1\bin>

    Could you please confirm whether I am on the right track?

    Thanks ,
    Raj

    Note: It was somewhat difficult to align the text.

    Reply
    1. Tushar Sarde Post author

      Hi Raj,

      I apologise, but this setup is for a 64-bit Windows machine; I built the DLLs on a 64-bit machine.
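
      Also, a note on the URI warnings in your log: Hadoop's own message says the path should be specified as a URI, so one likely fix (just a sketch, assuming your data folders sit under D:\Hadoop_TEST\Hadoop\Data) is to write the directories as file URIs in hdfs-site.xml:

      <property>
          <name>dfs.namenode.name.dir</name>
          <value>file:///D:/Hadoop_TEST/Hadoop/Data/namenode</value>
      </property>
      <property>
          <name>dfs.datanode.data.dir</name>
          <value>file:///D:/Hadoop_TEST/Hadoop/Data/datanode</value>
      </property>

      The backslashes and the bare drive colon are what break the URI parser, so forward slashes with the file:/// prefix avoid the URISyntaxException.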

      Reply
  77. raj

    Hi,

    Thanks for your post.
    I tried many links for the installation, but yours is the simplest.

    I tried to install Hadoop on a 32-bit Windows 7 machine, following your steps. My Java version is 1.7.0_95.
    I have a few doubts; please clear them up if possible.

    1. When I execute hdfs version, I get an error, but when I execute hadoop version, it shows the version details. What is the difference, and what is the issue?

    D:\Hadoop_TEST\Hadoop\hadoop-2.7.1\bin>hdfs version
    Error: Could not find or load main class version

    D:\Hadoop_TEST\Hadoop\hadoop-2.7.1\bin>hadoop version
    Hadoop 2.7.1
    Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a
    Compiled by jenkins on 2015-06-29T06:04Z
    Compiled with protoc 2.5.0
    From source with checksum fc0a1a23fc1868e4d5ee7fa2b28a58a
    This command was run using /D:/Hadoop_TEST/Hadoop/hadoop-2.7.1/share/hadoop/common/hadoop-common-2.7.1.jar

    2. I got status 0 when executing hdfs namenode -format, and it created four files (fsimage_0000000000000000000, fsimage_0000000000000000000.md5, seen_txid and VERSION),
    but I got the errors and warnings below while executing the command.

    p_TEST\Hadoop\hadoop-2.7.1\share\hadoop\common\lib\hadoop-annotations-2.7.1.jar;

    STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by ‘jenkins’ on 2015-06-29T06:04Z
    STARTUP_MSG: java = 1.7.0_95
    ************************************************************/
    16/02/19 14:42:52 INFO namenode.NameNode: createNameNode [-format]
    16/02/19 14:42:53 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
    16/02/19 14:42:53 ERROR common.Util: Syntax error in URI D:\Hadoop_TEST\Hadoop\Data. Please check hdfs configuration.
    java.net.URISyntaxException: Illegal character in opaque part at index 2: D:\Hadoop_TEST\Hadoop\Data
    at java.net.URI$Parser.fail(URI.java:2829)
    at java.net.URI$Parser.checkChars(URI.java:3002)
    at java.net.URI$Parser.parse(URI.java:3039)
    at java.net.URI.<init>(URI.java:595)
    at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:48)
    at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:98)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1400)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceDirs(FSNamesystem.java:1355)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:966)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
    16/02/19 14:42:53 WARN common.Util: Path D:\Hadoop_TEST\Hadoop\Data should be specified as a URI in configuration files. Please update hdfs configuration.
    16/02/19 14:42:53 ERROR common.Util: Syntax error in URI D:\Hadoop_TEST\Hadoop\Data. Please check hdfs configuration.
    java.net.URISyntaxException: Illegal character in opaque part at index 2: D:\Hadoop_TEST\Hadoop\Data
    at java.net.URI$Parser.fail(URI.java:2829)
    at java.net.URI$Parser.checkChars(URI.java:3002)
    at java.net.URI$Parser.parse(URI.java:3039)
    at java.net.URI.<init>(URI.java:595)
    at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:48)
    at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:98)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1400)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1445)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1414)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:971)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)

    16/02/19 14:42:56 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
    16/02/19 14:42:56 INFO util.ExitUtil: Exiting with status 0
    16/02/19 14:42:56 INFO namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at PC195/10.219.149.100
    ************************************************************/

    D:\Hadoop_TEST\Hadoop\hadoop-2.7.1\bin>

    Reply
  78. Nithya

    I used the above steps to install Hadoop, but I got the following error. Please suggest how I can proceed further.
    16/02/18 09:00:02 ERROR namenode.NameNode: Failed to start namenode.
    java.lang.IllegalArgumentException: No class configured for D
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.getJournalClass(FSEditLog.java:1615)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1629)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:282)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:247)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:985)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
    16/02/18 09:00:02 INFO util.ExitUtil: Exiting with status 1
    16/02/18 09:00:02 INFO namenode.NameNode: SHUTDOWN_MSG:
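
    A note on this error: “No class configured for D” typically means a storage directory was given as a bare Windows path such as D:\..., so Hadoop parses the “D” before the colon as a journal scheme. Writing the directory as a file URI in hdfs-site.xml usually avoids it; a minimal sketch, using a hypothetical D:\hadoop\data\namenode folder:

        <property>
            <name>dfs.namenode.name.dir</name>
            <value>file:///D:/hadoop/data/namenode</value>
        </property>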

    Reply
  79. marco

    Wonderful tutorial.
    Now, could you point us to the tutorial “how to run mapreduce programs in windows using eclipse and this hadoop setup”?
    thanks
    Marco

    Reply
  80. Bhushan

    “Below is the message that is shown before it shuts down”

    [Fatal Error] core-site.xml:28:22: The processing instruction target matching “[xX][mM][lL]” is not allowed.
    16/02/15 16:24:56 FATAL conf.Configuration: error parsing conf core-site.xml
    org.xml.sax.SAXParseException; systemId: file:/C:/hadoop-2.7.1/etc/hadoop/core-site.xml; lineNumber: 28; columnNumber: 22; The processing instruction target matching “[xX][mM][lL]” is not allowed.

    Reply
    1. Tushar Sarde Post author

      Could you please paste your lines 25–30? I need to see those lines.

      Reply
      1. Bhushan

        I am not sure if you are able to view all the lines in the comment above, so I have pasted the lines again below.

        <!--

        fs.defaultFS

        hdfs://localhost:9000

        -->

        Reply
        1. Tushar Sarde Post author

          Delete your existing core-site.xml file and use the core-site.xml file from my GitHub account.
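
          For reference, that parser error (“The processing instruction target matching [xX][mM][lL] is not allowed”) means an <?xml ...?> declaration shows up somewhere after the start of the file, here at line 28, usually because a second declaration got pasted in mid-file. If the declaration is present at all, it must be the very first thing:

          <?xml version="1.0" encoding="UTF-8"?>
          <!-- nothing, not even a blank line or a stray <?xml?>, may come before the line above -->
          <configuration>
              ...
          </configuration>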

          Reply
  81. Bhushan

    Hi Tushar,

    I could run all of the steps up until the ‘jps’ command. The 4 windows with the processes did open but the final message was

    SHUTDOWN_MSG: Shutting down ResourceManager at Bhushan-PC/”some IP address”

    And similarly for Namenode, Datanode, NodeManager…..

    How exactly do I get to step 4? When I try opening the links mentioned in step 4, it says
    ——————–
    This webpage is not available

    ERR_CONNECTION_REFUSED
    Hide details
    Google Chrome’s connection attempt to localhost was rejected. The website may be down, or your network may not be properly configured.
    ———————

    Could you kindly help me with this. My JAVA_HOME variable is properly set. I am using Windows 10.

    Thanks,
    Bhushan.

    Reply
    1. Zomel

      Make sure to keep the four windows open when you run the “jps” command and when you open the localhost URLs; otherwise this won't work.
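
      One quick way to confirm the web UIs are actually listening while those windows are open (standard Windows commands, not from the original post):

      jps
      netstat -ano | findstr ":8088 :50070"

      jps should list NameNode, DataNode, ResourceManager and NodeManager, and the netstat line should show both ports in LISTENING state before the browser will connect.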

      Reply
  82. Anantharaman

    Can you please provide setup files like this for Hadoop 2.7.2 as well?

    Also, can you tell me how you made those exe, dll & lib files?

    Reply
    1. Tushar Sarde Post author

      It's a very long process; I made those exe files using Visual Studio 2010 and the Hadoop source code.
      For your need I will write a tutorial on this; please wait for some time.
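
      If anyone wants to try it before that tutorial appears: the BUILDING.txt in the Hadoop source tree describes building the Windows native bits (winutils.exe and the DLLs) with Maven on a machine that has the Windows SDK / Visual Studio installed, roughly:

      mvn package -Pdist,native-win -DskipTests -Dtar

      This is only a pointer to the upstream docs, not a full recipe; the exact prerequisites are listed in BUILDING.txt.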

      Reply
      1. Sumeet Gupta

        Hello Tushar,

        Hope you are doing good. Thanks for the tutorial. It is very descriptive and your efforts are highly appreciated.

        However, when I try to start the DFS through start-dfs.cmd, both the NameNode and the DataNode fail to start, with different errors. If possible, could you please share your thoughts on what I can do to fix this?

        Please note that I am trying to install it on a 64-bit Windows 10 PC. The paths I am trying to allocate to the namenode and datanode exist. I am appending logs from both the datanode and namenode windows below for your reference.

        Awaiting your response. Many thanks for your time

        Kind Regards,
        Sumeet

        —————————————————————————-DATANODE—————————————————————————-
        DEPRECATED: Use of this script to execute hdfs command is deprecated.
        Instead use the hdfs command for it.
        16/08/13 23:29:05 INFO datanode.DataNode: STARTUP_MSG:
        /************************************************************
        STARTUP_MSG: Starting DataNode
        STARTUP_MSG: host = Sumeet-BOT/192.168.56.1
        STARTUP_MSG: args = []
        STARTUP_MSG: version = 2.7.1
        STARTUP_MSG: classpath = C:\Installs\hadoop-2.7.1\etc\hadoop;C:\Installs\hadoop-2.7.1\share\hadoop\common\lib\activation-1.1.jar;C:\Installs\hadoop-2.7.1\share\hadoop\common\lib\apacheds-i18n-2.0.0-M15.jar;C:\Installs\hadoop-2.7.1\share\hadoop\common\lib\apacheds-kerberos-codec-2.0.0-
        ----------------------------------------------------------------------------NAMENODE----------------------------------------------------------------------------
        16/08/13 23:14:02 ERROR namenode.NameNode: Failed to start namenode.
        java.io.IOException: All specified directories are not accessible or do not exist.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:208)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
        16/08/13 23:14:02 INFO util.ExitUtil: Exiting with status 1
        16/08/13 23:14:02 INFO namenode.NameNode: SHUTDOWN_MSG:
        /************************************************************
        SHUTDOWN_MSG: Shutting down NameNode at Sumeet-BOT/192.168.56.1
        ************************************************************/
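
        In case it helps others hitting the same “All specified directories are not accessible or do not exist” message: a common cause is starting the daemons before the NameNode storage directory has been formatted, or pointing dfs.namenode.name.dir at a folder that does not exist. A sketch of the usual order, reusing the C:\Installs path from the log above:

        C:\Installs\hadoop-2.7.1\bin>hdfs namenode -format
        C:\Installs\hadoop-2.7.1\sbin>start-dfs.cmd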

        Reply
  83. Imelda

    Thank you for this tutorial, but I hit an obstacle at the hdfs namenode -format step.

    After I execute “hdfs namenode -format”, cmd gives feedback like this:

    The system cannot find the path specified.
    Error : JAVA_HOME is incorrectly set.
    Please update C:\hadoop-2.7.1\conf\hadoop-env.cmd
    ‘-Dhadoop.security.logger’ is not recognized as an internal or external command, operable program or batch file.

    I was just wondering: after I execute “hdfs namenode -format”, why does the program point to C:\hadoop-2.7.1\conf\hadoop-env.cmd? After checking and rechecking, there is no conf directory; the hadoop-env.cmd file is in the etc directory.

    Did I miss something? I think I have followed your instructions above.
    Thanks

    Reply
    1. Tushar Sarde Post author

      Hi Imelda,

      Have you configured JAVA_HOME in your Windows environment variables, and also in hadoop-env.cmd?

      Like this in hadoop-env.cmd –
      set JAVA_HOME=C:\PROGRA~1\Java\jdk1.7.0_67

      Can you please confirm if java is installed successfully.
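
      For example, both of these should print sensible values from a fresh cmd window (a quick sanity check, not part of the original steps):

      echo %JAVA_HOME%
      "%JAVA_HOME%\bin\java" -version

      If either fails, fix the environment variable first and reopen cmd before retrying the format command.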

      Thanks

      Reply
      1. Anonymous

        Hi Tushar Sarde,

        Of course I have configured JAVA_HOME on my Windows machine, just like in hadoop-env.cmd.

        And I still can't solve the problem…

        Reply
        1. Tushar Sarde Post author

          You need to recheck everything again; you are missing something!
          Please verify the hadoop-env.cmd file in Notepad++ for better readability; maybe a # or something is missing.

          Reply
      2. Noel

        I set JAVA_HOME=’C:/Program%20%Files/Java/jdk1.8.0_74′ but am still getting the error:

        F:\dev\Installed\hadoop-2.7.1\bin>hdfs namenode -format
        Error: JAVA_HOME is incorrectly set.
        Please update F:\dev\Installed\hadoop-2.7.1\conf\hadoop-env.cmd
        ‘-Dhadoop.security.logger’ is not recognized as an internal or external command,
        operable program or batch file.

        Reply
        1. Tushar Sarde Post author

          Set the path like this

          C:\PROGRA~1\Java\jdk1.8.0_74

          in your hadoop-env.cmd file; for reference, see the files in my GitHub repo.
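
          If you are not sure what the short (8.3) name is on your machine, dir /x lists it next to each folder (standard cmd, shown here as a sketch):

          dir /x C:\

          Look for the PROGRA~1 entry next to “Program Files” and use that in JAVA_HOME; the quotes, forward slashes and %20% escapes in the value above are likely what trigger the “JAVA_HOME is incorrectly set” error.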

          Reply
        2. judson gabriel

          I'm a newbie to Hadoop.
          1) Please check whether my HADOOP_HOME=E:\Hadoop2.7.1\hadoop-2.7.1-src\hadoop-mapreduce-project\bin variable is correct, or please help me install. I am using Hadoop 2.7.1, JDK 1.8 and 64-bit Windows 10, working from a system32 terminal.
          2) I finished all the steps you listed in your blog, but in the final step the terminal shows
          C:\Users\judson>C:\Users\judson>hdfs namenode -format
          ‘C:\Users\judson’ is not recognized as an internal or external command, operable program or batch file.
          3) In your step 3, where should I put your repository's bin folder? I placed it at E:\Hadoop2.7.1\hadoop-2.7.1-src\hadoop-mapreduce-project\bin; is that correct? Please help me sort out these problems. I am eager to learn Hadoop, but I have been stuck on the first step of the installation for over a month. Thank you, sir.

          Reply
          1. Tushar Sarde Post author

            C:\Users\judson>C:\Users\judson>hdfs namenode -format

            It seems you have copy-pasted the path twice; check the line above and make it like

            C:\Users\judson>hdfs namenode -format

            Reply
  84. Sudhish

    Hi, thanks for the above tutorial. I followed the steps and could start Hadoop, but when I issue the jps command, the NameNode is not listed; hence I am not able to access the http://localhost:50070 link. Could you please help me resolve this?

    Reply
    1. Tushar Sarde Post author

      Can you please copy-paste your hdfs-site.xml? Maybe you missed something.

      Reply
    2. Bhushan

      It might have happened because you replaced your etc folder as well, and the JAVA_HOME for you and the post author might be different. So I would suggest running step 2 after step 3; that might solve the problem.

      @Tushar: Thanks for the post! I had been looking for this solution for months. I hope it works without any trouble.

      Reply
