
Note: for the latest configuration information, see the Hadoopデフォルト設定 (Hadoop default configuration) page.


User limits

  • Each Hadoop daemon opens a large number of files (for library references and the like) and spawns a large number of processes, so the corresponding system resource limits need to be relaxed in advance.
  • These limits are usually relaxed via the limits files, but those settings only take effect when PAM (Pluggable Authentication Modules) authentication is involved, so it is a good idea to confirm in advance that they are actually applied when each daemon starts. Since Hadoop's startup scripts use the su command, you can check whether the limits are picked up as follows (a sketch for checking the PAM configuration itself appears after the distribution-specific settings below).
    $ sudo su -s /bin/bash alice -c 'ulimit -a'
    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 9858
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 1024
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 10240
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 1024
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited
    
    $ sudo su -s /bin/bash hdfs -c 'ulimit -a'
    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 9858
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 32768
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 10240
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 65536
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited
  • For reference, the limits settings shipped by each distribution are shown below.

CDH3

$ cat /etc/security/limits.d/99-hadoop.nofiles.conf
hdfs - nofile 32768
mapred - nofile 32768
$ cat /etc/security/limits.d/99-hadoop.nproc.conf
hdfs - nproc 65536
mapred - nproc 65536

HDP1.2

$ cat /etc/security/limits.d/hdfs.conf
...
hdfs   - nofile 32768
hdfs   - nproc  65536
$ cat /etc/security/limits.d/mapred.conf
...
mapred - nofile 32768
mapred - nproc  65536

CDH4

$ cat /etc/security/limits.d/hdfs.conf
...
hdfs - nofile 32768
hdfs - nproc  65536
$ cat /etc/security/limits.d/yarn.conf
...
yarn   - nofile 32768
yarn   - nproc  65536
$ cat /etc/security/limits.d/mapreduce.conf
...
mapred    - nofile 32768
mapred    - nproc  65536

HDP2.0

$ cat /etc/security/limits.d/hdfs.conf 
...
hdfs   - nofile 32768
hdfs   - nproc  65536
$ cat /etc/security/limits.d/yarn.conf 
...
yarn   - nofile 32768
yarn   - nproc  65536
$ cat /etc/security/limits.d/mapreduce.conf 
...
mapred    - nofile 32768
mapred    - nproc  65536
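
Since the limits.d entries above only take effect through PAM, it may also be worth confirming that the PAM stack used by su actually loads pam_limits before starting the daemons. A minimal sketch, assuming a RHEL/CentOS-style PAM layout (file names differ across distributions):

# pam_limits.so should appear in the su stack, directly or via an included file
$ grep pam_limits /etc/pam.d/su /etc/pam.d/system-auth
# then re-check just the relevant limits for the daemon user
$ sudo su -s /bin/bash hdfs -c 'ulimit -n -u'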

Hadoop itself

1.x series

  1. The documentation bundled with each Hadoop distribution includes *-default.xml files that are supposed to show the default settings, but their contents are not always correct. For the survey below, a cluster was built with the minimum working configuration (pseudo-distributed, with only fs.default.name and mapred.job.tracker set), a sample job was run on it, and the job's configuration listing was snapshotted to determine each distribution's actual defaults. The cluster was set up as follows (a sketch of the corresponding minimal configuration files appears after this list).
    $ cd $HADOOP_PREFIX
    $ sudo mkdir logs
    $ sudo chown hadoop:hadoop logs/
    $ sudo chmod 775 logs/
    $ sudo -u hdfs ./bin/hadoop namenode -format
    $ sudo -u hdfs ./bin/hadoop-daemon.sh start namenode
    $ sudo -u hdfs ./bin/hadoop-daemon.sh start datanode
    $ sudo -u hdfs ./bin/hadoop fs -mkdir /tmp
    $ sudo -u hdfs ./bin/hadoop fs -chmod 777 /tmp
    $ sudo -u mapred ./bin/hadoop-daemon.sh start jobtracker
    $ sudo -u mapred ./bin/hadoop-daemon.sh start tasktracker
    $ sudo -u hdfs ./bin/hadoop fs -mkdir /user/alice
    $ sudo -u hdfs ./bin/hadoop fs -chown alice:alice /user/alice
    $ sudo -u alice ./bin/hadoop jar hadoop-examples-*.jar pi 5 10
  2. The resulting configuration listings (default.tsv, one TSV file per distribution) are as follows.
    1. Apache Hadoop 1.1.x
    2. Apache Hadoop 1.0.x
    3. CDH3
    4. HDP1.2
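
For reference, a minimal sketch of the pseudo-distributed configuration files implied by the setup above. Only the property names are taken from the description; the host and port values are assumptions, and the paths are relative to $HADOOP_PREFIX:

    $ cat conf/core-site.xml
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>  <!-- example value -->
      </property>
    </configuration>
    $ cat conf/mapred-site.xml
    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>  <!-- example value -->
      </property>
    </configuration>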

Differences between Apache Hadoop 1.0 and 1.1

  • Differences between 1.0.4 and 1.1.2
    $ diff -U 0 localhost-1.0/default.tsv localhost-1.1/default.tsv
    --- localhost-1.0/default.tsv 2013-05-17 19:07:13.324619781 +0900
    +++ localhost-1.1/default.tsv 2013-05-17 19:17:46.687661551 +0900
    @@ -10,0 +11 @@
    +dfs.client.use.datanode.hostname false
    @@ -21,0 +23,2 @@
    +dfs.datanode.max.xcievers 4096
    +dfs.datanode.use.datanode.hostname false
    @@ -33,0 +37 @@
    +dfs.namenode.check.stale.datanode false
    @@ -39,0 +44,2 @@
    +dfs.namenode.invalidate.work.pct.per.iteration 0.32f
    +dfs.namenode.kerberos.internal.spnego.principal ${dfs.web.authentication.kerberos.principal}
    @@ -40,0 +47,3 @@
    +dfs.namenode.replication.work.multiplier.per.iteration 2
    +dfs.namenode.safemode.min.datanodes 0
    +dfs.namenode.stale.datanode.interval 30000
    @@ -51 +60 @@
    -dfs.support.append false
    +dfs.secondary.namenode.kerberos.internal.spnego.principal ${dfs.web.authentication.kerberos.principal}
    @@ -74,0 +84,6 @@
    +hadoop.http.authentication.kerberos.keytab ${user.home}/hadoop.keytab
    +hadoop.http.authentication.kerberos.principal HTTP/localhost@LOCALHOST
    +hadoop.http.authentication.signature.secret.file ${user.home}/hadoop-http-auth-signature-secret
    +hadoop.http.authentication.simple.anonymous.allowed true
    +hadoop.http.authentication.token.validity 36000
    +hadoop.http.authentication.type simple
    @@ -77,0 +93 @@
    +hadoop.relaxed.worker.version.check false
    @@ -83,0 +100 @@
    +hadoop.security.use-weak-http-crypto false
    @@ -122,0 +140 @@
    +mapred.disk.healthChecker.interval 60000
    @@ -210,0 +229,2 @@
    +mapreduce.ifile.readahead true
    +mapreduce.ifile.readahead.bytes 4194304
    @@ -215 +235,4 @@
    -mapreduce.job.counters.limit 120
    +mapreduce.job.counters.counter.name.max 64
    +mapreduce.job.counters.group.name.max 128
    +mapreduce.job.counters.groups.max 50
    +mapreduce.job.counters.max 120

Differences between Apache Hadoop and CDH3

  $ diff -U 0 hadoop_default_conf-apache1.0.4.tsv hadoop_default_conf-cdh3.tsv
  --- hadoop_default_conf-apache1.0.4.tsv 2013-03-26 22:15:20.774527826 +0900
  +++ hadoop_default_conf-cdh3.tsv 2013-03-26 19:53:18.120266266 +0900
  @@ -10,0 +11 @@
  +dfs.client.use.datanode.hostname false
  @@ -13 +14,2 @@
  -dfs.datanode.data.dir.perm 755
  +dfs.datanode.data.dir.perm 700
  +dfs.datanode.directoryscan.threads 1
  @@ -15,0 +18,2 @@
  +dfs.datanode.drop.cache.behind.reads false
  +dfs.datanode.drop.cache.behind.writes false
  @@ -21,0 +26,3 @@
  +dfs.datanode.readahead.bytes 4193404
  +dfs.datanode.sync.behind.writes false
  +dfs.datanode.use.datanode.hostname false
  @@ -30,0 +38 @@
  +dfs.image.transfer.bandwidthPerSec 0
  @@ -39,0 +48,2 @@
  +dfs.namenode.invalidate.work.pct.per.iteration 0.32
  +dfs.namenode.kerberos.internal.spnego.principal ${dfs.web.authentication.kerberos.principal}
  @@ -40,0 +51,2 @@
  +dfs.namenode.name.dir.restore false
  +dfs.namenode.replication.work.multiplier.per.iteration 2
  @@ -48,0 +61 @@
  +dfs.safemode.min.datanodes 0
  @@ -51 +64 @@
  -dfs.support.append false
  +dfs.secondary.namenode.kerberos.internal.spnego.principal ${dfs.web.authentication.kerberos.principal}
  @@ -52,0 +66,2 @@
  +dfs.webhdfs.enabled false
  +fs.automatic.close true
  @@ -71,0 +87 @@
  +fs.s3n.block.size 67108864
  @@ -74,0 +91,10 @@
  +group.name alice
  +hadoop.fuse.connection.timeout 300
  +hadoop.fuse.timer.period 5
  +hadoop.http.authentication.kerberos.keytab ${user.home}/hadoop.keytab
  +hadoop.http.authentication.kerberos.principal HTTP/_HOST@LOCALHOST
  +hadoop.http.authentication.signature.secret.file ${user.home}/hadoop-http-auth-signature-secret
  +hadoop.http.authentication.simple.anonymous.allowed true
  +hadoop.http.authentication.token.validity 36000
  +hadoop.http.authentication.type simple
  +hadoop.kerberos.kinit.command kinit
  @@ -77,0 +104 @@
  +hadoop.relaxed.worker.version.check true
  @@ -81,0 +109 @@
  +hadoop.security.instrumentation.requires.admin false
  @@ -83,0 +112 @@
  +hadoop.security.use-weak-http-crypto true
  @@ -85,0 +115 @@
  +hadoop.workaround.non.threadsafe.getpwuid false
  @@ -87 +117 @@
  -io.compression.codecs org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec
  +io.compression.codecs org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.DeflateCodec,org.apache.hadoop.io.compress.SnappyCodec
  @@ -109,0 +140 @@
  +jobclient.completion.poll.interval 5000
  @@ -110,0 +142 @@
  +jobclient.progress.monitor.poll.interval 1000
  @@ -121 +152,0 @@
  -mapred.combine.recordsBeforeProgress 10000
  @@ -122,0 +154 @@
  +mapred.disk.healthChecker.interval 60000
  @@ -146,2 +177,0 @@
  -mapred.jobtracker.blacklist.fault-bucket-width 15
  -mapred.jobtracker.blacklist.fault-timeout-window 180
  @@ -148,0 +179 @@
  +mapred.jobtracker.instrumentation org.apache.hadoop.mapred.JobTrackerMetricsInst
  @@ -156,0 +188 @@
  +mapred.map.child.log.level INFO
  @@ -175,0 +208 @@
  +mapred.reduce.child.log.level INFO
  @@ -200,0 +234 @@
  +mapred.tasktracker.instrumentation org.apache.hadoop.mapred.TaskTrackerMetricsInst
  @@ -215,3 +249,5 @@
  -mapreduce.job.counters.limit 120
  -mapreduce.job.split.metainfo.maxsize 10000000
  +mapreduce.job.counters.counter.name.max 64
  +mapreduce.job.counters.group.name.max 128
  +mapreduce.job.counters.groups.max 50
  +mapreduce.job.counters.max 120
  @@ -219,0 +256 @@
  +mapreduce.jobtracker.split.metainfo.maxsize 10000000
  @@ -224,0 +262 @@
  +mapreduce.tasktracker.cache.local.numberdirectories 10000
  @@ -226 +263,0 @@
  -mapreduce.tasktracker.outofband.heartbeat.damper 1000000

Differences between Apache Hadoop and HDP1.2

  $ diff -U 0 hadoop_default_conf-apache1.0.4.tsv hadoop_default_conf-hdp1.2.tsv
  --- hadoop_default_conf-apache1.0.4.tsv 2013-03-26 22:15:20.774527826 +0900
  +++ hadoop_default_conf-hdp1.2.tsv 2013-03-26 19:53:51.764433103 +0900
  @@ -10,0 +11 @@
  +dfs.client.use.datanode.hostname false
  @@ -21,0 +23,2 @@
  +dfs.datanode.max.xcievers 4096
  +dfs.datanode.use.datanode.hostname false
  @@ -33,0 +37 @@
  +dfs.namenode.check.stale.datanode false
  @@ -39,0 +44,2 @@
  +dfs.namenode.invalidate.work.pct.per.iteration 0.32f
  +dfs.namenode.kerberos.internal.spnego.principal ${dfs.web.authentication.kerberos.principal}
  @@ -40,0 +47,3 @@
  +dfs.namenode.replication.work.multiplier.per.iteration 2
  +dfs.namenode.safemode.min.datanodes 0
  +dfs.namenode.stale.datanode.interval 30000
  @@ -51 +60 @@
  -dfs.support.append false
  +dfs.secondary.namenode.kerberos.internal.spnego.principal ${dfs.web.authentication.kerberos.principal}
  @@ -74,0 +84,6 @@
  +hadoop.http.authentication.kerberos.keytab ${user.home}/hadoop.keytab
  +hadoop.http.authentication.kerberos.principal HTTP/localhost@LOCALHOST
  +hadoop.http.authentication.signature.secret.file ${user.home}/hadoop-http-auth-signature-secret
  +hadoop.http.authentication.simple.anonymous.allowed true
  +hadoop.http.authentication.token.validity 36000
  +hadoop.http.authentication.type simple
  @@ -77,0 +93 @@
  +hadoop.relaxed.worker.version.check false
  @@ -81,0 +98 @@
  +hadoop.security.instrumentation.requires.admin false
  @@ -83,0 +101 @@
  +hadoop.security.use-weak-http-crypto false
  @@ -122,0 +141 @@
  +mapred.disk.healthChecker.interval 60000
  @@ -210,0 +230,2 @@
  +mapreduce.ifile.readahead true
  +mapreduce.ifile.readahead.bytes 4194304
  @@ -215,2 +236,5 @@
  -mapreduce.job.counters.limit 120
  +mapreduce.job.counters.counter.name.max 64
  +mapreduce.job.counters.group.name.max 128
  +mapreduce.job.counters.groups.max 50
  +mapreduce.job.counters.max 120

2.x series

  1. In Hadoop 2.x the way configuration files are organized has been cleaned up, so the default settings were surveyed through the Web UI (http://localhost:8088/conf) after making only the minimal settings required for operation, listed below.
    • Properties
      • fs.defaultFS: hdfs://localhost:9000
      • mapreduce.framework.name: yarn
    • Add capacity-scheduler.xml (when the distribution does not include one)
    • The cluster was set up as follows.
      $ cd $HADOOP_PREFIX
      $ sudo mkdir logs
      $ sudo chown hadoop:hadoop logs/
      $ sudo chmod 775 logs/
      $ sudo -u hdfs ./bin/hadoop namenode -format
      $ sudo -u hdfs ./sbin/hadoop-daemon.sh start namenode
      $ sudo -u hdfs ./sbin/hadoop-daemon.sh start datanode
      $ sudo -u hdfs ./bin/hadoop fs -mkdir /tmp
      $ sudo -u hdfs ./bin/hadoop fs -chmod 1777 /tmp
      $ sudo -u yarn ./sbin/yarn-daemon.sh start resourcemanager
      $ sudo -u yarn ./sbin/yarn-daemon.sh start nodemanager
      $ sudo -u mapred ./sbin/mr-jobhistory-daemon.sh start historyserver
      $ sudo -u hdfs ./bin/hadoop fs -mkdir -p /user/alice
      $ sudo -u hdfs ./bin/hadoop fs -chown alice:alice /user/alice
  2. The resulting configuration listings, as XML files (default.xml) and TSV files sorted by property name (default.tsv), are as follows (a sketch of the minimal configuration files and one possible way to produce such a TSV appears after this list).
    1. Apache Hadoop 2.x
    2. CDH4
    3. HDP2.0
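
For reference, a minimal sketch of the configuration files listed above and of one possible way to turn the Web UI output into a sorted TSV. The property values come from the list above; the file locations assume a plain Apache 2.x tarball layout, the /conf servlet is assumed to return XML (its default), and the one-line conversion script is only an illustration, not necessarily the procedure used here:

    $ cat etc/hadoop/core-site.xml
    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
      </property>
    </configuration>
    $ cat etc/hadoop/mapred-site.xml
    <configuration>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
    </configuration>
    # snapshot the effective configuration and flatten each <property> to "name<TAB>value", sorted by name
    $ curl -s http://localhost:8088/conf -o default.xml
    $ python -c 'import xml.etree.ElementTree as ET; print("\n".join(sorted("%s\t%s" % (p.findtext("name"), p.findtext("value") or "") for p in ET.parse("default.xml").getroot().findall("property"))))' > default.tsv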

Differences between Apache Hadoop 2.0.3a and 2.0.4a

  $ diff -U 0 default-2.0.3a.tsv default-2.0.4a.tsv
  --- default-2.0.3a.tsv 2013-04-01 19:06:08.782217977 +0900
  +++ default-2.0.4a.tsv 2013-05-22 18:16:49.200977754 +0900
  @@ -73,0 +74 @@
  +io.compression.codec.bzip2.library system-native
  @@ -195,0 +197 @@
  +mapreduce.shuffle.max.connections 0
  @@ -288,0 +291 @@
  +yarn.nodemanager.pmem-check-enabled true
  @@ -295,0 +299 @@
  +yarn.nodemanager.vmem-check-enabled true

Differences between Apache Hadoop (2.0.3a) and CDH4 (4.2)

  $ diff -U 0 localhost-2.0/default.tsv localhost-cdh4/default.tsv
  --- localhost-2.0/default.tsv 2013-04-01 20:06:08.782217977 +0900
  +++ localhost-cdh4/default.tsv 2013-04-01 20:07:07.795079173 +0900
  @@ -88 +87,0 @@
  -ipc.client.connect.timeout 20000
  @@ -103 +101,0 @@
  -mapreduce.application.classpath $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
  @@ -117,2 +114,0 @@
  -mapreduce.job.classloader false
  -mapreduce.job.classloader.system.classes java.,javax.,org.apache.commons.logging.,org.apache.log4j.,org.apache.hadoop.
  @@ -123,3 +119,3 @@
  -mapreduce.job.end-notification.max.retry.interval 5000
  -mapreduce.job.end-notification.retry.attempts 0
  -mapreduce.job.end-notification.retry.interval 1000
  +mapreduce.job.end-notification.max.retry.interval 5
  +mapreduce.job.end-notification.retry.attempts 5
  +mapreduce.job.end-notification.retry.interval 1
  @@ -168 +163,0 @@
  -mapreduce.map.cpu.vcores 1
  @@ -180 +174,0 @@
  -mapreduce.reduce.cpu.vcores 1
  @@ -246,2 +239,0 @@
  -yarn.app.mapreduce.am.job.committer.cancel-timeout 60000
  -yarn.app.mapreduce.am.job.committer.commit-window 10000
  @@ -249 +240,0 @@
  -yarn.app.mapreduce.am.resource.cpu-vcores 1
  @@ -255 +246 @@
  -yarn.application.classpath $HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
  +yarn.application.classpath $HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,$YARN_HOME/*,$YARN_HOME/lib/*
  @@ -258,2 +249 @@
  -yarn.log-aggregation-enable false
  -yarn.log-aggregation.retain-check-interval-seconds -1
  +yarn.log-aggregation-enable true
  @@ -263,0 +254 @@
  +yarn.nodemanager.aux-services mapreduce.shuffle
  @@ -272 +263 @@
  -yarn.nodemanager.env-whitelist JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,HADOOP_YARN_HOME
  +yarn.nodemanager.env-whitelist JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,YARN_HOME
  @@ -277,4 +268 @@
  -yarn.nodemanager.linux-container-executor.cgroups.hierarchy /hadoop-yarn
  -yarn.nodemanager.linux-container-executor.cgroups.mount false
  -yarn.nodemanager.linux-container-executor.resources-handler.class org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler
  -yarn.nodemanager.local-dirs ${hadoop.tmp.dir}/nm-local-dir
  +yarn.nodemanager.local-dirs /var/lib/hadoop-yarn/cache/${user.name}/nm-local-dir
  @@ -287 +275 @@
  -yarn.nodemanager.log-dirs ${yarn.log.dir}/userlogs
  +yarn.nodemanager.log-dirs /var/log/hadoop-yarn/containers
  @@ -290 +278 @@
  -yarn.nodemanager.remote-app-log-dir /tmp/logs
  +yarn.nodemanager.remote-app-log-dir /var/log/hadoop-yarn/apps
  @@ -292 +279,0 @@
  -yarn.nodemanager.resource.cpu-cores 8
  @@ -295 +281,0 @@
  -yarn.nodemanager.vcores-pcores-ratio 2
  @@ -308 +293,0 @@
  -yarn.resourcemanager.fs.rm-state-store.uri ${hadoop.tmp.dir}/yarn/system/rmstore
  @@ -312 +296,0 @@
  -yarn.resourcemanager.recovery.enabled false
  @@ -316 +300 @@
  -yarn.resourcemanager.scheduler.class org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
  +yarn.resourcemanager.scheduler.class org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler
  @@ -318 +301,0 @@
  -yarn.resourcemanager.store.class org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
  @@ -321 +303,0 @@
  -yarn.scheduler.maximum-allocation-vcores 32
  @@ -323 +304,0 @@
  -yarn.scheduler.minimum-allocation-vcores 1

Differences between Apache Hadoop (2.0.3a) and HDP2.0 (2.0.0.2)

  • The diff below only shows a difference caused by the settings used for this survey; in practice, there is essentially no difference in default settings between these two distributions.
  $ diff -U 0 localhost-2.0/default.tsv localhost-hdp2.0/default.tsv
  --- localhost-2.0/default.tsv 2013-04-01 20:06:08.782217977 +0900
  +++ localhost-hdp2.0/default.tsv 2013-04-01 20:08:13.376032221 +0900
  @@ -12 +12 @@
  -fs.defaultFS hdfs://localhost:9000/
  +fs.defaultFS hdfs://localhost:9000
