Hadoop: running a jar fails with java.lang.Exception: java.lang.ArrayIndexOutOfBoundsException: 1


Error message:

java.lang.Exception: java.lang.ArrayIndexOutOfBoundsException: 1
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
    at exper.Filter$Map.map(Filter.java:25)
    at exper.Filter$Map.map(Filter.java:19)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Cause: the source code splits each record on a space, while the dataset is actually delimited by tab characters. Splitting a tab-delimited line on a space yields a one-element array, so indexing arr[1] throws ArrayIndexOutOfBoundsException: 1.
The fix is to change the delimiter in the highlighted statement. The corrected map method:

public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
    String line = value.toString();
    System.out.println(line);
    // Split on the tab character that actually delimits the dataset, not on a space.
    String[] arr = line.split("\t");
    newKey.set(arr[1]);
    context.write(newKey, NullWritable.get());
    System.out.println(newKey);
}
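
For reference, a self-contained version of the mapper might look like the sketch below. The package layout, the Mapper type parameters, and the newKey field declaration are assumptions reconstructed from the snippet above and from the stack trace (exper.Filter$Map); only the split("\t") fix is taken from the original code.

package exper;

import java.io.IOException;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class Filter {

    public static class Map extends Mapper<Object, Text, Text, NullWritable> {

        // Reused across map() calls; holds the column extracted from each record.
        private final Text newKey = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            // Each input line is tab-delimited; take the second column as the output key.
            String[] arr = value.toString().split("\t");
            newKey.set(arr[1]);
            context.write(newKey, NullWritable.get());
        }
    }
}

Declaring newKey once and calling set() per record avoids allocating a new Text object for every input line.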

Then run the jar file on Linux. From the directory where the jar is stored, run:
hadoop jar MapreduceDemo-1.0-SNAPSHOT.jar exper.Filter /mymapreduce2/in/buyer_myfavorite1 /user/root/mymapreduce2/out
The job runs successfully!
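
The command above passes exper.Filter as the main class followed by an input path and an output path. In the author's project the job setup presumably lives in Filter's own main method (that is the class named on the command line); the sketch below shows what that setup might look like, placed in a separate, hypothetical FilterDriver class so it stays self-contained. The job name "filter" is a placeholder.

package exper;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FilterDriver {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "filter");    // job name is a placeholder
        job.setJarByClass(FilterDriver.class);
        job.setMapperClass(Filter.Map.class);         // the mapper from the sketch above
        // job.setReducerClass(...) would go here if the real job has a reduce phase.
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        // args[0] and args[1] are the HDFS input and output paths given on the command line,
        // e.g. /mymapreduce2/in/buyer_myfavorite1 and /user/root/mymapreduce2/out.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}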
