Connecting to Spark from Eclipse to run WordCount

Here is the Eclipse log:
----------------------------------------------------------------------------------------------------------------------------------------
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/06/09 13:44:42 INFO SecurityManager: Changing view acls to: Administrator
15/06/09 13:44:42 INFO SecurityManager: Changing modify acls to: Administrator
15/06/09 13:44:42 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(Administrator); users with modify permissions: Set(Administrator)
15/06/09 13:44:44 INFO Slf4jLogger: Slf4jLogger started
15/06/09 13:44:45 INFO Remoting: Starting remoting
15/06/09 13:44:46 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@34M5BESJQA541JT:62642]
15/06/09 13:44:46 INFO Utils: Successfully started service 'sparkDriver' on port 62642.
15/06/09 13:44:46 INFO SparkEnv: Registering MapOutputTracker
15/06/09 13:44:46 INFO SparkEnv: Registering BlockManagerMaster
15/06/09 13:44:46 INFO DiskBlockManager: Created local directory at C:\Users\ADMINI~1\AppData\Local\Temp\spark-local-20150609134446-f67a
15/06/09 13:44:46 INFO MemoryStore: MemoryStore started with capacity 480.1 MB
15/06/09 13:44:47 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/06/09 13:44:48 INFO HttpFileServer: HTTP File server directory is C:\Users\ADMINI~1\AppData\Local\Temp\spark-fe040419-e74c-4e08-afcd-111d84b2d4da
15/06/09 13:44:48 INFO HttpServer: Starting HTTP Server
15/06/09 13:44:48 INFO Utils: Successfully started service 'HTTP file server' on port 62644.
15/06/09 13:44:48 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/06/09 13:44:48 INFO SparkUI: Started SparkUI at http://34M5BESJQA541JT:4040
15/06/09 13:44:49 INFO AppClient$ClientActor: Connecting to master spark://10.9.174.118:7077...
15/06/09 13:44:52 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20150609134307-0000
15/06/09 13:44:52 INFO AppClient$ClientActor: Executor added: app-20150609134307-0000/0 on worker-20150609095031-PC01-49547 (PC01:49547) with 2 cores
15/06/09 13:44:52 INFO SparkDeploySchedulerBackend: Granted executor ID app-20150609134307-0000/0 on hostPort PC01:49547 with 2 cores, 512.0 MB RAM
15/06/09 13:44:52 INFO AppClient$ClientActor: Executor added: app-20150609134307-0000/1 on worker-20150609095120-PC03-46679 (PC03:46679) with 2 cores
15/06/09 13:44:52 INFO SparkDeploySchedulerBackend: Granted executor ID app-20150609134307-0000/1 on hostPort PC03:46679 with 2 cores, 512.0 MB RAM
15/06/09 13:44:53 INFO AppClient$ClientActor: Executor updated: app-20150609134307-0000/0 is now RUNNING
15/06/09 13:44:53 INFO AppClient$ClientActor: Executor updated: app-20150609134307-0000/1 is now RUNNING
15/06/09 13:44:53 INFO AppClient$ClientActor: Executor updated: app-20150609134307-0000/0 is now LOADING
15/06/09 13:44:53 INFO AppClient$ClientActor: Executor updated: app-20150609134307-0000/1 is now LOADING
15/06/09 13:44:53 INFO NettyBlockTransferService: Server created on 62671
15/06/09 13:44:53 INFO BlockManagerMaster: Trying to register BlockManager
15/06/09 13:44:53 INFO BlockManagerMasterActor: Registering block manager 34M5BESJQA541JT:62671 with 480.1 MB RAM, BlockManagerId(<driver>, 34M5BESJQA541JT, 62671)
15/06/09 13:44:53 INFO BlockManagerMaster: Registered BlockManager
15/06/09 13:44:54 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
15/06/09 13:44:55 INFO MemoryStore: ensureFreeSpace(138675) called with curMem=0, maxMem=503379394
15/06/09 13:44:55 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 135.4 KB, free 479.9 MB)
15/06/09 13:44:56 INFO MemoryStore: ensureFreeSpace(18512) called with curMem=138675, maxMem=503379394
15/06/09 13:44:56 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 18.1 KB, free 479.9 MB)
15/06/09 13:44:56 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 34M5BESJQA541JT:62671 (size: 18.1 KB, free: 480.0 MB)
15/06/09 13:44:56 INFO BlockManagerMaster: Updated info of block broadcast_0_piece0
15/06/09 13:44:56 INFO SparkContext: Created broadcast 0 from textFile at Test.java:20
15/06/09 13:44:58 INFO FileInputFormat: Total input paths to process : 1
15/06/09 13:44:59 INFO SparkContext: Starting job: collect at Test.java:54
15/06/09 13:44:59 INFO DAGScheduler: Registering RDD 3 (mapToPair at Test.java:32)
15/06/09 13:44:59 INFO DAGScheduler: Got job 0 (collect at Test.java:54) with 2 output partitions (allowLocal=false)
15/06/09 13:44:59 INFO DAGScheduler: Final stage: Stage 1(collect at Test.java:54)
15/06/09 13:44:59 INFO DAGScheduler: Parents of final stage: List(Stage 0)
15/06/09 13:44:59 INFO DAGScheduler: Missing parents: List(Stage 0)
15/06/09 13:44:59 INFO DAGScheduler: Submitting Stage 0 (MappedRDD[3] at mapToPair at Test.java:32), which has no missing parents
15/06/09 13:44:59 INFO MemoryStore: ensureFreeSpace(4224) called with curMem=157187, maxMem=503379394
15/06/09 13:44:59 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.1 KB, free 479.9 MB)
15/06/09 13:44:59 INFO MemoryStore: ensureFreeSpace(2988) called with curMem=161411, maxMem=503379394
15/06/09 13:44:59 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.9 KB, free 479.9 MB)
15/06/09 13:44:59 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 34M5BESJQA541JT:62671 (size: 2.9 KB, free: 480.0 MB)
15/06/09 13:44:59 INFO BlockManagerMaster: Updated info of block broadcast_1_piece0
15/06/09 13:44:59 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:838
15/06/09 13:44:59 INFO DAGScheduler: Submitting 2 missing tasks from Stage 0 (MappedRDD[3] at mapToPair at Test.java:32)
15/06/09 13:44:59 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
15/06/09 13:45:14 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/06/09 13:45:29 INFO AppClient$ClientActor: Executor updated: app-20150609134307-0000/0 is now EXITED (Command exited with code 1)
15/06/09 13:45:29 INFO SparkDeploySchedulerBackend: Executor app-20150609134307-0000/0 removed: Command exited with code 1
15/06/09 13:45:29 ERROR SparkDeploySchedulerBackend: Asked to remove non-existent executor 0
-----------------------------------------------------------------------------------------------------------------------------
Here is the error output from the Spark UI (executor stderr):
----------------------
15/06/09 13:44:08 INFO CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT]
15/06/09 13:44:09 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/06/09 13:44:09 INFO SecurityManager: Changing view acls to: solr,Administrator
15/06/09 13:44:09 INFO SecurityManager: Changing modify acls to: solr,Administrator
15/06/09 13:44:09 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(solr, Administrator); users with modify permissions: Set(solr, Administrator)
15/06/09 13:44:11 INFO Slf4jLogger: Slf4jLogger started
15/06/09 13:44:11 INFO Remoting: Starting remoting
15/06/09 13:44:11 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://driverPropsFetcher@PC01:33054]
15/06/09 13:44:11 INFO Utils: Successfully started service 'driverPropsFetcher' on port 33054.
15/06/09 13:44:11 WARN Remoting: Tried to associate with unreachable remote address [akka.tcp://sparkDriver@34M5BESJQA541JT:62642]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: 34M5BESJQA541JT: Name or service not known
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1563)
        at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:59)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:115)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:163)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
        at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
        at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
        at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
        at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
        at scala.concurrent.Await$.result(package.scala:107)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:127)
        at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:60)
        at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:59)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
        ... 4 more

-----------------------------------------------------------------------------------------------------------
What could be the cause?

solr's answer

solr · R&D engineer at a mobile internet company
Reply to #2 主力小白:


    I've figured it out :lol
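
For anyone hitting the same error: the worker-side log above already points at the likely cause. The executor on PC01 tries to call back to akka.tcp://sparkDriver@34M5BESJQA541JT:62642 and fails with "34M5BESJQA541JT: Name or service not known", i.e. the workers cannot resolve the Windows driver machine's hostname, so executor registration times out after 30 seconds, the executor exits with code 1, and the job never receives resources. Below is a minimal sketch of a WordCount driver that works around this by advertising the driver's IP address instead of its hostname. The class name Test matches the log; the driver IP, input path, and jar path are placeholders, not values from the original post.
-----------------------------------------------------------------------------------------------------------
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;

import scala.Tuple2;

public class Test {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
            .setAppName("WordCount")
            .setMaster("spark://10.9.174.118:7077")
            // Advertise the driver by IP so workers that cannot resolve the
            // Windows hostname (34M5BESJQA541JT) can still connect back.
            // 10.9.174.100 is a placeholder for the Eclipse machine's real IP.
            .set("spark.driver.host", "10.9.174.100")
            // Ship the application classes to the executors; the jar path
            // is a placeholder.
            .setJars(new String[] { "target/wordcount.jar" });

        JavaSparkContext sc = new JavaSparkContext(conf);

        // Input path is a placeholder; the original log only shows
        // "textFile at Test.java:20".
        JavaRDD<String> lines = sc.textFile("hdfs://10.9.174.118:9000/input/words.txt");

        // Split each line into words (in Spark 1.x, flatMap returns an Iterable).
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            public Iterable<String> call(String line) {
                return Arrays.asList(line.split(" "));
            }
        });

        // Map each word to (word, 1), then sum the counts per word.
        JavaPairRDD<String, Integer> counts = words
            .mapToPair(new PairFunction<String, String, Integer>() {
                public Tuple2<String, Integer> call(String word) {
                    return new Tuple2<String, Integer>(word, 1);
                }
            })
            .reduceByKey(new Function2<Integer, Integer, Integer>() {
                public Integer call(Integer a, Integer b) {
                    return a + b;
                }
            });

        // collect() pulls the results back to the driver, as in Test.java:54.
        for (Tuple2<String, Integer> pair : counts.collect()) {
            System.out.println(pair._1() + ": " + pair._2());
        }

        sc.stop();
    }
}
-----------------------------------------------------------------------------------------------------------
Equivalently, leaving the code unchanged and adding a hosts-file entry mapping 34M5BESJQA541JT to the driver's IP on every worker node should also let the executors reach the driver.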
Software Development · 2015-06-10