
Spark Thrift Server: troubleshooting java.lang.OutOfMemoryError: GC overhead limit exceeded

Scenario: a Spark SQL statement is submitted from a front-end page and executed through the Spark Thrift Server. The SQL logic is simple: a join between two tables (a 90 GB large table and a 3 GB small table). The query fails in the front-end UI, and the backend throws a java.lang.OutOfMemoryError: GC overhead limit exceeded exception.
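
The article does not show the SQL itself, but the physical plan in the stack trace below implies a statement of roughly the following shape (table and column names come from that plan; the target table name is a placeholder, and details such as the LIMIT clause are inferred, not confirmed):

    CREATE TABLE product_dim_tmp AS   -- target table name is a guess
    SELECT DISTINCT
           CASE WHEN substring(a.zmat, 1, 10) = '0000000000' THEN substring(a.zmat, 11, 8)
                WHEN substring(a.zmat, 1, 9)  = '000000000'  THEN substring(a.zmat, 10, 9)
                WHEN substring(a.zmat, 1, 5)  = '00000'      THEN substring(a.zmat, 6, 7)
                ELSE a.zmat END  AS part_number,
           a.zprodh_d            AS product_lob,
           a.zbrand_d            AS product_brand,
           a.zseries_d           AS product_family,
           a.zsubser_d           AS product_sub_family,
           a.zmat                AS product_cd
    FROM idl_bw.zoh_mds32_idl_p a
    JOIN idl_bw.zoh_mms05_idl_p b
      ON a.zmat = b.material
    WHERE b.salesorg IN ('AR10', 'BR10', 'CA10', 'CO10', 'MX10', 'US10')
    LIMIT 1000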

Running the same statement directly from the command line with ./spark-sql --master yarn --executor-memory 4G --num-executors 19 succeeds, which shows that the SQL itself is fine.

The Spark Thrift Server log shows a java.lang.OutOfMemoryError: GC overhead limit exceeded exception. The full output is:

  1. Exception in thread "HiveServer2-Handler-Pool: Thread-199" 16/10/12 17:52:27 WARN NioEventLoop: Unexpected exception in the selector loop.
  2. 16/10/12 17:52:27 INFO YarnClientSchedulerBackend: Requesting to kill executor(s) 15
  3. Exception in thread "HiveServer2-Handler-Pool: Thread-167" java.lang.OutOfMemoryError: GC overhead limit exceeded
  4. at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:149)
  5. at java.lang.StringCoding.decode(StringCoding.java:193)
  6. at java.lang.String.<init>(String.java:416)
  7. at java.lang.String.<init>(String.java:481)
  8. at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:381)
  9. at org.apache.thrift.protocol.TBinaryProtocol.readString(TBinaryProtocol.java:374)
  10. at org.apache.hive.service.cli.thrift.TGetTablesReq$TGetTablesReqStandardScheme.read(TGetTablesReq.java:697)
  11. at org.apache.hive.service.cli.thrift.TGetTablesReq$TGetTablesReqStandardScheme.read(TGetTablesReq.java:666)
  12. at org.apache.hive.service.cli.thrift.TGetTablesReq.read(TGetTablesReq.java:569)
  13. at org.apache.hive.service.cli.thrift.TCLIService$GetTables_args$GetTables_argsStandardScheme.read(TCLIService.java:7000)
  14. at org.apache.hive.service.cli.thrift.TCLIService$GetTables_args$GetTables_argsStandardScheme.read(TCLIService.java:6985)
  15. at org.apache.hive.service.cli.thrift.TCLIService$GetTables_args.read(TCLIService.java:6932)
  16. at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:25)
  17. at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
  18. at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
  19. at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
  20. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  21. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  22. at java.lang.Thread.run(Thread.java:745)
  23. Exception in thread "HiveServer2-Handler-Pool: Thread-178" 16/10/12 17:52:57 WARN SingleThreadEventExecutor: Unexpected exception from an event executor:
  24. java.lang.OutOfMemoryError: GC overhead limit exceeded
  25. java.lang.OutOfMemoryError: GC overhead limit exceeded
  26. at org.apache.thrift.protocol.TBinaryProtocol.readFieldBegin(TBinaryProtocol.java:245)
  27. at org.apache.hive.service.cli.thrift.THandleIdentifier$THandleIdentifierStandardScheme.read(THandleIdentifier.java:430)
  28. at org.apache.hive.service.cli.thrift.THandleIdentifier$THandleIdentifierStandardScheme.read(THandleIdentifier.java:423)
  29. at org.apache.hive.service.cli.thrift.THandleIdentifier.read(THandleIdentifier.java:357)
  30. at org.apache.hive.service.cli.thrift.TSessionHandle$TSessionHandleStandardScheme.read(TSessionHandle.java:336)
  31. at org.apache.hive.service.cli.thrift.TSessionHandle$TSessionHandleStandardScheme.read(TSessionHandle.java:321)
  32. at org.apache.hive.service.cli.thrift.TSessionHandle.read(TSessionHandle.java:264)
  33. at org.apache.hive.service.cli.thrift.TGetTablesReq$TGetTablesReqStandardScheme.read(TGetTablesReq.java:681)
  34. at org.apache.hive.service.cli.thrift.TGetTablesReq$TGetTablesReqStandardScheme.read(TGetTablesReq.java:666)
  35. at org.apache.hive.service.cli.thrift.TGetTablesReq.read(TGetTablesReq.java:569)
  36. at org.apache.hive.service.cli.thrift.TCLIService$GetTables_args$GetTables_argsStandardScheme.read(TCLIService.java:7000)
  37. at org.apache.hive.service.cli.thrift.TCLIService$GetTables_args$GetTables_argsStandardScheme.read(TCLIService.java:6985)
  38. at org.apache.hive.service.cli.thrift.TCLIService$GetTables_args.read(TCLIService.java:6932)
  39. at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:25)
  40. at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
  41. at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
  42. at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
  43. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  44. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  45. at java.lang.Thread.run(Thread.java:745)
  46. 16/10/12 17:53:22 INFO HiveMetaStore: 26: get_databases: *
  47. 16/10/12 17:53:22 INFO HiveMetaStore: 8: get_databases: *
  48. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  49. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  50. 16/10/12 17:53:22 INFO HiveMetaStore: 15: get_databases: *
  51. 16/10/12 17:53:22 INFO HiveMetaStore: 18: get_databases: *
  52. 16/10/12 17:53:22 INFO HiveMetaStore: 28: get_databases: *
  53. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  54. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  55. 16/10/12 17:53:22 WARN HeartbeatReceiver: Removing executor 15 with no recent heartbeats: 148218 ms exceeds timeout 120000 ms
  56. 16/10/12 17:53:22 ERROR YarnScheduler: Lost executor 15 on node18.it.leap.com: Executor heartbeat timed out after 148218 ms
  57. 16/10/12 17:53:22 INFO DAGScheduler: Executor lost: 15 (epoch 4)
  58. 16/10/12 17:53:22 INFO BlockManagerMasterEndpoint: Trying to remove executor 15 from BlockManagerMaster.
  59. 16/10/12 17:53:22 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(15, node18.it.leap.com, 62536)
  60. 16/10/12 17:53:22 WARN HeartbeatReceiver: Removing executor 16 with no recent heartbeats: 147212 ms exceeds timeout 120000 ms
  61. 16/10/12 17:53:22 ERROR YarnScheduler: Lost executor 16 on node18.it.leap.com: Executor heartbeat timed out after 147212 ms
  62. java.lang.OutOfMemoryError: GC overhead limit exceeded
  63. 16/10/12 17:53:22 INFO HiveMetaStore: 4: get_databases: *
  64. 16/10/12 17:53:22 WARN NioEventLoop: Unexpected exception in the selector loop.
  65. java.lang.OutOfMemoryError: GC overhead limit exceeded
  66. 16/10/12 17:53:22 INFO HiveMetaStore: 16: get_databases: *
  67. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  68. 16/10/12 17:53:22 INFO HiveMetaStore: 3: get_databases: *
  69. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  70. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  71. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  72. 16/10/12 17:53:22 INFO HiveMetaStore: 29: get_databases: *
  73. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  74. 16/10/12 17:53:22 INFO HiveMetaStore: 29: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
  75. 16/10/12 17:53:22 INFO ObjectStore: ObjectStore, initialize called
  76. 16/10/12 17:53:22 INFO BlockManagerMaster: Removed 15 successfully in removeExecutor
  77. 16/10/12 17:53:22 INFO DAGScheduler: Executor lost: 16 (epoch 4)
  78. 16/10/12 17:53:22 INFO BlockManagerMasterEndpoint: Trying to remove executor 16 from BlockManagerMaster.
  79. 16/10/12 17:53:22 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(16, node18.it.leap.com, 41766)
  80. 16/10/12 17:53:22 INFO BlockManagerMaster: Removed 16 successfully in removeExecutor
  81. 16/10/12 17:53:22 INFO HiveMetaStore: 1: get_databases: *
  82. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  83. 16/10/12 17:53:22 INFO HiveMetaStore: 10: get_databases: *
  84. 16/10/12 17:53:22 INFO HiveMetaStore: 17: get_databases: *
  85. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  86. 16/10/12 17:53:22 INFO HiveMetaStore: 21: get_databases: *
  87. 16/10/12 17:53:22 INFO HiveMetaStore: 14: get_databases: *
  88. 16/10/12 17:53:22 INFO HiveMetaStore: 19: get_databases: *
  89. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  90. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  91. 16/10/12 17:53:22 INFO HiveMetaStore: 27: get_databases: *
  92. 16/10/12 17:53:22 INFO HiveMetaStore: 20: get_databases: *
  93. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  94. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  95. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  96. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  97. 16/10/12 17:53:22 INFO DAGScheduler: Host added was in lost list earlier: node18.it.leap.com
  98. Exception in thread "broadcast-exchange-0" 16/10/12 17:53:22 ERROR TThreadPoolServer: ExecutorService threw error: java.lang.OutOfMemoryError: GC overhead limit exceeded
  99. java.lang.OutOfMemoryError: GC overhead limit exceeded
  100. java.lang.OutOfMemoryError: GC overhead limit exceeded
  101. 16/10/12 17:53:22 ERROR ThriftCLIService: Error starting HiveServer2: could not start ThriftBinaryCLIService
  102. java.lang.OutOfMemoryError: GC overhead limit exceeded
  103. Exception in thread "HiveServer2-Handler-Pool: Thread-159" java.lang.OutOfMemoryError: GC overhead limit exceeded
  104. at java.lang.StringBuilder.toString(StringBuilder.java:405)
  105. at javax.security.sasl.Sasl.createSaslServer(Sasl.java:499)
  106. at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:140)
  107. at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
  108. at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
  109. at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
  110. at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
  111. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  112. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  113. at java.lang.Thread.run(Thread.java:745)
  114. Exception in thread "HiveServer2-Handler-Pool: Thread-149" java.lang.OutOfMemoryError: GC overhead limit exceeded
  115. 16/10/12 17:53:22 INFO HiveMetaStore: 26: get_databases: *
  116. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  117. 16/10/12 17:53:22 INFO HiveMetaStore: 26: get_tables: db=default pat=PROBABLYNOT
  118. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_tables: db=default pat=PROBABLYNOT
  119. 16/10/12 17:53:22 INFO HiveServer2: Shutting down HiveServer2
  120. 16/10/12 17:53:22 INFO HiveMetaStore: 20: get_databases: *
  121. 16/10/12 17:53:22 INFO HiveMetaStore: 10: get_databases: *
  122. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  123. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  124. 16/10/12 17:53:22 INFO HiveMetaStore: 10: get_tables: db=default pat=PROBABLYNOT
  125. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_tables: db=default pat=PROBABLYNOT
  126. 16/10/12 17:53:22 INFO ServerConnector: Stopped ServerConnector@624d190f{HTTP/1.1}{0.0.0.0:4040}
  127. 16/10/12 17:53:22 INFO HiveMetaStore: 15: get_databases: *
  128. 16/10/12 17:53:22 INFO HiveMetaStore: 4: get_databases: *
  129. 16/10/12 17:53:22 INFO HiveMetaStore: 16: get_databases: *
  130. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  131. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  132. 16/10/12 17:53:22 INFO HiveMetaStore: 18: get_databases: *
  133. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  134. 16/10/12 17:53:22 INFO BlockManagerMasterEndpoint: Registering block manager node18.it.leap.com:62536 with 408.9 MB RAM, BlockManagerId(15, node18.it.leap.com, 62536)
  135. 16/10/12 17:53:22 INFO HiveMetaStore: 4: get_tables: db=default pat=PROBABLYNOT
  136. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_tables: db=default pat=PROBABLYNOT
  137. 16/10/12 17:53:22 INFO HiveMetaStore: 16: get_tables: db=default pat=PROBABLYNOT
  138. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_tables: db=default pat=PROBABLYNOT
  139. 16/10/12 17:53:22 INFO HiveMetaStore: 18: get_tables: db=default pat=PROBABLYNOT
  140. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_tables: db=default pat=PROBABLYNOT
  141. 16/10/12 17:53:22 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@5ae5d5ca{/stages/stage/kill,null,UNAVAILABLE}
  142. 16/10/12 17:53:22 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@cb37961{/api,null,UNAVAILABLE}
  143. 16/10/12 17:53:22 INFO HiveMetaStore: 4: get_multi_table : db=default tbls=
  144. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
  145. 16/10/12 17:53:22 INFO HiveMetaStore: 16: get_multi_table : db=default tbls=
  146. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
  147. 16/10/12 17:53:22 INFO HiveMetaStore: 18: get_multi_table : db=default tbls=
  148. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
  149. 16/10/12 17:53:22 INFO BlockManagerMasterEndpoint: Registering block manager node18.it.leap.com:41766 with 408.9 MB RAM, BlockManagerId(16, node18.it.leap.com, 41766)
  150. 16/10/12 17:53:22 INFO HiveMetaStore: 28: get_databases: *
  151. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  152. 16/10/12 17:53:22 INFO HiveMetaStore: 21: get_databases: *
  153. 16/10/12 17:53:22 INFO HiveMetaStore: 3: get_databases: *
  154. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  155. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  156. 16/10/12 17:53:22 INFO ThriftCLIService: Thrift server has stopped
  157. 16/10/12 17:53:22 INFO AbstractService: Service:ThriftBinaryCLIService is stopped.
  158. 16/10/12 17:53:22 INFO HiveMetaStore: 17: get_databases: *
  159. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  160. 16/10/12 17:53:22 INFO HiveMetaStore: 26: get_multi_table : db=default tbls=
  161. 16/10/12 17:53:22 INFO HiveMetaStore: 27: get_databases: *
  162. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  163. 16/10/12 17:53:22 INFO ThriftCLIService: Session disconnected without closing properly, close it now
  164. 16/10/12 17:53:22 INFO HiveMetaStore: 20: get_tables: db=default pat=PROBABLYNOT
  165. 16/10/12 17:53:22 INFO HiveMetaStore: 1: get_databases: *
  166. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_tables: db=default pat=PROBABLYNOT
  167. 16/10/12 17:53:22 INFO HiveMetaStore: 14: get_databases: *
  168. 16/10/12 17:53:22 INFO HiveMetaStore: 19: get_databases: *
  169. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  170. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  171. 16/10/12 17:53:22 INFO HiveMetaStore: 17: get_tables: db=default pat=PROBABLYNOT
  172. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_tables: db=default pat=PROBABLYNOT
  173. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  174. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
  175. 16/10/12 17:53:22 INFO AbstractService: Service:OperationManager is stopped.
  176. 16/10/12 17:53:22 INFO ThriftCLIService: Session disconnected without closing properly, close it now
  177. 16/10/12 17:53:22 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on node18.it.leap.com:62536 (size: 27.0 KB, free: 408.9 MB)
  178. 16/10/12 17:53:22 INFO HiveMetaStore: 10: get_multi_table : db=default tbls=
  179. 16/10/12 17:53:22 INFO Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
  180. 16/10/12 17:53:22 INFO HiveMetaStore: 3: get_tables: db=default pat=PROBABLYNOT
  181. 16/10/12 17:53:22 INFO HiveMetaStore: 21: get_tables: db=default pat=PROBABLYNOT
  182. 16/10/12 17:53:22 INFO HiveMetaStore: 28: get_tables: db=default pat=PROBABLYNOT
  183. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_tables: db=default pat=PROBABLYNOT
  184. 16/10/12 17:53:22 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@61ad39{/,null,UNAVAILABLE}
  185. 16/10/12 17:53:22 INFO HiveMetaStore: 8: get_databases: *
  186. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  187. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  188. 16/10/12 17:53:22 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
  189. 16/10/12 17:53:22 INFO HiveMetaStore: 17: get_multi_table : db=default tbls=
  190. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_tables: db=default pat=PROBABLYNOT
  191. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
  192. 16/10/12 17:53:22 INFO HiveMetaStore: 21: get_multi_table : db=default tbls=
  193. 16/10/12 17:53:22 INFO HiveMetaStore: 15: get_tables: db=default pat=PROBABLYNOT
  194. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_tables: db=default pat=PROBABLYNOT
  195. 16/10/12 17:53:22 INFO HiveMetaStore: 28: get_multi_table : db=default tbls=
  196. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
  197. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_tables: db=default pat=PROBABLYNOT
  198. 16/10/12 17:53:22 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on node18.it.leap.com:41766 (size: 27.0 KB, free: 408.9 MB)
  199. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
  200. 16/10/12 17:53:22 INFO HiveMetaStore: 20: get_multi_table : db=default tbls=
  201. 16/10/12 17:53:22 INFO AbstractService: Service:SessionManager is stopped.
  202. 16/10/12 17:53:22 INFO ThriftCLIService: Session disconnected without closing properly, close it now
  203. 16/10/12 17:53:22 INFO HiveMetaStore: 19: get_tables: db=default pat=PROBABLYNOT
  204. 16/10/12 17:53:22 INFO HiveMetaStore: 14: get_tables: db=default pat=PROBABLYNOT
  205. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_tables: db=default pat=PROBABLYNOT
  206. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_tables: db=default pat=PROBABLYNOT
  207. 16/10/12 17:53:22 INFO HiveMetaStore: 15: get_multi_table : db=default tbls=
  208. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
  209. 16/10/12 17:53:22 INFO ThriftCLIService: Session disconnected without closing properly, close it now
  210. 16/10/12 17:53:22 INFO HiveMetaStore: 3: get_multi_table : db=default tbls=
  211. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
  212. 16/10/12 17:53:22 INFO HiveMetaStore: 19: get_multi_table : db=default tbls=
  213. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
  214. 16/10/12 17:53:22 INFO HiveMetaStore: 14: get_multi_table : db=default tbls=
  215. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
  216. 16/10/12 17:53:22 INFO ThriftCLIService: Session disconnected without closing properly, close it now
  217. 16/10/12 17:53:22 INFO ThriftCLIService: Session disconnected without closing properly, close it now
  218. 16/10/12 17:53:22 INFO ThriftCLIService: Session disconnected without closing properly, close it now
  219. 16/10/12 17:53:22 INFO ThriftCLIService: Session disconnected without closing properly, close it now
  220. 16/10/12 17:53:22 INFO HiveMetaStore: 27: get_tables: db=default pat=PROBABLYNOT
  221. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
  222. 16/10/12 17:53:22 INFO ThriftCLIService: Session disconnected without closing properly, close it now
  223. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
  224. 16/10/12 17:53:22 INFO ObjectStore: Initialized ObjectStore
  225. 16/10/12 17:53:22 INFO HiveMetaStore: 1: get_tables: db=default pat=PROBABLYNOT
  226. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_tables: db=default pat=PROBABLYNOT
  227. 16/10/12 17:53:22 INFO HiveMetaStore: 1: get_multi_table : db=default tbls=
  228. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
  229. 16/10/12 17:53:22 INFO HiveMetaStore: 29: get_databases: *
  230. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_databases: *
  231. 16/10/12 17:53:22 INFO ThriftCLIService: Session disconnected without closing properly, close it now
  232. 16/10/12 17:53:22 INFO ThriftCLIService: Session disconnected without closing properly, close it now
  233. 16/10/12 17:53:22 INFO HiveMetaStore: 29: get_tables: db=default pat=PROBABLYNOT
  234. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_tables: db=default pat=PROBABLYNOT
  235. 16/10/12 17:53:22 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@644b20f8{/static,null,UNAVAILABLE}
  236. 16/10/12 17:53:22 INFO ThriftCLIService: Session disconnected without closing properly, close it now
  237. 16/10/12 17:53:22 INFO HiveMetaStore: 8: get_tables: db=default pat=PROBABLYNOT
  238. 16/10/12 17:53:22 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on node18.it.leap.com:62536 (size: 27.0 KB, free: 408.9 MB)
  239. 16/10/12 17:53:22 INFO ThriftCLIService: Session disconnected without closing properly, close it now
  240. 16/10/12 17:53:22 INFO ThriftCLIService: Session disconnected without closing properly, close it now
  241. 16/10/12 17:53:23 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on node18.it.leap.com:41766 (size: 27.0 KB, free: 408.9 MB)
  242. 16/10/12 17:53:22 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_tables: db=default pat=PROBABLYNOT
  243. 16/10/12 17:53:23 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_tables: db=default pat=PROBABLYNOT
  244. 16/10/12 17:53:22 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@1355c642{/executors/threadDump/json,null,UNAVAILABLE}
  245. 16/10/12 17:53:23 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@7aa3bb2d{/executors/threadDump,null,UNAVAILABLE}
  246. 16/10/12 17:53:23 INFO ExecutorAllocationManager: Removing executor 15 because it has been idle for 60 seconds (new desired total will be 1)
  247. 16/10/12 17:53:23 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@34659b02{/executors/json,null,UNAVAILABLE}
  248. 16/10/12 17:53:23 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@22c11d39{/executors,null,UNAVAILABLE}
  249. 16/10/12 17:53:23 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@358aaa85{/environment/json,null,UNAVAILABLE}
  250. 16/10/12 17:53:23 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@6f0aea07{/environment,null,UNAVAILABLE}
  251. 16/10/12 17:53:23 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@5163cc8f{/storage/rdd/json,null,UNAVAILABLE}
  252. 16/10/12 17:53:23 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@7c8c17a2{/storage/rdd,null,UNAVAILABLE}
  253. 16/10/12 17:53:23 INFO HiveMetaStore: 29: get_multi_table : db=default tbls=
  254. 16/10/12 17:53:23 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@14bed4e9{/storage/json,null,UNAVAILABLE}
  255. 16/10/12 17:53:23 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@753e481{/storage,null,UNAVAILABLE}
  256. 16/10/12 17:53:23 INFO HiveMetaStore: 8: get_multi_table : db=default tbls=
  257. 16/10/12 17:53:23 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@738e5c64{/stages/pool/json,null,UNAVAILABLE}
  258. 16/10/12 17:53:23 INFO HiveMetaStore: 27: get_multi_table : db=default tbls=
  259. 16/10/12 17:53:23 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@7b29c61f{/stages/pool,null,UNAVAILABLE}
  260. 16/10/12 17:53:23 INFO YarnClientSchedulerBackend: Requesting to kill executor(s) 15
  261. 16/10/12 17:53:23 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@61ae508a{/stages/stage/json,null,UNAVAILABLE}
  262. 16/10/12 17:53:23 INFO YarnClientSchedulerBackend: Requesting to kill executor(s) 16
  263. 16/10/12 17:53:23 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@4371c2d5{/stages/stage,null,UNAVAILABLE}
  264. 16/10/12 17:53:23 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@2a2a5dc3{/stages/json,null,UNAVAILABLE}
  265. 16/10/12 17:53:23 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@2943ebbf{/stages,null,UNAVAILABLE}
  266. 16/10/12 17:53:23 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
  267. 16/10/12 17:53:23 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@1b7b0401{/jobs/job/json,null,UNAVAILABLE}
  268. 16/10/12 17:53:23 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
  269. 16/10/12 17:53:23 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@44c076b{/jobs/job,null,UNAVAILABLE}
  270. 16/10/12 17:53:23 INFO audit: ugi=hive ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
  271. 16/10/12 17:53:23 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@760dcdf2{/jobs/json,null,UNAVAILABLE}
  272. 16/10/12 17:53:23 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@725819eb{/jobs,null,UNAVAILABLE}
  273. 16/10/12 17:53:23 INFO SparkUI: Stopped Spark web UI at http://10.120.193.4:4040
  274. 16/10/12 17:53:23 INFO ThriftCLIService: Session disconnected without closing properly, close it now
  275. 16/10/12 17:53:23 INFO ThriftCLIService: Session disconnected without closing properly, close it now
  276. 16/10/12 17:53:23 INFO ThriftCLIService: Session disconnected without closing properly, close it now
  277. 16/10/12 17:53:23 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on node18.it.leap.com:62536 (size: 27.0 KB, free: 408.9 MB)
  278. 16/10/12 17:53:23 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on node18.it.leap.com:41766 (size: 27.0 KB, free: 408.9 MB)
  279. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockManagerAdded(1476266003023,BlockManagerId(15, node18.it.leap.com, 62536),428762726)
  280. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockManagerAdded(1476266003026,BlockManagerId(16, node18.it.leap.com, 41766),428762726)
  281. 16/10/12 17:53:23 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on node18.it.leap.com:62536 (size: 27.0 KB, free: 408.9 MB)
  282. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockUpdated(BlockUpdatedInfo(BlockManagerId(15, node18.it.leap.com, 62536),broadcast_10_piece0,StorageLevel(memory, 1 replicas),27695,0))
  283. 16/10/12 17:53:23 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on node18.it.leap.com:41766 (size: 27.0 KB, free: 408.9 MB)
  284. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockUpdated(BlockUpdatedInfo(BlockManagerId(16, node18.it.leap.com, 41766),broadcast_10_piece0,StorageLevel(memory, 1 replicas),27695,0))
  285. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockManagerAdded(1476266003032,BlockManagerId(15, node18.it.leap.com, 62536),428762726)
  286. 16/10/12 17:53:23 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on node18.it.leap.com:62536 (size: 27.0 KB, free: 408.9 MB)
  287. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockUpdated(BlockUpdatedInfo(BlockManagerId(15, node18.it.leap.com, 62536),broadcast_10_piece0,StorageLevel(memory, 1 replicas),27695,0))
  288. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockManagerAdded(1476266003035,BlockManagerId(16, node18.it.leap.com, 41766),428762726)
  289. 16/10/12 17:53:23 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on node18.it.leap.com:41766 (size: 27.0 KB, free: 408.9 MB)
  290. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockUpdated(BlockUpdatedInfo(BlockManagerId(16, node18.it.leap.com, 41766),broadcast_10_piece0,StorageLevel(memory, 1 replicas),27695,0))
  291. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockManagerAdded(1476266003040,BlockManagerId(15, node18.it.leap.com, 62536),428762726)
  292. 16/10/12 17:53:23 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on node18.it.leap.com:62536 (size: 27.0 KB, free: 408.9 MB)
  293. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockUpdated(BlockUpdatedInfo(BlockManagerId(15, node18.it.leap.com, 62536),broadcast_10_piece0,StorageLevel(memory, 1 replicas),27695,0))
  294. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockManagerAdded(1476266003044,BlockManagerId(16, node18.it.leap.com, 41766),428762726)
  295. 16/10/12 17:53:23 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on node18.it.leap.com:41766 (size: 27.0 KB, free: 408.9 MB)
  296. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockUpdated(BlockUpdatedInfo(BlockManagerId(16, node18.it.leap.com, 41766),broadcast_10_piece0,StorageLevel(memory, 1 replicas),27695,0))
  297. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockManagerAdded(1476266003048,BlockManagerId(15, node18.it.leap.com, 62536),428762726)
  298. 16/10/12 17:53:23 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on node18.it.leap.com:62536 (size: 27.0 KB, free: 408.9 MB)
  299. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockUpdated(BlockUpdatedInfo(BlockManagerId(15, node18.it.leap.com, 62536),broadcast_10_piece0,StorageLevel(memory, 1 replicas),27695,0))
  300. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockManagerAdded(1476266003052,BlockManagerId(16, node18.it.leap.com, 41766),428762726)
  301. 16/10/12 17:53:23 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on node18.it.leap.com:41766 (size: 27.0 KB, free: 408.9 MB)
  302. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockUpdated(BlockUpdatedInfo(BlockManagerId(16, node18.it.leap.com, 41766),broadcast_10_piece0,StorageLevel(memory, 1 replicas),27695,0))
  303. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockManagerAdded(1476266003056,BlockManagerId(15, node18.it.leap.com, 62536),428762726)
  304. 16/10/12 17:53:23 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on node18.it.leap.com:62536 (size: 27.0 KB, free: 408.9 MB)
  305. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockUpdated(BlockUpdatedInfo(BlockManagerId(15, node18.it.leap.com, 62536),broadcast_10_piece0,StorageLevel(memory, 1 replicas),27695,0))
  306. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockManagerAdded(1476266003060,BlockManagerId(16, node18.it.leap.com, 41766),428762726)
  307. 16/10/12 17:53:23 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on node18.it.leap.com:41766 (size: 27.0 KB, free: 408.9 MB)
  308. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockUpdated(BlockUpdatedInfo(BlockManagerId(16, node18.it.leap.com, 41766),broadcast_10_piece0,StorageLevel(memory, 1 replicas),27695,0))
  309. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockManagerAdded(1476266003064,BlockManagerId(15, node18.it.leap.com, 62536),428762726)
  310. 16/10/12 17:53:23 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on node18.it.leap.com:62536 (size: 27.0 KB, free: 408.9 MB)
  311. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockUpdated(BlockUpdatedInfo(BlockManagerId(15, node18.it.leap.com, 62536),broadcast_10_piece0,StorageLevel(memory, 1 replicas),27695,0))
  312. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockManagerAdded(1476266003068,BlockManagerId(16, node18.it.leap.com, 41766),428762726)
  313. 16/10/12 17:53:23 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on node18.it.leap.com:41766 (size: 27.0 KB, free: 408.9 MB)
  314. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockUpdated(BlockUpdatedInfo(BlockManagerId(16, node18.it.leap.com, 41766),broadcast_10_piece0,StorageLevel(memory, 1 replicas),27695,0))
  315. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockManagerAdded(1476266003072,BlockManagerId(15, node18.it.leap.com, 62536),428762726)
  316. 16/10/12 17:53:23 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on node18.it.leap.com:62536 (size: 27.0 KB, free: 408.9 MB)
  317. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockUpdated(BlockUpdatedInfo(BlockManagerId(15, node18.it.leap.com, 62536),broadcast_10_piece0,StorageLevel(memory, 1 replicas),27695,0))
  318. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockManagerAdded(1476266003077,BlockManagerId(16, node18.it.leap.com, 41766),428762726)
  319. 16/10/12 17:53:23 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on node18.it.leap.com:41766 (size: 27.0 KB, free: 408.9 MB)
  320. 16/10/12 17:53:23 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockUpdated(BlockUpdatedInfo(BlockManagerId(16, node18.it.leap.com, 41766),broadcast_10_piece0,StorageLevel(memory, 1 replicas),27695,0))
  321. 16/10/12 17:53:23 ERROR SparkExecuteStatementOperation: Error executing query, currentState CLOSED,
  322. org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
  323. Exchange SinglePartition
  324. +- *LocalLimit 1000
  325. +- *HashAggregate(keys=[PART_NUMBER#649, PRODUCT_LOB#650, PRODUCT_BRAND#651, PRODUCT_FAMILY#652, PRODUCT_SUB_FAMILY#653, PRODUCT_CD#654], functions=[], output=[PART_NUMBER#649, PRODUCT_LOB#650, PRODUCT_BRAND#651, PRODUCT_FAMILY#652, PRODUCT_SUB_FAMILY#653, PRODUCT_CD#654])
  326. +- Exchange hashpartitioning(PART_NUMBER#649, PRODUCT_LOB#650, PRODUCT_BRAND#651, PRODUCT_FAMILY#652, PRODUCT_SUB_FAMILY#653, PRODUCT_CD#654, 200)
  327. +- *HashAggregate(keys=[PART_NUMBER#649, PRODUCT_LOB#650, PRODUCT_BRAND#651, PRODUCT_FAMILY#652, PRODUCT_SUB_FAMILY#653, PRODUCT_CD#654], functions=[], output=[PART_NUMBER#649, PRODUCT_LOB#650, PRODUCT_BRAND#651, PRODUCT_FAMILY#652, PRODUCT_SUB_FAMILY#653, PRODUCT_CD#654])
  328. +- *Project [CASE WHEN (substring(ZMAT#982, 1, 10) = 0000000000) THEN substring(ZMAT#982, 11, 8) WHEN (substring(ZMAT#982, 1, 9) = 000000000) THEN substring(ZMAT#982, 10, 9) WHEN (substring(ZMAT#982, 1, 5) = 00000) THEN substring(ZMAT#982, 6, 7) ELSE ZMAT#982 END AS PART_NUMBER#649, ZPRODH_D#1050 AS PRODUCT_LOB#650, ZBRAND_D#1040 AS PRODUCT_BRAND#651, ZSERIES_D#1044 AS PRODUCT_FAMILY#652, ZSUBSER_D#1046 AS PRODUCT_SUB_FAMILY#653, ZMAT#982 AS PRODUCT_CD#654]
  329. +- *BroadcastHashJoin [ZMAT#982], [MATERIAL#1063], Inner, BuildLeft
  330. :- BroadcastExchange HashedRelationBroadcastMode(List(input[0, string, true]))
  331. : +- *Project [zmat#982, zbrand_d#1040, zseries_d#1044, zsubser_d#1046, zprodh_d#1050]
  332. : +- *Filter isnotnull(ZMAT#982)
  333. : +- *BatchedScan parquet idl_bw.zoh_mds32_idl_p[zmat#982,zbrand_d#1040,zseries_d#1044,zsubser_d#1046,zprodh_d#1050] Format: ParquetFormat, InputPaths: hdfs://node4.it.leap.com:8020/apps/hive/warehouse/idl_bw.db/zoh_mds32_idl_p, PushedFilters: [IsNotNull(zmat)], ReadSchema: struct<zmat:string,zbrand_d:string,zseries_d:string,zsubser_d:string,zprodh_d:string>
  334. +- *Project [material#1063]
  335. +- *Filter (SALESORG#1064 IN (AR10,BR10,CA10,CO10,MX10,US10) && isnotnull(MATERIAL#1063))
  336. +- *BatchedScan parquet idl_bw.zoh_mms05_idl_p[material#1063,salesorg#1064] Format: ParquetFormat, InputPaths: hdfs://node4.it.leap.com:8020/apps/hive/warehouse/idl_bw.db/zoh_mms05_idl_p, PushedFilters: [In(salesorg, [AR10,BR10,CA10,CO10,MX10,US10], IsNotNull(material)], ReadSchema: struct<material:string,salesorg:string>
  337. at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:50)
  338. at org.apache.spark.sql.execution.exchange.ShuffleExchange.doExecute(ShuffleExchange.scala:113)
  339. at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
  340. at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
  341. at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
  342. at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  343. at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
  344. at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
  345. at org.apache.spark.sql.execution.InputAdapter.inputRDDs(WholeStageCodegenExec.scala:233)
  346. at org.apache.spark.sql.execution.BaseLimitExec$class.inputRDDs(limit.scala:63)
  347. at org.apache.spark.sql.execution.GlobalLimitExec.inputRDDs(limit.scala:103)
  348. at org.apache.spark.sql.execution.ProjectExec.inputRDDs(basicPhysicalOperators.scala:36)
  349. at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:361)
  350. at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
  351. at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
  352. at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
  353. at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  354. at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
  355. at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
  356. at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult$lzycompute(InsertIntoHiveTable.scala:237)
  357. at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult(InsertIntoHiveTable.scala:142)
  358. at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.doExecute(InsertIntoHiveTable.scala:313)
  359. at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
  360. at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
  361. at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
  362. at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  363. at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
  364. at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
  365. at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
  366. at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
  367. at org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand.run(CreateHiveTableAsSelectCommand.scala:94)
  368. at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:60)
  369. at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:58)
  370. at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
  371. at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
  372. at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
  373. at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
  374. at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  375. at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
  376. at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
  377. at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
  378. at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
  379. at org.apache.spark.sql.Dataset.<init>(Dataset.scala:186)
  380. at org.apache.spark.sql.Dataset.<init>(Dataset.scala:167)
  381. at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
  382. at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:582)
  383. at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:682)
  384. at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:213)
  385. at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:157)
  386. at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:154)
  387. at java.security.AccessController.doPrivileged(Native Method)
  388. at javax.security.auth.Subject.doAs(Subject.java:415)
  389. at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
  390. at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:167)
  391. at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
  392. at java.util.concurrent.FutureTask.run(FutureTask.java:262)
  393. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  394. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  395. at java.lang.Thread.run(Thread.java:745)
  396. Caused by: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
  397. Exchange hashpartitioning(PART_NUMBER#649, PRODUCT_LOB#650, PRODUCT_BRAND#651, PRODUCT_FAMILY#652, PRODUCT_SUB_FAMILY#653, PRODUCT_CD#654, 200)
  398. +- *HashAggregate(keys=[PART_NUMBER#649, PRODUCT_LOB#650, PRODUCT_BRAND#651, PRODUCT_FAMILY#652, PRODUCT_SUB_FAMILY#653, PRODUCT_CD#654], functions=[], output=[PART_NUMBER#649, PRODUCT_LOB#650, PRODUCT_BRAND#651, PRODUCT_FAMILY#652, PRODUCT_SUB_FAMILY#653, PRODUCT_CD#654])
  399. +- *Project [CASE WHEN (substring(ZMAT#982, 1, 10) = 0000000000) THEN substring(ZMAT#982, 11, 8) WHEN (substring(ZMAT#982, 1, 9) = 000000000) THEN substring(ZMAT#982, 10, 9) WHEN (substring(ZMAT#982, 1, 5) = 00000) THEN substring(ZMAT#982, 6, 7) ELSE ZMAT#982 END AS PART_NUMBER#649, ZPRODH_D#1050 AS PRODUCT_LOB#650, ZBRAND_D#1040 AS PRODUCT_BRAND#651, ZSERIES_D#1044 AS PRODUCT_FAMILY#652, ZSUBSER_D#1046 AS PRODUCT_SUB_FAMILY#653, ZMAT#982 AS PRODUCT_CD#654]
  400. +- *BroadcastHashJoin [ZMAT#982], [MATERIAL#1063], Inner, BuildLeft
  401. :- BroadcastExchange HashedRelationBroadcastMode(List(input[0, string, true]))
  402. : +- *Project [zmat#982, zbrand_d#1040, zseries_d#1044, zsubser_d#1046, zprodh_d#1050]
  403. : +- *Filter isnotnull(ZMAT#982)
  404. : +- *BatchedScan parquet idl_bw.zoh_mds32_idl_p[zmat#982,zbrand_d#1040,zseries_d#1044,zsubser_d#1046,zprodh_d#1050] Format: ParquetFormat, InputPaths: hdfs://node4.it.leap.com:8020/apps/hive/warehouse/idl_bw.db/zoh_mds32_idl_p, PushedFilters: [IsNotNull(zmat)], ReadSchema: struct<zmat:string,zbrand_d:string,zseries_d:string,zsubser_d:string,zprodh_d:string>
  405. +- *Project [material#1063]
  406. +- *Filter (SALESORG#1064 IN (AR10,BR10,CA10,CO10,MX10,US10) && isnotnull(MATERIAL#1063))
  407. +- *BatchedScan parquet idl_bw.zoh_mms05_idl_p[material#1063,salesorg#1064] Format: ParquetFormat, InputPaths: hdfs://node4.it.leap.com:8020/apps/hive/warehouse/idl_bw.db/zoh_mms05_idl_p, PushedFilters: [In(salesorg, [AR10,BR10,CA10,CO10,MX10,US10], IsNotNull(material)], ReadSchema: struct<material:string,salesorg:string>
  408. at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:50)
  409. at org.apache.spark.sql.execution.exchange.ShuffleExchange.doExecute(ShuffleExchange.scala:113)
  410. at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
  411. at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
  412. at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
  413. at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  414. at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
  415. at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
  416. at org.apache.spark.sql.execution.InputAdapter.inputRDDs(WholeStageCodegenExec.scala:233)
  417. at org.apache.spark.sql.execution.aggregate.HashAggregateExec.inputRDDs(HashAggregateExec.scala:138)
  418. at org.apache.spark.sql.execution.BaseLimitExec$class.inputRDDs(limit.scala:63)
  419. at org.apache.spark.sql.execution.LocalLimitExec.inputRDDs(limit.scala:96)
  420. at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:361)
  421. at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
  422. at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
  423. at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
  424. at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  425. at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
  426. at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
  427. at org.apache.spark.sql.execution.exchange.ShuffleExchange.prepareShuffleDependency(ShuffleExchange.scala:86)
  428. at org.apache.spark.sql.execution.exchange.ShuffleExchange$$anonfun$doExecute$1.apply(ShuffleExchange.scala:122)
  429. at org.apache.spark.sql.execution.exchange.ShuffleExchange$$anonfun$doExecute$1.apply(ShuffleExchange.scala:113)
  430. at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:49)
  431. ... 58 more
  432. Caused by: java.lang.InterruptedException
  433. at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1038)
  434. at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
  435. at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
  436. at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
  437. at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
  438. at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
  439. at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
  440. at scala.concurrent.Await$.result(package.scala:190)
  441. at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:190)
  442. at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.doExecuteBroadcast(BroadcastExchangeExec.scala:120)
  443. at org.apache.spark.sql.execution.InputAdapter.doExecuteBroadcast(WholeStageCodegenExec.scala:229)
  444. at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:125)
  445. at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:125)
  446. at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
  447. at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  448. at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
  449. at org.apache.spark.sql.execution.SparkPlan.executeBroadcast(SparkPlan.scala:124)
  450. at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.prepareBroadcast(BroadcastHashJoinExec.scala:98)
  451. at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.codegenInner(BroadcastHashJoinExec.scala:197)
  452. at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.doConsume(BroadcastHashJoinExec.scala:82)
  453. at org.apache.spark.sql.execution.CodegenSupport$class.consume(WholeStageCodegenExec.scala:153)
  454. at org.apache.spark.sql.execution.ProjectExec.consume(basicPhysicalOperators.scala:30)
  455. at org.apache.spark.sql.execution.ProjectExec.doConsume(basicPhysicalOperators.scala:62)
  456. at org.apache.spark.sql.execution.CodegenSupport$class.consume(WholeStageCodegenExec.scala:153)
  457. at org.apache.spark.sql.execution.FilterExec.consume(basicPhysicalOperators.scala:79)
  458. at org.apache.spark.sql.execution.FilterExec.doConsume(basicPhysicalOperators.scala:194)
  459. at org.apache.spark.sql.execution.CodegenSupport$class.consume(WholeStageCodegenExec.scala:153)
  460. at org.apache.spark.sql.execution.BatchedDataSourceScanExec.consume(ExistingRDD.scala:225)
  461. at org.apache.spark.sql.execution.BatchedDataSourceScanExec.doProduce(ExistingRDD.scala:328)
  462. at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83)
  463. at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:78)
  464. at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
  465. at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  466. at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
  467. at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:78)
  468. at org.apache.spark.sql.execution.BatchedDataSourceScanExec.produce(ExistingRDD.scala:225)
  469. at org.apache.spark.sql.execution.FilterExec.doProduce(basicPhysicalOperators.scala:113)
  470. at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83)
  471. at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:78)
  472. at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
  473. at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  474. at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
  475. at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:78)
  476. at org.apache.spark.sql.execution.FilterExec.produce(basicPhysicalOperators.scala:79)
  477. at org.apache.spark.sql.execution.ProjectExec.doProduce(basicPhysicalOperators.scala:40)
  478. at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83)
  479. at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:78)
  480. at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
  481. at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  482. at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
  483. at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:78)
  484. at org.apache.spark.sql.execution.ProjectExec.produce(basicPhysicalOperators.scala:30)
  485. at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.doProduce(BroadcastHashJoinExec.scala:77)
  486. at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83)
  487. at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:78)
  488. at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
  489. at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  490. at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
  491. at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:78)
  492. at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.produce(BroadcastHashJoinExec.scala:38)
  493. at org.apache.spark.sql.execution.ProjectExec.doProduce(basicPhysicalOperators.scala:40)
  494. at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83)
  495. at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:78)
  496. at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
  497. at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  498. at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
  499. at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:78)
  500. at org.apache.spark.sql.execution.ProjectExec.produce(basicPhysicalOperators.scala:30)
  501. at org.apache.spark.sql.execution.aggregate.HashAggregateExec.doProduceWithKeys(HashAggregateExec.scala:526)
  502. at org.apache.spark.sql.execution.aggregate.HashAggregateExec.doProduce(HashAggregateExec.scala:145)
  503. at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83)
  504. at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:78)
  505. at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
  506. at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  507. at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
  508. at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:78)
  509. at org.apache.spark.sql.execution.aggregate.HashAggregateExec.produce(HashAggregateExec.scala:37)
  510. at org.apache.spark.sql.execution.WholeStageCodegenExec.doCodeGen(WholeStageCodegenExec.scala:309)
  511. at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:347)
  512. at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
  513. at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
  514. at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
  515. at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  516. at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
  517. at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
  518. at org.apache.spark.sql.execution.exchange.ShuffleExchange.prepareShuffleDependency(ShuffleExchange.scala:86)
  519. at org.apache.spark.sql.execution.exchange.ShuffleExchange$$anonfun$doExecute$1.apply(ShuffleExchange.scala:122)
  520. at org.apache.spark.sql.execution.exchange.ShuffleExchange$$anonfun$doExecute$1.apply(ShuffleExchange.scala:113)
  521. at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:49)
  522. ... 80 more
  523. 16/10/12 17:53:23 ERROR SparkExecuteStatementOperation: Error running hive query:
  524. org.apache.hive.service.cli.HiveSQLException: Illegal Operation state transition from CLOSED to ERROR
  525. at org.apache.hive.service.cli.OperationState.validateTransition(OperationState.java:92)
  526. at org.apache.hive.service.cli.OperationState.validateTransition(OperationState.java:98)
  527. at org.apache.hive.service.cli.operation.Operation.setState(Operation.java:126)
  528. at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:245)
  529. at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:157)
  530. at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:154)
  531. at java.security.AccessController.doPrivileged(Native Method)
  532. at javax.security.auth.Subject.doAs(Subject.java:415)
  533. at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
  534. at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:167)
  535. at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
  536. at java.util.concurrent.FutureTask.run(FutureTask.java:262)
  537. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  538. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  539. at java.lang.Thread.run(Thread.java:745)
  540. 16/10/12 17:53:23 INFO AbstractService: Service:CLIService is stopped.
  541. 16/10/12 17:53:23 INFO AbstractService: Service:HiveServer2 is stopped.

             

The first attempt was to edit spark-env.sh and change the following parameters:

             SPARK_EXECUTOR_CORES="4"

             SPARK_EXECUTOR_MEMORY="4G" 

     SPARK_DRIVER_MEMORY="20G"

Re-running the test SQL still produced the same exception.

Checking spark-thrift-sparkconf.conf showed that executor allocation was dynamic, with the following settings:

            spark.dynamicAllocation.enabled true
            spark.dynamicAllocation.initialExecutors 0
            spark.dynamicAllocation.maxExecutors 200
            spark.dynamicAllocation.minExecutors 0

To rule this out, dynamic allocation was disabled in favor of a static configuration: the dynamic-allocation settings above were removed and the following static settings were added:

            spark.executor.memory 10G
            spark.executor.instances 20
            spark.executor.cores 2
            spark.shuffle.service.enabled false

Then restart the Spark Thrift Server: ./sbin/start-thriftserver.sh --properties-file ../conf/spark-thrift-sparkconf.conf

Connecting to the Thrift Server with beeline -u "<JDBC connection string>" and running the test SQL again still failed with the same exception.
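
(The connection string is omitted in the original; for reference, a Spark Thrift Server JDBC URL typically looks like the line below, where the host name is a placeholder and 10000 is the default Thrift Server port.)

    beeline -u "jdbc:hive2://thrift-server-host:10000/default"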

    

Finally, by tracing how the table data flows through the statement, we found that the join involves a large amount of Cartesian-product-style computation, and that during execution the small table is broadcast to every worker, which triggers the OOM / GC overhead error.
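
The broadcast is visible in the plan embedded in the stack trace above (the BroadcastExchange / BroadcastHashJoin nodes). As a quick sanity check before or after changing any configuration, EXPLAIN shows which join strategy Spark will pick; for example, for the two tables from this query:

    EXPLAIN
    SELECT a.zmat, b.material
    FROM idl_bw.zoh_mds32_idl_p a
    JOIN idl_bw.zoh_mms05_idl_p b ON a.zmat = b.material;
    -- a BroadcastExchange / BroadcastHashJoin node in the output means one side will be broadcast;
    -- a SortMergeJoin or ShuffledHashJoin node means a shuffle-based join will be used instead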

Solution: add the following setting to spark-thrift-sparkconf.conf:

         spark.sql.autoBroadcastJoinThreshold -1
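
Setting spark.sql.autoBroadcastJoinThreshold to -1 disables automatic broadcast joins, so Spark falls back to a shuffle-based join instead of broadcasting one side of the join to every executor. If you prefer to disable it only for a single session rather than globally in the config file, the same property can be set from the beeline session before running the query:

    SET spark.sql.autoBroadcastJoinThreshold=-1;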

After restarting the service, the test query ran successfully.

      

           

   
