After installing the Developer Edition, run job times out

MessageQueue|ZMQ|Context_New
TigerGraph RESTPP_LOADER:
— Version —
TERM:dumb tput_flag:-Txterm-256color
product release_2.4.0_06-14-2019 145f1f26a7eb73df5d18d38cef8aade1365f424a 2019-05-14 03:16:07 -0700
olgp release_2.4.0_06-14-2019 4e7eaffbe680a21c16b849e80210b0f13ea1e1cc 2019-05-31 22:29:48 -0700
topology release_2.4.0_06-14-2019 c7bf0530ae2065e286c1e461c335e847967ef673 2019-06-05 07:16:45 -0700
gpe release_2.4.0_06-14-2019 cabd9c0c22a2018ebaf4e06fcbb97bec3468bbbc 2019-06-08 01:20:21 -0700
gse release_2.4.0_06-14-2019 cbb8b694ba42213fc005ef6b9c9a00ec97cac281 2019-06-05 15:36:54 -0700
third_party release_2.4.0_06-14-2019 2d8b86f9cbb7ec633ff8d15c0dda1916622ac9e0 2019-06-03 18:40:21 -0700
utility release_2.4.0_06-14-2019 d747c85417aba0d60bb164a75637fd851afb15e8 2019-06-07 18:22:21 -0700
realtime release_2.4.0_06-14-2019 27cbaa12be49082ebe4e4434ab93a649af26292f 2019-06-07 03:00:50 -0700
er release_2.4.0_06-14-2019 08ffcd8df0e8a5e4be173280ce741a91ee3c8dc2 2019-06-13 11:23:51 -0700
gle release_2.4.0_06-14-2019 f422d403dab4db5c96bd22822e79e7fd5a581283 2019-06-10 16:18:59 -0700
bigtest release_2.4.0_06-14-2019 aec8a3d768763dc6fa4d74cdb91862b9471821d5 2019-05-12 01:17:55 -0700
document release_2.4.0_06-14-2019 5fc24fa6cc96eb6b5e1a94b85bb2b2bbd297c236 2019-06-12 16:10:47 -0700
glive release_2.4.0_06-14-2019 b71b50c027f574bcf38d1614b6bec957bfea576e 2019-05-30 14:10:23 -0700
gap release_2.4.0_06-14-2019 53c9f2d90953541e8e510452d9d87a065f527a6e 2019-06-07 03:00:58 -0700
gst release_2.4.0_06-14-2019 d7a62ba1b1864aa1e4dbc1964c58ebbd065ce454 2019-06-07 03:00:46 -0700
gus release_2.4.0_06-14-2019 09fe058c84aeefce9eeafe10e25fc8b5f20fa393 2019-06-07 03:01:01 -0700
blue_features release_2.4.0_06-14-2019 9fcbfa5044503de824c8e8fa827ce3856f31395b 2019-06-08 01:20:30 -0700
blue_commons release_2.4.0_06-14-2019 8188bffebfee28daeea36fe8886d5b829d992a87 2019-04-24 22:33:44 -0700

[INFO] EOL =
(10)
==============args==================
sep_str:
(char) sep = ,
eol_str:
(char) eol =

job_name_or_path: social_load_social
path:
input_files:
is_directory: 0
ignoreheader: 0
progress_file: /tigergraph/tigergraph/logs/restpp/restpp_loader_logs/social/social.load_social.file.m1.1574843350746.progress
zk_url: 127.0.0.1:19999
worker_name: RESTPP-LOADER_1_1
gsql_pipe_fd: -1
concurrency: 256
batch_size: 1024
gsql_pipe_name:
transactional: 0
skipNLines: -1
firstNLines: -1

WARNING: Logging before InitGoogleLogging() is written to STDERR
I1127 16:29:11.396600 626 completion_queue_manager.cpp:18] Client CQManager(0x7f8d2304fda0) constructing
D1127 16:29:11.402363035 626 env_linux.c:77] Warning: insecure environment read function 'getenv' used
StatusHubAgentServer is listening at ipc:///var/tmp/tigergraph/root/gsql_626.
I1127 16:29:11.443584 626 gdict.cpp:293] Dictionary initialize start
2019-11-27 16:29:11,443:626(0x7f8d23b879e0):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.6
2019-11-27 16:29:11,443:626(0x7f8d23b879e0):ZOO_INFO@log_env@716: Client environment:host.name=node-1
2019-11-27 16:29:11,443:626(0x7f8d23b879e0):ZOO_INFO@log_env@723: Client environment:os.name=Linux
2019-11-27 16:29:11,443:626(0x7f8d23b879e0):ZOO_INFO@log_env@724: Client environment:os.arch=2.6.32-573.el6.x86_64
2019-11-27 16:29:11,443:626(0x7f8d23b879e0):ZOO_INFO@log_env@725: Client environment:os.version=#1 SMP Thu Jul 23 15:44:03 UTC 2015
2019-11-27 16:29:11,455:626(0x7f8d23b879e0):ZOO_INFO@log_env@733: Client environment:user.name=root
2019-11-27 16:29:11,455:626(0x7f8d23b879e0):ZOO_INFO@log_env@741: Client environment:user.home=/root
2019-11-27 16:29:11,455:626(0x7f8d23b879e0):ZOO_INFO@log_env@753: Client environment:user.dir=/tigergraph/tigergraph/logs
2019-11-27 16:29:11,455:626(0x7f8d23b879e0):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=127.0.0.1:19999 sessionTimeout=30000 watcher=0x6a1830 sessionId=0 sessionPasswd= context=0x7f8d2303b500 flags=0
I1127 16:29:11.472014 642 completion_queue_manager.cpp:89] Client CompletionQueue (0x7f8d2302bd20) begin processing.
2019-11-27 16:29:11,496:626(0x7f8d1bbfd700):ZOO_INFO@check_events@1705: initiated connection to server [127.0.0.1:19999]
2019-11-27 16:29:11,496:626(0x7f8d1bbfd700):ZOO_INFO@check_events@1752: session establishment complete on server [127.0.0.1:19999], sessionId=0x16eabf91f040002, negotiated timeout=30000
I1127 16:29:11.496572 644 zookeeper_context.cpp:230] Root Watcher SESSION_EVENT state = CONNECTED_STATE for path: NA
I1127 16:29:11.496680 644 zookeeper_context.cpp:81] ZooKeeper Connection is setup. Session id: 16eabf91f040002, previous client id:0
I1127 16:29:11.496691 644 zookeeper_watcher.cpp:314] Zk Session connected, notifying watchers
I1127 16:29:11.496696 644 zookeeper_watcher.cpp:321] --> Number of watchers: 0
I1127 16:29:11.496701 644 zookeeper_watcher.cpp:322] --> Callback time used(us): 3
I1127 16:29:11.507234 626 heartbeat_client.cpp:440] CLIENT: resolved server address: 127.0.0.1:17797
I1127 16:29:11.507280 626 channel_pool.cpp:11] Create channel for target: 127.0.0.1:17797
I1127 16:29:11.580770 626 async_client.cpp:46] Connected to 127.0.0.1:17797. ChannelState:2
W1127 16:29:11.594429 626 heartbeat_client.cpp:467] CLIENT: Detect server update, old server: , new server: 127.0.0.1:17797. Tried to start client session, session: 0x1037668 state:CLIENT_SESSION_ISSUED rc: kOk
I1127 16:29:11.604557 642 heartbeat_client.cpp:342] ClientSession is issued. Session 0x1037668 Server:127.0.0.1:17797
I1127 16:29:11.633569 646 single_thread_worker.cpp:85] SingleThreadWorker start: HeartbeatClient
I1127 16:29:11.633651 646 client_watcher_manager.cpp:31] Reconnect session, re-watch all paths. size:0
I1127 16:29:11.642088 626 gdict.cpp:340] Dictionary initialize succeed, took 199 milliseconds
E1127 16:29:11.686118 626 gconfig_general.cpp:321] Graph social failed to get end points. List endpoints files failed. rc:kNotFound
E1127 16:29:12.182878 626 gconfig_general.cpp:321] Graph social failed to get end points. List endpoints files failed. rc:kNotFound

16:29:11.642245 restppconfig.cpp:125] Engine_RefreshConfig|Start RefreshServerConfig
16:29:12.175818 restppconfig.cpp:132] Engine_RefreshConfig|Start RefreshSchema
16:29:12.180469 restppconfig.cpp:341] Engine_RefreshGraphSchemaCatalog|| rc.Ok() = 1
16:29:12.180475 restppconfig.cpp:138] Engine_RefreshConfig|Start RefreshEndpoints
16:29:12.182984 restppconfig.cpp:144] Engine_RefreshConfig|Start RefreshLoadingJobs
16:29:12.182990 restppconfig.cpp:352] Engine_RefreshLoadingJobs|received start
16:29:12.182996 restppconfig.cpp:359] Engine_RefreshLoadingJobs|received end
16:29:12.229056 restppconfig.cpp:397] Engine_LJM|Registered 1 loading jobs with total parsing JSON time: 10 (ms) and total register job time : 35 (ms)
16:29:12.229067 restppconfig.cpp:150] Engine_RefreshConfig|Start RefreshWorkerId
16:29:12.231761 restppconfig.cpp:156] Engine_RefreshConfig|Finish Catalog Refresh
16:29:12.231776 restppconfig.cpp:173] Engine_RefreshConfig|
Refresh graphCatalogYamlNode and schema_node success

16:29:12.241655 restppconfig.cpp:251] Engine_RefreshConfig|
Refresh Auth start

16:29:12.241661 restppconfig.cpp:260] Engine_RefreshConfig|
Finish Refresh Auth
[WARNING] INSTANCE > RPC > listener_count not defined, set to default value (1).
[WARNING] INSTANCE > RPC > port not defined, set to default value (44240).

16:29:12.294948 restppconfig.cpp:325] Engine_RefreshConfig|
Finish All ConfigRefresh

16:29:12.324297 gtimer.cpp:78] MessageQueue|Kafka|CreateWriter|loading-logLog folder at /tigergraph/tigergraph/logs/RESTPP-LOADER_1_1
DEVELOPER_EDITION
CREATE partition path /tigergraph/tigergraph/gstore/0/part/
E1127 16:29:13.003093 626 zookeeper_context.cpp:1178] Recursive delete /tigergraph/dict/objects/__services/RESTPP-LOADER/_runtime_nodes/RESTPP-LOADER_1_1 failed. PathCount:1 Rc:no node
DEVELOPER_EDITION
DEVELOPER_EDITION
DEVELOPER_EDITION

08:29:13.410000 gcleanup.cpp:27] System_GCleanUp|StartedMessageQueue|ZMQ|Context_New
[INFO] yamlfromfile = 0, with args.job_name_or_path = social_load_social
filename = /tigergraph/tigergraph/logs/restpp/restpp_loader_logs/social/social.load_social.file.m1.1574843350746.progress, progress size = 680
write: filename = /tigergraph/tigergraph/logs/restpp/restpp_loader_logs/social/social.load_social.file.m1.1574843350746.progress, mem = {"commandStr":"/tigergraph/tigergraph/bin/poc_rest_loader --job social_load_social --totalTask 1 --zk 127.0.0.1:19999 --progress /tigergraph/tigergraph/logs/restpp/restpp_loader_logs/social/social.load_social.file.m1.1574843350746.progress --jobid social.load_social.file.m1.1574843350746","config_list":[{"EOL":"\n","HEADER":"true","SEPARATOR":",","filename":"file2","path":"/home/tigergraph/friendship.csv"},{"EOL":"\n","HEADER":"true","SEPARATOR":",","filename":"file1","path":"/home/tigergraph/person.csv"}],"graph_name":"social","machine_id":"1","version":"v1"}
/home/tigergraph/friendship.csv

offset_line = 1, offset_line_ = 1, skipNLines = -1, firstNLines = -1
line_count = 1
[INFO] Start loading /home/tigergraph/friendship.csv, LineBatch = 1024, LineOffset = 1, ByteOffset = 21
%4|1574843414.491|METADATA|RESTPP-LOADER_1_1_CONSUMER#consumer-3| [thrd:main]: 127.0.0.1:30002/bootstrap: Metadata request failed: Local: Timed out (61833ms)
%3|1574843952.790|FAIL|RESTPP-LOADER_1_1_CONSUMER#consumer-3| [thrd:127.0.0.1:30002/bootstrap]: 127.0.0.1:30002/bootstrap: Receive failed: Disconnected
%3|1574843952.790|ERROR|RESTPP-LOADER_1_1_CONSUMER#consumer-3| [thrd:127.0.0.1:30002/bootstrap]: 127.0.0.1:30002/bootstrap: Receive failed: Disconnected
%3|1574843952.790|ERROR|RESTPP-LOADER_1_1_CONSUMER#consumer-3| [thrd:127.0.0.1:30002/bootstrap]: 1/1 brokers are down
E1127 16:39:13.869854 681 ioutil.cpp:201] [ERROR] LoadingCallback response: {"error":true,"message":"The query didn't finish because it exceeded the query timeout threshold (600 seconds). To increase the query time, please check the error code for details.","results":[],"code":"REST-3002"}
Opening TokenBank.so
[ABORTED] loading is aborted, head = 1, tail = 1

/home/tigergraph/friendship.csv[ERROR] Loading aborted, finished loading first 1 lines (guaranteed)
destroy worker
Loading /home/tigergraph/friendship.csv failed.
E1127 16:40:14.560041 626 brain_daemon.cpp:187] Daemon @127.0.0.1:1000 is begin deleted without being shot down first. Daemon type: RESTPP-LOADER_1

16:40:14.560234 brain_daemon.cpp:524] Daemon @127.0.0.1:1000 is begin stopped without being UP. Current state is 0. Ignoring stop command.WARNING: Logging before InitGoogleLogging() is written to STDERR
I1127 16:40:14.674832 626 multi_producer_multi_consumer_list.h:59] default notify: 0 not notify: 0

16:40:14.672288 kafka_message.cpp:74] Comm_Kafka|MessageQueue|Kafka|Close|MessageQueue|ZMQ|Context_Destory
I1127 16:40:14.717272 626 gdict.cpp:348] Dictionary un-initialize start
I1127 16:40:14.950481 646 single_thread_worker.cpp:92] SingleThreadWorker stop: HeartbeatClient
W1127 16:40:14.951272 626 zookeeper_context.cpp:1042] Disconnect from ZooKeeper now
W1127 16:40:14.951387 642 heartbeat_client.cpp:51] Session read OnError with 127.0.0.1:17797. Try to stop client session 0x1037668.
I1127 16:40:14.951427 642 heartbeat_client.cpp:265] Canceling client session. Session 0x1037668 leaving from CLIENT_SESSION_SETUP to CLIENT_SESSION_EXPIRED. Statistics:{ SessionSetupCount: 1, SessionRecoverCount: 0, SessionWriteIssueCount: 660, SessionWriteFinsihCount: 660, SessionReadIssueCount: 661, SessionReadFinsihCount: 661 }
2019-11-27 16:40:14,953:626(0x7f8d23b879e0):ZOO_INFO@zookeeper_close@2511: Closing zookeeper sessionId=0x16eabf91f040002 to [127.0.0.1:19999]

I1127 16:40:14.953887 626 zookeeper_watcher.cpp:326] Zk Session Disconnected, notifying watchers
I1127 16:40:14.954165 626 zookeeper_watcher.cpp:332] --> Number of watchers notified: 1
I1127 16:40:14.954174 626 zookeeper_watcher.cpp:333] --> Callback time used(us): 17
W1127 16:40:14.954198 626 zookeeper_context.cpp:187] ZookeeperContext destructed, this: 0x7f8d2303b500
I1127 16:40:14.954205 626 brain_daemon.cpp:628] BrainDaemon session watcher destruction.
I1127 16:40:15.052054 642 completion_queue_manager.cpp:108] Client CompletionQueue (0x7f8d2302bd20) quit.
I1127 16:40:15.062021 626 completion_queue_manager.cpp:74] CQManager(0x7f8d2304fda0) stopped.
I1127 16:40:15.062090 626 completion_queue_manager.cpp:27] Client CQManager(0x7f8d2304fda0) destructing
I1127 16:40:15.073683 626 multi_producer_multi_consumer_list.h:59] default notify: 0 not notify: 0

16:40:14.699220 gcleanup.cpp:38] System_GCleanUp|Finished
16:40:14.954225 brain_daemon.cpp:643] Comm_Daemon|BrainDaemon zk connection disconnected.MessageQueue|ZMQ|Context_Destory

The jps command shows that Kafka is running. What does the REST-3002 status code mean, and is there any documentation for it?

You can see it from the message:
The query didn't finish because it exceeded the query timeout threshold (600 seconds). To increase the query time, please check the error code for details.
In your case, the query timed out.
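For ad-hoc queries sent directly to RESTPP, the timeout can also be raised per request with the GSQL-TIMEOUT header. Below is a minimal Python sketch; the graph name, query name, port, and the header's unit (it has changed between versions, seconds vs. milliseconds) are assumptions, so verify them against the documentation for your release.

```python
import requests

# Hypothetical example: call an installed query on the "social" graph via
# RESTPP (default port 9000) with a longer per-request timeout.
# The GSQL-TIMEOUT header overrides the default query timeout; its unit
# (seconds vs. milliseconds) varies by TigerGraph version -- check the docs.
resp = requests.get(
    "http://127.0.0.1:9000/query/social/my_query",  # query name is hypothetical
    params={"p": "value"},                          # query parameters, if any
    headers={"GSQL-TIMEOUT": "1200"},               # raise the timeout for this request
)
print(resp.json())
```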

The log shows the connection to Kafka on port 30002 timing out. Is Kafka down?

I'm running the person/friendship example from the official documentation. The schema was created without problems; the error only appears when I run the loading job.

What are your machine's hardware specs and operating system? Do they meet the official requirements? See https://docs.tigergraph.com/admin/admin-guide/hw-and-sw-requirements for details.

It's installed in a virtual machine with 8 GB of RAM allocated, which is probably not enough. Could the low specs be the reason Kafka failed to come up?

What approach do you recommend for reading from and writing to TigerGraph with Spark? Does the open-source JDBC driver for Spark have to be cloned and built from GitHub? I don't see a corresponding jar on Maven. Thanks!

If you don't have a physical machine, I'd personally recommend installing with Docker; it carries less risk.
A common pattern for loading data into TigerGraph in production is: use CSV files for the full (initial) load, and the Kafka loader for incremental batch loads. As for exporting, TigerGraph query output is always JSON.
Yes, the JDBC driver has to be built from source yourself.

For now we only want to use the Spark DataFrame API, so it looks like JDBC is the only option. Have you used the JDBC driver in production? Is there a stable version? There are no tags or releases on GitHub.

Please refer to https://docs.tigergraph.com/dev/data-loader-user-guides/spark-connection-via-jdbc-driver.
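For reference, here is a rough sketch of what a Spark DataFrame read over that JDBC driver can look like (PySpark). The driver class, URL scheme, port, and option names (dbtable, limit, etc.) follow the connector's README and may differ across versions; treat them as assumptions and check the page above, not as a definitive reference.

```python
from pyspark.sql import SparkSession

# Minimal sketch, assuming the TigerGraph JDBC driver jar (built from
# github.com/tigergraph/ecosys) is on the Spark classpath. Class name,
# URL, and option names are taken from the connector's README and may
# change between versions.
spark = SparkSession.builder.appName("tg-jdbc-read").getOrCreate()

df = (spark.read
      .format("jdbc")
      .option("driver", "com.tigergraph.jdbc.Driver")
      .option("url", "jdbc:tg:http://127.0.0.1:14240")  # adjust host/port to your deployment
      .option("username", "tigergraph")
      .option("password", "tigergraph")
      .option("dbtable", "vertex Person")               # read Person vertices as rows
      .option("limit", "10")                            # cap the number of rows returned
      .load())

df.show()
```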