Problem description
I am trying to display an interactive Three.js-based mesh visualizer in a Jupyter cell. The workflow is as follows:
In practice, the main thread sends requests to a server through a ZMQ socket (each request requires a reply), then the server sends the requested data back to the main thread using another socket pair (many "requests", very few replies expected), which finally forwards it to the Javascript frontend using a comm through the ipython kernel. So far so good, and it works properly because the messages all flow in the same direction:
Main thread (Python command) [ZMQ REQ] -> [ZMQ REP] Server (Data)
Server (Data) [ZMQ XREQ] -> [ZMQ XREQ] Main thread (Data)
Main thread (Data) [IPykernel Comm] -> [IPykernel Comm] Javascript (display)
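For context, the main-thread side of this relay boils down to a ZMQ REQ socket plus an ipykernel Comm, along the lines of the sketch below (the endpoint and the 'meshviewer' target name are illustrative placeholders, not the actual project code):

import zmq
from ipykernel.comm import Comm

# REQ socket used by the main thread to send commands to the server
# (the endpoint is a placeholder)
context = zmq.Context.instance()
zmq_socket = context.socket(zmq.REQ)
zmq_socket.connect("tcp://127.0.0.1:5555")

# Comm used to push data to the Javascript frontend; 'meshviewer' is a
# hypothetical target name that the frontend would have registered
comm = Comm(target_name='meshviewer')
comm.send(data={'cmd': 'load_mesh'})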
However, the pattern is different when I want to fetch the status of the frontend, in order to wait for the mesh to finish loading:
Main thread (Status request) --> Server (Status request) --> Main thread (Waiting for reply)
         ^                                                                |
         |                                                                v
         +<------------------------------------------ Javascript (Processing)
This time, the server sends the request to the frontend, but the frontend does not send its reply directly back to the server: it goes to the main thread, which would have to forward it to the server, which would finally forward it to the main thread.
There is an obvious problem: the main thread is supposed to simultaneously forward the frontend's reply and block on the reply from the server, which is impossible. The ideal solution would be to make the server communicate directly with the frontend, but I do not know how to do that, since I cannot use get_ipython().kernel.comm_manager.register_target on the server side. I tried to instantiate an ipython kernel client on the server side using jupyter_client.BlockingKernelClient, but I did not manage to use it either to communicate or to register targets.
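To make the deadlock concrete, here is a minimal sketch of the blocking status request on the main-thread side (zmq_socket is the REQ socket from the sketch above; b"status" is a placeholder payload):

# The main thread asks the server for the frontend status...
zmq_socket.send(b"status")

# ...and blocks until the reply arrives. This never returns: the server's
# reply depends on a comm message from the frontend that can only be
# dispatched by the very thread this call is blocking.
rep = zmq_socket.recv().decode("utf-8")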
Workaround
OK, so I have found a solution for now, but it is not very nice. Indeed, instead of just waiting for the reply and keeping the main loop busy, I add a timeout and interleave the wait with the kernel's do_one_iteration to force the processing of messages:
while True:
    try:
        # Check for the server reply without blocking
        rep = zmq_socket.recv(flags=zmq.NOBLOCK).decode("utf-8")
        break
    except zmq.error.ZMQError:
        # No reply yet: process one pending kernel message, so that
        # comm messages from the frontend can get through
        kernel.do_one_iteration()
It works, but unfortunately it is not really portable, and it messes with the Jupyter evaluation stack (all queued cell evaluations get processed here instead of in order)...
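For reference, the timeout mentioned above can be layered on top of the same loop, for instance as in the sketch below (the recv_with_timeout helper and its timeout parameter are illustrative, not part of the original code):

import time
import zmq

def recv_with_timeout(zmq_socket, kernel, timeout=10.0):
    # Poll for the reply while keeping the kernel responsive,
    # giving up after `timeout` seconds
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            return zmq_socket.recv(flags=zmq.NOBLOCK).decode("utf-8")
        except zmq.error.ZMQError:
            kernel.do_one_iteration()
    raise TimeoutError("no reply from the server")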
Alternatively, there is another way that is more appealing:
import zmq
import asyncio
import nest_asyncio

nest_asyncio.apply()

zmq_socket.send(b"ready")

async def enforce_receive():
    # Process exactly one pending kernel message (expected to be the
    # comm reply from the frontend), then block on the server reply
    await kernel.process_one(True)
    return zmq_socket.recv().decode("utf-8")

loop = asyncio.get_event_loop()
rep = loop.run_until_complete(enforce_receive())
But in this case, you need to know in advance that the kernel is about to receive exactly one message, and relying on nest_asyncio (required because the kernel's event loop is already running) is not ideal either.
Here is a link to the issue on Github, along with an example notebook.
Update
I finally managed to solve my problem completely, with no downside. The trick is to analyze every incoming message: messages unrelated to comms are put back in the queue in order, while comm-related ones are processed on the spot:
import zmq
import tornado.gen
from IPython import get_ipython
from ipykernel.kernelbase import SHELL_PRIORITY

class CommProcessor:
    """
    @brief    Re-implementation of ipykernel.kernelbase.do_one_iteration
              that handles comm messages on the spot and puts the other
              ones back in the queue.

    @details  Calling 'do_one_iteration' messes with the kernel
              'msg_queue': some messages are processed too soon, which
              is likely to corrupt the kernel state. This method only
              processes comm messages to avoid such side effects.
    """
    def __init__(self):
        self.__kernel = get_ipython().kernel
        self.qsize_old = 0

    def __call__(self, unsafe=False):
        """
        @brief      Check once whether there is a pending comm-related
                    event in the shell stream message priority queue.

        @param[in]  unsafe  Whether to assume that an unchanged number
                            of pending messages means there is nothing
                            new to process. It makes the evaluation much
                            faster, but it is flawed.
        """
        # Flush incoming messages on shell_stream only.
        # Note that this is a faster implementation of ZMQStream.flush
        # restricted to incoming messages. It reduces the computation
        # time from about 10us to 20ns.
        # https://github.com/zeromq/pyzmq/blob/e424f83ceb0856204c96b1abac93a1cfe205df4a/zmq/eventloop/zmqstream.py#L313
        shell_stream = self.__kernel.shell_streams[0]
        shell_stream.poller.register(shell_stream.socket, zmq.POLLIN)
        events = shell_stream.poller.poll(0)
        while events:
            _, event = events[0]
            if event:
                shell_stream._handle_recv()
                shell_stream.poller.register(
                    shell_stream.socket, zmq.POLLIN)
            events = shell_stream.poller.poll(0)

        qsize = self.__kernel.msg_queue.qsize()
        if unsafe and qsize == self.qsize_old:
            # The number of queued messages has not changed since the
            # last check. Assume those messages are the same as before
            # and return early.
            return

        # One must go through all the messages to keep them in order
        for _ in range(qsize):
            priority, t, dispatch, args = \
                self.__kernel.msg_queue.get_nowait()
            if priority <= SHELL_PRIORITY:
                # Deserialize the header only, to check the message type
                _, msg = self.__kernel.session.feed_identities(
                    args[1], copy=False)
                msg = self.__kernel.session.deserialize(
                    msg, content=False, copy=False)
            else:
                # Do not spend time analyzing already rejected messages
                msg = None
            if msg is None or 'comm_' not in msg['header']['msg_type']:
                # The message is not comm-related, so put it back in the
                # queue after lowering its priority so that it lands at
                # the "end of the queue", i.e. just at the right place:
                # after the next unchecked messages, after the other
                # messages already put back in the queue, but before the
                # next one to go the same way. Note that every shell
                # message has SHELL_PRIORITY by default.
                self.__kernel.msg_queue.put_nowait(
                    (SHELL_PRIORITY + 1, t, dispatch, args))
            else:
                # Comm message. Process it right away.
                tornado.gen.maybe_future(dispatch(*args))
        self.qsize_old = self.__kernel.msg_queue.qsize()

process_kernel_comm = CommProcessor()
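With this helper, the blocking wait from the first workaround can be rewritten so that pending cell executions stay queued in order (a short usage sketch, reusing zmq_socket from the earlier snippets):

zmq_socket.send(b"status")
while True:
    try:
        # Check for the server reply without blocking
        rep = zmq_socket.recv(flags=zmq.NOBLOCK).decode("utf-8")
        break
    except zmq.error.ZMQError:
        # Dispatch pending comm messages only; queued cell
        # executions stay in the queue, in order
        process_kernel_comm()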