Problem description
I just followed this walkthrough for my neural network project.
To summarize it briefly: we train our model and export the inference graph, as shown in the walkthrough. During the training steps, as you would guess, the mAP metric of the training job is reported. After those steps, in the last section, "Running the inference test", we test our images and output images are generated with labeled bounding boxes drawn on them. However, no metrics such as mAP are produced for each image. I have read the source code of this part, but could not find any configuration option or method for it.
def run_inference_for_single_image(image, graph):
    with graph.as_default():
        with tf.Session() as sess:
            # Get handles to input and output tensors
            ops = tf.get_default_graph().get_operations()
            all_tensor_names = {output.name for op in ops for output in op.outputs}
            tensor_dict = {}
            for key in [
                'num_detections', 'detection_boxes', 'detection_scores',
                'detection_classes', 'detection_masks'
            ]:
                tensor_name = key + ':0'
                if tensor_name in all_tensor_names:
                    tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
                        tensor_name)
            if 'detection_masks' in tensor_dict:
                # The following processing is only for a single image
                detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
                detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
                # Reframing is required to translate the masks from box coordinates
                # to image coordinates and fit the image size.
                real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
                detection_boxes = tf.slice(detection_boxes, [0, 0],
                                           [real_num_detection, -1])
                detection_masks = tf.slice(detection_masks, [0, 0, 0],
                                           [real_num_detection, -1, -1])
                detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
                    detection_masks, detection_boxes, image.shape[0], image.shape[1])
                detection_masks_reframed = tf.cast(
                    tf.greater(detection_masks_reframed, 0.5), tf.uint8)
                # Follow the convention by adding back the batch dimension
                tensor_dict['detection_masks'] = tf.expand_dims(
                    detection_masks_reframed, 0)
            image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
            # Run inference
            output_dict = sess.run(
                tensor_dict,
                feed_dict={image_tensor: np.expand_dims(image, 0)})
            # All outputs are float32 numpy arrays, so convert types as appropriate
            output_dict['num_detections'] = int(output_dict['num_detections'][0])
            output_dict['detection_classes'] = output_dict[
                'detection_classes'][0].astype(np.uint8)
            output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
            output_dict['detection_scores'] = output_dict['detection_scores'][0]
            if 'detection_masks' in output_dict:
                output_dict['detection_masks'] = output_dict['detection_masks'][0]
            return output_dict
I have a COCO JSON file that contains the ground-truth box information for the images. Is there a way to produce this metric?
# This code produces images that have labeled bounding boxes drawn on them.
for image_path in TEST_IMAGE_PATHS:
    image = Image.open(image_path)
    # The array-based representation of the image will be used later in order
    # to prepare the result image with boxes and labels on it.
    image_np = load_image_into_numpy_array(image)
    # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
    image_np_expanded = np.expand_dims(image_np, axis=0)
    # Actual detection.
    output_dict = run_inference_for_single_image(image_np, detection_graph)
    # Visualization of the results of a detection.
    vis_util.visualize_boxes_and_labels_on_image_array(
        image_np,
        output_dict['detection_boxes'],
        output_dict['detection_classes'],
        output_dict['detection_scores'],
        category_index,
        instance_masks=output_dict.get('detection_masks'),
        use_normalized_coordinates=True,
        line_thickness=8)
    plt.figure(figsize=IMAGE_SIZE)
    plt.imshow(image_np)
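The walkthrough itself does not compute per-image metrics, but since the ground-truth boxes are available, detections can be matched to ground truth by IoU and an average precision computed directly. Below is a minimal single-image, single-class sketch in NumPy using VOC-style all-point interpolation at a fixed IoU threshold; the names `gt_boxes`, `det_boxes`, and `det_scores` are illustrative, and all boxes are assumed to be in the same `[ymin, xmin, ymax, xmax]` format the model outputs. For the full COCO-style mAP (averaged over IoU thresholds 0.5:0.95 and over classes), the usual tool is `pycocotools.cocoeval.COCOeval`, which consumes the ground-truth COCO JSON plus a detections list in COCO results format.

```python
import numpy as np

def iou(box_a, box_b):
    # Boxes are [ymin, xmin, ymax, xmax].
    ya, xa = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    yb, xb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, yb - ya) * max(0.0, xb - xa)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def average_precision(gt_boxes, det_boxes, det_scores, iou_thresh=0.5):
    """AP for one image and one class, VOC all-point interpolation."""
    order = np.argsort(-np.asarray(det_scores))  # highest score first
    matched = set()
    tp = np.zeros(len(order))
    fp = np.zeros(len(order))
    for rank, i in enumerate(order):
        # Greedily match each detection to the best unmatched ground-truth box.
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(gt_boxes):
            if j in matched:
                continue
            v = iou(det_boxes[i], gt)
            if v > best_iou:
                best_iou, best_j = v, j
        if best_iou >= iou_thresh:
            tp[rank] = 1
            matched.add(best_j)
        else:
            fp[rank] = 1
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / max(len(gt_boxes), 1)
    precision = cum_tp / np.maximum(cum_tp + cum_fp, 1e-9)
    # Precision envelope, then integrate over recall steps.
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    for k in range(len(mpre) - 2, -1, -1):
        mpre[k] = max(mpre[k], mpre[k + 1])
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))
```

To apply this to the code above, filter `output_dict['detection_boxes']` and `output_dict['detection_scores']` by class, and convert the COCO ground-truth boxes from their `[x, y, width, height]` pixel format into the same `[ymin, xmin, ymax, xmax]` convention (normalized or pixel, as long as both sides match) before calling `average_precision`.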