Problem description
Operation:
I ran the COCO evaluation from a Jupyter notebook with
! python3 coco.py evaluate --dataset=/host/Downloads/coco_2017_dataset --model=last
rather than running it directly in a terminal. This is the Mask R-CNN implementation from the GitHub repo.
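For reference: if this is the Matterport Mask_RCNN repo, its coco.py also accepts a --limit argument that caps how many validation images are evaluated (it defaults to 500 there), so covering the full val2017 split would require passing --limit=5000. A minimal sketch of the notebook cell under that assumption (check the argparse block of your fork if it differs):

# Jupyter cell; assumes the Matterport Mask_RCNN coco.py command-line interface.
# --limit defaults to 500 images there, so request all 5000 val2017 images explicitly.
! python3 coco.py evaluate --dataset=/host/Downloads/coco_2017_dataset --model=last --limit=5000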
Question: The val2017 evaluation/dev split of the COCO dataset contains 5,000 images. Why are only 6 AP and 6 recall values shown here?
Goals:
- Show the mAP over the entire dataset.
- Clarify/answer the question above.
My current results:
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=2.59s).
Accumulating evaluation results...
DONE (t=0.87s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.286
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.473
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.317
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.119
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.337
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.443
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.242
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.332
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.340
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.133
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.397
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.514
Prediction time: 4761.41809463501. Average 9.52283618927002/image
Total time: 4820.066065311432
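For context, the twelve lines above are the standard summary printed by pycocotools' COCOeval.summarize(): six AP and six AR values aggregated over all evaluated images (broken down by IoU threshold, object area, and max detections), not per-image scores, and the first AP line (IoU=0.50:0.95, area=all) is the dataset-level mAP. A minimal sketch of that API, using placeholder file names that are not from this post:

# Minimal pycocotools sketch; the JSON paths below are placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")  # ground-truth annotations
coco_dt = coco_gt.loadRes("detections.json")          # detections in COCO result format

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()    # match detections to ground truth per image/category
coco_eval.accumulate()  # aggregate matches over the whole image set
coco_eval.summarize()   # prints the 12 AP/AR lines shown above
print("mAP (IoU=0.50:0.95):", coco_eval.stats[0])     # dataset-level mAP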
Solution
No effective solution for this problem has been found yet.