OpenCV does not work in Google Colaboratory

Problem description

I am practicing OpenCV on Google Colaboratory because I don't know how to use OpenCV on the GPU, and when I run OpenCV on my own hardware it takes a lot of CPU, so I moved to Google Colaboratory. The link to my notebook is here.

If you don't want to look at it, here is the code:

import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)

while True:
    _,img = cap.read()
    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray,1.1,4)
    for (x,y,w,h) in faces:
        cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)

    cv2.imshow('img',img)

    k = cv2.waitKey(30) & 0xff
    if k==27:
        break
    
cap.release()

The same code works fine on my PC, but it does not work on Google Colaboratory. The error is:

---------------------------------------------------------------------------
error                                     Traceback (most recent call last)
<ipython-input-5-0d9472926d8c> in <module>()
      6 while True:
      7         _,img = cap.read()
----> 8         gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
      9         faces = face_cascade.detectMultiScale(gray,1.1,4)
     10         for (x,y,w,h) in faces:

error: OpenCV(4.1.2) /io/opencv/modules/imgproc/src/color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'

PS ~ my haarcascade file is in the same directory as my Google Colaboratory notebook.

How can I handle this? And if that is not possible, is there any "concrete" solution to run OpenCV on a CUDA-enabled GPU instead of the CPU? Thanks in advance!

Solution

_src.empty() means there was a problem getting the frame from the camera: img is None, and trying cvtColor(None, ...) gives the _src.empty() error.

You should check if img is not None:, because cv2 does not raise an error when it cannot get a frame from the camera or read an image from a file. Sometimes the camera needs time to "warm up" and it can return a few empty frames (None).
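
For example, on a local machine the loop could guard against empty frames like this (a minimal sketch of the check described above; it still uses VideoCapture(0) and cv2.imshow, so it only applies outside Colab):

import cv2

cap = cv2.VideoCapture(0)

while True:
    ret, img = cap.read()
    if not ret or img is None:
        # no frame yet (camera warming up or not available) - skip instead of crashing in cvtColor
        continue

    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cv2.imshow('img', img)

    if cv2.waitKey(30) & 0xff == 27:
        break

cap.release()
cv2.destroyAllWindows()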


VideoCapture(0) reads frames from a camera connected directly to the computer that runs the code. When you run the code on the Google Colaboratory server, that means a camera connected directly to that server (not your local camera), but the server has no camera, so VideoCapture(0) cannot work on Google Colaboratory.

cv2 running on the server cannot get images from your local camera. Your web browser may have access to your camera, but it needs JavaScript to grab a frame and send it to the server, and the server needs code to receive that frame.


I checked on Google whether Google Colaboratory can access a local webcam, and it turns out they created a snippet for this - Camera Capture. The first cell defines the function take_photo(), which uses JavaScript to access your camera and show it in the browser, and the second cell uses this function to display the image from the local camera and take a snapshot.

You should use this function instead of VideoCapture(0) to work with your local camera from the server.
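
For example (a sketch assuming take_photo() from the Camera Capture snippet has been run in a previous cell and returns the filename of the saved photo):

import cv2

filename = take_photo()     # defined by the Camera Capture snippet; captures one photo in the browser
img = cv2.imread(filename)  # load the saved JPEG as a numpy array for cv2
print(img.shape)            # now `img` can be used like a frame from VideoCapture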


BTW: next to take_photo() there is also a note about cv2.imshow(), because it likewise only works with a monitor connected directly to the computer that runs the code (and that computer has to run a GUI, like Windows on Windows or X11 on Linux). When you run it on a server, it tries to display on a monitor connected directly to the server, but a server usually runs without a monitor (and without a GUI).

Google Colaboratory has a special replacement which displays images in the web browser:

 from google.colab.patches import cv2_imshow
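
For example (a minimal sketch; note that cv2_imshow takes only the image, with no window name and no waitKey):

import numpy as np
from google.colab.patches import cv2_imshow

img = np.zeros((100, 100, 3), dtype=np.uint8)  # any BGR image array
cv2_imshow(img)  # replaces cv2.imshow('img', img) from the question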

BTW: if you have problems loading the haarcascade .xml file, you may need to add the folder to the filename. cv2 has a special variable for this, cv2.data.haarcascades:

import os
import cv2

path = os.path.join(cv2.data.haarcascades, 'haarcascade_frontalface_default.xml')
face_cascade = cv2.CascadeClassifier(path)

You can also check what is in this folder:

import os

filenames = os.listdir(cv2.data.haarcascades)
filenames = sorted(filenames)
print('\n'.join(filenames))

EDIT:

I created code that can grab frames from the local webcam one by one, without using a button and without saving them to a file. The problem is that it is slow, because it still has to send every frame from the local web browser to the Google Colab server and back to the local web browser.

Python code with the JavaScript functions:

#
# based on: https://colab.research.google.com/notebooks/snippets/advanced_outputs.ipynb#scrollTo=2viqYx97hPMi
#

from IPython.display import display, Javascript
from google.colab.output import eval_js
from base64 import b64decode, b64encode
import numpy as np
import cv2

def init_camera():
  """Create objects and functions in HTML/JavaScript to access local web camera"""

  js = Javascript('''

    // global variables to use in both functions
    var div = null;
    var video = null;   // <video> to display stream from local webcam
    var stream = null;  // stream from local webcam
    var canvas = null;  // <canvas> for single frame from <video> and convert frame to JPG
    var img = null;     // <img> to display JPG after processing with `cv2`

    async function initCamera() {
      // place for video (and eventually buttons)
      div = document.createElement('div');
      document.body.appendChild(div);

      // <video> to display video
      video = document.createElement('video');
      video.style.display = 'block';
      div.appendChild(video);

      // get webcam stream and assign it to <video>
      stream = await navigator.mediaDevices.getUserMedia({video: true});
      video.srcObject = stream;

      // start playing stream from webcam in <video>
      await video.play();

      // Resize the output to fit the video element.
      google.colab.output.setIframeHeight(document.documentElement.scrollHeight,true);

      // <canvas> for frame from <video>
      canvas = document.createElement('canvas');
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      //div.appendChild(input_canvas); // there is no need to display to get image (but you can display it for test)

      // <img> for image after processing with `cv2`
      img = document.createElement('img');
      img.width = video.videoWidth;
      img.height = video.videoHeight;
      div.appendChild(img);
    }

    async function takeImage(quality) {
      // draw frame from <video> on <canvas>
      canvas.getContext('2d').drawImage(video, 0, 0);

      // stop webcam stream
      //stream.getVideoTracks()[0].stop();

      // get data from <canvas> as JPG image decoded base64 and with header "data:image/jpg;base64,"
      return canvas.toDataURL('image/jpeg',quality);
      //return canvas.toDataURL('image/png',quality);
    }

    async function showImage(image) {
      // it needs string "data:image/jpg;base64,JPG-DATA-ENCODED-BASE64"
      // it will replace previous image in `<img src="">`
      img.src = image;
      // TODO: create <img> if it doesn't exist
      // TODO: use `id` to use different `<img>` for different images - like `name` in `cv2.imshow(name, image)`
    }

  ''')

  display(js)
  eval_js('initCamera()')

def take_frame(quality=0.8):
  """Get frame from web camera"""

  data = eval_js('takeImage({})'.format(quality))  # run JavaScript code to get image (JPG as string base64) from <canvas>

  header,data = data.split(',')  # split header ("data:image/jpg;base64,") and base64 data (JPG)
  data = b64decode(data)  # decode base64
  data = np.frombuffer(data,dtype=np.uint8)  # create numpy array with JPG data

  img = cv2.imdecode(data,cv2.IMREAD_UNCHANGED)  # uncompress JPG data to array of pixels

  return img

def show_frame(img,quality=0.8):
  """Put frame as <img src="data:image/jpg;base64,...."> """

  ret,data = cv2.imencode('.jpg',img)  # compress array of pixels to JPG data

  data = b64encode(data)  # encode base64
  data = data.decode()  # convert bytes to string
  data = 'data:image/jpg;base64,' + data  # join header ("data:image/jpg;base64,") and base64 data (JPG)

  eval_js('showImage("{}")'.format(data))  # run JavaScript code to put image (JPG as string base64) in <img>
                                           # argument in `showImage` needs `" "` 

And the code which uses it in a loop:

# 
# based on: https://colab.research.google.com/notebooks/snippets/advanced_outputs.ipynb#scrollTo=zo9YYDL4SYZr
#

#from google.colab.patches import cv2_imshow  # not used - I use my own function `show_frame()` instead

import cv2
import os

face_cascade = cv2.CascadeClassifier(os.path.join(cv2.data.haarcascades,'haarcascade_frontalface_default.xml'))

# init JavaScript code
init_camera()

while True:
    try:
        img = take_frame()

        gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
        #cv2_imshow(gray)  # it creates a new image for every frame (it doesn't replace the previous one) so it is useless here
        #show_frame(gray)  # it replaces the previous image

        faces = face_cascade.detectMultiScale(gray,1.1,4)

        for (x,y,w,h) in faces:
                cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
        
        #cv2_imshow(img)  # it creates a new image for every frame (it doesn't replace the previous one) so it is useless here
        show_frame(img)  # it replaces the previous image
        
    except Exception as err:
        print('Exception:',err)

I don't use from google.colab.patches import cv2_imshow because it always adds a new image on the page instead of replacing the existing one.


The same code as a notebook on Google Colab:

https://colab.research.google.com/drive/1j7HTapCLx7BQUBp3USiQPZkA0zBKgLM0?usp=sharing

Another solution

A possible problem in your code is that when using the Haar cascade classifier you need to give the full path to the cascade file.

face_cascade = cv2.CascadeClassifier('/User/path/to/opencv/data/haarcascades/haarcascade_frontalface_default.xml')

This colab-related opencv issue has been around for a long time; the same question was also asked here.

As described here, you can use cv2_imshow to display the image, but you want to process "camera" frames.

from google.colab.patches import cv2_imshow
img = cv2.imread('logo.png',cv2.IMREAD_UNCHANGED)
cv2_imshow(img)

A possible answer:

Insert the Camera Capture snippet, which provides the take_photo method, but you need to modify that method.

import cv2
from google.colab.patches import cv2_imshow

# take_photo() comes from the Camera Capture snippet inserted above
face_cascade = cv2.CascadeClassifier('/opencv/data/haarcascades/haarcascade_frontalface_default.xml')

try:
    filename = take_photo()
    img = cv2.imread(filename)  # load the saved photo as a numpy array
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 4)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2_imshow(img)
      
except Exception as err:
    print(str(err))

The code above still needs to be edited, because there is no direct way to use VideoCapture on Colab and you have to modify take_photo.
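
One possible direction for such a modification (only a sketch with a hypothetical helper name, similar in spirit to the take_frame() function in the answer above): let the JavaScript side return the photo as a base64 data URL instead of writing a file, and decode it straight into a numpy array on the Python side:

from base64 import b64decode

import cv2
import numpy as np

def data_url_to_image(data_url):
    """Hypothetical helper: convert a 'data:image/jpeg;base64,...' string
    returned by the JavaScript side into a cv2/numpy image."""
    header, encoded = data_url.split(',', 1)  # drop the 'data:image/jpeg;base64,' header
    jpg_bytes = b64decode(encoded)            # raw JPEG bytes
    buffer = np.frombuffer(jpg_bytes, dtype=np.uint8)
    return cv2.imdecode(buffer, cv2.IMREAD_UNCHANGED)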
