Swift Vision rectangle detection fails after Core Image perspective correction

Problem description

I am on macOS Big Sur 11.2.3 with Xcode 12.4. I want to extract the outer square of a sudoku image that has perspective distortion. My approach:

  1. Perform a rectangle detection request. This provides the corner points of the outer rectangle.

  2. Perform a perspective correction with those points. This produces a perfectly square image.

  3. Now I want to crop the image at the outer frame of the sudoku (a cropping sketch follows this list).

  4. To get the rectangle for that crop operation, perform a second rectangle detection request on the perspective-corrected image.
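For step 3, a minimal sketch of what the crop could look like once a detection result is available. The helper name and the bottom-left-to-top-left flip are my assumptions, not code from the project:

    import Vision
    import CoreGraphics

    // Hypothetical helper: crop a CGImage to a detected rectangle's bounding box.
    // Vision reports normalized coordinates with the origin at the bottom-left,
    // while CGImage.cropping(to:) expects a pixel rect with a top-left origin.
    func crop(_ cgImage: CGImage, to observation: VNRectangleObservation) -> CGImage? {
        let width = cgImage.width
        let height = cgImage.height
        // Convert the normalized bounding box to pixel coordinates.
        var rect = VNImageRectForNormalizedRect(observation.boundingBox, width, height)
        // Flip the y origin for Core Graphics.
        rect.origin.y = CGFloat(height) - rect.origin.y - rect.height
        return cgImage.cropping(to: rect)
    }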

Surprisingly, this second rectangle detection returns an empty results array.

I have a suspicion about what the cause might be.

Printing the properties of the original CGImage gives:

    Original image:
    <CGImage 0x7f92e4415560> (IP)
        <<CGColorSpace 0x6000035faf40> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; sRGB IEC61966-2.1)>
            width = 2448, height = 3264, bpc = 8, bpp = 32, row bytes = 9792
            kCGImageAlphaNoneSkipLast | 0 (default byte order) | kCGImagePixelFormatPacked
            is mask? No, has masking color? No, has soft mask? No, has matte? No, should interpolate? Yes
    2021-04-06 19:15:04.445374+0200 StackExchangeHilfe[1959:100561] Metal API Validation Enabled

Printing the properties of the perspective-corrected CGImage gives:

    Corrected image:
    <CGImage 0x7f92f451f180> (DP)
        <<CGColorSpace 0x6000035fae80> (kCGColorSpaceDeviceRGB)>
            width = 2073, height = 2194, row bytes = 8320
            kCGImageAlphaPremultipliedLast | 0 (default byte order) | kCGImagePixelFormatPacked
            is mask? No, should interpolate? Yes

The bitmapInfo differs:

Original image: kCGImageAlphaNoneSkipLast

Corrected image: kCGImageAlphaPremultipliedLast

No other CIFilter changes the bitmapInfo.

I tried to change this value, but it is a read-only property. Then again, maybe my suspicion is completely wrong.
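One way to test that suspicion would be to redraw the corrected CGImage into a bitmap context that matches the original's pixel format and color space before running the second request. A minimal sketch, assuming sRGB and kCGImageAlphaNoneSkipLast like the original image; the helper name is mine, and whether this actually restores the detection is unverified:

    import CoreGraphics

    // Hypothetical helper: redraw a CGImage into a fresh bitmap with a fixed
    // pixel format (RGB, skipped alpha byte, sRGB), so that both detection
    // passes see the same kind of backing data.
    func redrawn(_ image: CGImage) -> CGImage? {
        guard
            let colorSpace = CGColorSpace(name: CGColorSpace.sRGB),
            let context = CGContext(data: nil,
                                    width: image.width,
                                    height: image.height,
                                    bitsPerComponent: 8,
                                    bytesPerRow: 0, // let Core Graphics pick the row stride
                                    space: colorSpace,
                                    bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue)
        else { return nil }
        context.draw(image, in: CGRect(x: 0, y: 0, width: image.width, height: image.height))
        return context.makeImage()
    }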

Can anyone help? Thanks in advance.

    import UIKit
    import Vision

    class ViewController: UIViewController {
        @IBOutlet weak var origImageView: UIImageView!
        @IBOutlet weak var correctedImageView: UIImageView!
        
        let imageName = "sudoku"
        var origImage: UIImage!
        
        override func viewDidLoad() {
            super.viewDidLoad()
            origImage = UIImage(named: imageName)
            origImageView.image = origImage
            let correctedImage = performOperationsWithUIImage(origImage)
            correctedImageView.image = correctedImage
        }
        
        func performOperationsWithUIImage(_ image: UIImage) -> UIImage? {
            let cgImage = image.cgImage!
            print("Original image:")
            print("\(String(describing: cgImage))")
            
            // Create rectangle detect request
            let rectDetectRequest = VNDetectRectanglesRequest()
             // Customize & configure the request to detect only certain rectangles.
            rectDetectRequest.maximumObservations = 8 // Vision currently supports up to 16.
            rectDetectRequest.minimumAspectRatio = 0.8 // height / width
            rectDetectRequest.quadratureTolerance = 30 // degrees of deviation from 90° allowed at the corners
            rectDetectRequest.minimumSize = 0.5
            rectDetectRequest.minimumConfidence = 0.6
            
            // Create a request handler.
            let imageRequestHandler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
            // Send the requests to the request handler.
            do {
                try imageRequestHandler.perform([rectDetectRequest])
            } catch let error as NSError {
                print("Failed to perform first image request: \(error)")
            }
            guard let results = rectDetectRequest.results as? [VNRectangleObservation]
            else {return nil}
            print("\nFirst rectangle request result:")
            print("\(results.count) rectangle(s) detected:")
            print("\(String(describing: results))")
            
            // Perform perspective correction
            let width = Int(cgImage.width)
            let height = Int(cgImage.height)
            guard let filter = CIFilter(name:"CIPerspectiveCorrection")  else { return nil }
            
            filter.setValue(CIImage(image: image), forKey: "inputImage")
            filter.setValue(CIVector(cgPoint: VNImagePointForNormalizedPoint(results.first!.topLeft, width, height)), forKey: "inputTopLeft")
            filter.setValue(CIVector(cgPoint: VNImagePointForNormalizedPoint(results.first!.topRight, width, height)), forKey: "inputTopRight")
            filter.setValue(CIVector(cgPoint: VNImagePointForNormalizedPoint(results.first!.bottomLeft, width, height)), forKey: "inputBottomLeft")
            filter.setValue(CIVector(cgPoint: VNImagePointForNormalizedPoint(results.first!.bottomRight, width, height)), forKey: "inputBottomRight")
            
            guard
                let outputCIImage = filter.outputImage,
                let outputCGImage = CIContext(options: nil).createCGImage(outputCIImage, from: outputCIImage.extent)
            else { return nil }
            
            print("\nCorrected image:")
            print("\(String(describing: outputCGImage))")
            
            // Perform another rectangle detection
            let newImageRequestHandler = VNImageRequestHandler(cgImage: outputCGImage, options: [:])
            // Send the requests to the request handler.
            do {
                try newImageRequestHandler.perform([rectDetectRequest])
            } catch let error as NSError {
                print("Failed to perform second image request: \(error)")
            }
            guard let newResults = rectDetectRequest.results as? [VNRectangleObservation]
            else {return nil}
            print("\nSecond rectangle request result:")
            print("\(newResults.count) rectangle(s) detected:")
            print("\(String(describing: newResults))")

            return UIImage(cgImage: outputCGImage)
        }


    }
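A related knob that the code above does not use: CIContext can be told which pixel format and color space to render into instead of taking its defaults, which appears to be where the kCGColorSpaceDeviceRGB / premultiplied-alpha output comes from. A minimal sketch, assuming CIFormat.RGBA8 and sRGB; whether this changes the reported alpha info or fixes the detection is untested:

    import CoreImage

    // Render a CIImage into a CGImage with an explicit pixel format and
    // color space, rather than the CIContext defaults.
    func renderToCGImage(_ ciImage: CIImage) -> CGImage? {
        let context = CIContext(options: nil)
        let sRGB = CGColorSpace(name: CGColorSpace.sRGB)
        return context.createCGImage(ciImage,
                                     from: ciImage.extent,
                                     format: .RGBA8,
                                     colorSpace: sRGB)
    }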

