ios – UIImage face detection

I'm trying to write a routine that takes a UIImage and returns a new UIImage that contains just the face. This seems like it should be very straightforward, but my brain is having trouble getting around the CoreImage vs. UIImage coordinate spaces.

Here are the basics:

- (UIImage *)imageFromImage:(UIImage *)image inRect:(CGRect)rect {
    CGImageRef sourceImageRef = [image CGImage];
    CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, rect);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    return newImage;
}


-(UIImage *)getFaceImage:(UIImage *)picture {
  CIDetector  *detector = [CIDetector detectorOfType:CIDetectorTypeFace 
                                             context:nil 
                                             options:[NSDictionary dictionaryWithObject: CIDetectorAccuracyHigh forKey: CIDetectorAccuracy]];

  CIImage *ciImage = [CIImage imageWithCGImage: [picture CGImage]];
  NSArray *features = [detector featuresInImage:ciImage];

  // For simplicity, I'm grabbing the first one in this code sample,
  // and we can all pretend that the photo has one face for sure. :-)
  CIFaceFeature *faceFeature = [features objectAtIndex:0];

  return [self imageFromImage:picture inRect:faceFeature.bounds];
}

The image that comes back is from the flipped image. I've tried adjusting faceFeature.bounds using something like this:

CGAffineTransform t = CGAffineTransformMakeScale(1.0f, -1.0f);
CGRect newRect = CGRectApplyAffineTransform(faceFeature.bounds, t);

…but that gives me results outside the image.

I'm sure there is something simple that would fix this, but short of computing the coordinate bottom-up myself and building a new rect from it, is there a "proper" way to do this?
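For reference, this is a minimal sketch of the bottom-up conversion I mean (the manual workaround I'd rather avoid). It assumes the image scale is 1.0 so that picture.size matches the pixel dimensions of the underlying CGImage; uiKitRectFromCIRect:imageHeight: is just a hypothetical helper name:

- (CGRect)uiKitRectFromCIRect:(CGRect)ciRect imageHeight:(CGFloat)imageHeight {
  // CoreImage uses a bottom-left origin, UIKit a top-left origin,
  // so flip the Y origin around the image height.
  CGRect uiRect = ciRect;
  uiRect.origin.y = imageHeight - ciRect.origin.y - ciRect.size.height;
  return uiRect;
}

// Usage with the code above:
// CGRect faceRect = [self uiKitRectFromCIRect:faceFeature.bounds imageHeight:picture.size.height];
// UIImage *face = [self imageFromImage:picture inRect:faceRect];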

Thanks!

Solution

It's much easier and less messy to use CIContext to crop the face out of the image. Something like this:

CGImageRef cgImage = [_ciContext createCGImage:[CIImage imageWithCGImage:inputimage.CGImage] fromRect:faceFeature.bounds];
UIImage *croppedFace = [UIImage imageWithCGImage:cgImage];

Where inputimage is your UIImage object, and the faceFeature object is the CIFaceFeature you get back from the [CIDetector featuresInImage:] method.
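Expanded into a complete method, a minimal sketch assuming the CIContext is kept in an instance variable named _ciContext as in the snippet above (contextWithOptions:nil is enough here), and remembering that createCGImage:fromRect: follows the Create rule and must be released:

- (UIImage *)faceImageFromImage:(UIImage *)inputimage feature:(CIFaceFeature *)faceFeature {
    if (_ciContext == nil) {
        _ciContext = [CIContext contextWithOptions:nil];
    }
    CIImage *ciImage = [CIImage imageWithCGImage:inputimage.CGImage];
    // faceFeature.bounds is already in CIImage coordinates (bottom-left origin),
    // and CIContext crops in that same space, so no flipping is needed.
    CGImageRef cgImage = [_ciContext createCGImage:ciImage fromRect:faceFeature.bounds];
    UIImage *croppedFace = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return croppedFace;
}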
