Question
I am trying to create a UIImage using Core Graphics.
I want the image divided into 4 different grayscale regions/pixels:
....white....gray.....
.....gray.....black...
So, using Core Graphics, I want to define an array of 4 different int8_t values corresponding to the desired image:

int8_t data[] = {
    255, 122,
    122, 0,
};
where 255 is white, 122 is gray, and 0 is black.
The best reference to similar code I could find is here.
That reference deals with an RGB image, so based on my own common sense I came up with the following code (pardon my Objective-C French, it is not my first language :)):
- (UIImage *)getImageFromGrayScaleArray {
    int width = 2;
    int height = 2;

    int8_t data[] = {
        255, 122,
        122, 0,
    };

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
                                                              &data[0],
                                                              width * height,
                                                              NULL);
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceGray();
    CGImageRef imageRef = CGImageCreate(width,
                                        height,
                                        [self bitsPerComponent],
                                        [self bitsPerPixel],
                                        width * [self bytesPerPixel],
                                        colorSpaceRef,
                                        kCGBitmapByteOrderDefault,
                                        provider,
                                        NULL,
                                        NO,
                                        kCGRenderingIntentDefault);
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    return image;
}

- (int)bitsPerPixel {
    return 8 * [self bytesPerPixel];
}

- (int)bytesPerPixel {
    return [self bytesPerComponent] * [self componentsPerPixel];
}

- (int)componentsPerPixel {
    return 1;
}

- (int)bytesPerComponent {
    return 1;
}

- (int)bitsPerComponent {
    return 8 * [self bytesPerComponent];
}
But... this code gives me an entirely black UIImage.
Can anyone point me to something I can read to understand how to accomplish a task like this? Documentation on Core Graphics for this kind of task seems very sparse, and all this guessing takes... forever :)
Answer
You're close...
A grayscale image needs two components per pixel: brightness and alpha.
So, with just a few changes (see the comments):
- (UIImage *)getImageFromGrayScaleArray {
    int width = 2;
    int height = 2;

    // 1 byte for brightness, 1 byte for alpha
    int8_t data[] = {
        255, 255,
        122, 255,
        122, 255,
        0, 255,
    };

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
                                                              &data[0],
                                                              // size is width * height * bytesPerPixel
                                                              width * height * [self bytesPerPixel],
                                                              NULL);
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceGray();
    CGImageRef imageRef = CGImageCreate(width,
                                        height,
                                        [self bitsPerComponent],
                                        [self bitsPerPixel],
                                        width * [self bytesPerPixel],
                                        colorSpaceRef,
                                        // use this
                                        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big,
                                        // instead of this
                                        //kCGBitmapByteOrderDefault,
                                        provider,
                                        NULL,
                                        NO,
                                        kCGRenderingIntentDefault);
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    return image;
}
- (int)bitsPerPixel {
    return 8 * [self bytesPerPixel];
}

- (int)bytesPerPixel {
    return [self bytesPerComponent] * [self componentsPerPixel];
}

- (int)componentsPerPixel {
    return 2; // 1 byte for brightness, 1 byte for alpha
}

- (int)bytesPerComponent {
    return 1;
}

- (int)bitsPerComponent {
    return 8 * [self bytesPerComponent];
}
Edit -- I think there is a problem with the memory-buffer addressing in the code above. After some testing, I was getting inconsistent results.
Try this modified code:
@interface TestingViewController : UIViewController
@end

@interface TestingViewController ()
@end

@implementation TestingViewController

// CGDataProviderCreateWithData callback to free the pixel data buffer
void freePixelData(void *info, const void *data, size_t size) {
    free((void *)data);
}
- (UIImage *)getImageFromGrayScaleArray:(BOOL)allBlack {

    int8_t grayArray[] = {
        255, 122,
        122, 0,
    };

    int8_t blackArray[] = {
        0, 0,
        0, 0,
    };

    int width = 2;
    int height = 2;

    int imageSizeInPixels = width * height;
    int bytesPerPixel = 2; // 1 byte for brightness, 1 byte for alpha
    unsigned char *pixels = (unsigned char *)malloc(imageSizeInPixels * bytesPerPixel);
    memset(pixels, 255, imageSizeInPixels * bytesPerPixel); // setting alpha values to 255

    if (allBlack) {
        for (int i = 0; i < imageSizeInPixels; i++) {
            pixels[i * 2] = blackArray[i]; // writing array of bytes as image brightnesses
        }
    } else {
        for (int i = 0; i < imageSizeInPixels; i++) {
            pixels[i * 2] = grayArray[i]; // writing array of bytes as image brightnesses
        }
    }

    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceGray();
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
                                                              pixels,
                                                              imageSizeInPixels * bytesPerPixel,
                                                              freePixelData);
    CGImageRef imageRef = CGImageCreate(width,
                                        height,
                                        8,
                                        8 * bytesPerPixel,
                                        width * bytesPerPixel,
                                        colorSpaceRef,
                                        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big,
                                        provider,
                                        NULL,
                                        false,
                                        kCGRenderingIntentDefault);

    UIImage *image = [UIImage imageWithCGImage:imageRef];

    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);

    return image;
}
- (void)viewDidLoad {
    [super viewDidLoad];
    self.view.backgroundColor = [UIColor systemTealColor];

    UIImage *img1 = [self getImageFromGrayScaleArray:NO];
    UIImage *img2 = [self getImageFromGrayScaleArray:YES];

    UIImageView *v1 = [UIImageView new];
    UIImageView *v2 = [UIImageView new];

    v1.image = img1;
    v1.backgroundColor = [UIColor systemYellowColor];
    v2.image = img2;
    v2.backgroundColor = [UIColor systemYellowColor];

    v1.contentMode = UIViewContentModeScaleToFill;
    v2.contentMode = UIViewContentModeScaleToFill;

    v1.translatesAutoresizingMaskIntoConstraints = NO;
    [self.view addSubview:v1];
    v2.translatesAutoresizingMaskIntoConstraints = NO;
    [self.view addSubview:v2];

    UILayoutGuide *g = [self.view safeAreaLayoutGuide];

    [NSLayoutConstraint activateConstraints:@[
        [v1.topAnchor constraintEqualToAnchor:g.topAnchor constant:40.0],
        [v1.centerXAnchor constraintEqualToAnchor:g.centerXAnchor],
        [v1.widthAnchor constraintEqualToConstant:200.0],
        [v1.heightAnchor constraintEqualToAnchor:v1.widthAnchor],
        [v2.topAnchor constraintEqualToAnchor:v1.bottomAnchor constant:40.0],
        [v2.centerXAnchor constraintEqualToAnchor:self.view.centerXAnchor],
        [v2.widthAnchor constraintEqualToAnchor:v1.widthAnchor],
        [v2.heightAnchor constraintEqualToAnchor:v2.widthAnchor],
    ]];
}

@end
@end
We add two 200x200 image views, and set the top one's .image to the returned UIImage using:

int8_t grayArray[] = {
    255, 122,
    122, 0,
};

and the bottom one using:

int8_t blackArray[] = {
    0, 0,
    0, 0,
};
Output: (screenshot of the two image views, not reproduced here)