Problem description
I am trying to use AVAssetWriter to write CGImages to a file, in order to create a video from images.
I have gotten this working in three different ways in the Simulator, but every method fails on an iPhone 4 running iOS 4.3.
All of the failures involve the pixel buffers.
My first method was to simply create pixel buffers as needed, without a pool. That works, but it uses too much memory on the device.
My second method was to use the recommended AVAssetWriterInputPixelBufferAdaptor and pull pixel buffers from the adaptor's pixelBufferPool with CVPixelBufferPoolCreatePixelBuffer.
That also works in the Simulator, but fails on the device because the adaptor's pixel buffer pool is never allocated. I get no error messages.
Finally, I tried creating my own pixel buffer pool with CVPixelBufferPoolCreate. That too works in the Simulator, but on the device everything works until I try to append a pixel buffer with appendPixelBuffer, which fails every time.
I have found very little on the web about this. I based my code on the examples I could find, but no luck so far. If anyone has experience getting this to work with AVAssetWriter, please take a look and let me know if you see anything out of place.
Note: you will see commented-out blocks from earlier attempts.
First, the setup:
- (BOOL) openVideoFile:(NSString *)path withSize:(CGSize)imageSize {
size = CGSizeMake(480.0, 320.0); // imageSize;
NSError *error = nil;
videoWriter = [[AVAssetWriter alloc] initWithURL:
[NSURL fileURLWithPath:path] fileType:AVFileTypeQuickTimeMovie
error:&error];
if (error != nil)
return NO;
NSDictionary *videoCleanApertureSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithDouble:size.width], AVVideoCleanApertureWidthKey,
    [NSNumber numberWithDouble:size.height], AVVideoCleanApertureHeightKey,
    [NSNumber numberWithInt:10], AVVideoCleanApertureHorizontalOffsetKey,
    [NSNumber numberWithInt:10], AVVideoCleanApertureVerticalOffsetKey,
    nil];
NSDictionary *videoAspectRatioSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:1],AVVideoPixelAspectRatioHorizontalSpacingKey,[NSNumber numberWithInt:1],AVVideoPixelAspectRatioVerticalSpacingKey,nil];
NSDictionary *codecSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    //[NSNumber numberWithInt:960000], AVVideoAverageBitRateKey,
    //[NSNumber numberWithInt:1], AVVideoMaxKeyFrameIntervalKey,
    videoCleanApertureSettings, AVVideoCleanApertureKey,
    videoAspectRatioSettings, AVVideoPixelAspectRatioKey,
    //AVVideoProfileLevelH264Main31, AVVideoProfileLevelKey,
    nil];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    AVVideoCodecH264, AVVideoCodecKey,
    codecSettings, AVVideoCompressionPropertiesKey,
    [NSNumber numberWithDouble:size.width], AVVideoWidthKey,
    [NSNumber numberWithDouble:size.height], AVVideoHeightKey,
    nil];
writerInput = [[AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings] retain];
NSMutableDictionary * bufferAttributes = [[NSMutableDictionary alloc] init];
[bufferAttributes setObject:[NSNumber numberWithInt:kCVPixelFormatType_32ARGB]
                     forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
[bufferAttributes setObject:[NSNumber numberWithInt:480]
                     forKey:(NSString *)kCVPixelBufferWidthKey];
[bufferAttributes setObject:[NSNumber numberWithInt:320]
                     forKey:(NSString *)kCVPixelBufferHeightKey];
//NSDictionary *bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, nil];
//[bufferAttributes setObject:[NSNumber numberWithInt:640]
//                     forKey:(NSString *)kCVPixelBufferWidthKey];
//[bufferAttributes setObject:[NSNumber numberWithInt:480]
//                     forKey:(NSString *)kCVPixelBufferHeightKey];
adaptor = [[AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:nil] retain];
//CVPixelBufferPoolCreate(kCFAllocatorSystemDefault, NULL, (CFDictionaryRef)bufferAttributes, &pixelBufferPool);
//Create buffer pool
NSMutableDictionary* attributes;
attributes = [NSMutableDictionary dictionary];
int width = 480;
int height = 320;
[attributes setObject:[NSNumber numberWithInt:kCVPixelFormatType_32ARGB] forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
[attributes setObject:[NSNumber numberWithInt:width] forKey:(NSString *)kCVPixelBufferWidthKey];
[attributes setObject:[NSNumber numberWithInt:height] forKey:(NSString *)kCVPixelBufferHeightKey];
CVReturn theError = CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (CFDictionaryRef)attributes, &pixelBufferPool);
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];
writerInput.expectsMediaDataInRealTime = YES;
//Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
buffer = NULL;
lastTime = kCMTimeZero;
presentTime = kCMTimeZero;
return YES;
}
Next, the two methods that append to the writer and create the pixel buffers to append:
- (void) writeImagetoMovie:(CGImageRef)image
{
if([writerInput isReadyForMoreMediaData])
{
// CMTime frameTime = CMTimeMake(1,20);
// CMTime lastTime=CMTimeMake(i,20); //i is from 0 to 24 of the loop above
// CMTime presentTime=CMTimeAdd(lastTime,frameTime);
buffer = [self pixelBufferFromCGImage:image];
BOOL success = [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
if (!success) NSLog(@"Failed to appendPixelBuffer");
CVPixelBufferRelease(buffer);
presentTime = CMTimeAdd(lastTime,CMTimeMake(5,1000));
lastTime = presentTime;
}
else
{
NSLog(@"error - writerInput not ready");
}
}
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
CVPixelBufferRef pxbuffer;
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES],kCVPixelBufferCGImageCompatibilityKey,[NSNumber numberWithBool:YES],kCVPixelBufferCGBitmapContextCompatibilityKey,nil];
if (pixelBufferPool == NULL) NSLog(@"pixelBufferPool is null!");
CVReturn status = CVPixelBufferPoolCreatePixelBuffer (NULL,pixelBufferPool,&pxbuffer);
/*if (pxbuffer == NULL) {
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef)options, &pxbuffer);
}*/
//NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer,0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
//NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8, 4*size.width, rgbColorSpace, kCGImageAlphaNoneSkipFirst);
//NSParameterAssert(context);
CGContextConcatCTM(context,CGAffineTransformMakeRotation(0));
CGContextDrawImage(context,CGRectMake(90,10,CGImageGetWidth(image),CGImageGetHeight(image)),image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer,0);
return pxbuffer;
}
Solution
I have found the solution to this problem.
If you want AVAudioPlayer and AVAssetWriter to work correctly together, your audio session category must be 'mixable'.
You can use a category that is mixable by default, such as AVAudioSessionCategoryAmbient.
However, I needed to use AVAudioSessionCategoryPlayAndRecord.
You can make any category mixable by setting the override:
OSStatus propertySetError = 0;
UInt32 allowMixing = true;
propertySetError = AudioSessionSetProperty (
kAudioSessionProperty_OverrideCategoryMixWithOthers,// 1
sizeof (allowMixing),// 2
&allowMixing // 3
);
Well, first you need to pass sourcePixelBufferAttributes when creating the adaptor object:
NSDictionary *bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA],kCVPixelBufferPixelFormatTypeKey,nil];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:_videoWriterInput
sourcePixelBufferAttributes:bufferAttributes];
Then remove your own call to CVPixelBufferPoolCreate; the adaptor object already creates a pixel buffer pool for you, so just call this instead:
CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferPoolCreatePixelBuffer(NULL,adaptor.pixelBufferPool,&pixelBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer,0);
// ...fill the pixelbuffer here
CVPixelBufferUnlockBaseAddress(pixelBuffer,0);
CMTime frameTime = CMTimeMake(frameCount,(int32_t) 30);
BOOL res = [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:frameTime];
CVPixelBufferRelease(pixelBuffer);
I think that should do it. I ran into a similar error at some point as well, and I solved it by creating the adaptor and the pixel buffer as shown above.