Problem Description
I am working on an application that receives a stream of packets containing video and audio data. I can decode the video and play it with AVSampleBufferDisplayLayer (code can be found here), but I have now spent more than a week trying to decode the audio. At the start of the stream I receive an audio description with sampleRate=48000, channelCount=1, and profileLevel=2. After that I continuously receive AAC packets and try to decode them the same way I handle the video packets. First I create an AudioStreamBasicDescription and a CMAudioFormatDescription, and set up the audioRenderer and the audioRendererSynchoronizer:
class AudioDecoderPlayer: NSObject {
    private var audioRenderer = AVSampleBufferAudioRenderer()
    private var audioRendererSynchoronizer = AVSampleBufferRenderSynchronizer()
    private let serializationQueue = DispatchQueue(label: "sample.buffer.player.serialization.queue")
    private var audioStreamBasicDescription: AudioStreamBasicDescription
    private var formatDescription: CMAudioFormatDescription?
    private var outputQueue: AudioQueueRef?
    var sampleBuffers: [CMSampleBuffer] = []
    let sampleRate: Double
    let channels: Int

    init(sampleRate: Double = 48000, channels: Int = 1, profileLevel: Int = 2) {
        self.sampleRate = sampleRate
        self.channels = channels
        let uChannels = UInt32(channels)
        let channelBytes = UInt32(MemoryLayout<Int16>.size)
        let bytesPerFrame = uChannels * channelBytes
        self.audioStreamBasicDescription = AudioStreamBasicDescription(
            mSampleRate: Float64(sampleRate),
            mFormatID: kAudioFormatMPEG4AAC,
            mFormatFlags: AudioFormatFlags(profileLevel),
            mBytesPerPacket: bytesPerFrame,
            mFramesPerPacket: 1,
            mBytesPerFrame: bytesPerFrame,
            mChannelsPerFrame: uChannels,
            mBitsPerChannel: channelBytes * 8,
            mReserved: 0
        )
        super.init()
        let status = CMAudioFormatDescriptionCreate(
            allocator: kCFAllocatorDefault,
            asbd: &audioStreamBasicDescription,
            layoutSize: 0,
            layout: nil,
            magicCookieSize: 0,
            magicCookie: nil,
            extensions: nil,
            formatDescriptionOut: &formatDescription
        )
        if status != noErr {
            fatalError("unable to create audio format description")
        }
        audioRendererSynchoronizer.addRenderer(audioRenderer)
        subscribeToAudioRenderer()
        startPlayback()
    }

    func subscribeToAudioRenderer() {
        audioRenderer.requestMediaDataWhenReady(on: serializationQueue, using: { [weak self] in
            guard let strongSelf = self else {
                return
            }
            while strongSelf.audioRenderer.isReadyForMoreMediaData {
                if let sampleBuffer = strongSelf.nextSampleBuffer() {
                    strongSelf.audioRenderer.enqueue(sampleBuffer)
                }
            }
        })
    }

    func startPlayback() {
        serializationQueue.async {
            if self.audioRendererSynchoronizer.rate != 1 {
                self.audioRendererSynchoronizer.rate = 1
                self.audioRenderer.volume = 1.0
            }
        }
    }

    func nextSampleBuffer() -> CMSampleBuffer? {
        guard sampleBuffers.count > 0 else {
            return nil
        }
        let sampleBuffer = sampleBuffers.first
        sampleBuffers.remove(at: 0)
        return sampleBuffer
    }
}
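For comparison, Core Audio conventions fill in an ASBD for a compressed format such as AAC quite differently from PCM: the packet/frame byte-size fields are left at 0 (AAC packets vary in size) and mFramesPerPacket is 1024, since every AAC packet decodes to 1024 PCM frames. A minimal sketch, assuming a 48 kHz mono AAC-LC stream like the one described above:

```swift
import AudioToolbox

// Sketch of an ASBD for compressed AAC (not PCM): the byte-size fields
// are 0 because AAC packets are variable-size, and each packet always
// decodes to 1024 PCM frames.
var aacDescription = AudioStreamBasicDescription(
    mSampleRate: 48000,
    mFormatID: kAudioFormatMPEG4AAC,
    mFormatFlags: 0,
    mBytesPerPacket: 0,       // variable packet size
    mFramesPerPacket: 1024,   // fixed for AAC
    mBytesPerFrame: 0,        // undefined for compressed audio
    mChannelsPerFrame: 1,
    mBitsPerChannel: 0,       // undefined for compressed audio
    mReserved: 0
)
print(aacDescription.mFramesPerPacket)
```

Note that mFormatFlags for AAC is normally 0 or an MPEG-4 object type, not the profileLevel integer from the stream header; treat these values as a sketch to check against Apple's documentation rather than a drop-in replacement.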
This is what my decode function looks like:
func decodeAudioPacket(data: Data) {
    let headerValue = UInt32(data.count)
    // prepend the data length to the packet
    var sizedData = withUnsafeBytes(of: headerValue.bigEndian) { Data($0) }
    sizedData.append(data)
    let blockBuffer = sizedData.toCMBlockBuffer()
    // Outputs from CMSampleBufferCreate
    var sampleBuffer: CMSampleBuffer?
    let result = CMAudioSampleBufferCreateReadyWithPacketDescriptions(
        allocator: kCFAllocatorDefault,
        dataBuffer: blockBuffer,
        formatDescription: formatDescription!,
        sampleCount: 1,
        presentationTimeStamp: CMTime(value: 1, timescale: Int32(sampleRate)),
        packetDescriptions: nil,
        sampleBufferOut: &sampleBuffer
    )
    if result != noErr {
        fatalError("CMSampleBufferCreate() failed")
    }
    sampleBuffers.append(sampleBuffer!)
}
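One thing worth checking in the function above is the presentation timestamp: every buffer is created with the same CMTime(value: 1, ...), whereas AAC packets normally advance by a fixed 1024 PCM frames each. A Foundation-only sketch of that timing arithmetic (the helper name and the running packet index are hypothetical, not part of the code above):

```swift
import Foundation

let sampleRate = 48000.0
let framesPerPacket = 1024.0  // every AAC packet decodes to 1024 PCM frames

// Start time, in seconds, of the n-th AAC packet. The equivalent CMTime
// would be CMTime(value: Int64(n) * 1024, timescale: Int32(sampleRate)).
func presentationSeconds(forPacket n: Int) -> Double {
    return Double(n) * framesPerPacket / sampleRate
}

// Consecutive packets land about 21.3 ms apart at 48 kHz.
print(presentationSeconds(forPacket: 0))
print(presentationSeconds(forPacket: 1))
print(presentationSeconds(forPacket: 100))
```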
I found that the requestMediaDataWhenReady block is only called once, which suggests that no sound is being played, but I don't understand why. I got it to run by adapting the code from WWDC 2017 session 509, but still no sound plays (code can be found here). I have also tried various solutions based on AudioQueue, without success; for some reason the callback passed to AudioFileStreamOpen is never invoked (code can be found here and here). But I would prefer to solve this with AVSampleBufferAudioRenderer, since I think it should be simpler, and I also want to use AVSampleBufferRenderSynchronizer to keep the video in sync with the audio.
Any suggestions about what I am doing wrong would be appreciated.
Thanks