Texturing ARMeshGeometry from the ARKit camera frame?

Problem description

This question is somewhat based on this post, where the idea is to take the ARMeshGeometry from an iOS device with a LiDAR scanner, calculate the texture coordinates, and apply the sampled camera frame as the texture for the given mesh, thereby allowing a user to create a "photorealistic" 3D representation of their environment.

Per that post, I have adapted one of the responses to calculate the texture coordinates, like so:

func buildGeometry(meshAnchor: ARMeshAnchor, arFrame: ARFrame) -> SCNGeometry {
    let vertices = meshAnchor.geometry.vertices
    let faces = meshAnchor.geometry.faces
    let camera = arFrame.camera
    let size = arFrame.camera.imageResolution

    // Use the MTLBuffer that ARKit gives us
    let vertexSource = SCNGeometrySource(buffer: vertices.buffer,
                                         vertexFormat: vertices.format,
                                         semantic: .vertex,
                                         vertexCount: vertices.count,
                                         dataOffset: vertices.offset,
                                         dataStride: vertices.stride)

    // The anchor's transform takes mesh-local vertices into world space
    let modelMatrix = meshAnchor.transform

    var textCords = [CGPoint]()
    for index in 0..<vertices.count {
        let vertexPointer = vertices.buffer.contents().advanced(by: vertices.offset + vertices.stride * index)
        let vertex = vertexPointer.assumingMemoryBound(to: (Float, Float, Float).self).pointee
        let vertex4 = SIMD4<Float>(vertex.0, vertex.1, vertex.2, 1)
        let world_vertex4 = simd_mul(modelMatrix, vertex4)
        let world_vector3 = simd_float3(x: world_vertex4.x, y: world_vertex4.y, z: world_vertex4.z)
        let pt = camera.projectPoint(world_vector3,
                                     orientation: .portrait,
                                     viewportSize: CGSize(width: CGFloat(size.height), height: CGFloat(size.width)))
        let v = 1.0 - Float(pt.x) / Float(size.height)
        let u = Float(pt.y) / Float(size.width)
        textCords.append(CGPoint(x: CGFloat(v), y: CGFloat(u)))
    }

    // Set up the texture coordinates
    let textureSource = SCNGeometrySource(textureCoordinates: textCords)

    // Set up the normals (uses a convenience initializer from the linked post)
    let normalsSource = SCNGeometrySource(meshAnchor.geometry.normals, semantic: .normal)

    // Set up the geometry
    let faceData = Data(bytesNoCopy: faces.buffer.contents(), count: faces.buffer.length, deallocator: .none)
    let geometryElement = SCNGeometryElement(data: faceData,
                                             primitiveType: .triangles,
                                             primitiveCount: faces.count,
                                             bytesPerIndex: faces.bytesPerIndex)
    let nodeGeometry = SCNGeometry(sources: [vertexSource, textureSource, normalsSource], elements: [geometryElement])

    /* Set up the texture - THIS IS WHERE I AM STUCK
    let texture = textureConverter.makeTextureForMeshModel(frame: arFrame)
    */

    let imageMaterial = SCNMaterial()
    imageMaterial.isDoubleSided = false
    imageMaterial.diffuse.contents = texture!
    nodeGeometry.materials = [imageMaterial]

    return nodeGeometry
}

I am trying to determine whether these texture coordinates are being calculated correctly, and then to figure out how to sample the camera frame so that the corresponding frame image can be used as the texture for this mesh.
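As a sanity check on the first part, the normalization in the loop above can be isolated into a pure function: any vertex that projects inside the camera image should yield a (u, v) inside [0, 1]. A minimal pure-Swift sketch (the function name is mine, not from the post):

```swift
// Pure-Swift restatement of the UV math in the loop above, assuming the
// point has already been projected into portrait-oriented viewport pixels.
// In portrait the image is rotated 90 degrees, so pt.x spans the image
// *height* and pt.y spans the image *width*.
func normalizedUV(projected pt: (x: Float, y: Float),
                  imageResolution size: (width: Float, height: Float)) -> (u: Float, v: Float) {
    let v = 1.0 - pt.x / size.height
    let u = pt.y / size.width
    return (u, v)
}
```

Feeding in the viewport corners (e.g. (0, 0) and (1440, 1920) for a 1920 x 1440 image) should return the UV extremes; anything outside [0, 1] means the vertex was not visible in that frame and will sample a clamped or wrapped texel.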

The linked question indicates that converting the ARFrame's capturedImage property (which is a CVPixelBuffer) to an MTLTexture is ideal for real-time performance, but it has occurred to me that the CVPixelBuffer is a YCbCr image, whereas I believe I need an RGB image.
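If per-frame performance is not critical, one option (my assumption, not something from the linked post) is to let Core Image do the YCbCr-to-RGB conversion: CIImage(cvPixelBuffer:) wraps the captured buffer, and a CIContext renders it out as RGB, for example to a CGImage that SCNMaterial accepts directly. A hedged sketch:

```swift
import ARKit
import CoreImage

// Sketch: Core Image performs the YCbCr -> RGB conversion at render time.
// Simple, but it round-trips through a CGImage, so it suits one-off
// captures rather than per-frame real-time texturing.
func rgbImage(from frame: ARFrame, context: CIContext = CIContext()) -> CGImage? {
    let ciImage = CIImage(cvPixelBuffer: frame.capturedImage)
    return context.createCGImage(ciImage, from: ciImage.extent)
}

// Usage: imageMaterial.diffuse.contents = rgbImage(from: arFrame)
```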

In my textureConverter class, I am attempting to convert the CVPixelBuffer to an MTLTexture, but am unsure how to return an RGB MTLTexture:

func makeTextureForMeshModel(frame: ARFrame) -> MTLTexture? {
    if CVPixelBufferGetPlaneCount(frame.capturedImage) < 2 { return nil }
    let cameraImageTextureY = createTexture(fromPixelBuffer: frame.capturedImage, pixelFormat: .r8Unorm, planeIndex: 0)
    let cameraImageTextureCbCr = createTexture(fromPixelBuffer: frame.capturedImage, pixelFormat: .rg8Unorm, planeIndex: 1)
    /* How do I blend the Y and CbCr textures, or return an RGB texture,
       so that this returns a single MTLTexture? */
    return ...
}

func createTexture(fromPixelBuffer pixelBuffer: CVPixelBuffer, pixelFormat: MTLPixelFormat, planeIndex: Int) -> CVMetalTexture? {
    let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, planeIndex)
    let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, planeIndex)
    var texture: CVMetalTexture? = nil
    let status = CVMetalTextureCacheCreateTextureFromImage(nil, textureCache, pixelBuffer, nil, pixelFormat, width, height, planeIndex, &texture)
    if status != kCVReturnSuccess {
        texture = nil
    }
    return texture
}

Finally, I am not sure whether I actually need an RGB texture or a YCbCr texture, but either way I am still unsure how to return the appropriate image for texturing (my attempt to just return the CVPixelBuffer without worrying about the color space, by manually setting the texture format, produced a very strange-looking image).
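On the blend question above: the usual approach in Apple's ARKit Metal sample code is to bind both the Y and the CbCr texture to a shader and convert to RGB per pixel there, writing into a single RGB destination texture, which then becomes the one MTLTexture to return. The conversion itself is a fixed matrix (BT.601 full range); as a sketch of just the per-pixel math, written in plain Swift rather than MSL:

```swift
// The per-pixel YCbCr -> RGB math that ARKit Metal shaders typically apply
// when sampling the Y (luma) and CbCr (chroma) planes. Chroma is centered
// on 0.5 in full-range video, hence the offsets. BT.601 coefficients.
func ycbcrToRGB(y: Float, cb: Float, cr: Float) -> (r: Float, g: Float, b: Float) {
    let cb = cb - 0.5
    let cr = cr - 0.5
    let r = y + 1.402 * cr
    let g = y - 0.3441 * cb - 0.7141 * cr
    let b = y + 1.772 * cb
    return (r, g, b)
}
```

In a Metal shader this is normally expressed as a float4x4 applied to float4(y, cb, cr, 1); a neutral-chroma pixel (cb = cr = 0.5) should come out gray, with r = g = b = y.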
