I want to render an animated NSView (or just the underlying CALayer) into a series of images without the view being presented on screen at all. I figured out how to do that with CARenderer and MTLTexture, but there are some issues with the approach below.
This runs in a playground and stores the output in an "Off-screen Render" folder in your Downloads directory:
import AppKit
import CoreImage
import CoreServices
import ImageIO
import Metal
import QuartzCore
import PlaygroundSupport

// Keep the playground alive while the animation and timer are running.
PlaygroundPage.current.needsIndefiniteExecution = true
// A layer-backed 600×400 container with a red circle that will be animated across it.
let view = NSView(frame: CGRect(x: 0, y: 0, width: 600, height: 400))
let circle = NSView(frame: CGRect(x: 0, y: 0, width: 50, height: 50))
circle.wantsLayer = true
circle.layer?.backgroundColor = NSColor.red.cgColor
circle.layer?.cornerRadius = 25
view.wantsLayer = true
view.addSubview(circle)
// CARenderer draws into a Metal texture, which must be usable as a render target.
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm, width: 600, height: 400, mipmapped: false)
textureDescriptor.usage = [.shaderRead, .shaderWrite, .renderTarget]

let device = MTLCreateSystemDefaultDevice()!
let texture: MTLTexture = device.makeTexture(descriptor: textureDescriptor)!
// Core Image is used to read the texture back into a CGImage for PNG output.
let context = CIContext(mtlDevice: device)

let renderer = CARenderer(mtlTexture: texture)
renderer.layer = view.layer
renderer.bounds = view.frame
let outputURL: URL = try! FileManager.default.url(for: .downloadsDirectory, in: .userDomainMask, appropriateFor: nil, create: false).appendingPathComponent("Off-screen Render")
try? FileManager.default.removeItem(at: outputURL)
try! FileManager.default.createDirectory(at: outputURL, withIntermediateDirectories: true, attributes: nil)
var frameNumber: Int = 0
func render() {
    Swift.print("Rendering frame #\(frameNumber)…")

    renderer.beginFrame(atTime: CACurrentMediaTime(), timeStamp: nil)
    renderer.addUpdate(renderer.bounds)
    renderer.render()
    renderer.endFrame()

    // Wrap the texture in a CIImage and write it out as a PNG.
    let ciImage: CIImage = CIImage(mtlTexture: texture)!
    let cgImage: CGImage = context.createCGImage(ciImage, from: ciImage.extent)!

    let url: URL = outputURL.appendingPathComponent("frame-\(frameNumber).png")
    let destination: CGImageDestination = CGImageDestinationCreateWithURL(url as CFURL, kUTTypePNG, 1, nil)!
    CGImageDestinationAddImage(destination, cgImage, nil)
    guard CGImageDestinationFinalize(destination) else { fatalError() }

    frameNumber += 1
}
var timer: Timer?

NSAnimationContext.runAnimationGroup({ context in
    context.duration = 0.25
    circle.animator().frame.origin = CGPoint(x: 550, y: 350)
}, completionHandler: {
    timer?.invalidate()
    render()
    Swift.print("Finished off-screen rendering of \(frameNumber) frames in \(outputURL.path)…")
})
// Render the first frame immediately after the animation starts and keep rendering on a timer
// until the animation completes. For the purpose of this demo a timer is used instead of a display link.
render()
timer = Timer.scheduledTimer(withTimeInterval: 1 / 30, repeats: true) { _ in render() }
The problems with the above code are shown in the attachment below:

1. The texture doesn't get cleared and each frame is drawn on top of the previous render. I'm aware that I can use replace(region:…), but I suspect that's inefficient compared to a render pass with a clear-color descriptor. Is this true? Can a render pass be used with CARenderer? (See the first sketch below.)

2. The first frame (in the real project it's two or three frames) often comes out empty. I suspect this has to do with some asynchronous behaviour in CARenderer's rendering or during CGImage construction using Core Image. How can this be avoided? Is there some kind of wait-until-rendering-finished callback on the texture? (See the second sketch below.)
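For problem 1, here's a minimal sketch of what I mean by clearing with a render pass: an otherwise empty render command encoder whose load action clears the texture to transparent black before each CARenderer frame. This is untested with CARenderer; commandQueue and clear(texture:) are my own names:

let commandQueue = device.makeCommandQueue()!

func clear(texture: MTLTexture) {
    // The .clear load action of an otherwise empty render pass wipes the texture.
    let passDescriptor = MTLRenderPassDescriptor()
    passDescriptor.colorAttachments[0].texture = texture
    passDescriptor.colorAttachments[0].loadAction = .clear
    passDescriptor.colorAttachments[0].storeAction = .store
    passDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)

    let commandBuffer = commandQueue.makeCommandBuffer()!
    let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: passDescriptor)!
    encoder.endEncoding()
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()
}

// Intended use, at the top of render():
// clear(texture: texture)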
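For problem 2, these are the two synchronization points I can think of, both guesses: flushing the implicit Core Animation transaction before the first frame so the layer tree is committed, and fencing the GPU after endFrame() with a blit that synchronizes the managed texture. waitForRender(of:on:) is a hypothetical helper, and I don't know whether a separate command queue is actually ordered against CARenderer's internal work:

// Guess #1: commit the implicit transaction right after attaching the layer,
// before the first beginFrame(atTime:timeStamp:).
CATransaction.flush()

// Guess #2: force a CPU-visible copy of the texture and block until the GPU
// is done, before reading it back with Core Image.
func waitForRender(of texture: MTLTexture, on queue: MTLCommandQueue) {
    let commandBuffer = queue.makeCommandBuffer()!
    let blit = commandBuffer.makeBlitCommandEncoder()!
    blit.synchronize(resource: texture) // required for .managed storage on macOS
    blit.endEncoding()
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()
}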
