Part of the @remotion/renderer package.
Takes a series of images and audio information generated by
renderFrames() and encodes it to a video.
An object with the following properties:
A string containing the absolute path of the directory where the frames are located. This is also the directory in which the
ffmpeg command will be executed.
A number specifying the desired frame rate of the output video.
A number specifying the desired output width of the video in pixels.
A number specifying the desired output height of the video in pixels.
An absolute path specifying where the output file should be written.
Whether an existing file at
outputLocation should be overwritten. Type: boolean.
Information about the audio mix. This is part of the return value of renderFrames().
It should match what you passed into the renderFrames() function. For backwards compatibility, it defaults to
'png' if you omit this parameter. Make sure to explicitly set this to
'jpeg' to take advantage of faster rendering.
Sets the pixel format. See here for available values. The default is yuv420p.
Sets a codec. See the encoding guide for available values and guidance on which one to choose. The default is h264.
The constant rate factor (CRF) of the output, a parameter which controls quality. See here for more information about this parameter. The default depends on the codec.
A callback function that reports on the encoding progress. The
progress value is a
number between 0 and 1.
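Since the progress value is normalized to the 0 to 1 range, a handler can turn it into a percentage for display. A minimal sketch (the toPercent helper is a hypothetical name, not part of the library):

```typescript
// Hypothetical helper: map a 0-1 progress value to a percentage string.
const toPercent = (progress: number): string =>
  `${Math.round(progress * 100)}%`;

// A handler matching the documented callback shape.
const onProgress = (progress: number): void => {
  console.log(`Encoding: ${toPercent(progress)}`);
};

onProgress(0.5); // logs "Encoding: 50%"
```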
Notifies when a remote asset needs to be downloaded in order to extract the audio track.
A boolean value that, when set to
true, will log all kinds of debug information. Default: false.
stitchFramesToVideo() returns a promise that resolves to nothing. If everything goes well, the output will be placed in outputLocation.
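Putting the parameters above together, an options object might look like the following. All concrete values (paths, dimensions, frame rate) are illustrative assumptions; the parameter names follow this page, but verify the exact shape against your installed version:

```typescript
import * as path from "path";

// Illustrative options for stitchFramesToVideo(). Every concrete value here
// is an assumption for the sake of the example.
const options = {
  dir: path.join(process.cwd(), "frames"), // where renderFrames() wrote the images; ffmpeg runs here
  fps: 30,
  width: 1920,
  height: 1080,
  outputLocation: path.join(process.cwd(), "out.mp4"),
  force: true, // overwrite out.mp4 if it already exists
  imageFormat: "jpeg", // must match what was passed to renderFrames()
  onProgress: (progress: number) =>
    console.log(`${Math.round(progress * 100)}% encoded`),
  onDownload: (src: string) =>
    console.log(`Downloading remote asset: ${src}`),
  verbose: false,
};

// assetsInfo comes from the resolved return value of renderFrames():
//   const assetsInfo = await renderFrames({ /* ... */ });
//   await stitchFramesToVideo({ ...options, assetsInfo });
```

The call itself is left commented out because assetsInfo must first be obtained from renderFrames(), as described above.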