
<HtmlInCanvas>

Available from v4.0.455.

This component renders its children into a <canvas> using the browser’s HTML-in-canvas API and allows you to draw an effect using the Canvas 2D API, WebGL or WebGPU.

HTML-in-canvas is only available in Chrome 149 and later, and only if the chrome://flags/#canvas-draw-element flag is enabled.

MyComp.tsx
```tsx
import React from 'react';
import {HtmlInCanvas} from 'remotion';

export const MyComp: React.FC = () => {
  return (
    <HtmlInCanvas width={1280} height={720}>
      <div style={{fontSize: 80}}>Hello</div>
    </HtmlInCanvas>
  );
};
```

HtmlInCanvas.isHtmlInCanvasSupported()

Return value: boolean

If HTML-in-canvas is not available, the component throws a fatal error.
Use HtmlInCanvas.isHtmlInCanvasSupported() to check whether HTML-in-canvas is supported.
Buggy implementations, such as the one in Chrome 147, are not considered supported.

Check if HTML-in-canvas is supported
```tsx
import {HtmlInCanvas} from 'remotion';

if (HtmlInCanvas.isHtmlInCanvasSupported()) {
  console.log('HTML-in-canvas is supported');
} else {
  console.log('HTML-in-canvas is not supported');
}
```
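
If you want to keep rendering something when HTML-in-canvas is unavailable, one option is to fall back to plain DOM output. A minimal sketch (the wrapper name and the fixed dimensions are illustrative, not part of the API):

```tsx
import React from 'react';
import {HtmlInCanvas} from 'remotion';

// Hypothetical wrapper: render the children directly (without the canvas
// effect) if HTML-in-canvas is not supported, instead of throwing.
export const HtmlInCanvasWithFallback: React.FC<{
  children: React.ReactNode;
}> = ({children}) => {
  if (!HtmlInCanvas.isHtmlInCanvasSupported()) {
    return <>{children}</>;
  }

  return (
    <HtmlInCanvas width={1280} height={720}>
      {children}
    </HtmlInCanvas>
  );
};
```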

API

width

Width of the canvas and the inner layout area, in pixels. Must be a positive integer.

height

Height of the canvas and the inner layout area, in pixels. Must be a positive integer.

children

Children to draw to the canvas.
Children will be wrapped in a <div> with the given width and height.

onPaint?

Called when the children have been updated and can be painted onto the canvas.
If this callback is omitted, the children are painted using a 2D context with no transform.

Simple example
```tsx
import {HtmlInCanvasOnPaint} from 'remotion';

const onPaint: HtmlInCanvasOnPaint = ({canvas, element, elementImage}) => {
  const ctx = canvas.getContext('2d');
  if (!ctx) {
    throw new Error('Failed to acquire 2D context');
  }

  ctx.reset();
  ctx.filter = 'blur(10px)';
  const transform = ctx.drawElementImage(elementImage, 0, 0);
  element.style.transform = transform.toString();
};
```

See below for WebGL and WebGPU examples.

The callback receives:

canvas

An OffscreenCanvas with dimensions width × height.
You should paint to this canvas.

element

The inner HTMLDivElement that wraps children.
You should apply the return value of drawElementImage to this element's style.transform property.

elementImage

An ElementImage handle for the current capture.
You should paint this image to the canvas.

onInit?

Runs once before the first paint. Use it to create GPU contexts or other resources tied to the OffscreenCanvas. Must return a cleanup function, or a Promise that resolves to one. The cleanup runs on unmount.

The argument object matches onPaint (canvas, element, elementImage). The elementImage passed here is only for initialization — capture again inside onPaint for each frame.
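
A minimal sketch of the onInit contract, assuming a 2D effect that stores its context in a module-level holder (the GPU examples below use a React ref instead):

```tsx
import type {HtmlInCanvasOnInit} from 'remotion';

// Illustrative holder for the context so that onPaint can reuse it.
const ctxRef: {current: OffscreenCanvasRenderingContext2D | null} = {
  current: null,
};

const onInit: HtmlInCanvasOnInit = ({canvas}) => {
  const ctx = canvas.getContext('2d');
  if (!ctx) {
    throw new Error('Failed to acquire 2D context');
  }
  ctxRef.current = ctx;

  // The cleanup function runs on unmount.
  return () => {
    ctxRef.current = null;
  };
};
```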

durationInFrames?

Inherited from <Sequence>.

from?

Inherited from <Sequence>.
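
For example, the following sketch delays the canvas by 30 frames and limits it to 60 frames, like the equivalent props on <Sequence>:

```tsx
import React from 'react';
import {HtmlInCanvas} from 'remotion';

// Sketch: the canvas is mounted at frame 30 and unmounted after 60 frames.
export const Timed: React.FC = () => {
  return (
    <HtmlInCanvas from={30} durationInFrames={60} width={1280} height={720}>
      <div style={{fontSize: 80}}>Timed</div>
    </HtmlInCanvas>
  );
};
```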

ref?

You can add a React ref to <HtmlInCanvas />.
It is attached to the layout HTMLCanvasElement — the canvas that hosts the laid-out subtree (layoutSubtree).

If you use TypeScript, type the ref with HTMLCanvasElement:

src/Example.tsx
```tsx
import React, {useRef} from 'react';
import {AbsoluteFill, HtmlInCanvas} from 'remotion';

export const Example: React.FC = () => {
  const canvasRef = useRef<HTMLCanvasElement>(null);

  return (
    <HtmlInCanvas ref={canvasRef} width={1280} height={720}>
      <AbsoluteFill style={{fontSize: 80}}>Hello</AbsoluteFill>
    </HtmlInCanvas>
  );
};
```

onPaint examples

2D

Call drawElementImage() on the 2D context to draw elementImage into the bitmap.
HTML-in-canvas recommends assigning the return value to element.style.transform so that the original DOM element matches the drawn transform and text selection keeps working.

For 2D, you usually do not need onInit.

2D: animated blur + drawElementImage
```tsx
import React, {useCallback} from 'react';
import {
  AbsoluteFill,
  HtmlInCanvas,
  type HtmlInCanvasOnPaint,
  useCurrentFrame,
  useVideoConfig,
} from 'remotion';

const BLUR_MIN_PX = 4;
const BLUR_MAX_PX = 22;
const BLUR_CYCLES_PER_SECOND = 0.35;

export const HtmlInCanvas2DBlur: React.FC = () => {
  const frame = useCurrentFrame();
  const {width, height, fps} = useVideoConfig();

  const onPaint: HtmlInCanvasOnPaint = useCallback(
    ({canvas, element, elementImage}) => {
      const ctx = canvas.getContext('2d');
      if (!ctx) {
        throw new Error('Failed to acquire 2D context');
      }

      const t = (frame / fps) * Math.PI * 2 * BLUR_CYCLES_PER_SECOND;
      const blurPx =
        BLUR_MIN_PX + (BLUR_MAX_PX - BLUR_MIN_PX) * (0.5 + 0.5 * Math.sin(t));

      ctx.reset();
      ctx.filter = `blur(${blurPx}px)`;
      const transform = ctx.drawElementImage(elementImage, 0, 0);
      element.style.transform = transform.toString();
    },
    [frame, fps],
  );

  return (
    <HtmlInCanvas width={width} height={height} onPaint={onPaint}>
      <AbsoluteFill
        style={{
          justifyContent: 'center',
          alignItems: 'center',
          backgroundColor: '#1a1a2e',
          color: 'white',
          fontSize: 120,
          fontFamily: 'sans-serif',
        }}
      >
        <h1 style={{margin: 0}}>Hello</h1>
      </AbsoluteFill>
    </HtmlInCanvas>
  );
};
```

WebGL

Do all setup in onInit, such as getting the WebGL context and compiling the shaders.
onInit must return a cleanup function that destroys the resources created in onInit.

Use texElementImage2D to turn the elementImage into a texture.

WebGL2: minimal full component
```tsx
/**
 * Minimal WebGL2 + HtmlInCanvas sample (same code as /docs/remotion/html-in-canvas).
 * UV wave distortion in the fragment shader (not expressible as a static CSS filter).
 */
import React, {useCallback, useRef} from 'react';
import {
  AbsoluteFill,
  HtmlInCanvas,
  HtmlInCanvasOnInit,
  HtmlInCanvasOnPaint,
  useCurrentFrame,
  useVideoConfig,
} from 'remotion';

type GlState = {
  gl: WebGL2RenderingContext;
  program: WebGLProgram;
  uTex: WebGLUniformLocation | null;
  uTime: WebGLUniformLocation | null;
  texture: WebGLTexture;
  vao: WebGLVertexArrayObject;
};

const VS = `#version 300 es
in vec2 a_pos;
in vec2 a_uv;
out vec2 v_uv;
void main() {
  gl_Position = vec4(a_pos, 0.0, 1.0);
  v_uv = a_uv;
}`;

const FS = `#version 300 es
precision highp float;
uniform sampler2D u_tex;
uniform float u_time;
in vec2 v_uv;
out vec4 o;
void main() {
  vec2 uv = v_uv;
  uv.x += 0.045 * sin(v_uv.y * 32.0 + u_time * 5.0);
  uv.y += 0.038 * sin(v_uv.x * 26.0 + u_time * 4.0);
  o = texture(u_tex, uv);
}`;

function linkProgram(
  gl: WebGL2RenderingContext,
  vsSrc: string,
  fsSrc: string,
): WebGLProgram {
  const vert = gl.createShader(gl.VERTEX_SHADER)!;
  gl.shaderSource(vert, vsSrc);
  gl.compileShader(vert);
  const frag = gl.createShader(gl.FRAGMENT_SHADER)!;
  gl.shaderSource(frag, fsSrc);
  gl.compileShader(frag);
  const program = gl.createProgram()!;
  gl.attachShader(program, vert);
  gl.attachShader(program, frag);
  gl.linkProgram(program);
  gl.deleteShader(vert);
  gl.deleteShader(frag);
  return program;
}

const QUAD = new Float32Array([
  -1, -1, 0, 0,
  1, -1, 1, 0,
  -1, 1, 0, 1,
  1, -1, 1, 0,
  -1, 1, 0, 1,
  1, 1, 1, 1,
]);

export const HtmlInCanvasDocsMinimalWebGL: React.FC = () => {
  const frame = useCurrentFrame();
  const {width, height, fps} = useVideoConfig();
  const gpuRef = useRef<GlState | null>(null);

  const onInit: HtmlInCanvasOnInit = useCallback(({canvas}) => {
    const gl = canvas.getContext('webgl2', {
      alpha: true,
      premultipliedAlpha: true,
      antialias: false,
    });
    if (!gl) {
      throw new Error('WebGL2 unavailable');
    }

    gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);

    const program = linkProgram(gl, VS, FS);
    const uTex = gl.getUniformLocation(program, 'u_tex');
    const uTime = gl.getUniformLocation(program, 'u_time');

    const texture = gl.createTexture()!;
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

    const buffer = gl.createBuffer()!;
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferData(gl.ARRAY_BUFFER, QUAD, gl.STATIC_DRAW);

    const vao = gl.createVertexArray()!;
    gl.bindVertexArray(vao);
    const locPos = gl.getAttribLocation(program, 'a_pos');
    const locUv = gl.getAttribLocation(program, 'a_uv');
    gl.enableVertexAttribArray(locPos);
    gl.vertexAttribPointer(locPos, 2, gl.FLOAT, false, 16, 0);
    gl.enableVertexAttribArray(locUv);
    gl.vertexAttribPointer(locUv, 2, gl.FLOAT, false, 16, 8);

    gpuRef.current = {gl, program, uTex, uTime, texture, vao};

    return () => {
      gl.deleteProgram(program);
      gl.deleteTexture(texture);
      gl.deleteVertexArray(vao);
      gl.deleteBuffer(buffer);
      gpuRef.current = null;
    };
  }, []);

  const onPaint: HtmlInCanvasOnPaint = useCallback(
    ({elementImage}) => {
      const gpu = gpuRef.current;
      if (!gpu) {
        return;
      }

      const {gl} = gpu;
      gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
      gl.useProgram(gpu.program);

      gl.activeTexture(gl.TEXTURE0);
      gl.bindTexture(gl.TEXTURE_2D, gpu.texture);
      gl.texElementImage2D(
        gl.TEXTURE_2D,
        0,
        gl.RGBA,
        gl.RGBA,
        gl.UNSIGNED_BYTE,
        elementImage,
      );

      if (gpu.uTex) {
        gl.uniform1i(gpu.uTex, 0);
      }
      if (gpu.uTime) {
        gl.uniform1f(gpu.uTime, frame / fps);
      }

      gl.bindVertexArray(gpu.vao);
      gl.drawArrays(gl.TRIANGLES, 0, 6);
    },
    [frame, fps],
  );

  return (
    <HtmlInCanvas
      width={width}
      height={height}
      onInit={onInit}
      onPaint={onPaint}
    >
      <AbsoluteFill
        style={{
          justifyContent: 'center',
          alignItems: 'center',
          color: 'white',
          fontSize: 120,
        }}
      >
        <h1>Hello</h1>
      </AbsoluteFill>
    </HtmlInCanvas>
  );
};
```

WebGPU

Use onInit to get a WebGPU context, request a GPU device, and compile shaders.
onInit must return a cleanup function that destroys the resources created in onInit.
In onPaint, draw the elementImage to the canvas using copyElementImageToTexture.

compose-webgpu.tsx
```tsx
import React, {useCallback, useRef} from 'react';
import {
  AbsoluteFill,
  HtmlInCanvas,
  type HtmlInCanvasOnInit,
  type HtmlInCanvasOnPaint,
  useCurrentFrame,
  useVideoConfig,
} from 'remotion';

// Minimal WebGPU types — `@webgpu/types` is intentionally not a dependency,
// matching the convention in `packages/core/src/effects/gpu-device.ts`.
type Gpu = {
  requestAdapter(): Promise<GpuAdapter | null>;
  getPreferredCanvasFormat(): string;
};
type GpuAdapter = {requestDevice(): Promise<GpuDevice>};
type GpuTextureView = unknown;
type GpuTexture = {createView(): GpuTextureView; destroy(): void};
type GpuBuffer = {destroy(): void};
type GpuBindGroup = unknown;
type GpuPipeline = unknown;
type GpuSampler = unknown;
type GpuShaderModule = unknown;
type GpuDevice = {
  createShaderModule(d: {code: string}): GpuShaderModule;
  createRenderPipeline(d: unknown): GpuPipeline;
  createTexture(d: unknown): GpuTexture;
  createSampler(d?: unknown): GpuSampler;
  createBindGroup(d: unknown): GpuBindGroup;
  createBuffer(d: unknown): GpuBuffer;
  createCommandEncoder(): {
    beginRenderPass(d: unknown): {
      setPipeline(p: GpuPipeline): void;
      setBindGroup(i: number, b: GpuBindGroup): void;
      draw(n: number): void;
      end(): void;
    };
    finish(): unknown;
  };
  queue: {
    submit(c: unknown[]): void;
    writeBuffer(b: GpuBuffer, offset: number, data: BufferSource): void;
    copyElementImageToTexture(
      source: Element | ElementImage,
      width: number,
      height: number,
      destination: {texture: GpuTexture},
    ): void;
  };
};
type GpuCanvasContext = {
  configure(d: {
    device: GpuDevice;
    format: string;
    alphaMode: 'premultiplied' | 'opaque';
  }): void;
  getCurrentTexture(): GpuTexture;
};

const WGSL = /* wgsl */ `
struct VsOut {
  @builtin(position) pos: vec4f,
  @location(0) uv: vec2f,
};

@vertex
fn vs(@builtin(vertex_index) i: u32) -> VsOut {
  // Fullscreen triangle (slightly oversized — clipped to viewport).
  var p = array(vec2f(-1.0, -3.0), vec2f(-1.0, 1.0), vec2f(3.0, 1.0));
  var uv = array(vec2f(0.0, 2.0), vec2f(0.0, 0.0), vec2f(2.0, 0.0));
  var o: VsOut;
  o.pos = vec4f(p[i], 0.0, 1.0);
  o.uv = uv[i];
  return o;
}

struct U {
  time: f32,
  _pad: f32,
  resolution: vec2f,
};

@group(0) @binding(0) var samp: sampler;
@group(0) @binding(1) var tex: texture_2d<f32>;
@group(0) @binding(2) var<uniform> u: U;

@fragment
fn fs(in: VsOut) -> @location(0) vec4f {
  // Animate pixel cell size with a slow breathing motion.
  let cell = 6.0 + sin(u.time * 0.8) * 4.0;
  let snapped = floor(in.uv * u.resolution / cell) * cell / u.resolution;
  // Slight chromatic offset between channels — sampled from snapped centers.
  let off = vec2f(2.0, 0.0) / u.resolution;
  let r = textureSample(tex, samp, snapped + off).r;
  let g = textureSample(tex, samp, snapped).g;
  let b = textureSample(tex, samp, snapped - off).b;
  let a = textureSample(tex, samp, snapped).a;
  // Posterize to 5 levels per channel for a flatter, screenprint look.
  let levels = 5.0;
  let q = floor(vec3f(r, g, b) * levels) / (levels - 1.0);
  return vec4f(q, a);
}
`;

type GpuState = {
  device: GpuDevice;
  context: GpuCanvasContext;
  pipeline: GpuPipeline;
  sampler: GpuSampler;
  texture: GpuTexture;
  uniformBuffer: GpuBuffer;
  bindGroup: GpuBindGroup;
  width: number;
  height: number;
};

export const HtmlInCanvasComposeWebGPU: React.FC = () => {
  const frame = useCurrentFrame();
  const {width, height, fps} = useVideoConfig();
  const gpuRef = useRef<GpuState | null>(null);
  const time = frame / fps;

  const onInit: HtmlInCanvasOnInit = useCallback(async ({canvas}) => {
    if (typeof navigator === 'undefined' || !('gpu' in navigator)) {
      throw new Error('WebGPU is not available in this environment');
    }
    const gpu = (navigator as unknown as {gpu: Gpu}).gpu;
    const adapter = await gpu.requestAdapter();
    if (!adapter) {
      throw new Error('No WebGPU adapter available');
    }
    const device = await adapter.requestDevice();
    const context = (
      canvas as unknown as {
        getContext(id: 'webgpu'): GpuCanvasContext | null;
      }
    ).getContext('webgpu');
    if (!context) {
      throw new Error('WebGPU context unavailable on OffscreenCanvas');
    }

    // Use the device's preferred swap-chain format (typically `bgra8unorm`)
    // to avoid an extra format-conversion copy on present.
    const presentationFormat = gpu.getPreferredCanvasFormat();
    context.configure({
      device,
      format: presentationFormat,
      alphaMode: 'premultiplied',
    });

    const module = device.createShaderModule({code: WGSL});
    const pipeline = device.createRenderPipeline({
      layout: 'auto',
      vertex: {module, entryPoint: 'vs'},
      fragment: {
        module,
        entryPoint: 'fs',
        targets: [{format: presentationFormat}],
      },
      primitive: {topology: 'triangle-list'},
    });

    const TextureUsage = (
      globalThis as unknown as {
        GPUTextureUsage: {
          COPY_DST: number;
          TEXTURE_BINDING: number;
          RENDER_ATTACHMENT: number;
        };
      }
    ).GPUTextureUsage;
    const BufferUsage = (
      globalThis as unknown as {
        GPUBufferUsage: {UNIFORM: number; COPY_DST: number};
      }
    ).GPUBufferUsage;

    const texture = device.createTexture({
      size: {width: canvas.width, height: canvas.height},
      format: 'rgba8unorm',
      usage:
        TextureUsage.COPY_DST |
        TextureUsage.TEXTURE_BINDING |
        TextureUsage.RENDER_ATTACHMENT,
    });

    const sampler = device.createSampler({
      magFilter: 'linear',
      minFilter: 'linear',
      addressModeU: 'clamp-to-edge',
      addressModeV: 'clamp-to-edge',
    });

    // 16 bytes: time (f32), pad (f32), resolution (vec2f).
    const uniformBuffer = device.createBuffer({
      size: 16,
      usage: BufferUsage.UNIFORM | BufferUsage.COPY_DST,
    });

    const bindGroup = device.createBindGroup({
      layout: (
        pipeline as unknown as {
          getBindGroupLayout(i: number): unknown;
        }
      ).getBindGroupLayout(0),
      entries: [
        {binding: 0, resource: sampler},
        {binding: 1, resource: texture.createView()},
        {binding: 2, resource: {buffer: uniformBuffer}},
      ],
    });

    gpuRef.current = {
      device,
      context,
      pipeline,
      sampler,
      texture,
      uniformBuffer,
      bindGroup,
      width: canvas.width,
      height: canvas.height,
    };

    return () => {
      texture.destroy();
      uniformBuffer.destroy();
      gpuRef.current = null;
    };
  }, []);

  const onPaint: HtmlInCanvasOnPaint = useCallback(
    ({elementImage}) => {
      const gpu = gpuRef.current;
      if (!gpu) {
        return;
      }

      const {device, context, pipeline, texture, bindGroup, uniformBuffer} =
        gpu;

      device.queue.copyElementImageToTexture(
        elementImage,
        gpu.width,
        gpu.height,
        {texture},
      );

      const uniforms = new Float32Array([time, 0, gpu.width, gpu.height]);
      device.queue.writeBuffer(uniformBuffer, 0, uniforms);

      const encoder = device.createCommandEncoder();
      const view = context.getCurrentTexture().createView();
      const pass = encoder.beginRenderPass({
        colorAttachments: [
          {
            view,
            clearValue: {r: 0, g: 0, b: 0, a: 0},
            loadOp: 'clear',
            storeOp: 'store',
          },
        ],
      });
      pass.setPipeline(pipeline);
      pass.setBindGroup(0, bindGroup);
      pass.draw(3);
      pass.end();
      device.queue.submit([encoder.finish()]);
    },
    [time],
  );

  return (
    <AbsoluteFill
      style={{
        justifyContent: 'center',
        alignItems: 'center',
      }}
    >
      <HtmlInCanvas
        width={width}
        height={height}
        onInit={onInit}
        onPaint={onPaint}
      >
        <AbsoluteFill
          style={{
            backgroundColor: 'white',
            color: 'black',
            justifyContent: 'center',
            alignItems: 'center',
            fontSize: 120,
            fontFamily: 'sans-serif',
            fontWeight: 'bold',
          }}
        >
          <h1>Hello, World!</h1>
        </AbsoluteFill>
      </HtmlInCanvas>
    </AbsoluteFill>
  );
};
```

Async callbacks

You can use await inside onPaint and Remotion will keep the frame open via delayRender() until the promise settles.
This might be necessary if you are implementing a multi-pass effect using multiple contexts.

Async: createImageBitmap then drawImage
```tsx
import {HtmlInCanvasOnPaint} from 'remotion';

declare const width: number;
declare const height: number;

const onPaint: HtmlInCanvasOnPaint = async ({canvas, elementImage}) => {
  const ctx = canvas.getContext('2d');
  if (!ctx) {
    return;
  }

  ctx.reset();
  ctx.drawElementImage(elementImage, 0, 0);

  const bitmap = await createImageBitmap(canvas);
  try {
    ctx.reset();
    ctx.drawImage(bitmap, 0, 0, width, height);
  } finally {
    bitmap.close();
  }
};
```

Compatibility

| Browser | Support |
| --- | --- |
| Chrome | 149 and later, with the chrome://flags/#canvas-draw-element flag enabled |
| Firefox | Not supported |
| Safari | Not supported |
