The selected DOM element (let's say a pop-up `<div>`) gets burned 🔥 within a GLSL shader.
Let me quickly explain how it works. To understand the code, you'll need at least a basic understanding of JS, Three.js, and GLSL. When the user presses the button, we do the following:

- convert the pop-up `<div>` into a texture
- pass the texture and other data to Three.js
We can't apply a WebGL effect directly to a DOM element; only data can be passed to the shader. Luckily, a texture is data, so we convert the element into a texture and then transform this static image.
Taking a "screenshot" of a DOM element isn't something that can be achieved with native JS methods, but there are many libraries available. I've tried html2canvas and dom-to-image (I ended up with the second one). Both libraries worked fine for a simple pop-up window, but you may face issues with complex DOM objects and unsupported CSS. This is because these libs do lots of stuff to get the image: they recursively clone the DOM element, apply all the styles, embed objects, and render the result to canvas. Things can go wrong, you know.
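With dom-to-image, the conversion might look like this sketch (assuming the library is available as `domtoimage` and Three.js as `THREE` in the browser; the function name is illustrative):

```javascript
// Sketch: turn a DOM element into a Three.js texture.
// dom-to-image clones the node, inlines its styles, and rasterizes it to a PNG.
async function elementToTexture(element) {
  // Rasterize the element into a data URL
  const dataUrl = await domtoimage.toPng(element);
  // Load the resulting image as a texture we can hand to the shader
  const texture = await new THREE.TextureLoader().loadAsync(dataUrl);
  return texture;
}
```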
As for the environment to run the shader, Three.js is not the only option, but it's easy and I'm just used to it.
I create a new `THREE.Scene`, a `THREE.OrthographicCamera`, and a full-screen plane (a `THREE.PlaneGeometry` mesh) that takes up the whole scene space.
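A sketch of that setup (assuming Three.js is loaded as `THREE`; with an orthographic camera spanning [-1, 1], a 2×2 plane always fills the viewport — a plain `MeshBasicMaterial` stands in here for the shader material):

```javascript
// Sketch: a minimal full-screen-quad scene for running a 2D effect.
function createBurnScene(texture) {
  const scene = new THREE.Scene();
  // Orthographic frustum covering [-1, 1] in both axes, no perspective
  const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
  // A 2x2 plane exactly covers the camera's view
  const geometry = new THREE.PlaneGeometry(2, 2);
  const material = new THREE.MeshBasicMaterial({ map: texture });
  scene.add(new THREE.Mesh(geometry, material));
  return { scene, camera };
}
```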
To match the Three.js `<canvas>` to the original HTML element, we:

- set the `<canvas>` size with the JS method `renderer.setSize`
- position the `<canvas>` on the screen by copying CSS properties from the pop-up `<div>` bounding box
Once this is done, we hide the original pop-up element.
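The positioning step can be sketched as a pure function that maps a bounding box (as returned by `getBoundingClientRect()`) to the CSS we need (the `pointerEvents` choice is my assumption, so clicks pass through while the burn plays):

```javascript
// Sketch: compute the CSS needed to overlay the <canvas> exactly on the pop-up.
function canvasStyleFromRect(rect) {
  return {
    position: "fixed",
    left: rect.left + "px",
    top: rect.top + "px",
    width: rect.width + "px",
    height: rect.height + "px",
    pointerEvents: "none", // don't block clicks underneath
  };
}
```

In the browser you'd apply it with something like `Object.assign(renderer.domElement.style, canvasStyleFromRect(popup.getBoundingClientRect()))`, alongside `renderer.setSize(rect.width, rect.height)`.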
To run a custom shader on the Three.js plane, we use `THREE.ShaderMaterial` and create both vertex and fragment shaders. Along with the texture, we pass other uniforms there: the time elapsed since the click, the area size in pixels, and the pop-up aspect ratio.
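A sketch of that material (assuming Three.js as `THREE`; the uniform names `u_texture`, `u_time`, `u_resolution`, and `u_ratio` are illustrative, not taken from the original code):

```javascript
// Sketch: a ShaderMaterial carrying the texture plus the extra uniforms.
function createBurnMaterial(texture, width, height, vertexShader, fragmentShader) {
  return new THREE.ShaderMaterial({
    uniforms: {
      u_texture: { value: texture },                             // the pop-up "screenshot"
      u_time: { value: 0 },                                      // seconds since the click
      u_resolution: { value: new THREE.Vector2(width, height) }, // area size in pixels
      u_ratio: { value: width / height },                        // pop-up aspect ratio
    },
    vertexShader,
    fragmentShader,
    transparent: true, // burned-out areas become see-through
  });
}
```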
The vertex shader doesn't do anything special; we simply use it to pass the UV coordinates to the fragment shader.
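A pass-through vertex shader for this setup might look like this (`uv`, `position`, and the matrices are attributes and uniforms Three.js provides automatically):

```glsl
varying vec2 vUv;

void main() {
    // Just forward the UV coordinates to the fragment shader
    vUv = uv;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
```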
The fun stuff begins in the fragment shader. First, we render the texture as it is, then we apply the additional layers to it.
For both masking and coloring, we use Fractal Brownian Motion (FBM), a type of noise that's perfectly described in the Book of Shaders.
With FBM, we calculate a `noise_mask` (ensuring the noise frequency is kinda consistent for different canvas sizes).
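A classic value-noise FBM, following the Book of Shaders pattern (the `u_resolution`-based scaling and the `100.0` divisor are my assumptions for keeping frequency roughly independent of canvas size):

```glsl
float random(vec2 st) {
    return fract(sin(dot(st, vec2(12.9898, 78.233))) * 43758.5453123);
}

// Smoothly interpolated value noise on an integer grid
float noise(vec2 st) {
    vec2 i = floor(st);
    vec2 f = fract(st);
    float a = random(i);
    float b = random(i + vec2(1.0, 0.0));
    float c = random(i + vec2(0.0, 1.0));
    float d = random(i + vec2(1.0, 1.0));
    vec2 u = f * f * (3.0 - 2.0 * f);
    return mix(a, b, u.x) + (c - a) * u.y * (1.0 - u.x) + (d - b) * u.x * u.y;
}

// Sum several octaves: each one doubles the frequency, halves the amplitude
float fbm(vec2 st) {
    float value = 0.0;
    float amplitude = 0.5;
    for (int i = 0; i < 5; i++) {
        value += amplitude * noise(st);
        st *= 2.0;
        amplitude *= 0.5;
    }
    return value;
}

// Scale UVs by pixel size so the pattern density doesn't depend on canvas size
float noise_mask = fbm(vUv * u_resolution / 100.0);
```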
The `noise_mask` is further adjusted by an `edges_mask`, which makes the fire start at the edges of the image.
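One possible way to build such a mask (the falloff width and the blend below are made-up values, not the original code):

```glsl
// Distance to the nearest edge of the quad: 0 at the borders, 0.5 at the center
float edge_dist = min(min(vUv.x, 1.0 - vUv.x), min(vUv.y, 1.0 - vUv.y));

// 1 near the borders, fading to 0 towards the center
float edges_mask = 1.0 - smoothstep(0.0, 0.35, edge_dist);

// Bias the noise so burning begins at the edges and spreads inwards
noise_mask = noise_mask * (0.5 + 0.5 * edges_mask);
```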
We pass `noise_mask` to the `smoothstep` function. With the `smoothstep` edges changing from 0 to 1, we can go from zero-level masking to full coverage of the area.
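To see what moving the edges does, here is the same math in plain JS (GLSL's `smoothstep` behaves identically):

```javascript
// GLSL-style smoothstep: clamp to [0, 1], then apply the 3t^2 - 2t^3 curve
function smoothstep(edge0, edge1, x) {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1);
  return t * t * (3 - 2 * t);
}

// With high edges, only the brightest noise values pass: sparse burning spots.
smoothstep(0.7, 1.0, 0.5); // → 0, this pixel isn't burning yet
// Slide the edges down over time and the same pixel becomes fully covered.
smoothstep(0.0, 0.3, 0.5); // → 1, fully burned
```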
Using the same `noise_mask` with different edge limits, we can calculate the following from time and coordinates:
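For instance, a sketch of how several layers could be driven from one noise value (the edge values and the `progress` variable, some easing of `u_time` from 0 to 1, are hypothetical):

```glsl
// Shift the noise by time so the burn front moves across the pop-up
float n = noise_mask + progress;

// Same noise, different smoothstep windows → concentric layers of the burn:
float hole    = smoothstep(0.9, 1.0, n); // fully burned-out area (alpha -> 0)
float charred = smoothstep(0.8, 0.9, n); // blackened paper just behind the fire
float glow    = smoothstep(0.7, 0.8, n); // glowing front where the fire burns
```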
This quick tutorial is only here because the pop-up looks better with some text in the background. If you'd like me to make a proper tutorial or if you have any questions about the code, please reach out on Twitter.