Edd Biddulph


May 2013
Screenspace Particle Physics

YouTube video



Particles are essential to many visually stunning effects in computer graphics. They can be used to create dramatic explosions for scenes of intense action, or they can add soul and atmosphere to a steamy underground lair or New York sidewalk. Particle systems have been an important part of visuals in gaming for a long time, with one of the earliest examples being Quake's rocket-powered blasts and gory splatters of blood (okay, so I admit Quake only used large solid rectangles to achieve this, but disable them completely and you might notice the classic shooter begin to feel rather empty). Particles can be animated in a variety of ways, and once a good basic method of rendering and animating them is in place, many different looks and styles can be achieved. Adjustments are usually made by an artist given a suitably flexible set of tools. For motion, a particle's position at a particular frame can be a pure function of time, or it can be the result of a simulation which incorporates environmental influences.



This is the first of my two rather lame diagrams for this article.


As an example, a visual effect consisting of hot sparks perhaps jettisoned from the grinding machinery of a malevolent robot (let's call him Jimmy) could be made more convincing if the sparks ricocheted off the floor and other nearby surfaces, including the mechanical parts of the robot itself. There are many ways in which the collision response could be created, but one idea which I have had floating around in my head for some time now, and which forms the main subject of this article, is to use the depth information from the rendered scene. This information can be used as-is to reflect the velocity vectors of particles, effectively colliding them with the rendered geometry. Obviously there will be much information missing from a single render using the camera's view, but transient effects like spewing sparks and small explosions don't require a full scene database to create a convincing approximation. These approximations should be 'just good enough' to convince an observer that the sparks and debris are in fact interacting with the environment and characters on screen.

This is ideal for very short-burst effects such as explosions with debris, or short-range laser fire. Especially in games, these events often occur amid many other events which demand the viewer's attention. All that is really needed is a hint to the viewer and their mind will fill in the rest. This kind of simplification forms the basis for many perceptual optimisations in computer graphics - PCF shadow penumbrae, irradiance propagation volumes, and postprocess motion-blur just to name a few.

See the bottom of this page for a Windows binary, source code, and CodeBlocks project file.


The Basic Algorithm

The following diagram shows Jimmy (our robot from earlier) ejecting a series of particles, some of which are considered by the screenspace particle physics engine to be intersecting environmental geometry. Since in this example only one depth value is stored per pixel, a threshold is used to estimate the thickness of on-screen geometry. This is a scene-dependent value which allows particles to go behind objects in the scene. Allowing particles to go behind geometry (that is, become occluded with respect to the camera) may be necessary if the particles are likely to travel far from their point of emission and have long lifespans. The threshold value can be defined on a per-emitter or even per-particle basis. A particle's lifespan is the length of time the particle should exist for before being removed from the scene altogether. Removing expired particles helps avoid drops in performance and exhaustion of memory.



The second diagram. Not much better than the first. Don't worry, there aren't any more after this!


The algorithm proceeds as follows:

1. Render the scene and keep its depth buffer available as a texture.
2. For each particle, project its position into screenspace and reconstruct the viewspace position of the surface stored at that pixel.
3. If the particle lies behind that surface but within the thickness threshold, treat this as a collision: estimate a surface normal from the depth texture and reflect the particle's velocity about it.
4. Integrate the particle velocities and positions, and render the particles.

The viewspace position for a pixel can be derived from the clipspace depth and the pixel's location using the inverse projection matrix. Here is the function which does this from the particle update shader (pm_f.glsl):

vec3 vsPointForCoord(vec2 tc)
{
    // Fetch the stored depth for this pixel and rebuild its clipspace position.
    float z_over_w = texture2D(depth_tex, tc).r;
    vec4 w_pos = vec4(tc.x * 2.0 - 1.0, tc.y * 2.0 - 1.0, z_over_w, 1.0);

    // Unproject into viewspace and perform the perspective divide.
    vec4 v_pos = inv_view_p * w_pos;
    return (v_pos / v_pos.w).xyz;
}

This function takes a scaled-and-biased normalised device coordinate (basically the texture coordinate for a screen-covering texture) for a pixel and returns a point in viewspace. inv_view_p is the inverted projection matrix. A similar function would be used in deferred shading.

There are two options for performing the actual collision detection: a single lookup into the depth texture for a particle, or a trace from the particle's previous position to the current one. The trace can be implemented just like the raymarching technique for relief mapping: points are sampled along the ray, and the first point which goes 'inside' the depth is used together with the previous outside point to arrive at an estimated intersection position. This is more appropriate for high-speed particle motion and can help avoid problems with particles becoming stuck inside surfaces (although heuristic rules can be added to avoid this problem anyway).
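
To make the traced variant concrete, here is a minimal sketch of how it could look in the particle update shader. It reuses vsPointForCoord from above, but it is not taken from pm_f.glsl: the uniforms view_p (the projection matrix) and thickness (the per-emitter threshold), and the fixed step count, are illustrative assumptions.

uniform mat4 view_p;       // projection matrix (assumed name)
uniform float thickness;   // estimated thickness of on-screen geometry (assumed name)

bool traceCollision(vec3 vs_prev, vec3 vs_curr, out vec3 vs_hit)
{
    const int num_steps = 8;   // more steps for faster particles
    for(int i = 1; i <= num_steps; ++i)
    {
        vec3 vs_sample = mix(vs_prev, vs_curr, float(i) / float(num_steps));

        // Project the sample back into the depth texture's coordinate space.
        vec4 cs = view_p * vec4(vs_sample, 1.0);
        vec2 tc = (cs.xy / cs.w) * 0.5 + 0.5;

        // The sample is 'inside' if it lies behind the stored surface but within the
        // assumed thickness (viewspace z becomes more negative with distance).
        float penetration = vsPointForCoord(tc).z - vs_sample.z;
        if(penetration > 0.0 && penetration < thickness)
        {
            // Take the midpoint of the last outside sample and this inside sample
            // as the estimated intersection position.
            vs_hit = mix(vs_prev, vs_curr, (float(i) - 0.5) / float(num_steps));
            return true;
        }
    }
    return false;
}

The single-lookup variant is essentially this loop collapsed to one sample at the particle's current position.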


There are many advantages to performing the particle physics simulation in this way:

- No separate collision database is needed - anything that is rendered into the depth texture, including animated characters, can be collided with.
- The cost of a collision test is a handful of texture lookups per particle, independent of scene complexity.
- The whole update runs on the GPU alongside the rest of the particle simulation.

However there are also disadvantages:

- Only surfaces visible from the camera are available, so particles can pass through geometry which is off-screen or occluded.
- The thickness of on-screen geometry has to be guessed with a threshold.
- The results depend on the camera: a sudden camera movement can change the collision environment from one frame to the next.

Surface Normals

In my example implementation, surface normals are extracted from the clipspace depth texture using a cross-product. Three depths are used: the depth at the particle's screenspace position, and two depths adjacent to this one. The following code fragment is taken from the particle update shader (pm_f.glsl):

vec2 eps = vec2(1.0) / textureSize(depth_tex, 0).xy;

// Sample the depth at the particle's screenspace position and at two neighbouring pixels.
vec2 proj_tc1 = proj_tc0 + vec2(eps.x, 0.0);
vec2 proj_tc2 = proj_tc0 + vec2(0.0, eps.y);

vec3 p0 = vsPointForCoord(proj_tc0);
vec3 p1 = vsPointForCoord(proj_tc1);
vec3 p2 = vsPointForCoord(proj_tc2);

// The cross-product of the two viewspace difference vectors gives the surface normal.
vec3 n = mat3(inv_view_mv) * normalize(cross(p1.xyz - p0.xyz, p2.xyz - p0.xyz));

Where inv_view_mv is the inverse modelview matrix.
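
With a normal in hand, the collision response itself can be as simple as reflecting the particle's velocity about it. A minimal sketch - the restitution uniform is an illustrative name and value, not part of the shader above:

uniform float restitution;   // fraction of speed kept after a bounce, e.g. 0.5 (assumed name)

vec3 collisionResponse(vec3 velocity, vec3 n)
{
    // reflect() mirrors the velocity about the surface normal; the scale makes each
    // bounce lose energy so sparks eventually come to rest.
    return reflect(velocity, normalize(n)) * restitution;
}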


Rendering the Particles

Generally particles are rendered as polygons aligned to be coplanar with the plane of projection. These polygons are often referred to as 'billboards', and it is common for them to be axis-aligned quadrilaterals, but there are advantages to using a polygon shape which more closely bounds the particle's texture mask - specifically a reduction in overdraw. For my implementation I have used aligned quads and a simple 'blob' shape for the particles. The quads are constructed by a geometry shader which is fed with the particle positions as point primitives. To give the particles a slightly more natural appearance, I added a little variation to their sizes by scaling the generated quad in the geometry shader by a value derived from the input vertex's index (gl_VertexID in GLSL). In a reusable particle system I would add more attributes to the particles by introducing a new texture or vertex attribute which would hold e.g. texture atlas coordinates (packed as two 2D vectors into one 4D vector) or material data.
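
The geometry shader stage could look something like the following sketch. It assumes the vertex shader passes through the particle's viewspace position and a size derived from gl_VertexID; the names, uniforms, and GLSL version are illustrative rather than taken from the actual implementation.

#version 150

layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform mat4 proj;     // projection matrix (assumed name)

in float vs_size[];    // per-particle size computed in the vertex shader from gl_VertexID
out vec2 gs_tc;        // texture coordinate for the blob mask

void main()
{
    vec4 centre = gl_in[0].gl_Position;   // particle centre in viewspace
    vec2 corners[4] = vec2[4](vec2(-1.0, -1.0), vec2(1.0, -1.0), vec2(-1.0, 1.0), vec2(1.0, 1.0));

    for(int i = 0; i < 4; ++i)
    {
        gs_tc = corners[i] * 0.5 + 0.5;

        // Offsetting in viewspace before projection keeps the quad parallel to the
        // plane of projection.
        gl_Position = proj * (centre + vec4(corners[i] * vs_size[0], 0.0, 0.0));
        EmitVertex();
    }
    EndPrimitive();
}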


Extending It

One possible solution to the problem of missing geometry is to switch from screenspace to global collision detection and response when there is not enough information available from the depth buffer. This would be a hybrid solution to physics simulation for particles. If global collision detection via raytracing is not viable, then it is possible to render a depth-only cubemap around the camera. This provides extra scene information (albeit at the cost of extra rendering). As a nice side-effect, this also resolves the problem of a swinging camera abruptly bringing particles into solid geometry and creating a jarring disparity in particle motion for continuous particle effects (for example a stream of droplets from a broken drainpipe).
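
If, say, the cubemap stores the linear distance from the camera at each texel, the inside test translates almost directly. The following is a sketch under that assumption, with depth_cube, camera_pos, and thickness as illustrative names:

uniform samplerCube depth_cube;   // distance from the camera rendered into a cube map (assumed)
uniform vec3 camera_pos;          // camera position in world space (assumed name)
uniform float thickness;          // same kind of thickness threshold as the screenspace test

bool insideSurroundingGeometry(vec3 ws_particle)
{
    // Compare the particle's distance from the camera with the distance stored
    // for its direction in the cube map.
    vec3 dir = ws_particle - camera_pos;
    float stored_dist = texture(depth_cube, dir).r;
    float penetration = length(dir) - stored_dist;
    return penetration > 0.0 && penetration < thickness;
}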

Conceptually the depth information used for the collision detection is the same as that used for hidden surface removal when rendering the camera's view, but in practice a completely separate depth buffer can be used. This can be taken advantage of if, for example, the rendered geometry is very detailed and causes problems for the collision detection: a low-detail version of the scene can be rendered into the particle depth texture. However, this forces a separate pass to be made and so can reduce performance.

As I have already mentioned, this method can be used for solid objects represented by polygonal meshes. This is possible by use of glVertexAttribDivisor. When a divisor of 1 is set for an attribute, then the lookup index of that attribute from an array only advances once for each instance in an instanced draw call (for example, glDrawElementsInstanced). It would be fairly straightforward to populate an attribute array with texture coordinates, a unique coordinate for each instance. These texture coordinates would then be used to look up into the particle positions texture, and the instance would be positioned according to the particle's position. A simple time-varying rotation could then be used to make the result look more convincing. Of course, the particle update phase would need to occur before these solid meshes are rendered - it could be tricky to integrate this with a scene containing translucent surfaces.
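
On the shader side, the per-instance texture coordinate would drive a lookup into the particle positions texture in the vertex shader. A sketch, assuming the update pass writes positions into particle_pos_tex and the per-instance coordinate arrives in instance_tc with its attribute divisor set to 1 (all names illustrative):

#version 150

uniform sampler2D particle_pos_tex;   // particle positions written by the update pass (assumed name)
uniform mat4 mvp;                     // combined modelview-projection matrix (assumed name)

in vec3 vertex;        // mesh vertex position
in vec2 instance_tc;   // per-instance lookup coordinate (attribute divisor of 1)

void main()
{
    // Every vertex of one instance reads the same texel, i.e. the same particle position.
    vec3 particle_pos = texture(particle_pos_tex, instance_tc).xyz;

    // A time-varying rotation of 'vertex' could be applied here before the offset.
    gl_Position = mvp * vec4(vertex + particle_pos, 1.0);
}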

Of course, I'll try out these ideas when I have the time!


Download

Download - Includes Windows binary, source code, and CodeBlocks project file. Licensed under the zlib license.