Edd Biddulph



November 2010

Virtual Texturing with Physically-Based Illumination and Atlas Packing

This technique is referred to as 'Megatexturing' by id Software and as Virtual Texturing by Crytek. This project is an attempt to replicate the recent evolution of id's Megatexturing technology, which was built upon their earlier technology used in Quake Wars. The goal of this project is to produce a tool that packs surfaces into an atlas and then performs physically-based light transport over the atlas, creating a realistic and detailed environment which can be rendered in realtime. Memory usage is bounded by a constant limit during an interactive walkthrough.

The core raytracing algorithm originally used mailboxing to improve efficiency by skipping repeated tests of rays against triangles. Mailboxing assigns a unique ID to each ray and allocates enough storage with each triangle to hold a copy of one ray ID; an intersection test is performed only if the ray's ID does not match the stored one, and the ID is stored once the test is done. This only pays off where a triangle may be referenced by more than one voxel of the acceleration structure (as is typical within a KD-tree). However, mailboxing complicated the promotion to multi-threaded rendering, so it was dropped completely to allow a simple implementation and unlimited scalability with increasing processor cores.
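The mailboxing idea can be sketched in a few lines. This is a minimal illustration, not the project's actual raytracer code; the class names and the placeholder intersection test are mine.

```python
# A minimal sketch of mailboxing, assuming a KD-tree/grid style traversal in
# which the same triangle can be referenced by several visited voxels.

class Triangle:
    def __init__(self):
        self.last_ray_id = None   # the "mailbox": storage for one ray ID
        self.tests_performed = 0  # counts real intersection tests

    def intersect(self, ray_id):
        # Skip the (expensive) test if this ray already visited the triangle.
        if self.last_ray_id == ray_id:
            return False  # a full implementation would cache the result too
        self.last_ray_id = ray_id
        self.tests_performed += 1
        return True  # placeholder for the real ray-triangle test

# One triangle referenced by three voxels of the acceleration structure:
tri = Triangle()
voxels = [[tri], [tri], [tri]]
ray_id = 42
for voxel in voxels:
    for t in voxel:
        t.intersect(ray_id)
print(tri.tests_performed)  # → 1: two of the three tests were skipped
```

Note that the shared `last_ray_id` slot is exactly what makes this awkward to parallelise: two threads tracing different rays would race on the same mailbox.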

The virtual texture atlas is divided into square tiles. To prevent leakage of colour from the black empty areas around visible texels, the texels must be expanded into the void. My first attempt was to find the nearest filled texel for each empty texel, but this was too slow. I then tried a push-pull method: box-filtering down and then splatting the box back into the high-resolution level of the texture. This was much faster, but resulted in visible seams and imprecise colouring. The final approach returns to the nearest-texel search, but looks only along the straight horizontal and vertical directions. To remedy texels at sharp edges, a second pass fills in the last remaining empty texels from their local neighbourhood. This produces good results and does not take too much time.
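The two-pass dilation can be sketched as follows. This is an illustrative reconstruction under my own assumptions (a 2D list with `None` marking empty texels), not the tool's actual code.

```python
# Pass 1: each empty texel copies the nearest filled texel found by walking
# along the four axis directions only. Pass 2: texels still empty (e.g. at
# sharp diagonal edges) are filled from their immediate 8-neighbourhood.

def dilate(grid):
    h, w = len(grid), len(grid[0])

    def axis_nearest(y, x):
        # Walk outward along +x, -x, +y, -y; return the closest filled texel.
        best, best_d = None, None
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            ny, nx, d = y + dy, x + dx, 1
            while 0 <= ny < h and 0 <= nx < w:
                if grid[ny][nx] is not None:
                    if best_d is None or d < best_d:
                        best, best_d = grid[ny][nx], d
                    break
                ny, nx, d = ny + dy, nx + dx, d + 1
        return best

    out = [[axis_nearest(y, x) if grid[y][x] is None else grid[y][x]
            for x in range(w)] for y in range(h)]

    # Second pass over the result of the first: fill survivors locally.
    for y in range(h):
        for x in range(w):
            if out[y][x] is None:
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and out[ny][nx] is not None:
                            out[y][x] = out[ny][nx]
    return out

grid = [[None] * 3 for _ in range(3)]
grid[1][1] = 5                 # a single filled texel in the middle
filled = dilate(grid)          # every texel ends up filled with 5
```

The axis-only search in the first pass is what makes this fast compared with a full 2D nearest-texel search, at the cost of missing some corner texels, which is precisely what the second pass cleans up.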

To pack the surface textures, I use a brute-force search which rotates each surface to find the rotation that results in the axis-aligned bounding box with the smallest area. The surfaces are then packed from smallest to largest area, filling the virtual texture space from top to bottom. A scanning algorithm finds the best free space to fit each new surface, using a simple spatial hash with linked lists to speed up the packing process.
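The rotation search can be sketched like this. The angle step count is an illustrative parameter of mine, not the tool's actual value, and the surface is reduced to its 2D boundary points.

```python
# Try a set of candidate angles and keep the one whose axis-aligned bounding
# box has the least area. Sampling 0..180 degrees suffices, since rotating a
# shape by 180 degrees leaves its AABB dimensions unchanged.
import math

def best_rotation(points, steps=64):
    best_angle, best_area = 0.0, float('inf')
    for i in range(steps):
        a = math.pi * i / steps
        c, s = math.cos(a), math.sin(a)
        xs = [c * x - s * y for x, y in points]
        ys = [s * x + c * y for x, y in points]
        area = (max(xs) - min(xs)) * (max(ys) - min(ys))
        if area < best_area:
            best_angle, best_area = a, area
    return best_angle, best_area

# A 10x1 quad rotated 45 degrees: the search should recover an angle that
# re-aligns it, bringing the AABB area back down to 10.
quad = [(0, 0), (10, 0), (10, 1), (0, 1)]
r = math.radians(45)
rotated = [(math.cos(r) * x - math.sin(r) * y,
            math.sin(r) * x + math.cos(r) * y) for x, y in quad]
angle, area = best_rotation(rotated)
```

A finer step gives tighter boxes for shapes whose optimal orientation falls between samples; since this runs once per surface at atlas-build time, brute force is perfectly acceptable.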

For this project I developed a KD-tree triangle mesh raytracer. This same raytracer was later used in the development of Beam. However, Beam is now capable of producing higher-quality renderings in less time due to an improvement in my knowledge of offline rendering techniques. It would be interesting to port the code developed for Beam back into this project.

The resulting virtual texture files are not compressed, but I could implement compression with a variable-bitrate DCT scheme and fixed-size blocks for efficient seeking.

SSE2 was used to batch-produce values for generating scene sampling directions. Photon mapping with final gathering was used for indirect lighting, while direct lighting is accumulated explicitly from light-emitting polygons, with importance sampling throughout. Dual graphs were used to cluster polygons together so that texels suffer low distortion upon projection into the virtual texture space. Hashing also assisted the dual-graph generation - it greatly reduced the time taken to establish topological relations between polygons in the scene. This also alleviates any burden on the input scene to be well-formed: the virtual texture generation tool accepts a 'polygon soup' as input.
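The edge-hashing idea for recovering topology from a polygon soup can be sketched as below. The quantisation tolerance and data layout are my own assumptions for illustration.

```python
# Recover polygon adjacency from a soup by keying each edge on its two
# (quantised) endpoint positions: two polygons sharing an edge hash to the
# same slot regardless of winding direction or vertex ordering, so no shared
# index buffer or well-formed connectivity is required of the input.
from collections import defaultdict

def adjacency(polygons, eps=1e-4):
    def q(v):  # quantise a vertex so nearly-equal positions compare equal
        return tuple(round(c / eps) for c in v)

    edges = defaultdict(list)
    for pi, poly in enumerate(polygons):
        for i in range(len(poly)):
            a, b = q(poly[i]), q(poly[(i + 1) % len(poly)])
            edges[frozenset((a, b))].append(pi)  # unordered pair of endpoints

    pairs = set()
    for polys in edges.values():
        for i in range(len(polys)):
            for j in range(i + 1, len(polys)):
                pairs.add((polys[i], polys[j]))
    return pairs

# Two triangles sharing the edge (1,0,0)-(0,1,0):
soup = [
    [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
    [(1, 0, 0), (1, 1, 0), (0, 1, 0)],
]
print(adjacency(soup))  # → {(0, 1)}
```

The hash table makes this linear in the number of edges, versus the quadratic all-pairs comparison a naive approach would need.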

There are two programs in this project: the virtual texture generator, which packs surfaces and simulates light transport, and the viewer, which allows an interactive walkthrough of the scene.

The viewer uses a feedback mechanism to obtain information about the detail of the tiles visible on-screen. Unlike what is suggested in most literature on the topic, this data is transferred at full resolution and only a subset of pixels is analysed by the CPU-based analysis stage. This made the rendering stage simpler to implement, as it requires only one pass, whereas a re-rendering at lower resolution or a down-sampling stage may have complicated it. The downside is that more bandwidth is consumed when transferring tile information back to the CPU, but this did not seem to noticeably inhibit realtime framerates during my tests.

The viewer analyses the returned data and manages two textures - an index map and a tile cache. The index map is a texture which is applied to all surfaces in the scene, and is not repeated in any way. It contains what are essentially pointers into the tile cache, which holds the texel data for the tiles of the virtual texture. The coarsest mipmap of the virtual texture is always present in this cache, so there is something to fall back on when nothing else is available to show. Tiles are read from system memory and written into the cache. I plan to compress the texture in system memory and decompress it when transferring to graphics memory. It is also possible to transcode from some compressed format in system memory to S3TC format in graphics memory, the addressing and filtering of which is accelerated in hardware. In fact, this is apparently what id Software did for their terrain megatexturing in Quake Wars, and it is likely they have used a similar process in their newer id Tech 5 games.
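The indirection through the index map can be sketched on the CPU for clarity (the viewer performs the equivalent per fragment in GLSL). Tile and cache sizes here are illustrative, and the entry format is simplified to a cache slot coordinate.

```python
# Map a virtual UV to a UV into the physical tile cache: look up which cache
# slot holds the virtual tile the UV falls in, then offset by the position
# within that tile. Missing tiles fall back to an always-resident coarse tile.

CACHE_TILES = 8  # the cache texture is CACHE_TILES x CACHE_TILES tiles

def physical_uv(u, v, index_map):
    n = len(index_map)               # index map is n x n entries, one per tile
    tx, ty = int(u * n), int(v * n)  # which virtual tile the UV falls in
    entry = index_map[ty][tx]        # (cache_x, cache_y) of the resident tile
    if entry is None:
        entry = (0, 0)               # fallback: coarsest mip lives at slot (0,0)
    cx, cy = entry
    fu, fv = u * n - tx, v * n - ty  # fractional position within the tile
    return ((cx + fu) / CACHE_TILES, (cy + fv) / CACHE_TILES)

index_map = [[None] * 4 for _ in range(4)]
index_map[2][1] = (3, 5)             # virtual tile (1,2) resides at cache slot (3,5)
uv = physical_uv(0.3, 0.6, index_map)
```

In the real shader the entry would also carry the tile's mipmap level so that the in-tile offset can be scaled appropriately, and filtering near tile borders needs care; both are omitted here.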

GLSL was used to create the shaders used in this project. Shaders are required to perform the indirect texture lookup.

Download - includes Win32 executable, source code, and Code::Blocks project files. Source code is licensed under the zlib license.


http://en.wikipedia.org/wiki/Quake_II - I used Quake 2's levels for testing because I already had code to load them and they provide usable area light-source data.