One of the difficulties we ran into when building the WebGL backend for Shumway was dealing with large vector shapes. Flash content can have arbitrarily sized vector shapes, often much larger than the viewport. This is especially true when content is scaled.
Rasterizing vector shapes can be quite expensive, so caching rasterized shapes as textures on the GPU is really important. This, however, is not always practical or even possible. Large shapes don't usually fit in textures. For instance, most mobile devices have limitations on MAX_TEXTURE_SIZE, often no larger than 2048 x 2048.
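One way around the limit is to never rasterize the whole shape at once. Here is a minimal sketch (names like `tileGrid` are illustrative, not Shumway's actual API) of partitioning a shape's bounds into fixed-size tiles that each fit comfortably under the texture size limit:

```javascript
// Hypothetical sketch: split a shape's bounds into tiles that each fit
// within the GPU's MAX_TEXTURE_SIZE. The 256px tile size is an
// illustrative choice, not Shumway's actual value.
function tileGrid(shapeWidth, shapeHeight, maxTextureSize) {
  const tileSize = Math.min(maxTextureSize, 256);
  return {
    tileSize: tileSize,
    columns: Math.ceil(shapeWidth / tileSize),
    rows: Math.ceil(shapeHeight / tileSize)
  };
}

// A 5000 x 3000 px shape on a device limited to 2048 x 2048 textures:
const grid = tileGrid(5000, 3000, 2048);
// grid.columns === 20, grid.rows === 12 with 256px tiles
```

Each tile is then small enough to rasterize and upload independently, regardless of how big the shape itself is.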
Here’s a concrete example to illustrate the problem. Suppose you wanted to create a cityscape parallax effect. You would start off by adding several large cityscape shapes to your stage and then tweening their scale.
The vector shapes are very large, roughly 5,000px across at 1:1 scale, and they grow even larger once scaled up.
Working with excessively large shapes like these may seem inefficient, but it’s normal for Flash developers and very convenient. The Flash Player has no problem dealing with them.
See the final result.
At any one time, only a small portion of the cityscape is in the viewport. It doesn’t make sense to rasterize and cache the entire shape. Instead, we can partition the cityscape into many equally sized tiles and cache only those that intersect the viewport. As new tiles come into view they are rasterized and cached. Similarly, if the shape is scaled, new tiles of a higher quality are generated and cached. If at any point there is no more texture memory available, old unused tiles are evicted to make room for new ones.
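The caching strategy above can be sketched as a small tile cache. This is a hypothetical illustration, not Shumway's actual code: it finds the tiles intersecting the viewport, rasterizes them on demand, and evicts the least-recently-used tile when the cache is full (exploiting the fact that a JavaScript `Map` preserves insertion order):

```javascript
// Hypothetical sketch of the tile-caching strategy: rasterize only
// tiles that intersect the viewport, evict least-recently-used tiles
// when the cache is full.
class TileCache {
  constructor(tileSize, capacity) {
    this.tileSize = tileSize;
    this.capacity = capacity;
    this.cache = new Map(); // insertion order doubles as LRU order
  }
  // Keys of tiles overlapping an axis-aligned viewport rectangle.
  visibleTiles(viewport) {
    const t = this.tileSize, tiles = [];
    const x0 = Math.floor(viewport.x / t);
    const y0 = Math.floor(viewport.y / t);
    const x1 = Math.floor((viewport.x + viewport.width - 1) / t);
    const y1 = Math.floor((viewport.y + viewport.height - 1) / t);
    for (let y = y0; y <= y1; y++)
      for (let x = x0; x <= x1; x++)
        tiles.push(x + "," + y);
    return tiles;
  }
  // Return a cached tile, rasterizing (and possibly evicting) on a miss.
  getTile(key, rasterize) {
    if (this.cache.has(key)) {
      const tile = this.cache.get(key);
      this.cache.delete(key); // move to most-recently-used position
      this.cache.set(key, tile);
      return tile;
    }
    if (this.cache.size >= this.capacity) {
      // Evict the least recently used tile (first key in the Map).
      this.cache.delete(this.cache.keys().next().value);
    }
    const tile = rasterize(key);
    this.cache.set(key, tile);
    return tile;
  }
}

const cache = new TileCache(256, 64);
// Only tiles intersecting the current viewport get rasterized:
for (const key of cache.visibleTiles({ x: 0, y: 0, width: 600, height: 400 })) {
  cache.getTile(key, () => ({ /* rasterized tile texture */ }));
}
```

A real implementation would also key tiles by scale level, so that zooming in generates higher-quality tiles rather than stretching old ones.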
This means that cached tiles are no longer arranged in the GPU texture as they used to appear in the original shape. Tiles that used to be adjacent may now be far apart, or even in two different GPU textures.
Reassembling the original shape is now a matter of drawing several textured quads that sample from different texture locations. The problem, however, is that if we’re not careful we get ugly seaming artifacts.
Let’s focus on tiles 9 and 10 and look at how they are drawn by the GPU. During polygon rasterization, the GPU generates fragments for each of the screen pixels that overlap with the geometry. If we do our math right, the GPU will generate fragments that belong to either tile 9 or tile 10 along the edge between the two tiles.
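"Doing the math right" here mostly means deriving each quad's edges from integer tile indices, so that adjacent quads share bit-identical edge coordinates and the rasterizer assigns every boundary pixel to exactly one tile. A hypothetical sketch (the tile numbering and helper name are illustrative):

```javascript
// Compute a tile quad's screen-space edges from its integer grid
// position. Because the right edge of tile N and the left edge of
// tile N+1 are the same expression, they are bit-identical floats:
// no gap, no overlap, no double-drawn seam.
function tileQuad(column, row, tileSize) {
  return {
    left:   column * tileSize,
    top:    row * tileSize,
    right:  (column + 1) * tileSize, // identical to the next tile's left
    bottom: (row + 1) * tileSize
  };
}

const tile9  = tileQuad(9, 0, 256);
const tile10 = tileQuad(10, 0, 256);
// tile9.right === tile10.left, so the seam falls on one shared edge
```

Computing each quad's edges independently (e.g. as `left + width` accumulated across tiles) risks floating-point drift, which is exactly what produces visible cracks or double-blended pixels along tile boundaries.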