Tone Preservation with Ice Crystal Growth
To my knowledge, the only similar body of work relating to ice crystals in a Computer Graphics setting is the paper that inspired this project: Kim and Lin’s 2003 EuroGraphics paper. Their primary interest when developing their system was in a) physically accurate renderings and b) edge preservation [KL1]. This is evidenced by the direction that their research takes with the 2004 and 2006 follow-ups [KL2], [KL3]. Kobayashi’s crystal growth simulation PDE serves as the workhorse for both this project and the Kim and Lin papers [RK]. Beyond these, there is not much else to report: crystal growth simulation is certainly a very active field of research, but aside from the preceding papers, no other body of work has extended the idea beyond the crystal’s natural beauty.
In a natural setting, the formation of ice frost on a window is readily controlled by placing objects on the frosting surface: wherever an object is placed will not frost over. This can be performed in layers to get interesting results. Alternatively, placing slightly heated objects on the surface allows control over where the condensation gathers, giving a different kind of directorial control over the formations. Although the methods are known, there are not many examples of individuals doing art in this fashion, perhaps because of the high error rates associated with the techniques. Figures 1a and 1b provide examples of “art” created using the techniques described above.
The workhorse behind this application is the pair of PDEs described by Kobayashi, which reasonably depict the growth of some forms of ice crystals. Figure 2 shows the PDE pair:
Figure 2 - The Kobayashi Formulation
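For reference, in case Figure 2 does not reproduce here, the following is a reconstruction of the Kobayashi phase-field model as published in [RK] (it may differ cosmetically from the figure). A phase field p is coupled to a temperature field T:

```latex
% Phase equation (1): the first two (cross) terms carry the anisotropy
\tau \frac{\partial p}{\partial t} =
    -\frac{\partial}{\partial x}\!\left( \epsilon\,\epsilon' \frac{\partial p}{\partial y} \right)
    +\frac{\partial}{\partial y}\!\left( \epsilon\,\epsilon' \frac{\partial p}{\partial x} \right)
    +\nabla \cdot \left( \epsilon^{2} \nabla p \right)
    + p(1-p)\left( p - \tfrac{1}{2} + m(T) \right)
\qquad (1)

% Heat equation: latent heat K is released as the phase field freezes
\frac{\partial T}{\partial t} = \nabla^{2} T + K \frac{\partial p}{\partial t}

% Anisotropy (2): a polar-rose-like term with j lobes and preferred
% growth direction theta_0
\epsilon(\theta) = \bar{\epsilon}\left( 1 + \delta \cos\!\bigl( j(\theta - \theta_{0}) \bigr) \right)
\qquad (2)

% Driving force, with equilibrium temperature T_e
m(T) = \frac{\alpha}{\pi} \arctan\!\bigl( \gamma (T_{e} - T) \bigr)
```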
The Results section contains some solutions to the Kobayashi Formulation using different initial conditions. Implementing a solver for the PDE pair is troublesome because the equations are not well suited to higher order explicit differencing schemes such as Runge-Kutta, so the timestep must be very small [KL1], [KL2]. Kim and Lin successfully applied Lax-Wendroff and Crank-Nicolson algorithms to increase the timestep by a factor of four, but for this implementation I did not want to deal with the added complexity of an implicit method, so a basic FTCS (Forward-Time Centered-Space) technique was used to solve the equations. A short attempt was made at increasing the timestep by replacing the Laplacian terms with a Noye-Dehghan second order stencil, without any success [ND]; the failure stemmed from a lack of understanding of how to adapt the Laplacian terms in the equation to the stencil.
The cross terms in equation (1) can be dropped. This makes the results of the simulations a little smoother, but since the postprocessing step reintroduces details anyway, it is a worthwhile simplification to make [KL2].
Kim and Lin describe an elaborate pipeline for turning the solutions of the Kobiyashi formulation into something visually appealing. The primary goal of the pipeline is to reintroduce edges and details into the “blob” that the Kobiyashi formulation creates. In brief, the pipeline consists of the following steps:
The pipeline is explained in some detail in the source paper, so there is no need to describe it further here. The only difference between this implementation and Kim and Lin’s is that instead of creating a subdivision surface and feeding the resulting mesh into another program, I generate a Normal Map and render the mesh directly on the GPU. Because the paper does not describe normal map generation, I had to improvise it.
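Since this step had to be improvised, here is a sketch of one common way to derive a normal map from a scalar field such as the evolved phase field: treat the field as a height map, take central differences for the surface gradient, and pack the unit normal into RGB bytes. The function name and the `strength` tuning knob are illustrative assumptions, not the project's actual code.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

struct RGB { uint8_t r, g, b; };

// Build a tangent-space normal map from a height field via central
// differences. 'strength' exaggerates the gradient (illustrative knob).
std::vector<RGB> make_normal_map(const std::vector<float>& height,
                                 int w, int h, float strength = 2.0f) {
    std::vector<RGB> out(w * h);
    auto H = [&](int x, int y) {
        x = std::max(0, std::min(w - 1, x));   // clamp at the borders
        y = std::max(0, std::min(h - 1, y));
        return height[y * w + x];
    };
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float dx = (H(x + 1, y) - H(x - 1, y)) * 0.5f * strength;
            float dy = (H(x, y + 1) - H(x, y - 1)) * 0.5f * strength;
            // Normal of the height surface is (-dx, -dy, 1), normalized
            float len = std::sqrt(dx * dx + dy * dy + 1.0f);
            float nx = -dx / len, ny = -dy / len, nz = 1.0f / len;
            // Map [-1, 1] -> [0, 255], the usual normal map encoding
            out[y * w + x] = { uint8_t((nx * 0.5f + 0.5f) * 255),
                              uint8_t((ny * 0.5f + 0.5f) * 255),
                              uint8_t((nz * 0.5f + 0.5f) * 255) };
        }
    }
    return out;
}
```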
Plate 1 - The top image is an evolved image of a specific initial condition. The middle image is the generated normal map from that image. The bottom plate is the final result of the rendering process.
Plate 2 - The top image used constant values for θ0. The bottom image used a lookup table for the different values. Notice how much more random the bottom image is.
The Kobayashi Formulation contains two constants, j and θ0, which determine the number of lobes and the preferred direction of growth [KL1]. Having these two terms constant across the entire Phase and Temperature fields results in crystals that grow too uniformly in one direction, which in turn reduces the viewer's suspension of disbelief because that uniformity does not exist in nature. To remedy this problem, a simple solution was taken where the values for j and θ0 are texture mapped: based on the location in the phase field, different values are taken from very low frequency Perlin Noise textures, which are then mapped to the applicable ranges (j from 2 to 8, θ0 from 0 to 2π). Plate 2 shows the results of this mapping on the simulation.
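The remapping itself is simple; a sketch follows, assuming a noise sample already normalized to [0, 1] (any noise source works for the mapping step, though the project uses low frequency Perlin Noise). The function names are illustrative, not from the actual implementation.

```cpp
#include <cassert>
#include <cmath>

const double kPi = 3.14159265358979323846;

// Map a noise sample in [0, 1] to the lobe count j in [2, 8].
int map_lobes(double noise) {
    return 2 + int(noise * 6.0 + 0.5);   // round to nearest integer
}

// Map a noise sample in [0, 1] to the preferred direction in [0, 2*pi].
double map_theta0(double noise) {
    return noise * 2.0 * kPi;
}
```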
The entire application was implemented in C++, using several third party libraries which are described in the API Links section of this document. In the interest of saving time, the user is responsible for providing a partially preprocessed PNG image to the program which is then used to set the initial conditions of the joint PDEs. The user is responsible for making sure that the input image has “power of 2” dimensions and that it is in black and white. Additionally, a slight Gaussian Blur tends to give better results. The user also has the option of providing two input images that explicitly define the starting conditions of the PDEs, but minor changes are required in the code for this to work.
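Since the program trusts the user's input rather than validating it, a quick sanity check that a wrapper could apply to the image dimensions might look like the following (the function name is an illustrative choice, not part of the program):

```cpp
#include <cassert>

// True when n is a nonzero power of two, i.e. a valid field dimension.
// Classic bit trick: a power of two has a single set bit, so
// n & (n - 1) clears it to zero.
bool is_power_of_two(unsigned n) {
    return n != 0 && (n & (n - 1)) == 0;
}
```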
The following subsections deal with implementation details that are part of each component of the rendering system.
Plate 3 - The top image used constant values for the diffuse colour. The bottom image used a Perlin Noise texture for the diffuse colour. Note how there is more internal detail in the bottom image.
Two different attempts were made at creating Tone Preservation with Ice Crystals. The first method, the 3D Kobayashi Formulation, produced results that were far too erratic and did not preserve tone at all. The second method, the Stacked 2D Kobayashi Formulation, gives better results, but still falls significantly short of preserving the tone of the input image. Below is a description of the two techniques.
The first method attempted for tone preservation was to simply extend the Kobayashi equations into the third dimension and simulate over phase and temperature volumes. The two PDEs are readily adapted by adding the z partial derivatives in the expected places. The one part that is difficult to adapt is equation (2) from Figure 2, which is explicitly locked down to two dimensions by Kim and Lin. On further inspection, however, equation (2) is an extended Polar Rose equation, which Wejchurt successfully extended into 3D with very interesting results [JW]. Using his formulation of the Spherical Rose equation in place of the Polar Rose equation gave the desired result of extending the Formulation into 3D. Unfortunately, either because of poor coding or poor formulation, the resulting PDE never managed to produce shapes that resembled ice crystals; no amount of tweaking of the simulation constants yielded a solution that visually looked like ice the way the 2D case does. As a result, this method was abandoned in favour of the Stacked 2D Formulation described below.
In this method, multiple phase and temperature fields are stacked on top of each other with the hopes that they will approximate the tonal properties of the input image. Each layer is rendered using an adjustable alpha value which causes the different levels to merge with each other into a multitoned image. The key to this method is the choice of initial conditions for each field slice. The initial conditions for each slice were calculated from the input image as follows:
    foreach pixel P in temperature field
        I = input_intensity
        if ( I > 1 - slice_number / slice_count )
            P = 1.0
        else
            P = 0.0
        end if
    next

    foreach pixel P in phase field
        I = input_intensity
        if ( I > 1 - slice_number / slice_count )
            P = 0.0
        else
            P = 1.0
        end if
    next
This is in effect a screening process whereby each slice is a different thresholded version of the input image. The phase field is then further processed by extracting the border image from the field and using the border image as the phase field for the specific slice level. This pairing of temperature and phase initial conditions ensures that the crystals only ever grow inside the contours defined by the screened image, and not outside, giving a strong preservation of the edges of each level. In general, 8 slices give the best results; this is the technique that was used in the Results section of this document. Unfortunately, as demonstrated by the results, this does a very poor job of simulating tone, simply because there is insufficient overlap between the different layers to produce the desired mixing results.
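The per-slice initialization above can be sketched as follows, assuming intensities normalized to [0, 1]. The struct and function names, and the 4-neighbour border test, are illustrative choices rather than the project's actual code.

```cpp
#include <cassert>
#include <vector>

// Initial conditions for one slice: threshold the image (screening), set
// the phase field to the complement of the temperature field, then reduce
// the phase field to its border so crystals grow only along the contour.
struct Slice {
    std::vector<float> temperature, phase;
};

Slice init_slice(const std::vector<float>& intensity, int w, int h,
                 int slice_number, int slice_count) {
    float threshold = 1.0f - float(slice_number) / float(slice_count);
    Slice s;
    s.temperature.resize(w * h);
    s.phase.resize(w * h);
    // Screening: each slice thresholds the input at a different level
    for (int i = 0; i < w * h; ++i) {
        bool inside = intensity[i] > threshold;
        s.temperature[i] = inside ? 1.0f : 0.0f;
        s.phase[i]       = inside ? 0.0f : 1.0f;
    }
    // Border extraction: keep a phase pixel only if a 4-neighbour differs
    std::vector<float> border(w * h, 0.0f);
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            int i = y * w + x;
            if (s.phase[i] > 0.0f &&
                (s.phase[i - 1] != s.phase[i] || s.phase[i + 1] != s.phase[i] ||
                 s.phase[i - w] != s.phase[i] || s.phase[i + w] != s.phase[i]))
                border[i] = 1.0f;
        }
    s.phase = border;
    return s;
}
```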
To speed up the simulation of the Kobayashi PDE, the PDE solver code was migrated onto the GPU using the GLSL shading language. As the PDE simulation is highly parallelizable, the GPU is strongly suited to the task, since it is in essence a SIMD device. Kim and Lin achieved a fivefold speed increase with the GPUs of the time, and GPU technology has progressed at a blistering pace since 2004. When initially implementing the GLSL version of the PDE solver, the expectation was a similar fivefold increase, bearing in mind that time constraints on the project do not allow for heavily optimizing the routines. However, the first implementation of the GPU based solver far surpassed expectations, giving up to a 350-fold increase in speed over the CPU version of the solver in some circumstances. Table 1 gives the benchmark results of the simulations for one iteration of the simple FTCS technique used to solve the equations.
| Field Dimensions (W x H x D) | CPU (avg ms) | GPU (avg ms) | Relative Increase |
|------------------------------|--------------|--------------|-------------------|
| 256 x 256 x 1                | 107          | 0 (1)        | N/A               |
| 256 x 256 x 8                | 350          | 1            | 350x              |
| 256 x 256 x 16               | 1234         | 14           | 88x               |
| 512 x 512 x 1                | 321          | 23           | 14x               |
| 512 x 512 x 8                | 2839         | 38           | 74x               |
| 512 x 512 x 16               | 5226         | 65           | 80x               |
| 1024 x 1024 x 16             | 9057         | 283 (2)      | 32x               |
All tests were performed on an Athlon 64 3200+ CPU with 1 GB of RAM. The GPU was an AGP ATI Radeon X1650 PRO with 256 MB of RAM (Catalyst 8.4 driver). The operating system was Windows XP SP2. The compiler was Microsoft Visual C++ 2005 with full optimizations.
Results are averages across multiple iterations.
1 - Less than the resolution of the timer used.
2 - There was graphical corruption in the results, possibly driver related. However, the benchmark scales consistently with the previous benchmarks, so I still consider it valid.
The one potential concern with these test results is that the OpenGL driver may be executing the commands asynchronously, causing the instrumentation code to return immediately. However, the same instrumentation techniques were used in the past with accurate results [SY1], [SY2], and the simulation is visibly sped up, so I am inclined to believe the numbers. It should also be noted that because of the “banded optimization” implemented in the CPU version, the CPU version gets progressively slower as the simulation proceeds; the GPU version does not suffer from this fault and runs at a uniform speed regardless of the field's contents. As demonstrated by Table 1, it hardly makes sense to compare the two implementations, because the GPU version is on average two orders of magnitude faster than the CPU version. When a problem domain is as highly parallelizable as this one, the first platform target should be the GPU.
Although the results from the tone preservation section of this project leave something to be desired, I believe that the basic technique employed here is sound and would produce better results with further tweaking. Longer term, the inclusion of ice melts into the equation will open up new possibilities for extensions to the basic idea of rendering icy images. If tone preservation is successfully implemented, then temporally cohesive icy video renderings are the natural next step for this project. Temporal cohesion requires ice melts and some speedup of the rendering techniques: renders can take up to 10 minutes as it stands, so rendering a video would be prohibitive right now. Adding ice melts should be possible by giving each pixel in the image a lifetime; once the lifetime is up (after n iterations), the reverse of the Kobayashi process could be applied to the pixel to melt the ice at a physically plausible rate.
The Abstract Framework of the project can give rise to other interesting media, as long as the simulation step allows some form of directorial control over the initial conditions. There are numerous PDEs and Fractals with inherent beauty and complexity, and if a degree of directorial control were available over them, new art forms could emerge; as a contrived example, a rendering of my dog as if it were a Mandelbrot Fractal. The important thing is that the outlined pipeline is flexible enough that different simulation parameters can give drastically different results. Speculatively, a mixture of different simulation types stacked on top of each other may give the most visually interesting results if done correctly.
This project successfully implements Kim and Lin’s 2003 paper, Visual Simulation of Ice Crystal Growth. The project successfully delineates the edges of an input image and renders them in a style similar to frost formations. The source algorithm is refined to remove some of the perceived imperfections in the original paper using θ0 mapping and Perlin Noise texturing. The algorithm is further extended to account for image tone; however, this extension to the pipeline produces unsatisfactory results, failing to preserve tone in all but the simplest of images. Essentially, the hypothesis of stacking ice crystals on top of each other did not pan out for this project and resulted in the substandard output demonstrated in this document. I still believe that it is possible to apply the stacking idea to create renderings that preserve tone, but within the time frame of this project those goals were not met. Finally, the iterative simulation of the Kobayashi Formulation was moved onto the GPU using the GLSL shading language for a drastic speed improvement over the CPU version of the same iterative simulator.
Note that the Simulation and Rendering methods were tweaked for each class of images to give optimal results.
The following are the results of the Kobayashi Simulation with specific initial conditions and different simulation constants.
The following video demonstrates the simulation running with all of the different stages of the Kim and Lin pipeline being shown: ice-tone.avi (2.1 MB)
The following are images that preserve the edges of the image that they are rendering.
The following are results from the Tone Preservation component of the project. As can be seen, the results are sub-optimal. These two images are representative of the quality of the renderings that the program returns. Other images had similarly bad tone preservation properties.