water shading

Recently I did some water rendering in the Arnold renderer, which you can see here. The shading worked quite well on the water and whitewater.

After this I started to sim a wave tunnel in Houdini FLIP, a huge wave you could actually surf. But I ran into some trouble with the water shading and the whitewater look, so I needed some reference.


I went to the beach and took my trusty Nikon V1 camera with me to shoot some reference photos. These are tiny waves (50 cm in height), but easy to photograph and good enough as shading reference.

a nice shot of a miniature breaking wave

a nice snapshot of the translucent effects of the wave
here we can see droplets (whitewater) in close-up

here we can see more droplets / whitewater with the sun at our back, a good example of the anisotropy effect on shading.

the droplets turn white in sunlight, depending on the sun direction.

There are multiple ways to render realistic water. The old-school way is to render a polygon water surface with a volume underneath to simulate the light scattering. We did similar things back in 2008 on the Avatar movie with custom-written shaders for RenderMan. SideFX added presets for this in Houdini's ocean setups. The render times of this method are modest, but the shading can be quite difficult depending on the camera angle and lighting situation.

These days, in the age of path tracers, there are two ways: rendering it with sub-surface scattering (SSS) or with transmission depth.

Sub-Surface Scattering simulates the effect of light entering an object and scattering beneath its surface. Not all light reflects from a surface. Some of it will penetrate below the surface of an illuminated object. There it will be absorbed by the material and scattered internally. Some of this scattered light will make its way back out of the surface and become visible to the camera. This is known as ‘sub-surface scattering’ or ‘SSS’. SSS is necessary for the realistic rendering of materials such as marble, skin, leaves, wax, and milk. The SSS component in this shader is calculated using a brute-force raytracing method.

While the Transmission Depth attribute controls volumetric light absorption within the object (fog), the Scatter attribute controls what percentage of the light will be scattered instead of absorbed, effectively creating the murky effect of semi-transparent materials.

Depth controls the depth into the volume at which the transmission color is realized. Increasing this value makes the volume thinner, which means less absorption and scattering. It is a scale factor, so you can set a transmission_color and then tweak the depth to be appropriate for the size of your object.

Scattering is very important if you want to shade deep materials like ocean water. For the scattering effect to work, Scatter must have a dominant percentage value, and the Depth attribute must generally be much lower. Also, the Opaque attribute must be unchecked in the Arnold attributes of the object's shape node for light to be able to pass into the mesh and illuminate the volume.
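As a minimal sketch of that setup in Houdini via Python: the node paths here are made up, the parameter names follow Arnold's standard_surface, and ar_opaque is HtoA's spare parameter for the Arnold Opaque flag, so check your own build.

```python
# Hedged sketch of the water transmission setup described above.
# Node paths are hypothetical; parameter names follow Arnold's standard_surface.
import hou

surf = hou.node('/mat/water_shader/standard_surface1')        # hypothetical path
surf.parm('transmission').set(1.0)                            # fully transmissive
surf.parmTuple('transmission_color').set((0.28, 0.55, 0.60))  # sea-green tint
surf.parm('transmission_depth').set(0.5)                      # keep Depth low
surf.parmTuple('transmission_scatter').set((0.4, 0.6, 0.7))   # dominant Scatter

geo = hou.node('/obj/ocean_surface')   # hypothetical object node
geo.parm('ar_opaque').set(0)           # uncheck Opaque so light enters the mesh
```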

Rendering with transmission depth is the more "physically correct" way, but it does not account for tiny organisms (light blockers); in this case you add textures to simulate plankton in the water.

I chose to go the SSS route. The typical sub-surface scattering shading model has a similar volumetric light-scatter look. The look can be limited, but it works in the case of deep ocean water. The advantage: it is fully supported by the current Arnold GPU renderer (transmission depth is not supported yet), and the SSS shading model is also faster to render. In addition, I've added an extra underwater bubble simulation with particles to increase the realism.

About Render Engines part 1

This is a quick overview of current render engines for Houdini and in general, in terms of motion graphics and VFX usage.

There are different render engines out there; each one is unique and uses different methods to solve a problem. I am looking into Arnold, RenderMan, V-Ray, Octane and Redshift. For comparison I added the Indigo Renderer.

There are different ways to render a scene, each with benefits and shortcomings. Let's start with the most common one.

image by Glare Technology

Pathtracing (PT)

To be precise, backward path tracing. In backward ray tracing, an eye ray is created at the eye; it passes through the view plane and on into the world. The first object the eye ray hits is the object that will be visible from that point of the view plane. After the ray tracer lets that ray bounce around, it figures out the exact coloring and shading of that point in the view plane and displays it on the corresponding pixel of the screen. That's the classical way, which all of the render engines use as the standard.
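A stripped-down skeleton of that loop might look like this; illustrative only, where scene_intersect and sample_bsdf are hypothetical stubs standing in for a real scene and material system:

```python
# Illustrative backward path tracer skeleton: fire a ray from the eye,
# bounce it around, and accumulate emitted light filtered by each bounce.
import random
from dataclasses import dataclass

MAX_BOUNCES = 4

@dataclass
class Material:
    emission: tuple       # light the surface emits itself
    reflectance: tuple    # how much of each colour channel survives a bounce

def scene_intersect(origin, direction):
    """Return (hit_point, normal, material) or None. Stub: empty scene."""
    return None

def sample_bsdf(normal):
    """Pick a bounce direction. Stub: uniform random vector."""
    return tuple(random.uniform(-1.0, 1.0) for _ in range(3))

def trace(origin, direction, depth=0):
    if depth > MAX_BOUNCES:
        return (0.0, 0.0, 0.0)              # give up: path carries no light
    hit = scene_intersect(origin, direction)
    if hit is None:
        return (1.0, 1.0, 1.0)              # ray escaped: environment light
    point, normal, mat = hit
    bounced = trace(point, sample_bsdf(normal), depth + 1)
    # emitted light plus whatever the bounce brought back, filtered by colour
    return tuple(e + r * b for e, r, b in zip(mat.emission, mat.reflectance, bounced))

# one eye ray per pixel is fired like this:
pixel = trace(origin=(0.0, 0.0, 0.0), direction=(0.0, 0.0, -1.0))
```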

Metropolis light transport (MLT)

This procedure has the advantage, relative to bidirectional path tracing, that once a path has been found from light to eye, the algorithm can then explore nearby paths; thus difficult-to-find light paths can be explored more thoroughly with the same number of simulated photons. Metropolis light transport is an unbiased method that, in some cases (but not always), converges to a solution of the rendering equation faster than other unbiased algorithms such as path tracing or bidirectional path tracing. Metropolis is often used in bidirectional mode (BDMLT).
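The core of the idea fits in a few lines. Here is a toy Metropolis sampler over a fake 1D "path space"; real MLT mutates actual vertex chains, and path_contribution is a stand-in for the energy a full light path carries:

```python
# Toy Metropolis sampler: once a bright path is found, explore nearby paths.
import random

def path_contribution(x):
    # pretend paths near x = 0.7 carry most of the energy (e.g. a small light)
    return max(0.0, 1.0 - 40.0 * (x - 0.7) ** 2)

def mutate(x):
    # small perturbation: explore paths *near* an already-found bright path
    return min(1.0, max(0.0, x + random.uniform(-0.05, 0.05)))

x = random.random()
samples = []
for _ in range(100_000):
    y = mutate(x)
    fx, fy = path_contribution(x), path_contribution(y)
    # Metropolis acceptance: always accept brighter paths, sometimes dimmer ones
    if fx == 0.0 or random.random() < fy / fx:
        x = y
    samples.append(x)
# `samples` now clusters around the bright region of path space
```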

Path Guiding

A mix between path tracing and MLT: an unbiased technique for intelligent light-path construction in path-tracing algorithms. Indirect guiding improves indirect lighting by sampling from the better-lit or more important areas of the scene. The goal is to allow path-tracing algorithms to iteratively "learn" how to construct high-energy light paths.
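A toy sketch of that "learning": bin the outgoing directions, reinforce bins whose paths returned energy, and sample new directions from the learned distribution. In a real, unbiased implementation each sample's contribution is also divided by its sampling pdf; everything here is illustrative.

```python
# Tiny sketch of the learning idea behind path guiding.
import random

NUM_BINS = 16
weights = [1.0] * NUM_BINS          # start uniform: no knowledge of the scene

def sample_bin():
    # importance-sample a direction bin proportionally to learned energy
    return random.choices(range(NUM_BINS), weights=weights)[0]

def record(bin_index, energy):
    # reinforce directions that led to high-energy light paths
    weights[bin_index] += energy

for _ in range(10_000):
    b = sample_bin()
    energy = 5.0 if b == 3 else 0.1   # pretend bin 3 points at a bright light
    record(b, energy)
# after training, bin 3 dominates `weights` and gets sampled most often
```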

link to latest Siggraph paper

BiDirectional Pathtracing (BDPT)

Regular backward path tracing has a hard time in indoor scenes with small light sources, because it takes lots of rays and bounces to find a tiny light in a room, just to see if an object is lit by it.

With bidirectional path tracing, rays are fired from both the camera and the light sources. They are then joined together to create many complete light paths.
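Conceptually it looks like this; the scene here is a hypothetical stub, and real BDPT also weights each connection with multiple importance sampling:

```python
# Conceptual BDPT sketch: build an eye subpath and a light subpath,
# then connect their vertices into complete paths.
import random

def random_walk(start, bounces):
    """Stub subpath: a list of (position, throughput) vertices."""
    path, throughput = [], 1.0
    pos = start
    for _ in range(bounces):
        pos = tuple(p + random.uniform(-1, 1) for p in pos)  # fake bounce
        throughput *= 0.8                                     # fake BSDF loss
        path.append((pos, throughput))
    return path

def visible(a, b):
    return True  # stub: a real renderer traces a shadow ray here

eye_path = random_walk(start=(0, 0, 0), bounces=3)     # from the camera
light_path = random_walk(start=(5, 5, 5), bounces=3)   # from the light

# connect every eye vertex with every light vertex into a complete path
contribution = 0.0
for e_pos, e_tp in eye_path:
    for l_pos, l_tp in light_path:
        if visible(e_pos, l_pos):
            contribution += e_tp * l_tp   # real BDPT applies MIS weights here
```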

Spectral rendering

image by Silverwing

Unlike most renderers, which work with RGB colours, spectral renderers use spectral colour throughout, from the physically-based sky model to the reflective and refractive properties of materials. The material models are completely based on the laws of physics.
This makes it possible to render transparent materials like glass and water at the highest degree of realism.
Spectral renderers are pretty good at simulating different medium and atmospheric effects, like underwater or the earth's air atmosphere.
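The refraction point is easy to see in numbers: the index of refraction depends on the wavelength, so each wavelength bends differently (dispersion). A minimal sketch using Cauchy's equation with the textbook coefficients for BK7 glass:

```python
# Dispersion in a few lines: per-wavelength index of refraction via
# Cauchy's equation, then Snell's law for the refracted angle.
import math

def ior_cauchy(wavelength_um, a=1.5046, b=0.00420):
    """Cauchy's equation n(λ) = A + B/λ², λ in micrometres (BK7 glass)."""
    return a + b / wavelength_um ** 2

def refract_angle(theta_in_deg, n):
    """Snell's law: n1·sin(θ1) = n2·sin(θ2), entering from air (n1 = 1)."""
    return math.degrees(math.asin(math.sin(math.radians(theta_in_deg)) / n))

for wl in (0.45, 0.55, 0.65):  # blue, green, red wavelengths in µm
    n = ior_cauchy(wl)
    print(f"{wl} um: n = {n:.4f}, 45 deg ray refracts to {refract_angle(45, n):.2f} deg")
```

An RGB renderer uses a single index for all three channels; a spectral renderer gets the dispersion fringes for free.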

Biased Rendering

What a biased render engine actually means is pre-computing a lot of information before sending out rays from the camera. In simpler words, it uses optimization algorithms to greatly speed up the render time, but in doing so it is not strictly modeling the physics of light; it is giving an approximation.

Here is an example of what spectral rendering is able to do:

Indigo Renderer planet-scale atmospheric simulation

Unlike other rendering systems which rely on so-called practical models based on approximations, Indigo’s sun and sky system is derived directly from physical principles. Using Rayleigh/Mie scattering and data sourced from NASA, Indigo’s atmospheric simulation is highly accurate. It’s stored with full spectral information, and allows fast rendering and real-time changes of sun position.

Some examples of atmosphere simulations by Indigo forum user Yonosoy.

image by Yonosoy.
image by Yonosoy.
image by Yonosoy.
image by Yonosoy.
image by Yonosoy. Even a complete planet atmosphere simulation is possible.

refraction rendering: Arnold vs Indigo

I did a little test to compare Arnold's refraction rendering with the Indigo renderer. Since Arnold uses spectral calculations for refraction, it was interesting to see how it stacks up against a full spectral renderer like Indigo.

The render settings are fairly optimized for Arnold; Indigo doesn't give much room for optimization. To be as fair as possible I used only the pure path tracer in Indigo. Arnold 6 GPU is disappointing, as always.
The GPU was an Nvidia Quadro RTX 6000 with a clock speed of 1455 MHz; the CPU was an Intel Xeon E-2276M at 2.8 GHz.


2 minutes render time

It turns out Arnold, and especially Arnold GPU, is extremely bad with HDRI textures. With a physical sky, the times improved by a factor of 3-6. With resolution limits on the HDRI, Arnold gets a decent speed-up, but Arnold GPU still takes forever to resolve the noise.

testing new Arnold 6.0.3 GPU renderer update on particles


The recent HtoA 5.2.1 made it possible to actually use Arnold 6 GPU in Houdini, so I took it on a test ride with particles (5 million particles).

The setup is quite simple: pure diffuse shading, only AA samples. I've chosen a darker frame to test the sample quality.


CPU: 2 minutes on a 6-core Xeon

GPU: 1 minute (Nvidia Quadro RTX 5000), same sample count as the CPU render above but a little more noise.

GPU: 1:30 minutes, with the samples increased to get a noise-free render.

The Arnold GPU renderer is slowly getting faster than Arnold CPU in some cases, but there is still a long road ahead for the GPU renderer's speed to catch up with the competition.

star simulation with OpenCL

Birth of a massive star. For my little #astrophysics exploration, I've created a star simulation using OpenCL in Houdini. It's based on an n-body #physics model, driven only by #gravity. Using an #nvidia #quadro RTX 5000 #NVIDIAStudio. #arnoldrender

The colours are not quite right; I am trying to integrate Kelvin colour temperature. The current colour is based on density and speed.
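For reference, mapping Kelvin temperature to RGB boils down to Planck's law integrated against the CIE colour-matching curves. A rough sketch follows; the matching curves are replaced by crude single-Gaussian fits, so treat the output as approximate:

```python
# Approximate Kelvin -> RGB: integrate blackbody radiance (Planck's law)
# against rough fits of the CIE matching curves, then convert XYZ to sRGB.
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann

def planck(wl_m, temp_k):
    """Spectral radiance of a blackbody at wavelength wl_m (metres)."""
    return (2 * H * C**2 / wl_m**5) / (math.exp(H * C / (wl_m * KB * temp_k)) - 1)

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def kelvin_to_rgb(temp_k):
    x = y = z = 0.0
    for wl_nm in range(380, 781, 5):          # integrate over the visible band
        e = planck(wl_nm * 1e-9, temp_k)
        x += e * gauss(wl_nm, 595, 60)        # crude CIE x-bar fit
        y += e * gauss(wl_nm, 557, 50)        # crude CIE y-bar fit
        z += e * gauss(wl_nm, 450, 35)        # crude CIE z-bar fit
    # standard XYZ -> linear sRGB matrix, normalized to the max channel
    r = 3.2406 * x - 1.5372 * y - 0.4986 * z
    g = -0.9689 * x + 1.8758 * y + 0.0415 * z
    b = 0.0557 * x - 0.2040 * y + 1.0570 * z
    m = max(r, g, b)
    return tuple(max(0.0, c / m) for c in (r, g, b))

print(kelvin_to_rgb(3000))   # reddish-orange, a cool star
print(kelvin_to_rgb(10000))  # blueish-white, a hot star
```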

In this simulation, I assumed a 48% amount of negative gravity to fill in the mystery of dark matter. I used 1 million n-bodies for the simulation. For the next steps, I am planning to add finer particle streams to get more detail.
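The actual sim runs in OpenCL inside Houdini; here is a NumPy sketch of the same force model, pure softened gravity with a 48% negative fraction. All constants are illustrative.

```python
# N-body step: every body accelerates toward (or away from) every other body.
import numpy as np

N, DT, SOFT = 2000, 0.01, 0.1          # body count, timestep, softening length
rng = np.random.default_rng(0)

pos = rng.normal(scale=1.0, size=(N, 3))
vel = np.zeros((N, 3))
mass = np.ones(N)
mass[rng.random(N) < 0.48] *= -1.0     # 48% of bodies pull "negatively"

def step(pos, vel):
    # pairwise separation vectors r_ij = p_j - p_i
    diff = pos[None, :, :] - pos[:, None, :]
    dist2 = (diff ** 2).sum(-1) + SOFT ** 2      # softened squared distance
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                # no self-interaction
    # a_i = sum_j m_j * r_ij / |r_ij|^3  (G = 1 in sim units)
    acc = (diff * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)
    vel += acc * DT
    pos += vel * DT
    return pos, vel

for _ in range(100):
    pos, vel = step(pos, vel)
```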

atmosphere volume in arnold

Rendered with Arnold in Houdini. I've tried the atmosphere volumes for the first time; easy to set up. The render time was quite slow on the volumes, as is typical for volumes, but much faster than rendering the scene environment with a VDB cloud. I've used a mesh light for inner character illumination.

It's a shame I could not use Arnold 6 GPU because of its missing features. Volumes would get a huge speed boost with volume raymarching on a GPU. I had to use a denoiser from Affinity Photo in the dark areas.

Arnold 6 GPU does not report any errors; you have to wait 1-2 minutes to see whether the data gets loaded to the GPU and rendering starts or not. Even a wrong file path to an HDRI image makes Arnold GPU drop out, and you sit and wait in front of a black screen not knowing whether it will render or not.

light simulation

I've made a simple scene to test the physics of light. For proper light calculation, I used the spectral renderers Indigo and Octane.

Indigo has multiple engines: a standard spectral path tracer on CPU or GPU, and bidirectional path tracing with MLT sampling (Metropolis light transport). Octane has only the default spectral path tracer on GPU, but includes an MLT sampling method. I've also added RenderMan 23 to the test with its unified rendering integrator. It supports bidirectional path tracing, manifold caustics and path guiding on the CPU.

Other render engines like Arnold or Cycles with regular path tracing would be impractical for complex light-calculation tasks.

The base scene is a sphere and a squashed sphere underneath it, inside a volume box (uniform VDB).

The following image is the result of Indigo Renderer with bidirectional path tracing and MLT. It was by far the fastest render.