water shading

Recently I did some water rendering in the Arnold renderer, which you can see here. The shading worked quite well on both the water and the whitewater.

After this I started to sim a wave tunnel in Houdini FLIP: a huge wave where you could actually surf. But I ran into some trouble with the water shading and the whitewater look, so I needed some reference.


I went to the beach and took my trusty Nikon V1 camera with me to shoot some reference photos. These are tiny waves (50 cm in height), but they are easy to photograph and good enough as shading reference.

a nice shot of a miniature breaking wave

a nice snapshot of the translucency effects of the wave
here we can see droplets (whitewater) in close-up

here we can see more droplets / whitewater with the sun at our back, a good example of the anisotropy effect in shading.

droplets turn white in sunlight, depending on the sun direction.

There are multiple ways to render realistic water. The old-school way is to render a polygonal water surface with a volume underneath to simulate the light scattering. We did similar things back in 2008 on the Avatar movie, with custom-written shaders for RenderMan. SideFX added presets for this to Houdini's ocean setups. The render times of this method are modest, but the shading can be quite difficult depending on the camera angle and light situation.

These days, in the age of path tracers, there are two ways: rendering it with sub-surface scattering (SSS) or with transmission depth.

Sub-surface scattering simulates the effect of light entering an object and scattering beneath its surface. Not all light reflects from a surface; some of it will penetrate below the surface of an illuminated object, where it is absorbed by the material and scattered internally. Some of this scattered light makes its way back out of the surface and becomes visible to the camera. This is known as "sub-surface scattering" or "SSS". SSS is necessary for the realistic rendering of materials such as marble, skin, leaves, wax, and milk. In Arnold's standard surface shader, the SSS component is calculated using a brute-force raytracing method.
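To make that concrete, here is a minimal sketch of an SSS water material built through Arnold's Python API (Arnold 6-style AiNode calls; parameter names follow the standard_surface docs, but the values are illustrative guesses, not my final settings):

```python
from arnold import *  # Arnold's Python binding (ships with the Arnold SDK)

# Minimal sketch: a standard_surface set up as pure random-walk SSS water.
# All values are illustrative placeholders, not final shading settings.
AiBegin()
water = AiNode("standard_surface")
AiNodeSetStr(water, "name", "water_sss")
AiNodeSetFlt(water, "base", 0.0)                         # no diffuse
AiNodeSetFlt(water, "specular", 1.0)
AiNodeSetFlt(water, "specular_IOR", 1.33)                # IOR of water
AiNodeSetFlt(water, "subsurface", 1.0)                   # full SSS weight
AiNodeSetRGB(water, "subsurface_color", 0.3, 0.55, 0.6)  # deep-sea tint
AiNodeSetRGB(water, "subsurface_radius", 1.0, 1.0, 1.0)  # per-channel spread
AiNodeSetFlt(water, "subsurface_scale", 5.0)             # scale to scene units
AiNodeSetStr(water, "subsurface_type", "randomwalk")     # brute-force raytraced SSS
AiEnd()
```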

While the Transmission Depth attribute controls volumetric light absorption within the object (fog), the Scatter attribute controls what percentage of the light will be scattered instead of absorbed, effectively creating the murky effect of semi-transparent materials.

Depth controls the depth into the volume at which the transmission color is realized. Increasing this value makes the volume thinner, which means less absorption and scattering. It is a scale factor, so you can set a transmission color and then tweak the depth to fit the size of your object.
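In Beer–Lambert terms, my reading of this (a sketch of the relationship, not a formula from the Arnold docs) is that depth rescales the extinction coefficient so that the transmission color is reached after exactly one depth unit:

```latex
T(d) = e^{-\sigma_t d},
\qquad
\sigma_t = \frac{-\ln(\mathrm{transmission\ color})}{\mathrm{depth}}
\quad\Rightarrow\quad
T(\mathrm{depth}) = \mathrm{transmission\ color}
```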

Scattering is very important if you want to shade deep materials like ocean water. For the scattering effect to work, Scatter must have a dominant percentage value and the Depth attribute must generally be much lower. Also, the Opaque attribute must be unchecked in the Arnold attributes of the object's shape node so that light can pass into the mesh and illuminate the volume.
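The same sketch as above, switched to the transmission-depth approach (again Arnold 6-style calls with illustrative values; note the dominant scatter, the much lower depth, and the opaque flag on the mesh):

```python
from arnold import *  # Arnold's Python binding (ships with the Arnold SDK)

# Sketch of the transmission-depth variant: scatter dominates, depth stays
# low, and the mesh itself is made non-opaque so rays can enter the volume.
AiBegin()
water = AiNode("standard_surface")
AiNodeSetStr(water, "name", "water_transmission")
AiNodeSetFlt(water, "base", 0.0)
AiNodeSetFlt(water, "specular", 1.0)
AiNodeSetFlt(water, "specular_IOR", 1.33)
AiNodeSetFlt(water, "transmission", 1.0)                     # fully transmissive
AiNodeSetRGB(water, "transmission_color", 0.3, 0.55, 0.6)
AiNodeSetRGB(water, "transmission_scatter", 0.4, 0.6, 0.65)  # dominant scatter
AiNodeSetFlt(water, "transmission_depth", 0.1)               # kept much lower

mesh = AiNode("polymesh")              # stand-in for the actual ocean mesh
AiNodeSetStr(mesh, "name", "ocean_surface")
AiNodeSetBool(mesh, "opaque", False)   # the 'Opaque' checkbox on the shape node
AiEnd()
```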

Rendering with transmission depth is the more "physically correct" way, but it does not account for tiny organisms (light blockers); in that case you add textures to simulate the plankton in the water.

I chose to go the SSS route. The typical sub-surface scattering shading model has a similar volumetric light-scatter look. The look can be limited, but it works in the case of deep ocean water. The advantage: it is fully supported by the current Arnold GPU renderer (transmission depth is not supported yet), and the SSS shading model is also faster to render. In addition, I've added an extra underwater bubble simulation with particles to increase the realism.

water rendering with Arnold GPU

I am starting to look into water effects shading; this is the first simulation with Houdini solvers. The foam is a little over-the-top, but I think the shading itself is starting to come together. With the new Arnold GPU updates it's getting really fast, especially when I am using Arnold operators.

I've created underwater bubbles in an extra simulation to make the side view nicer.

All of the above was done in Houdini with the regular Arnold htoA plug-in.

Here I am testing the scene in Gaffer; the IPR is quite fast in here. The next thing I want to try is Solaris.

About Render Engines part 1

This is a quick overview of current render engines for Houdini, and in general, in terms of motion graphics and VFX usage.

There are many render engines out there; each one is unique and uses different methods to solve the problem. I am looking into Arnold, RenderMan, V-Ray, Octane, and Redshift. For comparison, I added the Indigo Renderer.

There are different ways to render a scene, each with benefits and shortcomings. Let's start with the most common one.

image by Glare Technology

Pathtracing (PT)

To be precise: backward path tracing. In backward ray tracing, an eye ray is created at the eye; it passes through the view plane and on into the world. The first object the eye ray hits is the object that will be visible at that point of the view plane. After the ray tracer lets that ray bounce around, it figures out the exact coloring and shading of that point on the view plane and displays it on the corresponding pixel of the screen. That's the classical way, which all of the render engines use as their standard.
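To make the loop concrete, here is a toy backward path tracer: a single hypothetical diffuse sphere under a constant sky, with the importance-sampling bookkeeping of a real integrator deliberately simplified:

```python
import math, random

# Toy backward path tracer: one diffuse sphere under a constant sky.
# Everything here is a made-up minimal scene; the pdf/cosine bookkeeping
# of a real integrator is deliberately simplified for readability.
SPHERE_C, SPHERE_R = (0.0, 0.0, 3.0), 1.0
SKY, ALBEDO, MAX_BOUNCES = 1.0, 0.7, 4

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def normalize(a):
    length = math.sqrt(dot(a, a))
    return tuple(x / length for x in a)

def intersect(origin, direction):
    """Ray/sphere intersection; returns the hit distance or None."""
    oc = sub(origin, SPHERE_C)
    b = dot(oc, direction)
    disc = b * b - (dot(oc, oc) - SPHERE_R ** 2)
    if disc < 0.0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-4 else None

def random_direction():
    """Uniform random unit vector (rejection sampling)."""
    while True:
        v = tuple(random.uniform(-1.0, 1.0) for _ in range(3))
        if 0.0 < dot(v, v) <= 1.0:
            return normalize(v)

def radiance(origin, direction, depth=0):
    """The eye ray bounces around until it escapes to the sky (the light)."""
    if depth > MAX_BOUNCES:
        return 0.0
    t = intersect(origin, direction)
    if t is None:
        return SKY                                # ray escaped: sky radiance
    hit = tuple(origin[i] + t * direction[i] for i in range(3))
    normal = normalize(sub(hit, SPHERE_C))
    bounce = random_direction()
    if dot(bounce, normal) < 0.0:                 # keep bounce above surface
        bounce = tuple(-x for x in bounce)
    return ALBEDO * radiance(hit, bounce, depth + 1)

# One pixel: average many eye-ray samples, exactly what a path tracer does.
samples = [radiance((0, 0, 0), (0, 0, 1)) for _ in range(256)]
print(sum(samples) / len(samples))
```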

Metropolis light transport (MLT)

This procedure has the advantage, relative to bidirectional path tracing, that once a path has been found from light to eye, the algorithm can then explore nearby paths; thus difficult-to-find light paths can be explored more thoroughly with the same number of simulated photons. Metropolis light transport is an unbiased method that, in some cases (but not always), converges to a solution of the rendering equation faster than other unbiased algorithms such as path tracing or bidirectional path tracing. Metropolis is often used in bidirectional mode (BDMLT).
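The core mechanic is the Metropolis acceptance step. A deliberately tiny sketch, with a whole light path collapsed to a single number and a narrow peak standing in for a hard-to-find light path:

```python
import math, random

# Toy Metropolis sampler. The 'path' is just a number x, and f(x) stands in
# for a path's image contribution: a narrow peak that plain random sampling
# would almost never hit, but which mutation can explore once found.
def f(x):
    return math.exp(-((x - 2.0) ** 2) / 0.02)

x, hits = 0.0, []
for _ in range(100_000):
    y = x + random.gauss(0.0, 0.1)           # mutate: explore a nearby 'path'
    accept = min(1.0, f(y) / max(f(x), 1e-300))
    if random.random() < accept:             # Metropolis acceptance rule
        x = y
    hits.append(x)

# Samples concentrate around the peak at x = 2, i.e. around the bright paths.
print(sum(hits) / len(hits))
```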

Path Guiding

A mix between path tracing and MLT: an unbiased technique for intelligent light-path construction in path-tracing algorithms. Indirect guiding improves indirect lighting by sampling from the better-lit or more important areas of the scene. The goal is to allow path-tracing algorithms to iteratively "learn" how to construct high-energy light paths.
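A toy sketch of that "learning": a handful of direction bins whose sampling weights are updated online from the energy they return (real guiding structures such as SD-trees are far more involved):

```python
import random

# Toy path-guiding sketch: N direction bins start uniform (plain path
# tracing) and their sampling weights are updated online from returned energy.
BINS = 16
weights = [1.0] * BINS

def contribution(b):
    # Stand-in for tracing a ray into bin b: only bin 5 reaches the light.
    return 10.0 if b == 5 else 0.01

def sample_bin():
    """Pick a bin proportionally to the learned weights."""
    r, acc = random.uniform(0.0, sum(weights)), 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return BINS - 1

for _ in range(2000):                    # each iteration refines the guide
    b = sample_bin()
    weights[b] = 0.9 * weights[b] + 0.1 * contribution(b)

# After learning, most samples go to the direction that actually carries light.
print(max(range(BINS), key=lambda i: weights[i]))   # prints 5
```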

link to the latest SIGGRAPH paper

Bidirectional Pathtracing (BDPT)

Regular backward path tracing has a hard time in indoor scenes with small light sources, because it takes lots of rays and bounces to find a tiny light in a room, just to see whether an object is lit by that light.

With bidirectional path tracing, rays are fired from both the camera and the light sources. They are then joined together to create many complete light paths.
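Schematically, with all geometry faked and positions collapsed to numbers, the construction pattern looks like this:

```python
import random

# BDPT sketch: one subpath walked from the camera, one from the light, then
# every mutually visible vertex pair is connected into a complete light path.
# Geometry is faked as 1D positions; only the construction pattern matters.
def random_walk(start, bounces):
    path, x = [start], start
    for _ in range(bounces):
        x += random.uniform(-1.0, 1.0)       # stand-in for one bounce
        path.append(x)
    return path

def visible(a, b):
    return abs(a - b) < 3.0                  # stand-in shadow-ray test

camera_path = random_walk(0.0, 3)            # rays fired from the camera
light_path = random_walk(5.0, 3)             # rays fired from the light

connections = [(c, l) for c in camera_path for l in light_path if visible(c, l)]
print(f"{len(connections)} complete paths from one camera and one light subpath")
```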

Spectral rendering

image by Silverwing

Unlike most renderers, which work with RGB colours, spectral renderers use spectral colour throughout, from the physically-based sky model to the reflective and refractive properties of materials. The material models are completely based on the laws of physics.
This makes it possible to render transparent materials like glass and water with the highest degree of realism.
Spectral renderers are also pretty good at simulating different participating media and atmospheric effects, like underwater scenes or the Earth's atmosphere.
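A small taste of why this matters for glass and water: a spectral renderer can evaluate a different refraction index per wavelength. A sketch using Cauchy's empirical equation with BK7-glass-like coefficients:

```python
# Dispersion sketch: Cauchy's equation n(lambda) = A + B / lambda^2.
# An RGB renderer carries a single IOR; a spectral renderer evaluates a
# different IOR per sampled wavelength, which is what makes real dispersion.
A, B = 1.5046, 0.00420        # BK7-like glass, wavelength in micrometres

def ior(wavelength_um):
    return A + B / wavelength_um ** 2

for nm in (450, 550, 650):    # blue / green / red sample wavelengths
    print(f"{nm} nm -> n = {ior(nm / 1000.0):.4f}")
```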

Biased Rendering

What a biased render engine actually does is pre-compute a lot of information before sending out rays from the camera. In simpler words, it uses optimization algorithms to greatly speed up the render time, but in doing so it is not strictly modeling the physics of light; it is giving an approximation.

Here is an example of what spectral rendering is able to do:

Indigo Renderer: planet-scale atmospheric simulation

Unlike other rendering systems which rely on so-called practical models based on approximations, Indigo’s sun and sky system is derived directly from physical principles. Using Rayleigh/Mie scattering and data sourced from NASA, Indigo’s atmospheric simulation is highly accurate. It’s stored with full spectral information, and allows fast rendering and real-time changes of sun position.

Some examples of atmosphere simulations by Indigo forum user Yonosoy.

images by Yonosoy; even a complete planet atmosphere simulation is possible

marble rendertest

Inspired by the great marble renders and shader hack by Lee Griggs, I've decided to recreate some of the tricks with different render engines, as a little render comparison.

The basic idea: use a glass shader for the outer shell and an inner sphere with a textured volume to fake depth. This way, you save a lot of work compared to actually modeling the inner part of a marble. Spectral renderers don't need this kind of trickery; they can treat the inner part as a real glass medium (using textures).
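For reference, building the two-sphere rig in Houdini via Python might look roughly like this (a sketch with the hou module inside a Houdini session; node and parameter names are from memory, so treat them as assumptions):

```python
import hou  # Houdini's Python module; only available inside a Houdini session

# Sketch of the marble trick: an outer shell for the glass shader and a
# slightly smaller inner sphere that carries the textured fake-depth volume.
marble = hou.node('/obj').createNode('geo', 'marble')

shell = marble.createNode('sphere', 'glass_shell')
core = marble.createNode('sphere', 'volume_core')
core.parmTuple('rad').set((0.9, 0.9, 0.9))   # sits just inside the unit shell

merge = marble.createNode('merge')
merge.setInput(0, shell)
merge.setInput(1, core)
merge.setDisplayFlag(True)
merge.setRenderFlag(True)
# Material assignment (glass on the shell, textured volume on the core) would
# follow here and is renderer-specific, so it is left out of this sketch.
```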

I've used Cycles (Blender), RenderMan, Arnold, Octane, and Indigo Renderer. I've tried to create a marble in Redshift, but I could not make it work with a single texture and two spheres. For Redshift, you actually need to model the marble to get a realistic rendering.

The spectral render engines were the fastest by far. That's because with the spectral renderers I used a medium instead of a volume for the interior, which saves a lot of render time.

Here is a quick test with a single glass object in Indigo and Cycles:

These are glass spheres with regular solid-textured spheres inside:

light simulation

I've made a simple scene to test the physics of light. For proper light calculation, I used the spectral renderers Indigo and Octane.

Indigo has multiple engines: the standard spectral path tracer on CPU or GPU, and bidirectional path tracing with MLT sampling (Metropolis light transport). Octane has only the default spectral path tracer on GPU, but it includes an MLT sampling method. I've also added RenderMan 23 to the test with its unified rendering integrator, which supports bidirectional path tracing, manifold caustics, and path guiding on the CPU.

Other render engines like Arnold or Cycles, with regular path tracing, would be impractical for complex light-calculation tasks.

The base scene is a sphere and a squashed sphere underneath it, inside a volume box (uniform VDB).

The following image is the result of Indigo Renderer with bidirectional path tracing and MLT. It was by far the fastest rendering.

About Render Engines part 2

Each render algorithm has different benefits in different scene and light situations. Some common render engines use the following algorithms:

Here is a list of pros and cons, based on my own experience:

Pathtracer

Pro :

  • easy to use
  • best for exteriors
  • great for characters and outdoor renderings

Con :

  • bad for caustics
  • not so good for interiors with a lot of indirect lighting and small light sources

Bidirectional Pathtracing

Pro :

  • very good for interiors
  • good and fast caustics

Con :

  • not so fast for outdoor rendering
  • slow for reflected caustics

Spectral Rendering

Pro :

  • super easy to use
  • most correct physics 
  • participating-media rendering
  • makes use of physically correct modeling
  • great out-of-the-box image quality

Con :

  • slower rendering 
  • more memory use (e.g. 32 floats per colour instead of 3)
  • needs physically correct materials
  • hard to get AOVs etc.
  • shader limitations

Metropolis Light Transport

Pro : 

  • faster for reflected caustics
  • excellent for caustics
  • best for interiors (indirect lighting, small light sources)

Con :

  • very slow for exteriors

Path Guiding

Pro :

  • extremely good for interiors (indirect lighting, small light sources)
  • much faster than PT for scenes with very difficult lighting (e.g. light coming through a small opening, lighting the scene indirectly)
  • fast caustics

Con :

  • not so fast for glossy materials
  • more setup time (tweaking render settings)
  • problems with detailed geometry

Biased Rendering

Pro :

  • fastest rendering
  • very useful for caustics + reflected caustics
  • most flexible render setups
  • great shader hacks

Con :

  • hard to set up
  • needs knowledge of optimization algorithms
  • hard to deal with large datasets
  • bias artefacts: splotchiness, low-frequency noise
  • can have large memory footprint

Stochastic Progressive Photon Mapping

  • best for indoors
  • small memory footprint
  • handles all kinds of caustics robustly

Reference for VFX part 1

Currently I am working on an FX book. Here is a little sub-chapter of it.

To create special effects digitally, you don't have to recreate or simulate things physically correctly like in the real world; it's not possible anyway. You do not need fancy, custom-made solvers. In most cases you just need the ordinary standard tools that every 3D package has. First you have to understand how nature works from a visual viewpoint.

The most important thing before you start an effect is research: references. You should get as many references as possible. Let's use a waterfall / water splashes as an example. Many FX artists use videos as visual reference, but they do not analyze them, or they do it wrong. Most reference videos give you an idea of how your end result should look, but you have to remember that most videos are shot at 24 or 30 fps. That means you see motion-blurred frames, but you will work in the OpenGL viewport of your 3D software, which means without motion blur. If you recreate a simulation based on such footage, it will already have a motion-blurred look, and the render engine will add motion blur again at render time; this makes your FX streaky (you double the motion blur).

Most of the time I go out and shoot reference videos or pictures myself. I prefer my Nikon V1 camera: it's cheap ($300), has high-end lenses, does super-slow-motion video, and shoots with a 1/16000 shutter time. But any DSLR with a 1/4000 shutter should do the trick as well. With a shutter time of 1/4000 or 1/16000, you will get frames of your reference without motion blur, as you can see in the example:

(click on the images and look at the shutter time at the bottom of the images)
If you use these "high-speed" images as reference to create shapes for your droplets, you get a more realistic look for your effect; your render software will add the correct motion blur.
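The numbers behind this are simple: the streak a droplet leaves on the sensor is just speed × shutter time. Assuming a splash droplet moving at 5 m/s (a made-up but plausible speed):

```python
# Streak length that a given shutter time leaves on a droplet moving at
# 5 m/s. The 5 m/s is an assumed, plausible splash speed, for illustration.
speed_m_s = 5.0
for denom in (50, 160, 4000, 16000):
    blur_mm = speed_m_s * (1.0 / denom) * 1000.0
    print(f"1/{denom} s shutter -> {blur_mm:.2f} mm streak")
```

At 1/4000 s the streak is about a millimetre, so the frame reads as frozen, while video-style shutter times smear the same droplet over several centimetres.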

close-up look at droplets: with motion blur (1/160 shutter) and without motion blur (1/8000 shutter)


After you've got proper references, you need to read your reference images and extract information to understand the visual language. As you can see, it is not just a bunch of blurred particles falling down; there is a lot of clumping and "blobbing" going on. More details on how to read and analyze the reference I will explain in part 2.