Thursday, June 22, 2017

Reprojecting Reflections

Screen space reflections are such a pain. When combined with TAA they are even harder to manage. Raytracing against a jittered depth/normal g-buffer can easily cause reflection rays to hit widely different intersection points from frame to frame. When using neighborhood clamping, it can become difficult to handle the flickering caused by too much clipping, especially for surfaces whose normal maps contain high frequency patterns.
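For reference, the kind of neighborhood clamping I'm talking about is the usual TAA-style AABB clamp of the history color against the current frame's neighborhood, roughly like the sketch below (the texture name and the 3x3 window are placeholders, not the exact Stingray resolve):

// Sketch of TAA neighborhood clamping: clamp the history color to the
// min/max AABB of the current frame's 3x3 neighborhood. On noisy ssr
// results this clips a lot, which is what causes the flickering.
float3 clamp_history(float2 uv, float2 texel_size, float3 history) {
 float3 color_min = 1e6;
 float3 color_max = -1e6;
 for (int y = -1; y <= 1; ++y) {
  for (int x = -1; x <= 1; ++x) {
   float3 c = TEX2D(current_ssr_texture, uv + float2(x, y) * texel_size).rgb;
   color_min = min(color_min, c);
   color_max = max(color_max, c);
  }
 }
 return clamp(history, color_min, color_max);
}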

On top of this, reflections are very hard to reproject. Since they are view dependent, simply fetching the motion vector of the current pixel tends to make the reprojection "smudge" under camera motion. Here's a small video grab I did while playing Uncharted 4 (notice how the reflections trail under camera motion).



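For context, the "simple" reprojection that produces this smudging is just the standard TAA-style history fetch, something along these lines (the texture names are placeholders):

// Naive reprojection: reuse the surface motion vector stored at the current
// pixel, which is wrong for view dependent reflections.
float3 fetch_reflection_history_naive(float2 uv) {
 float2 prev_uv = uv - TEX2D(motion_vector_texture, uv).xy;
 return TEX2D(ssr_history_texture, prev_uv).rgb;
}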
Last year I spent some time trying to understand this problem a little better. I first drew a ray diagram describing how a reflection could be reprojected in theory. Consider the goal of reprojecting the reflection that occurs at incidence point v0 (see diagram below). To reproject the reflection which occurred at that point you would need to:
  1. Retrieve the surface motion vector (ms) corresponding to the reflection incidence point (v0)
  2. Reproject the incidence point using (ms)
  3. Using the depth buffer history, reconstruct the previous incidence point (v1)
  4. Retrieve the motion vector (mr) corresponding to the reflected point (p0)
  5. Reproject the reflected point using (mr)
  6. Using the depth buffer history, reconstruct the previous reflected point (p1)
  7. Using the previous view matrix, reconstruct the previous surface normal at the incidence point (n1)
  8. Project the camera position and the reconstructed reflected point (p1) onto the previous reflection plane (defined by surface normal n1 and surface point v1), giving (deye) and (dp1)
  9. Solve for the position of the previous reflection incidence point (r) from (deye) and (dp1)
  10. Finally, using the previous view-projection matrix, project (r) and sample the previous reflection buffer at that position


By adding a depth buffer history to Stingray and using the previous view-projection matrix, I was able to confirm that this approach can successfully reproject reflections:
// Projects p onto the plane defined by point v0 and normal n.
// Also outputs the signed distance d from p to that plane.
float3 proj_point_in_plane(float3 p, float3 v0, float3 n, out float d) {
 d = dot(n, p - v0);
 return p - (n * d);
}

// Given the eye position p0, the reflected point p1 and the mirror plane
// (point v0, normal n), finds the point on the plane where the reflection
// occurs: the projections of p0 and p1 onto the plane are interpolated
// proportionally to their distances to it (similar triangles). The branch
// only changes which endpoint the interpolation starts from, for precision.
float3 find_reflection_incident_point(float3 p0, float3 p1, float3 v0, float3 n) {
 float d0 = 0;
 float d1 = 0;
 float3 proj_p0 = proj_point_in_plane(p0, v0, n, d0);
 float3 proj_p1 = proj_point_in_plane(p1, v0, n, d1);

 if(d1 < d0)
  return (proj_p0 - proj_p1) * d1/(d0+d1) + proj_p1;
 else
  return (proj_p1 - proj_p0) * d0/(d0+d1) + proj_p0;
}

// Reconstructs where the reflection visible at ss_pos should be fetched
// from in the previous frame's reflection buffer.
float2 find_previous_reflection_position(
 float3 ss_pos, float3 ss_ray,
 float2 surface_motion_vector, float2 reflection_motion_vector,
 float3 world_normal) {
 // Reproject the incidence point with the surface motion vector and fetch
 // its previous depth from the depth buffer history (input_texture5).
 float3 ss_p0 = 0;
 ss_p0.xy = ss_pos.xy - surface_motion_vector;
 ss_p0.z = TEX2D(input_texture5, ss_p0.xy).r;

 // Reproject the reflected point with its own motion vector and fetch its
 // previous depth as well.
 float3 ss_p1 = 0;
 ss_p1.xy = ss_ray.xy - reflection_motion_vector;
 ss_p1.z = TEX2D(input_texture5, ss_p1.xy).r;

 // Bring everything into the previous frame's view space: the surface
 // normal, the camera (at the origin), the previous incidence point and
 // the previous reflected point.
 float3 view_n = normalize(world_to_prev_view(world_normal, 0));
 float3 view_p0 = float3(0,0,0);
 float3 view_v0 = ss_to_view(ss_p0, 1);
 float3 view_p1 = ss_to_view(ss_p1, 1);

 // Solve for the previous incidence point on the reflection plane and
 // project it back to screen space.
 float3 view_intersection =
  find_reflection_incident_point(view_p0, view_p1, view_v0, view_n);
 float3 ss_intersection = view_to_ss(view_intersection, 1);

 return ss_intersection.xy;
}
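
In the resolve, the result of this function then replaces the naive motion vector fetch when sampling the reflection history, along these lines (the history texture name is a placeholder):

// Usage sketch: sample the previous ssr buffer at the reprojected reflection
// position instead of at the plain surface motion vector offset.
float3 fetch_reflection_history(
 float3 ss_pos, float3 ss_ray,
 float2 surface_motion_vector, float2 reflection_motion_vector,
 float3 world_normal) {
 float2 prev_uv = find_previous_reflection_position(ss_pos, ss_ray,
  surface_motion_vector, reflection_motion_vector, world_normal);
 return TEX2D(ssr_history_texture, prev_uv).rgb;
}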

You can see in these videos that most of the reprojection distortion in the reflections is addressed:





Ghosting was definitely minimized under camera motion. The video below compares the two reprojection methods side by side.

LEFT: Simple Reprojection, RIGHT: Correct Reprojection
(note that I disabled neighborhood clamping in this video to visualize the reprojection better)

This said, reprojecting reflections this way requires keeping a depth buffer history around and adds a fair amount of work per pixel, so I also tried a different, cheaper approach. The new idea was to pick a few candidate reprojection vectors that are likely to be meaningful in the context of a reflection and keep the best one. Originally I looked into:
  • Motion vector at ray incidence
  • Motion vector at ray intersection
  • Parallax corrected motion vector at ray incidence
  • Parallax corrected motion vector at ray intersection

The idea of doing parallax correction on motion vectors for reflections came from the Stochastic Screen-Space Reflections talk presented by Tomasz Stachowiak at SIGGRAPH 2015. Here's how it's currently implemented, although I'm not 100% sure it's as correct as it could be (there's a PARALLAX_FACTOR define which I needed to tweak manually to get good results; perhaps there's a better way of doing this):
float2 parallax_velocity = velocity * saturate(1.0 - total_ray_length * PARALLAX_FACTOR);
Once all those candidate vectors are retrieved, the one with the smallest magnitude is declared "the most likely successful reprojection vector". This simple idea alone improved the reprojection of the ssr buffer quite significantly (note that when casting multiple rays per pixel, averaging the sum of all successful reprojection vectors still gave us a better reprojection than what we had previously).
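
As a minimal sketch, the selection boils down to comparing the squared lengths of the four candidates from the list above (the function and parameter names here are just for illustration):

// Picks the candidate reprojection vector with the smallest magnitude.
// The four candidates correspond to the list above; how each one is
// computed is omitted here.
float2 pick_reprojection_vector(
 float2 mv_incidence, float2 mv_intersection,
 float2 mv_incidence_parallax, float2 mv_intersection_parallax) {
 float2 best = mv_incidence;
 if (dot(mv_intersection, mv_intersection) < dot(best, best))
  best = mv_intersection;
 if (dot(mv_incidence_parallax, mv_incidence_parallax) < dot(best, best))
  best = mv_incidence_parallax;
 if (dot(mv_intersection_parallax, mv_intersection_parallax) < dot(best, best))
  best = mv_intersection_parallax;
 return best;
}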



Screen space reflections are one of the most difficult screen space effects I've had to deal with. They are plagued with artifacts that can often be difficult to explain or understand. In the last couple of years I've seen people propose really creative ways to minimize some of the artifacts that are inherent to ssr. I hope this continues!