I do some rendering in Blender (Cycles). With the help of Cycles nodes I render .exr images in which each pixel stores XYZ + Object Index channels instead of the usual RGBA. Everything works fine except for several pixels.
For example, the image above shows a visualization of the object IDs: each pixel records which object it currently sees. You can see that Blender failed to recognize the building in one blue pixel; it decided that this part of the building is the sky and rendered sky there.
As far as I can see, this happens for degenerate triangles, i.e. triangles that are almost parallel to the ray shot from the pixel. It means that the Blender (Cycles) ray-tracing algorithm fails to detect the correct triangle when shooting the ray.
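To illustrate the mechanism (this is not Cycles' actual code, which uses its own formulation; a plain Möller–Trumbore-style test in Python shows the same failure mode, and all helper names and test values below are made up for the demonstration):

```python
# Illustrative Moeller-Trumbore-style ray/triangle test, NOT Cycles' code.
# For a ray (almost) parallel to the triangle plane the determinant `den`
# collapses to ~0, and dividing by it yields meaningless barycentrics.

def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b):   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                         a[2] * b[0] - a[0] * b[2],
                         a[0] * b[1] - a[1] * b[0])

def naive_intersect(orig, direction, v0, v1, v2):
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, e2)
    den = dot(e1, pvec)            # ~0 when the ray grazes the triangle
    inv_den = 1.0 / den            # the problematic division
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv_den
    v = dot(direction, cross(tvec, e1)) * inv_den
    return u, v

# Triangle in the z = 0 plane; ray direction almost inside that plane:
v0, v1, v2 = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
u, v = naive_intersect((0.25, 0.25, 1.0), (1.0, 0.0, -1e-12), v0, v1, v2)
print(u, v, u + v)  # u comes out astronomically large, so u + v > 1
```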
Material setup:
These nodes help me render the .exr image where, instead of RGB, I have XYZ (the 3D position of the surface seen by the pixel) and, instead of alpha, the Object Index. The rendered image is correct except for a few pixels.
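For reference, here is a bpy sketch of one way to get this kind of output. This is my reconstruction under assumptions (Blender 2.8x+ Python API, an Emission shader carrying the surface position, and the Object Index render pass composited into the alpha channel); the actual node setup in the screenshot may differ:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.samples = 1                               # one ray per pixel
scene.render.image_settings.file_format = 'OPEN_EXR'
scene.render.image_settings.color_depth = '32'        # full-float positions
scene.view_layers[0].use_pass_object_index = True     # enables the IndexOB pass

# Material (assign it to every object): emit the surface position as RGB.
mat = bpy.data.materials.new("xyz_position")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()
geo  = nodes.new('ShaderNodeNewGeometry')
emit = nodes.new('ShaderNodeEmission')
out  = nodes.new('ShaderNodeOutputMaterial')
links.new(geo.outputs['Position'], emit.inputs['Color'])
links.new(emit.outputs['Emission'], out.inputs['Surface'])

# Compositor: push the object-index pass into the alpha channel.
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()
rl        = tree.nodes.new('CompositorNodeRLayers')
set_alpha = tree.nodes.new('CompositorNodeSetAlpha')
comp      = tree.nodes.new('CompositorNodeComposite')
tree.links.new(rl.outputs['Image'],   set_alpha.inputs['Image'])
tree.links.new(rl.outputs['IndexOB'], set_alpha.inputs['Alpha'])
tree.links.new(set_alpha.outputs['Image'], comp.inputs['Image'])
```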
Geometry of the mesh: As far as I know, my mesh consists of triangles.
Rendering machine: I use SVM (Cycles' Shader Virtual Machine, as opposed to OSL).
Samples: one ray per pixel.
How can I avoid this problem?
Here is a test scene for testing:
P.S.: I also noticed that the function ray_triangle_intersect() in the file "\cycles\util\util_math_intersect.h" sometimes returns wrong barycentric coordinates (u + v > 1, which is invalid), and hence renders an incorrect 3D position for the affected pixel. The wrong barycentric coordinates appear because line 182 (const float inv_den = 1.0f / den;) divides by zero.
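A guard before that division would turn such grazing hits into "no hit" instead of garbage barycentrics. Here is a sketch of what I mean, reusing the sub/dot/cross helpers from the snippet above (the epsilon value is a tuning assumption of mine, not something taken from the Cycles sources, and a real fix would of course live in the C++):

```python
# Guarded version of the illustrative test above (helpers sub/dot/cross
# as defined earlier). Near-parallel rays are rejected before the division,
# and out-of-range barycentrics are rejected as misses.
def guarded_intersect(orig, direction, v0, v1, v2, eps=1e-9):
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, e2)
    den = dot(e1, pvec)
    if abs(den) < eps:            # grazing/degenerate: report "no hit"
        return None
    inv_den = 1.0 / den
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv_den
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(direction, qvec) * inv_den
    if v < 0.0 or u + v > 1.0:    # enforce valid barycentric coordinates
        return None
    t = dot(e2, qvec) * inv_den
    return t, u, v                # distance along the ray plus (u, v)
```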
