Triplanar Mapping

Summary 🔗︎

I made a tutorial about planar mapping previously. The biggest disadvantage of that technique is that it only works from one direction, so it breaks when the surface we’re drawing isn’t oriented towards the direction we’re mapping from (up, in the previous example). A way to improve the automatic UV generation is to do the mapping three times from different directions and blend between the three results.

This tutorial will build upon the planar mapping shader, which is an unlit shader, but you can use the technique with many shaders, including surface shaders.

Calculate Projection Planes 🔗︎

To generate three different sets of UV coordinates, we start by changing the way we get the UV coordinates. Instead of returning transformed UV coordinates from the vertex shader, we return the world position and then generate the UV coordinates in the fragment shader.
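
Sketched in code, the change can look roughly like this (the struct and member names, like worldPos, are placeholders of mine; the real shader is linked at the end of the tutorial):

[enlighter lang="csharp"]

struct appdata {
	float4 vertex : POSITION;
};

struct v2f {
	float4 position : SV_POSITION;
	// world space position, passed through so the fragment shader can build UVs from it
	float4 worldPos : TEXCOORD0;
};

v2f vert(appdata v) {
	v2f o;
	// calculate the screen position as usual
	o.position = UnityObjectToClipPos(v.vertex);
	// transform the vertex position from object space to world space
	o.worldPos = mul(unity_ObjectToWorld, v.vertex);
	return o;
}

[/enlighter]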

We use the TRANSFORM_TEX macro to apply the tiling and offset of the texture like we’re used to. In my shader I use the xy and zy planes so the world up axis is mapped to the y axis of the texture for both of them, not rotating them in relation to each other, but you can play around with the way you use those values (the way the top UVs are mapped is arbitrary).

After obtaining the correct coordinates, we read the texture at those coordinates, add the three colors and divide the result by 3 (adding three colors without dividing by the number of samples would just make everything very bright).
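
Put together, the fragment shader might look something like this (assuming the texture property is called _MainTex and its _ST variable is declared so TRANSFORM_TEX works):

[enlighter lang="csharp"]

fixed4 frag(v2f i) : SV_TARGET {
	// generate one set of UV coordinates per projection plane,
	// applying the tiling and offset of the texture
	float2 uv_front = TRANSFORM_TEX(i.worldPos.xy, _MainTex);
	float2 uv_side = TRANSFORM_TEX(i.worldPos.zy, _MainTex);
	float2 uv_top = TRANSFORM_TEX(i.worldPos.xz, _MainTex);

	// read the texture once per projection
	fixed4 col_front = tex2D(_MainTex, uv_front);
	fixed4 col_side = tex2D(_MainTex, uv_side);
	fixed4 col_top = tex2D(_MainTex, uv_top);

	// average the three samples so the result doesn't get too bright
	return (col_front + col_side + col_top) / 3;
}

[/enlighter]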

Normals 🔗︎

Having done that, our material looks really weird. That’s because we display the average of the three projections everywhere, even where a projection is stretched. To fix that we have to show different projections based on the direction the surface is facing. The facing direction of the surface is also called the “normal”, and it’s saved in the mesh files, just like the positions of the vertices.

So what we do is add the normals to our input struct and convert them to world space normals in the vertex shader (because our projection happens in world space; if we used object space projection we’d keep the normals in object space).

For the conversion of the normal from object space to world space, we have to multiply it with the inverse transposed matrix. It’s not important to understand exactly how that works (matrix multiplication is complicated), but I’d like to explain why we can’t just multiply it with the object to world matrix like we do with the position. The normals are orthogonal to the surface, so when we scale the surface only along the X axis and not the Y axis, the surface gets steeper, but when we do the same to our normal, it also points more upwards than before and isn’t orthogonal to the surface anymore. Instead we have to make the normal flatter the steeper the surface gets, and the inverse transposed matrix does that for us. We also convert the matrix to a 3x3 matrix, discarding the parts that would move the normals (we don’t want to move the normals because they represent directions, not positions).

The way we get the inverse transposed object to world matrix is cheap: instead of building it explicitly, we multiply the normal with the world to object matrix in swapped order (vector times matrix, where previously we multiplied matrix times vector; the order is important here). Multiplying the vector from the left implicitly uses the transposed matrix, and the world to object matrix is the inverse of the object to world matrix, so together that gives us exactly the inverse transpose.
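
In code that’s a single line in the vertex shader (plus a normal member in both structs); roughly:

[enlighter lang="csharp"]

// appdata and v2f both get a new member: float3 normal : NORMAL;

// multiplying vector * matrix instead of matrix * vector implicitly
// transposes the matrix, so this applies the inverse transposed
// object to world matrix to the normal
o.normal = mul(v.normal, (float3x3)unity_WorldToObject);
o.normal = normalize(o.normal);

[/enlighter]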

To check our normals, we can now just return them in our fragment shader and see the different axes as colors.
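
For example (a temporary debug output, not part of the final shader):

[enlighter lang="csharp"]

fixed4 frag(v2f i) : SV_TARGET {
	// debug view: show the world space normal as a color
	return fixed4(i.normal, 1);
}

[/enlighter]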

To convert the normals to weights for the different projections, we start by taking the absolute value of the normal. That’s because the normals point in negative as well as positive directions. That’s also why in our debug view the “backside” of our object, where the axes point in the negative direction, is black.

After that we can multiply the different projections with the weights, making each one only appear on the side we’re projecting it onto, not the others where the texture looks stretched. We multiply the projection from the xy plane by the z weight because the texture doesn’t stretch along that axis, and we do a similar thing for the other axes.

We also remove the division by 3 because the weights already scale the samples down.

That’s way better already, but now we have the same problem that made us add the division by 3 earlier: the components of the normal can add up to more than 1, making the texture appear brighter than it should be. We can fix that by dividing the normal by the sum of its components, forcing the weights to add up to 1.
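
The whole weight calculation and blending might then look roughly like this (continuing with the placeholder names from above):

[enlighter lang="csharp"]

// take the absolute value so sides facing in negative directions also get positive weights
float3 weights = abs(i.normal);
// divide by the sum of the components so the weights add up to 1
weights = weights / (weights.x + weights.y + weights.z);

// weight each projection by the axis the texture doesn't stretch along
col_front *= weights.z;
col_side *= weights.x;
col_top *= weights.y;

// no division by 3 anymore, the weights already scale the samples down
fixed4 col = col_front + col_side + col_top;
return col;

[/enlighter]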

And with that we’re back to the expected brightness.

The last thing we add to this shader is the possibility to make the different directions more distinct, because right now the area where they blend into each other is still pretty big, making the colors look messy. To achieve that, we add a new property for the sharpness of the blending. Then, before making the weights sum up to one, we raise the weights to the power of the sharpness. Because we only operate in ranges from 0 to 1, that will lower the low values a lot if the sharpness is high, but won’t change the high values by as much. We make the property of the type Range to get a nice slider in the UI of the shader.
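
With a sharpness property (the name _Sharpness and its range are my assumptions), the weight calculation could be extended like this:

[enlighter lang="csharp"]

// in the Properties block:
// _Sharpness ("Blend Sharpness", Range(1, 64)) = 1

float3 weights = abs(i.normal);
// raising the weights to a power barely changes values close to 1,
// but pushes small values towards 0, sharpening the transitions
weights = pow(weights, _Sharpness);
weights = weights / (weights.x + weights.y + weights.z);

[/enlighter]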

Triplanar mapping still isn’t perfect: it needs tiling textures to work, it breaks on surfaces that are at exactly 45° to the projection axes, and it’s obviously more expensive than a single texture sample (though not by that much).

You can use it in surface shaders for albedo, specular, etc. maps, but it doesn’t work perfectly for normal maps without some changes I won’t go into here.

I hope this tutorial helped you understand how to do triplanar texture mapping in Unity.

You can also find the source code for this shader here: https://github.com/ronja-tutorials/ShaderTutorials/blob/master/Assets/010_Triplanar_Mapping/triplanar_mapping.shader

I hope you enjoyed my tutorial ✨. If you want to support me further, feel free to follow me on Twitter, throw me a one-time donation via Ko-fi, or support me on Patreon (I try to post updates there too, but I fail most of the time, bear with me 💖).

Adding instanced indirect support to shaders

Vegetation Studio can use instanced indirect rendering for vegetation. This allows the vegetation instances to be rendered directly from a compute buffer on the GPU, allowing for larger batches than the 1023-instance maximum of normal instanced rendering. In addition, there is a final compute shader pass before rendering that does GPU frustum culling and LOD selection.

You can modify most shaders to support Vegetation Studio’s instanced indirect implementation with a cginc include file and 3 lines of code in the shader. Copy the VS_Indirect.cginc file to the same folder as the shader and add the following lines to the shader. This sets up the shader to call the setup function in the include file, where each instance loads its instance info.

Adding this to a shader does not break normal use of the shader. The setup function is only called when used with the InstancedIndirect API.

The VS_indirect.cginc file is located in the Shader subfolder of VegetationStudio. If someone wants to include this file with an asset store asset, they are free to do so.

Currently, instanced indirect rendering only works on single-submesh models. I am working on support for multiple submeshes.

[enlighter lang="csharp"]

#pragma instancing_options procedural:setup
#pragma multi_compile GPU_FRUSTUM_ON __

#include "VS_indirect.cginc"

[/enlighter]

If the shader is a grass or plant shader, you can change the first #pragma to this. That will add a function that scales the grass in and out at the vegetation distance for a smoother transition.

[enlighter lang="csharp"]

#pragma instancing_options procedural:setupScale

[/enlighter]

If you are updating SpeedTree shaders or other shaders that already have the instancing_options pragma, add procedural:setup to the existing #pragma:

[enlighter lang="csharp"]

#pragma instancing_options assumeuniformscaling lodfade maxcount:50 procedural:setup

[/enlighter]

Here is the content of the VS_Indirect.cginc file.

[enlighter lang="csharp"]
#ifdef UNITY_PROCEDURAL_INSTANCING_ENABLED

struct IndirectShaderData
{
float4x4 PositionMatrix;
float4x4 InversePositionMatrix;
float4 ControlData;
};

#if defined(SHADER_API_GLCORE) || defined(SHADER_API_D3D11) || defined(SHADER_API_GLES3) || defined(SHADER_API_METAL) || defined(SHADER_API_VULKAN) || defined(SHADER_API_PS4) || defined(SHADER_API_XBOXONE)
StructuredBuffer<IndirectShaderData> IndirectShaderDataBuffer;
StructuredBuffer<IndirectShaderData> VisibleShaderDataBuffer;
#endif
#endif

void setupScale()
{
#ifdef UNITY_PROCEDURAL_INSTANCING_ENABLED
#ifdef GPU_FRUSTUM_ON
unity_ObjectToWorld = VisibleShaderDataBuffer[unity_InstanceID].PositionMatrix;
unity_WorldToObject = VisibleShaderDataBuffer[unity_InstanceID].InversePositionMatrix;
#else
unity_ObjectToWorld = IndirectShaderDataBuffer[unity_InstanceID].PositionMatrix;
unity_WorldToObject = IndirectShaderDataBuffer[unity_InstanceID].InversePositionMatrix;
#endif

#ifdef FAR_CULL_ON_PROCEDURAL_INSTANCING
#define transformPosition mul(unity_ObjectToWorld, float4(0,0,0,1)).xyz
#define distanceToCamera length(transformPosition - _WorldSpaceCameraPos.xyz)
// cull goes from 1 to 0 as the instance moves from _CullFarStart to _CullFarStart + _CullFarDistance
float cull = 1.0 - saturate((distanceToCamera - _CullFarStart) / _CullFarDistance);
unity_ObjectToWorld = mul(unity_ObjectToWorld, float4x4(cull, 0, 0, 0, 0, cull, 0, 0, 0, 0, cull, 0, 0, 0, 0, 1));
#undef transformPosition
#undef distanceToCamera
#endif
#endif
}

void setup()
{
#ifdef UNITY_PROCEDURAL_INSTANCING_ENABLED
#ifdef GPU_FRUSTUM_ON
unity_ObjectToWorld = VisibleShaderDataBuffer[unity_InstanceID].PositionMatrix;
unity_WorldToObject = VisibleShaderDataBuffer[unity_InstanceID].InversePositionMatrix;
#else
unity_ObjectToWorld = IndirectShaderDataBuffer[unity_InstanceID].PositionMatrix;
unity_WorldToObject = IndirectShaderDataBuffer[unity_InstanceID].InversePositionMatrix;
#endif
#endif
}
[/enlighter]

Amplify Shader Editor

If you are using Amplify Shader Editor to make the shader, you can add the same settings there. Copy the VS_indirect.cginc to the same folder as the shader and set the same include and pragmas in the shader editor.