Implementing Moebius-style rendering in Unity URP Part I


What is Moebius-style?

Moebius is one of the most important comic artists in history, known for works with a strong personal style: dense linework and simple, flat colors. In this blog post, I will attempt to approximate this style in Unity.

Outlines

The first thing we need to handle, essential for almost every 3D-to-2D rendering style (and even cartoon rendering), is the outlines. In past projects, I preferred procedural geometry silhouetting based on smoothed normals (that is, expanding the model along the normal direction and culling front faces), since it allows freely choosing which objects to outline. This time, however, we will use screen-space outlining as a post-process; after all, outlines are everywhere in the Moebius style. Detecting color changes in screen space is quite simple: we access the render texture of the current frame and, for each pixel, compute a special convolution called Sobel edge detection along both the u and v directions.
$$
G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * \mathbf{A}
\qquad
G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * \mathbf{A}
$$
This formula shows the convolution part of the edge detection; $\mathbf{A}$ here refers to the $3 \times 3$ block of greyscales of the pixel and its 8 neighbours, where the greyscale is:
$$\text{grey} = 0.2126\,R + 0.7152\,G + 0.0722\,B$$
After this, we are able to calculate the gradient magnitude $G$:
$$G = \sqrt{G_x^2 + G_y^2}$$
And then, we can simply compare the gradient against an adjustable threshold to tell whether this pixel is an edge. (If you want a more vivid and detailed understanding of convolution, I recommend this video from @3Blue1Brown.) However, not all edges with large color differences are boundaries between objects, and not all object boundaries show a significant color difference. Therefore, we need to feed the algorithm "another type of color".
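To make the formulas concrete, here is a minimal HLSL sketch of the whole computation (the helper names are mine; the actual post-processing shader appears later in this post):

// Greyscale using the luma weights from the formula above
float greyScale(float3 color)
{
    return 0.2126 * color.r + 0.7152 * color.g + 0.0722 * color.b;
}

// a[0..8] holds the greyscales of the 3x3 neighbourhood, row by row
float sobelMagnitude(float a[9])
{
    const float kx[9] = { -1, 0, 1, -2, 0, 2, -1, 0, 1 };
    const float ky[9] = { -1, -2, -1, 0, 0, 0, 1, 2, 1 };
    float gx = 0;
    float gy = 0;
    for (int i = 0; i < 9; i++)
    {
        gx += a[i] * kx[i];
        gy += a[i] * ky[i];
    }
    return sqrt(gx * gx + gy * gy); // the gradient G
}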

Normal and Depth Buffer Texture

It is not difficult to see that the areas with significant changes in normals or depth are exactly the edges we want to outline. So we should use the screen-space normal directions and depth, rather than the color render result of the current frame, as the input to the algorithm. Thanks to Unity URP's deferred rendering pipeline, we can easily obtain the normal buffer texture (along with the depth buffer) for the current frame. But considering extensibility, we can also insert a custom render pass that renders the screen-space normals and depth ourselves.
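(A side note before we build the custom passes: newer URP versions can produce these textures for us. A sketch of that alternative, assuming URP 12 or newer, where requesting the pass inputs triggers the DepthNormals prepass and makes the built-in _CameraNormalsTexture and _CameraDepthTexture globals available to shaders; _pass is the feature's pass instance:)

public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
{
    // Ask URP to generate its own depth and normals textures for this pass
    _pass.ConfigureInput(ScriptableRenderPassInput.Normal | ScriptableRenderPassInput.Depth);
    renderer.EnqueuePass(_pass);
}

In this post, though, we will render our own textures, since we will later pack extra information into them.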
First of all, let's create a blank render feature with only a blank pass inside:

public class OutlineInfoPassFeature : ScriptableRendererFeature
{
    private class OutlineInfoPass : ScriptableRenderPass
    {

    }

    public override void Create()
    {

    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {

    }
}

Next, let’s create a shader and bind it to the objects in the scene. In the shader, we add a new pass that returns the normal direction as the color:

Pass
{
    Tags
    {
        "LightMode" = "NormalOnly"
        "RenderType" = "Opaque"
    }
    ZWrite On
    ZTest LEqual
    ZClip On

    HLSLPROGRAM

    #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
    #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Lighting.hlsl"
    // Packing.hlsl provides PackNormalMaxComponent
    #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Packing.hlsl"
    #pragma vertex vert
    #pragma fragment frag

    struct vertIn
    {
        float4 positionOS : POSITION;
        float3 normal : NORMAL;
    };

    struct vertOut
    {
        float4 positionCS : SV_POSITION;
        float3 normal : NORMAL;
    };

    vertOut vert(vertIn input)
    {
        vertOut output;
        output.positionCS = TransformObjectToHClip(input.positionOS.xyz);
        // Remap the normal into [0, 1] so it can be stored as a color
        output.normal = PackNormalMaxComponent(input.normal);
        return output;
    }

    float4 frag(vertOut input) : SV_Target
    {
        return float4(input.normal, 1);
    }

    ENDHLSL
}

Also, we need another pass to return depth. Normal and depth info could actually be returned in a single pass through different channels of one RGBA color.
However, since we will pass additional information through these passes later, packing everything into a single RGBA color would reduce precision. Therefore, we will still use two passes.
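(If you ever do want a single pass, a common compromise is to octahedron-encode the normal into two channels and keep the remaining channels for depth and extra data. A sketch, assuming the PackNormalOctQuadEncode helper from the core Packing.hlsl:)

#include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Packing.hlsl"

float4 frag(vertOut input) : SV_Target
{
    // Octahedron encoding: 3 components -> 2, remapped from [-1,1] to [0,1]
    float2 octNormal = PackNormalOctQuadEncode(normalize(input.normal)) * 0.5 + 0.5;
    float depth = Linear01Depth(input.positionCS.z, _ZBufferParams);
    return float4(octNormal, depth, 1); // alpha still free for one more value
}

Back to the two-pass approach: here is the depth pass.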

Pass
{
    Tags
    {
        "LightMode" = "OutlineInfo"
        "RenderType" = "Opaque"
    }
    ZWrite On
    ZTest LEqual
    ZClip On

    HLSLPROGRAM

    #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
    #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Lighting.hlsl"
    #pragma vertex vert
    #pragma fragment frag

    struct vertIn
    {
        float4 positionOS : POSITION;
    };

    struct vertOut
    {
        float4 positionCS : SV_POSITION;
    };

    vertOut vert(vertIn input)
    {
        vertOut output;
        output.positionCS = TransformObjectToHClip(input.positionOS.xyz);
        return output;
    }

    float4 frag(vertOut input) : SV_Target
    {
        float depth = Linear01Depth(input.positionCS.z, _ZBufferParams);
        return float4(depth, 0, 0, 1);
    }

    ENDHLSL
}

Don't forget to set the 'LightMode' tag so that we can render this pass easily later!

Then, let's update the OutlineInfoPass, using a command buffer to render these two passes for all objects in the scene into render textures at a specific point in the pipeline.

public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
{
    var camera = renderingData.cameraData.camera;
    if (!camera.TryGetCullingParameters(out var cullingParameters))
        return;

    var normalCmd = CommandBufferPool.Get("NormalOnlyPass");
    var normalTarget = Shader.PropertyToID("NormalTarget");

    normalCmd.GetTemporaryRT(normalTarget, camera.pixelWidth, camera.pixelHeight, 16, FilterMode.Bilinear, RenderTextureFormat.ARGBFloat);
    normalCmd.SetRenderTarget(normalTarget);
    normalCmd.ClearRenderTarget(true, true, Color.clear);
    // Expose the RT globally so the outline shader can sample it later
    normalCmd.SetGlobalTexture("_NormalOnly", normalTarget);
    context.ExecuteCommandBuffer(normalCmd);

    var cullingResult = context.Cull(ref cullingParameters);
    // Draw only the "NormalOnly" shader pass of every opaque renderer
    var drawSettings = CreateDrawingSettings(new ShaderTagId("NormalOnly"), ref renderingData, SortingCriteria.CommonOpaque);
    var filterSettings = new FilteringSettings(RenderQueueRange.opaque);
    context.DrawRenderers(cullingResult, ref drawSettings, ref filterSettings);

    var infoCmd = CommandBufferPool.Get("OutlineInfoPass");
    var depthTarget = Shader.PropertyToID("DepthTarget");

    infoCmd.GetTemporaryRT(depthTarget, camera.pixelWidth, camera.pixelHeight, 16, FilterMode.Bilinear, RenderTextureFormat.ARGBFloat);
    infoCmd.SetRenderTarget(depthTarget);
    infoCmd.ClearRenderTarget(true, true, Color.clear);
    // Expose the RT globally so the outline shader can sample it later
    infoCmd.SetGlobalTexture("_OutlineInfo", depthTarget);
    context.ExecuteCommandBuffer(infoCmd);

    // Draw only the "OutlineInfo" (depth & attenuation) shader pass
    drawSettings = CreateDrawingSettings(new ShaderTagId("OutlineInfo"), ref renderingData, SortingCriteria.CommonOpaque);
    context.DrawRenderers(cullingResult, ref drawSettings, ref filterSettings);

    // The temporary RTs are released automatically at the end of the camera render,
    // so they stay valid for the outline pass that runs later this frame
    normalCmd.Clear();
    infoCmd.Clear();

    CommandBufferPool.Release(normalCmd);
    CommandBufferPool.Release(infoCmd);
}

After this, we create an instance of OutlineInfoPass in the OutlineInfoPassFeature and add it to the render queue.

public class OutlineInfoPassFeature : ScriptableRendererFeature
{
    private ScriptableRenderPass _normalOnlyPass;

    private class OutlineInfoPass : ScriptableRenderPass
    {
        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            ... // The above code
        }
    }

    public override void Create()
    {
        _normalOnlyPass = new OutlineInfoPass();
        _normalOnlyPass.renderPassEvent = RenderPassEvent.BeforeRenderingOpaques;
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        if (_normalOnlyPass is null)
            return;
        renderer.EnqueuePass(_normalOnlyPass);
    }
}

Note that this feature must run before the actual outline feature, since its render textures need to be available to the outline pass. That is why I set renderPassEvent to RenderPassEvent.BeforeRenderingOpaques. You may change this setting to suit your project's needs, as long as it happens before the outlining.

After adding 'OutlineInfoPassFeature' to the renderer features in your renderer data and binding the material to some objects in the scene, we can open the Frame Debugger to check whether these command buffers work.
NormalDebugger
DepthDebugger
Great! Everything works perfectly! (During debugging, I multiplied the depth value by 100, i.e. return float4(100 * depth, 0, 0, 1), to make the result more noticeable. If your depth texture looks purely black, the depth values are probably just too small to see.)

Outlining using Normals and Depth

At this point, the setup for the outline is complete, and we can finally create the render feature for the outline itself. In this pass, we need to fetch three textures: the two we rendered earlier, and the current frame's render target (used to retrieve the original color at a given UV). We will use a shader to perform the necessary calculations and then blit the result back to the screen.

public class OutlinePassFeature : ScriptableRendererFeature
{
    private OutlinePass _edgeDetectPass;

    private class OutlinePass : ScriptableRenderPass
    {
        private readonly Material _material = new(Shader.Find("Hidden/MoebiusPP"));
        private RTHandle _source;
        private readonly int _target = Shader.PropertyToID("_EdgeOutline");

        public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
        {
            _source = renderingData.cameraData.renderer.cameraColorTargetHandle;
        }

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            var camera = renderingData.cameraData.camera;
            var cmd = CommandBufferPool.Get("Edge Outline");
            cmd.SetGlobalTexture("_MainTex", _source);
            cmd.GetTemporaryRT(_target, camera.pixelWidth, camera.pixelHeight, 0, FilterMode.Bilinear, RenderTextureFormat.ARGBFloat);
            // Copy the camera color, then blit it back through the outline material
            cmd.Blit(_source, _target);
            cmd.Blit(_target, _source, _material);
            cmd.ReleaseTemporaryRT(_target);
            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }
    }

    public override void Create()
    {
        _edgeDetectPass = new OutlinePass();
        _edgeDetectPass.renderPassEvent = RenderPassEvent.BeforeRenderingPostProcessing;
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        renderer.EnqueuePass(_edgeDetectPass);
    }
}

I set renderPassEvent to BeforeRenderingPostProcessing here because I want other post-processing effects to still apply on top of the outline.

Next, let's complete the shader "Hidden/MoebiusPP", which contains the crucial outline calculation: the Sobel edge detection we mentioned above.

First, let's compute, in the vertex shader, the UV coordinates of the current pixel and its 8 surrounding neighbours, and pass them to the fragment shader for sampling:

struct vertIn
{
    float4 positionOS : POSITION;
    float2 uv : TEXCOORD0;
};

struct vertOut
{
    float4 positionCS : SV_POSITION;
    float2 uvs[9] : TEXCOORD1;
};

vertOut vert(vertIn input)
{
    vertOut output;
    output.positionCS = TransformObjectToHClip(input.positionOS.xyz);
    float2 uv = input.uv;

    float uStep = 1.0 / _Width;
    float vStep = 1.0 / _Height;

    output.uvs[0] = uv + float2(-uStep, -vStep) * _SampleScale; // Top left
    output.uvs[1] = uv + float2(0, -vStep) * _SampleScale;      // Top
    output.uvs[2] = uv + float2(uStep, -vStep) * _SampleScale;  // Top right
    output.uvs[3] = uv + float2(-uStep, 0) * _SampleScale;      // Left
    output.uvs[4] = uv;                                         // Center
    output.uvs[5] = uv + float2(uStep, 0) * _SampleScale;       // Right
    output.uvs[6] = uv + float2(-uStep, vStep) * _SampleScale;  // Bottom left
    output.uvs[7] = uv + float2(0, vStep) * _SampleScale;       // Bottom
    output.uvs[8] = uv + float2(uStep, vStep) * _SampleScale;   // Bottom right

    return output;
}

In this step, we use several variables:
>_Width: the width of the current screen in pixels.
>_Height: the height of the current screen in pixels.
>_SampleScale: the distance between two sample points, which controls the width of the outlines.
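_Width and _Height are set from the render pass, as shown in the Execute function later; _SampleScale can be pushed the same way. A one-line sketch, where the SampleScale volume parameter is my own addition:

// e.g. inside OutlinePass.Execute, assuming a FloatParameter SampleScale on the volume:
_material.SetFloat(Shader.PropertyToID("_SampleScale"), volume.SampleScale.value);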

Then, in the fragment shader, we calculate the gradients of normal and depth and sum them together. If the sum is larger than a threshold, we treat the pixel as an edge and return the edge color; otherwise we return the original color at this UV.

// Declarations for the globals set from C# (SetGlobalTexture / Blit)
TEXTURE2D(_NormalOnly);  SAMPLER(sampler_NormalOnly);
TEXTURE2D(_OutlineInfo); SAMPLER(sampler_OutlineInfo);
TEXTURE2D(_MainTex);     SAMPLER(sampler_MainTex);

float greyScale(float3 color)
{
    return 0.2126 * color.r + 0.7152 * color.g + 0.0722 * color.b;
}

float4 frag(vertOut input) : SV_Target
{
    float color[9];
    for (int i = 0; i < 9; i++)
    {
        color[i] = greyScale(SAMPLE_TEXTURE2D(_NormalOnly, sampler_NormalOnly, input.uvs[i]).rgb);
    }

    const float sobelX[9] = {
        -1, 0, 1,
        -2, 0, 2,
        -1, 0, 1
    };

    const float sobelY[9] = {
        -1, -2, -1,
         0,  0,  0,
         1,  2,  1
    };

    float Gx = 0;
    float Gy = 0;

    // Only run the convolution when a quick cross-difference check passes
    if (abs(color[1] - color[7]) > _NormalThreshold || abs(color[3] - color[5]) > _NormalThreshold)
    {
        for (int i = 0; i < 9; i++)
        {
            Gx += color[i] * sobelX[i];
            Gy += color[i] * sobelY[i];
        }
    }

    float depth[9];
    for (int i = 0; i < 9; i++)
    {
        depth[i] = SAMPLE_TEXTURE2D(_OutlineInfo, sampler_OutlineInfo, input.uvs[i]).r;
    }

    if (abs(depth[1] - depth[7]) > _DepthThreshold || abs(depth[3] - depth[5]) > _DepthThreshold)
    {
        for (int i = 0; i < 9; i++)
        {
            Gx += greyScale(float3(depth[i], depth[i], depth[i])) * sobelX[i];
            Gy += greyScale(float3(depth[i], depth[i], depth[i])) * sobelY[i];
        }
    }

    if (sqrt(Gx * Gx + Gy * Gy) > _EdgeThreshold)
    {
        return _EdgeColor;
    }
    else
    {
        return SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, input.uvs[4]);
    }
}

In this step, we use several variables:
>_NormalThreshold: the normal gradient contributes only when it exceeds this threshold.
>_DepthThreshold: the depth gradient contributes only when it exceeds this threshold.
>_EdgeThreshold: if the combined gradient is larger than this value, the pixel is outlined.
>_EdgeColor: the color of the edge.

We will pass these parameters in the Execute function of the OutlinePass. But first, let’s create a simple volume to manage these parameters, making it easier to adjust them in the Inspector.

public class OutlineVolume : VolumeComponent, IPostProcessComponent
{
    public ColorParameter EdgeColor = new(Color.black);
    public FloatParameter EdgeThreshold = new(0.1f);
    public FloatParameter DepthThreshold = new(0.1f);
    public FloatParameter NormalThreshold = new(0.1f);

    public bool IsActive() => true;
    public bool IsTileCompatible() => false;
}

Now we can easily set the shader variables from the pass:

private static readonly int EdgeColor = Shader.PropertyToID("_EdgeColor");
private static readonly int EdgeThreshold = Shader.PropertyToID("_EdgeThreshold");
private static readonly int DepthThreshold = Shader.PropertyToID("_DepthThreshold");
private static readonly int NormalThreshold = Shader.PropertyToID("_NormalThreshold");
private static readonly int Width = Shader.PropertyToID("_Width");
private static readonly int Height = Shader.PropertyToID("_Height");

public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
{
    if (renderingData.cameraData.cameraType != CameraType.Game)
        return;

    if (_source is null || _material is null)
    {
        Debug.LogWarning("OutlinePass: Missing source or material");
        return;
    }

    var volume = VolumeManager.instance.stack.GetComponent<OutlineVolume>();
    if (volume is null)
    {
        Debug.LogWarning("OutlinePass: Missing volume");
        return;
    }

    _material.SetColor(EdgeColor, volume.EdgeColor.value);
    _material.SetFloat(EdgeThreshold, volume.EdgeThreshold.value);
    // Depth values are often very small, so dividing by 100 gives better precision when adjusting them
    _material.SetFloat(DepthThreshold, volume.DepthThreshold.value / 100.0f);
    _material.SetFloat(NormalThreshold, volume.NormalThreshold.value);

    var camera = renderingData.cameraData.camera;
    var width = camera.pixelWidth;
    var height = camera.pixelHeight;

    _material.SetInt(Width, width);
    _material.SetInt(Height, height);

    ... // Set the command buffer
}

With that, we've completed most of the logic for the outline effect. Let's add a simple single-color pass to the gameObject shader and see the outline in action!

HLSLPROGRAM

#pragma shader_feature_local_fragment _EMISSION
#pragma multi_compile _ _MAIN_LIGHT_SHADOWS _MAIN_LIGHT_SHADOWS_CASCADE _MAIN_LIGHT_SHADOWS_SCREEN
#pragma multi_compile _ _ADDITIONAL_LIGHTS_VERTEX _ADDITIONAL_LIGHTS
#pragma multi_compile_fragment _ _ADDITIONAL_LIGHT_SHADOWS
#define _ADDITIONAL_LIGHT_CALCULATE_SHADOWS
// Soft Shadows
#pragma multi_compile_fragment _ _SHADOWS_SOFT

#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Lighting.hlsl"
#pragma vertex vert
#pragma fragment frag

float4 _Color;

TEXTURE2D(_ShadowTex);
SAMPLER(sampler_ShadowTex);

struct vertIn
{
    float4 positionOS : POSITION;
    float3 normal : NORMAL;
    float2 uv : TEXCOORD0;
};

struct vertOut
{
    float4 positionCS : SV_POSITION;
    float3 normal : NORMAL;
    float2 uv : TEXCOORD0;
    float4 shadowCoord : TEXCOORD1;
};

vertOut vert(vertIn input)
{
    vertOut output;
    float3 worldPos = TransformObjectToWorld(input.positionOS.xyz);
    output.positionCS = TransformObjectToHClip(input.positionOS.xyz);
    output.normal = TransformObjectToWorldNormal(input.normal);
    output.uv = input.uv;
    output.shadowCoord = TransformWorldToShadowCoord(worldPos);
    return output;
}

float4 frag(vertOut input) : SV_Target
{
    return _Color;
}

ENDHLSL

Don't forget to add the feature to your renderer data and assign the material to your objects. The final result, with a gradient-colored skybox, looks like this:
OutlineFinal
Well done! It already looks quite close to the Moebius style!

Noise

So far, our lines depict the edges of objects accurately, perhaps too accurately: they look more like computer-generated edges than hand-drawn ones. A simple solution is to apply a slight noise to the UV coordinates before sampling, giving the lines a subtle wobble and making them appear more hand-drawn.

This is not difficult to implement. We just need to add a noise texture parameter to the post-processing shader, and then apply the perturbation to the UVs before sampling the normal and depth textures:

...
TEXTURE2D(_NoiseMap);
SAMPLER(sampler_NoiseMap);
float _NoiseStrength;
float _NoiseScale;
...

float4 frag(vertOut input) : SV_Target
{
    float color[9];
    for (int i = 0; i < 9; i++)
    {
        float2 distortion = SAMPLE_TEXTURE2D(_NoiseMap, sampler_NoiseMap, input.uvs[i] / _NoiseScale).rg * 2 - 1;
        distortion *= _NoiseStrength;
        color[i] = greyScale(SAMPLE_TEXTURE2D(_NormalOnly, sampler_NormalOnly, input.uvs[i] + distortion).rgb);
    }

    ...

    float depth[9];
    for (int i = 0; i < 9; i++)
    {
        float2 distortion = SAMPLE_TEXTURE2D(_NoiseMap, sampler_NoiseMap, input.uvs[i] / _NoiseScale).rg * 2 - 1;
        distortion *= _NoiseStrength;
        depth[i] = SAMPLE_TEXTURE2D(_OutlineInfo, sampler_OutlineInfo, input.uvs[i] + distortion).r;
    }

    ...
}

Two variables control the noise: _NoiseScale and _NoiseStrength. Together with the texture, we can create parameters for them in the volume and set them through the render pass. Since the code here repeats earlier work, I won't spell it all out; for completeness, though, here is a quick sketch of what it would look like (the parameter names match the shader, the rest of the naming is mine):
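// Added to OutlineVolume:
public TextureParameter NoiseMap = new(null);
public FloatParameter NoiseScale = new(1.0f);
public FloatParameter NoiseStrength = new(0.005f);

// Added to OutlinePass.Execute:
if (volume.NoiseMap.value != null)
    _material.SetTexture(Shader.PropertyToID("_NoiseMap"), volume.NoiseMap.value);
_material.SetFloat(Shader.PropertyToID("_NoiseScale"), volume.NoiseScale.value);
_material.SetFloat(Shader.PropertyToID("_NoiseStrength"), volume.NoiseStrength.value);

With that wired up, let's check the result: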
NoiseFinal
Much better now.

Shadows

In the Moebius style, shadows are often represented through lines as well: thin, dense, grid-like lines. Thanks to the video from @UselessGameDev, we can implement a similar kind of shadow in a neat and elegant way. But first, we need to prepare a shadow texture like this:
ShadowTex
As you can see, this texture provides three different line directions in the R, G, and B channels, effectively giving us three different textures at the same time. (You can create a texture like this simply with Photoshop, or generate it procedurally; see the sketch after the list below.) We then gradually render lines onto the object based on light attenuation, transitioning from no lines at all to all three channels of lines. There are two ways to achieve this effect: rendering the shadows on the object itself, or rendering them in screen space. Both have their pros and cons:

  1. The first approach, rendering shadows on the object, allows the shadow lines to follow the surface orientation of the object. However, it can produce strange seams between objects, and the line density may vary across objects.
  2. The second approach, rendering shadows in screen space, keeps the shadow line density consistent across the screen and makes seams easier to handle. However, all shadow lines share the same direction across all objects.
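Here is the procedural sketch promised above: a rough C# utility that generates a comparable three-channel line texture (the line directions and spacing are arbitrary choices of mine, not from the original texture):

using UnityEngine;

public static class ShadowTexGenerator
{
    // Generates a tileable texture with diagonal lines in R and
    // counter-diagonal lines in G (B left empty, as in this post)
    public static Texture2D Generate(int size = 512, int spacing = 8)
    {
        var tex = new Texture2D(size, size, TextureFormat.RGB24, false);
        for (int y = 0; y < size; y++)
        for (int x = 0; x < size; x++)
        {
            float r = (x + y) % spacing == 0 ? 1f : 0f;
            float g = (x - y + size) % spacing == 0 ? 1f : 0f;
            tex.SetPixel(x, y, new Color(r, g, 0f));
        }
        tex.Apply();
        tex.wrapMode = TextureWrapMode.Repeat;
        return tex;
    }
}

As long as spacing divides size evenly, the result tiles seamlessly.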

For this case, I have chosen the second approach, rendering shadows in screen space, which requires passing the light attenuation to the post-processing shader. Luckily, this is fairly easy, since we already pass depth information to the post-processing shader through the R channel of the OutlineInfo pass. All we need to do now is calculate the light attenuation and store it in the G channel.

...
#pragma multi_compile _ _MAIN_LIGHT_SHADOWS _MAIN_LIGHT_SHADOWS_CASCADE _MAIN_LIGHT_SHADOWS_SCREEN
#pragma multi_compile _ _ADDITIONAL_LIGHTS_VERTEX _ADDITIONAL_LIGHTS
#pragma multi_compile_fragment _ _ADDITIONAL_LIGHT_SHADOWS
#define _ADDITIONAL_LIGHT_CALCULATE_SHADOWS
#pragma multi_compile_fragment _ _LIGHT_LAYERS
#pragma multi_compile_fragment _ _LIGHT_COOKIES
// Soft Shadows
#pragma multi_compile_fragment _ _SHADOWS_SOFT
// Shadowmask
#pragma multi_compile _ SHADOWS_SHADOWMASK
#pragma multi_compile _ DIRLIGHTMAP_COMBINED
...
struct vertIn
{
    float4 positionOS : POSITION;
    float3 normal : NORMAL;
};

struct vertOut
{
    float4 positionCS : SV_POSITION;
    float3 positionWS : TEXCOORD0;
    float4 shadowCoord : TEXCOORD1;
    float3 normal : TEXCOORD2;
};
...
float4 frag(vertOut input) : SV_Target
{
    Light mainLight = GetMainLight(input.shadowCoord);
    // Half-Lambert remap for a softer shadow falloff
    float NdotL = dot(mainLight.direction, input.normal) * 0.5 + 0.5;
    float attenuation = mainLight.distanceAttenuation * mainLight.shadowAttenuation * NdotL;
    float luminance = 1 - attenuation; // Clear color is black, so 0 means completely lit
    // Recomputed without the remap; this highlight mask is stored in the B channel later (see the Highlight section)
    NdotL = dot(mainLight.direction, input.normal);
    attenuation = mainLight.distanceAttenuation * mainLight.shadowAttenuation * NdotL;
    float highlight = (attenuation > _HighlightThreshold ? 0.1 : 0);
    float depth = Linear01Depth(input.positionCS.z, _ZBufferParams);
    return float4(depth, luminance, 0, 1); // An empty (cleared) pixel reads as fully lit: y = 0
}

And now we can easily use the attenuation in the post-processing shader! My approach: once the stored luminance (1 - attenuation) exceeds a threshold, the R-channel lines are rendered; when the pixel is darker still, the G-channel lines are added as well. My shadow texture has no lines in the B channel, but you can author your own texture for a different visual effect.

TEXTURE2D(_ShadowTex);
SAMPLER(sampler_ShadowTex);
float _ShadowScale;
float _ShadowStrength;
float _ShadowResolution;

float4 frag(vertOut input) : SV_Target
{
    ...
    // The screen is not square, so remap the UV to keep the shadow texture's texels square
    float2 uv = float2(input.uvs[4].x * _Width / _ShadowResolution, input.uvs[4].y * _Height / _ShadowResolution);
    float4 shadowSample = SAMPLE_TEXTURE2D(_ShadowTex, sampler_ShadowTex, uv * _ShadowScale);
    float attenuation = SAMPLE_TEXTURE2D(_OutlineInfo, sampler_OutlineInfo, input.uvs[4]).g;
    float r = shadowSample.r * (attenuation > 0.5 ? 1 : 0);
    float g = shadowSample.g * (attenuation > 0.83 ? 1 : 0);
    float b = 0;
    float maxShadow = _ShadowStrength * max(max(r, g), b);

    if (sqrt(Gx * Gx + Gy * Gy) > _EdgeThreshold)
    {
        return _EdgeColor;
    }
    else
    {
        return SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, input.uvs[4]) * float4(1 - maxShadow, 1 - maxShadow, 1 - maxShadow, 1);
    }
}

Here I have hard-coded the thresholds (0.5 and 0.83), but they can also be adjusted through variables. Two variables are used in this part:
> _ShadowStrength: adjusts the shadow color (the higher the value, the darker the shadow).
> _ShadowScale: controls the interval between shadow lines.
These values need to be controlled via the volume and passed through the render pass together with _ShadowTex and _ShadowResolution, but I won't elaborate on that again. You can swap _ShadowTex for a different texture to get a different style of shadow. For completeness, here is a sketch of that plumbing (naming beyond the shader uniforms is mine):
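// Added to OutlineVolume:
public TextureParameter ShadowTex = new(null);
public FloatParameter ShadowScale = new(1.0f);
public ClampedFloatParameter ShadowStrength = new(0.5f, 0f, 1f);

// Added to OutlinePass.Execute:
if (volume.ShadowTex.value != null)
{
    _material.SetTexture(Shader.PropertyToID("_ShadowTex"), volume.ShadowTex.value);
    // Assuming a square shadow texture, pass its size for the aspect-ratio remap
    _material.SetFloat(Shader.PropertyToID("_ShadowResolution"), volume.ShadowTex.value.width);
}
_material.SetFloat(Shader.PropertyToID("_ShadowScale"), volume.ShadowScale.value);
_material.SetFloat(Shader.PropertyToID("_ShadowStrength"), volume.ShadowStrength.value);

Now let's embrace the (almost) physically correct line shadows: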
ShadowFinal
Hmm, not bad, is it?

Highlight

In the Moebius style, the sense of volume of an object is conveyed not only through shadows but also through another very important element: highlights. These are often drawn as a bright color block combined with an outlined edge. Let's first create the bright color block.

The implementation of the bright color block is quite simple: similar to cartoon-style cel shading, we just replace areas with brightness greater than a certain threshold with another color:

Properties
{
    ...
    _HighlightThreshold ("Highlight Threshold", Float) = 0.98
}

...

Pass
{
    HLSLPROGRAM

    #pragma shader_feature_local_fragment _EMISSION
    #pragma multi_compile _ _MAIN_LIGHT_SHADOWS _MAIN_LIGHT_SHADOWS_CASCADE _MAIN_LIGHT_SHADOWS_SCREEN
    #pragma multi_compile _ _ADDITIONAL_LIGHTS_VERTEX _ADDITIONAL_LIGHTS
    #pragma multi_compile_fragment _ _ADDITIONAL_LIGHT_SHADOWS
    #define _ADDITIONAL_LIGHT_CALCULATE_SHADOWS
    // Soft Shadows
    #pragma multi_compile_fragment _ _SHADOWS_SOFT

    #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
    #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Lighting.hlsl"
    #pragma vertex vert
    #pragma fragment frag

    float4 _Color;
    float4 _HighlightColor;
    float _HighlightThreshold;

    struct vertIn
    {
        float4 positionOS : POSITION;
        float3 normal : NORMAL;
    };

    struct vertOut
    {
        float4 positionCS : SV_POSITION;
        float3 normal : NORMAL;
        float4 shadowCoord : TEXCOORD1;
    };

    vertOut vert(vertIn input)
    {
        vertOut output;
        float3 worldPos = TransformObjectToWorld(input.positionOS.xyz);
        output.positionCS = TransformObjectToHClip(input.positionOS.xyz);
        output.normal = TransformObjectToWorldNormal(input.normal);
        output.shadowCoord = TransformWorldToShadowCoord(worldPos);
        return output;
    }

    float4 frag(vertOut input) : SV_Target
    {
        Light mainLight = GetMainLight(input.shadowCoord);
        float NdotL = dot(mainLight.direction, input.normal);
        float attenuation = mainLight.distanceAttenuation * mainLight.shadowAttenuation * saturate(NdotL);
        if (attenuation > _HighlightThreshold)
        {
            return _HighlightColor;
        }
        return _Color;
    }

    ENDHLSL
}

Then, let's return to the OutlineInfo pass and use the B channel to store the highlight information. Areas that require highlights store a non-zero value (0.1 in the code below), while areas that don't are set to 0. This lets the highlight information contribute to the gradient during the convolution.

Pass
{
    Tags
    {
        "LightMode" = "OutlineInfo"
        "RenderType" = "Opaque"
    }
    ...

    float _HighlightThreshold;

    ...

    float4 frag(vertOut input) : SV_Target
    {
        ...
        float highlight = (attenuation > _HighlightThreshold ? 0.1 : 0);
        return float4(depth, luminance, highlight, 1);
    }

    ENDHLSL
}

Finally, we can run the Sobel edge detection on this channel as well to outline the edge of the highlight:

float4 frag(vertOut input) : SV_Target
{
    ...
    float highLight[9];
    for (int i = 0; i < 9; i++)
    {
        float2 distortion = SAMPLE_TEXTURE2D(_NoiseMap, sampler_NoiseMap, input.uvs[i] / _NoiseScale).rg * 2 - 1;
        distortion *= _DistortionStrength;
        highLight[i] = SAMPLE_TEXTURE2D(_OutlineInfo, sampler_OutlineInfo, input.uvs[i] + distortion).b;
    }

    for (int i = 0; i < 9; i++)
    {
        Gx += highLight[i] * sobelX[i];
        Gy += highLight[i] * sobelY[i];
    }
    ...
}

Now, let’s see the final result:
HighlightFinal
The render result is already quite close to what I want, although it still lacks many details. The Moebius style often features delicate detail work, and in Part II I will attempt to recreate those details and try to make the final render feel more vintage.

  • Title: Implementing Moebius-style rendering in Unity URP Part I
  • Author: DM
  • Created at: 2024-11-03 00:30:18
  • Updated at: 2024-11-06 22:55:18
  • Link: http://dmtyler.github.io/2024/11/03/Moebius-1/
  • License: This work is licensed under CC BY-NC-SA 4.0.