Implementing Moebius-style rendering in Unity URP, Part I

Moebius is one of the most important comic artists in history, known for works with a strong personal style: dense linework and simple, flat colors. In this blog, I will attempt to approximate this style in Unity.
Outlines
The first thing we need to handle is the outlines, which are very important for almost every 3D-to-2D rendering style (and even cartoon rendering). In past projects, I preferred procedural geometry silhouetting based on smoothed normals (that is, expanding the model along the normal direction and culling the front faces), as it allows freely selecting which objects to outline. This time, however, we will use screen-space outlining as a post-processing effect; after all, outlines are everywhere in the Moebius style.

In screen space, detecting color changes is quite simple. We just need to access the rendering texture of the current frame and compute a special convolution called Sobel edge detection for each pixel, along both the u and v directions:

$$
G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * A
\qquad
G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * A
$$

Here $A$ refers to the greyscales of the 9 surrounding pixels. After this, we are able to calculate the gradient $G$:

$$
G = \sqrt{G_x^2 + G_y^2}
$$

Then we can simply compare the gradient against a variable threshold to tell whether this pixel is an edge. (If you want a more vivid and detailed understanding of convolution, I recommend this video from @3Blue1Brown.)

However, not all edges with large color differences represent the edges between objects, and not all object edges have a significant color difference. Therefore, we need to use “another type of color”.
Normal and Depth Buffer Texture
It is not difficult to see that the areas with significant changes in normals or depth are the edges we need to outline. So we should use the screen-space normal directions and depth, rather than the color rendering result of the current frame, as the input for the algorithm. Thanks to Unity URP’s deferred rendering pipeline, we can easily obtain the normal buffer texture (along with the depth buffer) for the current frame. But considering extensibility, we can also insert a custom render event to render the screen-space normals and depth of the current frame ourselves. First of all, let’s create a blank render feature with only a blank pass inside:
Pass
{
    Tags { "LightMode" = "NormalOnly" "RenderType" = "Opaque" }
    ZWrite On
    ZTest LEqual
    ZClip On

    HLSLPROGRAM
    #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
    #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Lighting.hlsl"
    #pragma vertex vert
    #pragma fragment frag
Also, we need another pass to return depth. Normal and depth info could actually be returned in a single pass through different channels of the RGBA color. However, since we will need to pass additional information through these passes in the future, combining everything into a single RGBA color would reduce precision. Therefore, we will still use two passes.
Pass
{
    Tags { "LightMode" = "OutlineInfo" "RenderType" = "Opaque" }
    ZWrite On
    ZTest LEqual
    ZClip On

    HLSLPROGRAM
    #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
    #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Lighting.hlsl"
    #pragma vertex vert
    #pragma fragment frag
Don’t forget to set the ‘LightMode’ tag so that we can render these passes easily later!
Then, let’s update the OutlineInfoPass, using a command buffer to render these two passes for all objects in the scene into a render texture at a specific point in the pipeline.
Note that this feature should run before the actual outline feature, since its render textures need to be passed to the outline feature. So I set renderPassEvent to RenderPassEvent.BeforeRenderingOpaques. You may change this setting to suit your project’s needs, as long as it stays before the outlining.
After adding the OutlineInfoPass to the renderer features in your renderer data and binding the material to some objects in your scene, we can open the Frame Debugger to check whether these command buffers work. Great! Everything works perfectly! (During debugging, I multiplied the depth value by 100, that is, return float4(100 * depth, 0, 0, 1), to make the results more noticeable. If your depth texture appears to be purely black, it is probably because the depth values are too small to see.)
Outlining using Normals and Depth
At this point, we have completed the setup for the outline, and we can finally create the render feature for the outline itself. In this pass, we need to fetch three textures: the two textures we rendered earlier, and the current frame’s rendering target (which is used to retrieve the original color at a given UV). We will use the shader to perform the necessary calculations and then blit the results back to the screen.
I set renderPassEvent to BeforeRenderingPostProcessing here because I want other post-processing effects to still be applied on top of the outline.
Next, let’s complete the shader “Hidden/MoebiusPP”, which contains the crucial calculation of the outline: the Sobel edge detection we mentioned above.
Firstly, let’s compute the UV coordinates of the current point and its 8 surrounding pixels in the vertex shader, and then pass this information to the fragment shader.
In this step, we use several variables:
> _Width: the width of the current screen.
> _Height: the height of the current screen.
> _SampleScale: the distance between two sample points; this variable controls the width of the outlines.
Then, in the fragment shader, we calculate the gradients of the normal and depth and sum them together. If the sum is larger than a threshold, we treat this pixel as an edge and return the edge color; otherwise, we return the original color at this UV.
In this step, we use several variables:
> _NormalThreshold: the normal gradient contributes to the total gradient only when it is larger than this threshold.
> _DepthThreshold: the depth gradient contributes to the total gradient only when it is larger than this threshold.
> _EdgeThreshold: if the total gradient is larger than this value, the pixel is outlined.
> _EdgeColor: the color of the edge.
A sketch of the full computation follows below.
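To make both steps concrete, here is a minimal sketch of the whole Sobel computation. _MainTex and _NormalTex are my assumed names for the frame color and the normal buffer; _OutlineInfo with depth in its R channel follows the passes we set up above. This is a sketch of the technique, not the project’s exact code:

    // (inside the HLSLPROGRAM of Hidden/MoebiusPP, after the usual URP includes)
    TEXTURE2D(_MainTex);     SAMPLER(sampler_MainTex);     // frame color (assumed name)
    TEXTURE2D(_NormalTex);   SAMPLER(sampler_NormalTex);   // normal buffer (assumed name)
    TEXTURE2D(_OutlineInfo); SAMPLER(sampler_OutlineInfo); // depth in R, as rendered above
    int _Width, _Height;
    float _SampleScale, _NormalThreshold, _DepthThreshold, _EdgeThreshold;
    float4 _EdgeColor;

    struct Attributes { float4 positionOS : POSITION; float2 uv : TEXCOORD0; };
    struct Varyings  { float4 positionCS : SV_POSITION; float2 uvs[9] : TEXCOORD0; };

    // Vertex: precompute the 9 sample UVs (center + 8 neighbours).
    Varyings vert(Attributes input)
    {
        Varyings output;
        output.positionCS = TransformObjectToHClip(input.positionOS.xyz);
        float2 texel = _SampleScale / float2(_Width, _Height);
        int i = 0;
        for (int y = -1; y <= 1; y++)
            for (int x = -1; x <= 1; x++)
                output.uvs[i++] = input.uv + float2(x, y) * texel;
        return output;
    }

    // The two Sobel kernels, flattened row by row.
    static const float kX[9] = { -1, 0, 1, -2, 0, 2, -1, 0, 1 };
    static const float kY[9] = { -1, -2, -1, 0, 0, 0, 1, 2, 1 };

    // Fragment: convolve normals and depth, sum the gradients, compare with the threshold.
    float4 frag(Varyings input) : SV_Target
    {
        float2 gN = 0, gD = 0;
        for (int i = 0; i < 9; i++)
        {
            float3 n = SAMPLE_TEXTURE2D(_NormalTex, sampler_NormalTex, input.uvs[i]).rgb;
            float  d = SAMPLE_TEXTURE2D(_OutlineInfo, sampler_OutlineInfo, input.uvs[i]).r;
            float  g = (n.x + n.y + n.z) / 3.0; // collapse the normal to a greyscale value
            gN += float2(kX[i], kY[i]) * g;
            gD += float2(kX[i], kY[i]) * d;
        }
        float gradient = 0;
        if (length(gN) > _NormalThreshold) gradient += length(gN);
        if (length(gD) > _DepthThreshold)  gradient += length(gD);

        float4 original = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, input.uvs[4]);
        return gradient > _EdgeThreshold ? _EdgeColor : original;
    }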
We will pass these parameters in the Execute function of the OutlinePass. But first, let’s create a simple volume to manage these parameters, making it easier to adjust them in the Inspector.
public class OutlineVolume : VolumeComponent, IPostProcessComponent
{
    public ColorParameter EdgeColor = new(Color.black);
    public FloatParameter EdgeThreshold = new(0.1f);
    public FloatParameter DepthThreshold = new(0.1f);
    public FloatParameter NormalThreshold = new(0.1f);

    public bool IsActive() => true;
    public bool IsTileCompatible() => false;
}
Now we can easily change the values of the shader variables:
public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
{
    if (renderingData.cameraData.cameraType != CameraType.Game)
        return;
    if (_source is null || _material is null)
    {
        Debug.LogWarning("OutlinePass: Missing source or material");
        return;
    }

    var volume = VolumeManager.instance.stack.GetComponent<OutlineVolume>();
    if (volume is null)
    {
        Debug.LogWarning("OutlinePass: Missing volume");
        return;
    }

    _material.SetColor(EdgeColor, volume.EdgeColor.value);
    _material.SetFloat(EdgeThreshold, volume.EdgeThreshold.value);
    // The depth values are often very small, so dividing by 100 gives better precision when adjusting them
    _material.SetFloat(DepthThreshold, volume.DepthThreshold.value / 100.0f);
    _material.SetFloat(NormalThreshold, volume.NormalThreshold.value);

    var camera = renderingData.cameraData.camera;
    var width = camera.pixelWidth;
    var height = camera.pixelHeight;
    _material.SetInt(Width, width);
    _material.SetInt(Height, height);

    ... // Set the command buffer
}
With that, we’ve completed most of the logic for the outline effect. Let’s add a simple one color pass to the gameObject shader and see the outline effect in action!
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl" #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Lighting.hlsl" #pragma vertex vert #pragma fragment frag
Don’t forget to add the feature to your pipeline data and set the materials for your objects. The final result, with a gradient-color skybox, looks like this: Well done! It already looks quite close to the Moebius style!
Noise
So far, our lines accurately depict the edges of objects, perhaps too accurately: they look more like computer-generated edges than hand-drawn ones. A simple solution is to apply a slight noise to the UV coordinates before sampling, giving the lines a subtle wobble and making them appear more hand-drawn.
This is not difficult to implement. We just need to add a noise texture parameter in the post-processing shader, and then apply the perturbation to the UVs before sampling the normal and depth textures:
float depth[9];
for (int i = 0; i < 9; i++)
{
    // Sample the noise map and remap its RG channels from [0, 1] to [-1, 1].
    float2 distortion = SAMPLE_TEXTURE2D(_NoiseMap, sampler_NoiseMap, input.uvs[i] / _NoiseScale).rg * 2 - 1;
    distortion *= _NoiseStrength;
    depth[i] = SAMPLE_TEXTURE2D(_OutlineInfo, sampler_OutlineInfo, input.uvs[i] + distortion).r;
}
...
}
Two variables are used to control the result of the noise: _NoiseScale and _NoiseStrength. Together with the texture, we can create parameters for them in the volume and set them through the render pass. Since the code here is repetitive, I don’t think it’s necessary to include it. Let’s skip it and check the result: Better now.
Shadows
In the Moebius style, shadows are often represented through lines as well: thin, dense, grid-like lines. Thanks to the video from @UselessGameDev, we can implement this kind of shadow in a neat and elegant way. But first, we need to prepare a shadow texture like this: As you can see, this texture provides three different line directions in its R, G, and B channels, which effectively gives us three different textures at the same time. (You can create a texture like this easily in Photoshop.) We simply need to gradually render lines on the object based on the light attenuation, transitioning from no lines at all to all three channels of lines. There are two ways to achieve this effect: rendering the shadows on the object itself, or rendering them in screen space. Both have their pros and cons:
The first approach, rendering shadows on the object, allows the shadow lines to change direction according to the surface orientation of the object. However, this can lead to strange seams between objects, and the line density may vary across different objects.
The second approach, rendering shadows in screen space, ensures that the shadow lines have consistent density across the screen, and handling seams is easier. However, all shadow lines will have the same direction across all objects.
For this case, I have chosen the second approach, rendering shadows in screen space, which requires passing the light attenuation to the post-processing shader. Luckily, this is fairly easy, since we’ve already passed depth information to the post-processing shader using the R channel of the OutlineInfo pass. All we need to do now is calculate the light attenuation and store it in the G channel.
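In the OutlineInfo pass, the fragment could then look roughly like this. GetMainLight and TransformWorldToShadowCoord are standard URP helpers, but combining the shadow attenuation with the facing term is my own assumption for the attenuation value:

    // Assumes the Varyings now also carry world-space position and normal, in addition
    // to the depth we already output. Shadow sampling also needs the usual
    // multi_compile _MAIN_LIGHT_SHADOWS keywords enabled in the pass.
    float4 frag(Varyings input) : SV_Target
    {
        float4 shadowCoord = TransformWorldToShadowCoord(input.positionWS);
        Light mainLight = GetMainLight(shadowCoord);

        // One possible attenuation term: shadowing multiplied by the facing ratio.
        float attenuation = mainLight.shadowAttenuation *
                            saturate(dot(normalize(input.normalWS), mainLight.direction));

        return float4(input.depth, attenuation, 0, 1); // R: depth, G: attenuation
    }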
And now we can use the attenuation easily in the post-processing shader! Here, my approach is: if the attenuation (the luminance) is below the threshold, only the R channel lines are rendered; if it falls below one third of the threshold, both the R and G channel lines are rendered. In my shadow texture there are no lines in the B channel, but you can create your own texture to get a different visual effect. A sketch of this layering follows below.
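Here is a minimal sketch of that layering, as an excerpt inside the post-processing fragment shader. I assume the line pixels are the bright pixels in each channel of _ShadowTex, and that uv and color are the current pixel’s UV and shaded color; the exact blend is an illustration rather than the project’s exact code:

    // The attenuation was written into the G channel of _OutlineInfo by the pass above.
    float attenuation = SAMPLE_TEXTURE2D(_OutlineInfo, sampler_OutlineInfo, uv).g;

    // Tile the hatching texture in screen space so the line density stays uniform.
    float2 hatchUV = uv * float2(_Width, _Height) / (_ShadowResolution * _ShadowScale);
    float3 hatch = SAMPLE_TEXTURE2D(_ShadowTex, sampler_ShadowTex, hatchUV).rgb;

    const float threshold = 0.5; // constant here; it could also be exposed as a parameter
    float lines = 0;
    if (attenuation < threshold)       lines = max(lines, hatch.r); // first layer of lines
    if (attenuation < threshold / 3.0) lines = max(lines, hatch.g); // second, denser layer

    // Darken the shaded color wherever a line pixel is present.
    color.rgb *= 1.0 - lines * _ShadowStrength;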
Here, I have set the threshold to a constant value of 0.5, but it could also be adjusted through a variable. Two variables are used in this part:
> _ShadowStrength: adjusts the shadow color (the higher the value, the darker the shadow).
> _ShadowScale: controls the interval between the shadow lines.
These values need to be controlled via the volume and passed through the render pass together with _ShadowTex and _ShadowResolution, but I won’t elaborate on this part again. You can change _ShadowTex to get different styles of shadows. Now let’s embrace the (almost) physically correct line shadows: Hmm, not bad, is it?
Highlight
In the Moebius style, conveying the sense of volume of an object is not only achieved through shadows, but also through another very important element — highlights. Highlights are often achieved by using a bright color block combined with edge outlining. Let’s first create the bright color block.
The implementation of the bright color block is quite simple: similar to the concept of cartoon-style cel shading, we just need to replace areas with brightness greater than a certain threshold with another color:
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl" #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Lighting.hlsl" #pragma vertex vert #pragma fragment frag
Then, let’s return to the OutlineInfo pass and use the B channel to store the highlight information. Areas that require highlights should have a value of 1, while areas that don’t need highlights should be set to 0. This will help us use the highlight information to calculate the gradient during the convolution process.
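Inside the Sobel loop from earlier, this just means convolving one more channel (a sketch extending the earlier one, reusing its kX/kY kernels and gradient accumulators):

    // Inside the sampling loop of the outline fragment shader:
    float highlight = SAMPLE_TEXTURE2D(_OutlineInfo, sampler_OutlineInfo, input.uvs[i]).b;
    gH += float2(kX[i], kY[i]) * highlight; // gH: a third accumulator, like gN and gD

    // After the loop: since the mask is binary, any non-zero gradient marks a highlight border.
    gradient += length(gH);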
Now, let’s see the final result: The render is already quite close to what I want, although it still lacks many details. The Moebius style often features delicate details, and in Part II, I will attempt to recreate them and try to make the final render feel more vintage.