In this assignment you will add a materials system to the core you built in A1. You will need to modify your mesh handling to include new attributes for texture coordinates and tangents; add material parameter, texture map, and lighting environment loading to your .s72 loader (and get these textures uploaded to the GPU); update your drawing code to handle multiple shaders and binding sets; and write shader code that implements tangent-space normal maps.
Scoring: this assignment is worth 15 points, total. Thanks to a convenient and intentional construction, each "point" is worth exactly 1% of your final course grade. Points are allocated as follows: A2-env is worth 2 points; A2-tone is worth 1 point; A2-diffuse is worth 3 points; A2-normal is worth 2 points; A2-pbr is worth 3 points; A2x-displacement is worth up to 1 extra point; and A2-create is worth 4 points.
Points will be awarded for each section based on both the code and the sections of a report demonstrating and benchmarking the code.
Reminder: this class takes a strict didactic, ethical, and legal view on copying code.
Write code that can:
- display the "environment" and "mirror" materials. (A2-env)
- shade in linear light and tone map the result for display. (A2-tone)
- render the "lambertian" material, including albedo from a texture map. Build a utility to process lighting environment cube maps to produce lambertian look-up-table cube maps. (A2-diffuse)
- apply tangent-space normal maps. (A2-normal)
- render the "pbr" material. Add support for GGX-specular-importance-sampled mip-map generation to your cube map processing utility. (A2-pbr)

Use your creativity (and your code) to make something beautiful. (A2-create)
Demonstrate and test your code; write a report which includes screen recordings, images, timings, examples, graphs. This is non-trivial. See report template to understand what is required.
Turn in your code in /afs/cs.cmu.edu/academic/class/15472-s24-users/<andrewid>/A2/.
Your turn-in directory should include:
report/ - report describing your code and illustrating that it works.
report/report.html - start with the report template and replace the "placeholder" sections.
report/*.s72, *.b72 - benchmarking scenes and data mentioned in your report.
report/* - other files (images, animations, cubemaps) needed by your report.
code/ - the code you wrote for this assignment.
code/.git - your code must be stored in a git repository with a commit history showing your development process.
code/Maekfile.js - build script. When run as node Maekfile.js, produces bin/viewer (bin/viewer.exe on Windows), your compiled scene viewer. You may wish to use maek and refer to how, e.g., Scotty3D or the 15-466 Example Code are set up. However, you may also write your own Maekfile.js. (Scenes and data referenced by your report belong in the report/ folder; the scene file for your model should be in the model/ folder.)
model/ - your created model.
model/model.s72 - your main model Scene'72 file.
model/*.b72 - any data files needed by your scene.
model/*.png - any texture or lighting data files needed by your scene.
model/model.mp4 - a screen recording (H.264 in MP4 container) of your model shown in your viewer. You may wish to rotate the view, the lighting, or otherwise demonstrate the model.
We expect your Maekfile.js
to properly build your viewer on at least one of { Linux/g++
; Windows/cl.exe
; macOS/clang++
}.
We will not penalize you for minor cross-platform compile problems, though we would appreciate it if you tested on Linux.
When we compile and run your code, we will set the Vulkan SDK environment variables as described in the LunarG SDK getting started guide.
In addition, we will have GLFW installed system-wide via apt install libglfw3-dev.
We expect your report to be viewable in Firefox on Linux. You may wish to consult MDN to determine format compatibility for any embedded videos.
This assignment uses the scene'72 (.s72
) format.
The specification of the format has been updated with material and lighting support, but should otherwise be compatible with your existing code.
In this assignment, you may make the following simplifying assumptions about scene'72 files:
"indices"
property."simple"
material use this "attributes"
layout:
"attributes":{
"POSITION":{ "src":"filename.b72", "offset":N+0, "stride":28, "format":"R32G32B32_SFLOAT" },
"NORMAL": { "src":"filename.b72", "offset":N+12, "stride":28, "format":"R32G32B32_SFLOAT" },
"COLOR": { "src":"filename.b72", "offset":N+24, "stride":28, "format":"R8G8B8A8_UNORM" }
}
"simple"
materials use this "attributes"
layout:
"attributes":{
"POSITION": { "src":"cube.b72", "offset":0, "stride":52, "format":"R32G32B32_SFLOAT" },
"NORMAL": { "src":"cube.b72", "offset":12, "stride":52, "format":"R32G32B32_SFLOAT" },
"TANGENT": { "src":"cube.b72", "offset":24, "stride":52, "format":"R32G32B32A32_SFLOAT" },
"TEXCOORD": { "src":"cube.b72", "offset":40, "stride":52, "format":"R32G32_SFLOAT" },
"COLOR": { "src":"cube.b72", "offset":48, "stride":52, "format":"R8G8B8A8_UNORM" }
}
"type":"2D"
textures have format "format":"linear"
."type":"cube"
textures have format "format":"rgbe"
.
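For reference, the second layout maps onto an interleaved vertex struct like the following (a sketch; the struct name and the use of glm are assumptions):

#include <glm/glm.hpp>
#include <cstdint>

struct Vertex {
	glm::vec3 Position;  // offset  0, VK_FORMAT_R32G32B32_SFLOAT
	glm::vec3 Normal;    // offset 12, VK_FORMAT_R32G32B32_SFLOAT
	glm::vec4 Tangent;   // offset 24, VK_FORMAT_R32G32B32A32_SFLOAT
	glm::vec2 TexCoord;  // offset 40, VK_FORMAT_R32G32_SFLOAT
	uint8_t Color[4];    // offset 48, VK_FORMAT_R8G8B8A8_UNORM
};
static_assert(sizeof(Vertex) == 52, "stride should match the .s72 attribute layout");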
Update your code to support the new "ENVIRONMENT"
type in the Scene'72 specification; and add support for the "environment"
and "mirror"
materials to show it off.
Note that both "environment"
and "mirror"
both use the lighting environment as a look-up table, but with a different vector.
"environment"
uses the normal directly, while "mirror"
uses the reflection vector (note, also, the GLSL reflect
function).
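For example, a fragment shader might compute both lookups like this (a sketch; ENVIRONMENT, position, normal, and EYE are assumed names, with position and normal already in lighting/environment space):

vec3 n = normalize(normal);
vec3 v = normalize(position - EYE);                          // direction from the eye toward the fragment
vec3 env_color = texture(ENVIRONMENT, n).rgb;                // "environment": look up along the normal
vec3 mirror_color = texture(ENVIRONMENT, reflect(v, n)).rgb; // "mirror": look up along the reflection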
Be mindful of coordinate systems in your shader code --
your shader needs to deal with object-local coordinates (vertices), environment coordinates (when looking up lighting directions), and clip coordinates.
I tend to pass mat4 ClipFromObject
, mat4x3 LightFromObject
, and mat3 LightFromNormal
transformations, as well as a vec3 Eye
(camera) point uniform; but there are certainly other ways of getting similar data to the shader.
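As a concrete sketch of that approach (attribute locations and the uniform block layout are assumptions, not requirements), a vertex shader might look roughly like:

#version 450

layout(location = 0) in vec3 Position;
layout(location = 1) in vec3 Normal;

layout(set = 0, binding = 0, std140) uniform World {
	mat4 ClipFromObject;     // object space -> clip space
	mat4x3 LightFromObject;  // object space -> lighting (environment) space
	mat3 LightFromNormal;    // for normals (inverse-transpose of the upper 3x3)
	vec3 Eye;                // camera position, in lighting space
};

layout(location = 0) out vec3 position; // lighting space
layout(location = 1) out vec3 normal;   // lighting space

void main() {
	gl_Position = ClipFromObject * vec4(Position, 1.0);
	position = LightFromObject * vec4(Position, 1.0);
	normal = normalize(LightFromNormal * Normal);
}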
You do not need to write your own image loading/decompression code for this assignment.
Specifically, you are encouraged to use Sean Barrett's stb_image.h
to handle texture loading.
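For example (a sketch, assuming stb_image.h is vendored into your repository; the function name is hypothetical):

#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

#include <cstdint>
#include <stdexcept>
#include <string>
#include <vector>

// load an image as tightly-packed 8-bit RGBA, e.g. for upload to a VK_FORMAT_R8G8B8A8_* image:
std::vector< uint8_t > load_rgba(std::string const &path, int *width, int *height) {
	int channels = 0;
	stbi_uc *pixels = stbi_load(path.c_str(), width, height, &channels, 4); // force 4 channels
	if (!pixels) throw std::runtime_error("Failed to load '" + path + "': " + stbi_failure_reason());
	std::vector< uint8_t > data(pixels, pixels + size_t(*width) * size_t(*height) * 4);
	stbi_image_free(pixels);
	return data;
}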
Our lighting environments will often have a very high dynamic range (the brightest direction is many many times brighter than the darkest direction). In order to deal with this high dynamic range while still maintaining relatively compact files we will use an RGBE (RGB + shared exponent) encoding inspired by the color format in the Radiance renderer's .hdr image format. Note that we are just using the color encoding part of this spec, and will actually store the data in some other image format (probably .png)!
To convert from a stored value (\(rgbe\)) to a radiance value (\( rgb' \)), multiply as follows:
\[ rgb' \gets 2^{e-128} \cdot \frac{rgb + 0.5}{256} \]
One quirk: we map \( (0,0,0,0) \to (0,0,0) \) so that true black is supported.
You will likely find frexp and ldexp useful when converting to/from the RGBE format.
Note also that Radiance's color-handling code is available for inspiration, as is some old RGBE-handling code from 15-466.
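A minimal sketch of the conversion, following the formula above (the struct and function names are hypothetical, and exponent-range clamping is omitted):

#include <algorithm>
#include <cmath>
#include <cstdint>

struct RGBE { uint8_t r, g, b, e; };

// decode a stored (r,g,b,e) texel to linear radiance:
void rgb_from_rgbe(RGBE in, float out[3]) {
	if (in.r == 0 && in.g == 0 && in.b == 0 && in.e == 0) { out[0] = out[1] = out[2] = 0.0f; return; } // true black
	int exp = int(in.e) - 128;
	out[0] = std::ldexp((in.r + 0.5f) / 256.0f, exp);
	out[1] = std::ldexp((in.g + 0.5f) / 256.0f, exp);
	out[2] = std::ldexp((in.b + 0.5f) / 256.0f, exp);
}

// encode linear radiance to a stored (r,g,b,e) texel:
RGBE rgbe_from_rgb(float r, float g, float b) {
	float m = std::max(r, std::max(g, b));
	if (m <= 0.0f) return RGBE{0, 0, 0, 0}; // true black
	int exp;
	std::frexp(m, &exp);                      // m == f * 2^exp with f in [0.5, 1)
	float scale = std::ldexp(256.0f, -exp);   // maps [0, 2^exp) onto [0, 256)
	auto q = [](float v) { return uint8_t(std::min(v, 255.0f)); };
	return RGBE{ q(r * scale), q(g * scale), q(b * scale), uint8_t(exp + 128) };
}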
Up until this point I haven't specified what color space our code should be displaying images in (or what values in framebuffers actually mean). This means that your code is probably implicitly using sRGB (if you followed the Vulkan tutorial).
However, now that we're working with environment probes that operate in real-world radiance units, your viewer code should take care to adopt a "linear light" + "tone mapping" rendering flow.
In other words, your code (probably in a fragment shader) should first compute a fragment radiance, and then convert this radiance to the displayed color value by using a "tone mapping" operator. You may choose to do this in two steps (rendering to a HDR-capable color buffer first, then using a full-screen rendering pass to tone map all the rendered pixels at once), or in one (having a tone-mapping function that gets called by all of your material shaders).
We leave the choice of tone mapping operator to you, but do require you to do something more sophisticated than linear.
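For example, one possible operator is Krzysztof Narkowicz's rational fit to the ACES filmic tonescale (a sketch; the EXPOSURE uniform is an assumption):

vec3 tone_map(vec3 radiance) {
	vec3 x = radiance * EXPOSURE; // scale scene radiance into a sensible range
	// rational curve fit to the ACES filmic tonescale:
	return clamp((x * (2.51 * x + 0.03)) / (x * (2.43 * x + 0.59) + 0.14), 0.0, 1.0);
}

The result is a [0,1] color; writing it to a *_SRGB swapchain image lets the hardware apply the sRGB transfer function for you.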
This might also be an interesting time to consider supporting an HDR output surface. Though, for the purposes of the exercise, your code should still have an LDR output mode with tone mapping available.
As a warm-up material (and to get you started with texture mapping) implement the "lambertian"
material.
This requires you to develop some code to "pre-convolve" the cubemap with a cos-weighted hemisphere to make a lambertian look-up table cube map.
You should produce a utility cube
that when run with the command cube in.png --lambertian out.png
reads a cubemap from in.png
, samples it to produce a lambertian lookup table cube map, and stores this in out.png
.
Both files should be in rgbe encoding (RGB with a shared exponent in the A component).
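The core of the pre-convolution can be a simple Monte Carlo loop over cosine-weighted directions. A sketch (assuming glm; sample_cube and make_tangent_frame are hypothetical helpers):

#include <glm/glm.hpp>
#include <cmath>
#include <cstdint>
#include <random>

glm::vec3 sample_cube(glm::vec3 dir);                             // bilinear lookup into the decoded input cubemap
void make_tangent_frame(glm::vec3 n, glm::vec3 *t, glm::vec3 *b); // any orthonormal basis around n

// average incoming radiance over the cosine-weighted hemisphere around n:
glm::vec3 convolve_lambertian(glm::vec3 n, uint32_t samples) {
	constexpr float PI = 3.14159265358979323846f;
	glm::vec3 t, b;
	make_tangent_frame(n, &t, &b);
	std::mt19937 rng(0x15472);
	std::uniform_real_distribution< float > U(0.0f, 1.0f);
	glm::vec3 sum = glm::vec3(0.0f);
	for (uint32_t i = 0; i < samples; ++i) {
		// cosine-weighted hemisphere sample (pdf = cos(theta) / pi):
		float r = std::sqrt(U(rng));
		float phi = 2.0f * PI * U(rng);
		glm::vec3 dir = (r * std::cos(phi)) * t + (r * std::sin(phi)) * b + std::sqrt(1.0f - r * r) * n;
		// the cosine weight and the 1/pi in the Lambertian BRDF cancel against the pdf,
		// so the estimate is just the average of the radiance samples:
		sum += sample_cube(dir);
	}
	return sum / float(samples);
}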
Tip: rather than having a separate "constant color" and "texture map" code path, you can always have your scene loader code make 1x1 texture maps when it sees a constant color.
Tip: the diffuse lookup cubemap can be very small because it is pretty darn low frequency. E.g., having an edge length of 16 pixels is reasonable.
Eventually, you will probably want to support mip-mapping of all your textures (like diffuse albedo!); but we aren't going to grade you down for not supporting it yet in this assignment. (Indeed, the only texture that your viewer is required to have a mip-map for is the specular lookup table used in A2-pbr, since it uses that extra dimension specifically to store different distributions.)
Add support for normal maps. This will require carrying a tangent frame through the vertex shader and into the fragment shader.
Be aware that normal maps in s72 are stored as 2D textures, scaled and offset as \( n * 0.5 + 0.5 \). Forgetting to scale and bias to expand these normals will result in strange apparent normal directions.
The caution about being clear about coordinate systems in A2-env definitely also applies here.
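For example, the fragment shader side might look like this (a sketch; the binding, input locations, and the assumption that the tangent's fourth component stores the bitangent sign are mine):

layout(set = 1, binding = 1) uniform sampler2D NORMAL_MAP;

layout(location = 1) in vec3 normal;   // geometric normal, lighting space
layout(location = 2) in vec4 tangent;  // .xyz = tangent, .w = bitangent sign
layout(location = 3) in vec2 texCoord;

vec3 shading_normal() {
	vec3 n_tex = texture(NORMAL_MAP, texCoord).rgb * 2.0 - 1.0;  // undo the n * 0.5 + 0.5 packing
	vec3 N = normalize(normal);
	vec3 T = normalize(tangent.xyz - dot(tangent.xyz, N) * N);   // re-orthogonalize after interpolation
	vec3 B = tangent.w * cross(N, T);
	return normalize(mat3(T, B, N) * n_tex);                     // tangent space -> lighting space
}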
NOTE: you are not required to support normal maps for "simple"
materials.
Add support for the "pbr"
material type,
using the split-sum approximation with precomputed specular mip-maps and a look-up table as described in Epic's 2013 SIGGRAPH course talk (and notes -- with code!).
You may also find the 2012 Disney Talk (and notes) useful, especially for describing the BRDF parameters more clearly.
The glTF 2.0 specification adopts a similar BRDF and includes some implementation information that might come in handy -- I especially appreciated the description of how to handle metalness. (Though their implementation doesn't deal with image-based lighting and the split-sum approximation.)
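Once the look-up textures exist, the runtime side of the split sum is quite small. A sketch (sampler names and bindings are assumptions; v points from the surface toward the eye):

layout(set = 0, binding = 1) uniform samplerCube PREFILTERED_ENV; // mip level ~ roughness
layout(set = 0, binding = 2) uniform sampler2D BRDF_LUT;          // (n.v, roughness) -> (scale, bias)

vec3 specular_ibl(vec3 n, vec3 v, vec3 F0, float roughness) {
	float n_dot_v = max(dot(n, v), 0.0);
	vec3 r = reflect(-v, n);
	float last_mip = float(textureQueryLevels(PREFILTERED_ENV) - 1);
	vec3 prefiltered = textureLod(PREFILTERED_ENV, r, roughness * last_mip).rgb;
	vec2 ab = texture(BRDF_LUT, vec2(n_dot_v, roughness)).rg;
	return prefiltered * (F0 * ab.x + ab.y);
}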
In order to "pre-integrate" the convolution of the GGX specular lobe and a given lightmap, expand your cube map utility to support
the command line option cube in.png --ggx out.png
, which reads a cubemap from in.png
, samples it to produce a stack of importance-sampled-with-GGX-at-different-roughnesses lookup table cube maps, and stores them in out.0.png
(base mip level / lowest roughness) through out.N.png
(smallest mip level / highest roughness).
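The per-sample half-vector generation follows the GGX importance sampling in the Epic course notes; a sketch (sampling about +Z, with a minimal vec3f type):

#include <cmath>

struct vec3f { float x, y, z; };

// u1, u2: uniform random numbers in [0,1); alpha = roughness * roughness (as in the Epic notes):
vec3f importance_sample_ggx(float u1, float u2, float alpha) {
	constexpr float PI = 3.14159265358979323846f;
	float phi = 2.0f * PI * u1;
	float cos_theta = std::sqrt((1.0f - u2) / (1.0f + (alpha * alpha - 1.0f) * u2));
	float sin_theta = std::sqrt(1.0f - cos_theta * cos_theta);
	return vec3f{ sin_theta * std::cos(phi), sin_theta * std::sin(phi), cos_theta };
}

For each output texel, treat the texel's direction as both the normal and the view direction (the split-sum simplification): rotate each sampled half-vector into that direction's frame, reflect the view direction about it to get a light direction, and accumulate samples with positive N·L, weighted by N·L.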
Add displacement map support to your materials.
Use parallax occlusion mapping or a similar technique to add view-dependent displacement to all material types.
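A basic march might look like this (a sketch; the displacement binding, DEPTH_SCALE, and STEPS are assumptions, and view_ts is the tangent-space direction from the fragment toward the eye):

layout(set = 1, binding = 3) uniform sampler2D DISPLACEMENT_MAP;

vec2 parallax_uv(vec2 uv, vec3 view_ts) {
	const float DEPTH_SCALE = 0.05;
	const int STEPS = 32;
	// step the texture coordinate along the view ray until the ray dips below the height field:
	vec2 duv = (view_ts.xy / view_ts.z) * (DEPTH_SCALE / float(STEPS));
	float layer = 1.0 / float(STEPS);
	float ray_depth = 0.0;
	float surf_depth = 1.0 - textureLod(DISPLACEMENT_MAP, uv, 0.0).r; // textureLod: implicit derivatives are not valid inside this loop
	while (ray_depth < surf_depth) {
		uv -= duv;
		ray_depth += layer;
		surf_depth = 1.0 - textureLod(DISPLACEMENT_MAP, uv, 0.0).r;
	}
	return uv; // (a fuller implementation would interpolate between the last two steps)
}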
NOTE: you are not required to support displacement maps for "simple"
materials.
Your creative exercise in A2 is to build a textured, normal-mapped model. (And put it in a nice scene to show it off.)
When building the model, I suggest first building a high-detail model (either by hand or by using photogrammetry techniques) and then transferring that detail to a lower-resolution model by "baking" it (e.g., in Blender). Note that you must create this model yourself, including the textures!
In deciding on what to model or capture, think about what objects might show off features of the "pbr"
material (like variable roughness and metalness), and what has enough detail to benefit from a normal map.
Please keep your model content to a "PG" level. This is not the time to show off your collection of drug paraphernalia.
When building out a scene to show off the model, I suggest finding a suitable environment (polyhaven has many), and considering adding a camera or model animation to show how the light interacts with your model's textured detail.
Don't forget to write the report.