
ATELIER MECANIQUE

Compositing section

This project was an experimental exercise in Nuke, focusing on matchmoving and integration. My goal was to challenge myself with live-action footage in order to deepen my understanding of camera tracking, spatial coherence, and post-production workflows. It allowed me to explore how CG elements interact with real environments, and to push my skills in achieving more seamless and realistic compositing.


Roget Maxime.

Here is the final result without any grading. I used Blender's Cycles engine for the rendering and lighting.

The idea came up one night while a friend and I were working on a storyboard together.

I wanted to create a place that could belong in a film like Blade Runner or Gunnm. I've always been fascinated by sci-fi atmospheres and how they convey emotion through dense details and controlled chaos. That layered complexity can also evoke a sense of claustrophobia, depending on the perspective or framing.

Blade Runner
Gunnm

I chose my father's workshop as the set and invited a friend to help me shoot some footage using a Sony A7S III.


To simplify my workflow, I brought some VFX tools to the set, including a Macbeth ColorChecker and a chrome ball. Thanks to the data they provided, I saved a significant amount of time in post-production. These tools helped ensure accurate black levels and color matching between the real footage and CG elements, assuming, of course, that proper color management was applied to retain full information for grading and fine adjustments later on.
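To give a concrete idea of how the chart pays off, here is a minimal sketch of the per-channel gain matching it enables, assuming linearized images and hypothetical gray-patch samples (the numbers are placeholders, not measurements from this shoot):

```python
import numpy as np

# Hypothetical RGB samples of the ColorChecker's neutral gray patch,
# read from the linearized plate and from the CG render (placeholder
# values, not data from this project).
plate_gray = np.array([0.184, 0.180, 0.176])
cg_gray = np.array([0.210, 0.205, 0.215])

# Per-channel gain that maps the CG gray onto the plate gray;
# applied to the whole CG render, it neutralizes the color offset.
gain = plate_gray / cg_gray

def match_to_plate(cg_pixels: np.ndarray) -> np.ndarray:
    """Apply the per-channel gain to an (h, w, 3) linear CG image."""
    return cg_pixels * gain

print(gain)  # [0.876 0.878 0.819] with these placeholder samples
```

The same idea extends to black levels: sampling the darkest patch on both sides gives an offset to line up before applying the gain.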


Once the shoot wrapped for the day, I transferred the footage to my computer and denoised the final plate. I then imported it into 3DEqualizer to recreate the camera in CG. This step was crucial for achieving an accurate perspective, making CG integration much more seamless. I also entered known camera data, such as lens distortion, shutter speed, and frame rate, to ensure an accurate solve.
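One quick way to sanity-check the recreated camera is the pinhole relationship between focal length, sensor width, and field of view; the solved 3D camera should land on the same numbers. A small sketch, where the 24 mm lens is a hypothetical value for illustration (the A7S III's full-frame sensor is about 35.6 mm wide):

```python
import math

def horizontal_fov(focal_length_mm: float, sensor_width_mm: float) -> float:
    """Horizontal field of view of a pinhole camera, in degrees."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# Hypothetical setup: a 24 mm lens on a 35.6 mm wide full-frame sensor.
print(round(horizontal_fov(24.0, 35.6), 2))  # 73.13
```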


I tested the solidity and credibility of the matchmove by placing test cubes in perspective; if the 3D camera perfectly matches its real twin, the cubes stay locked to the plate.
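Scattering those test cubes is easy to script in Blender; a minimal sketch, assuming hypothetical point positions exported from the solve:

```python
import bpy

# Hypothetical 3D survey points from the tracking solve
# (placeholder coordinates, not the actual solve data).
tracked_points = [
    (0.0, 0.0, 0.0),
    (1.2, -0.4, 0.8),
    (-0.7, 1.1, 0.3),
]

# Drop a small test cube on each solved point. Viewed through the
# solved camera, the cubes should stay locked to the plate; any
# sliding reveals an error in the matchmove.
for location in tracked_points:
    bpy.ops.mesh.primitive_cube_add(size=0.1, location=location)
```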

I returned to the original set to perform accurate photogrammetry. This allowed me to better match the scale and perspective in CG by aligning specific points from the real environment. The photogrammetry was processed using RealityCapture, and the result provided a solid base for integration.
I used this 3D scan both in the final render, mainly to generate accurate shadow catchers, and in 3DEqualizer to reinforce the realism and alignment of the virtual camera.
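For reference, turning the scan into a shadow catcher comes down to a single property in recent Blender versions (3.x, Cycles); a minimal sketch, with atelier_scan standing in as a hypothetical object name:

```python
import bpy

# Hypothetical name for the imported photogrammetry scan.
scan = bpy.data.objects["atelier_scan"]

# In Cycles, a shadow catcher renders only the shadows that CG
# objects cast onto it, so those shadows can be composited over
# the live-action plate.
scan.is_shadow_catcher = True

# Transparent film so the plate shows through everywhere else.
bpy.context.scene.render.film_transparent = True
```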

Next, I created a CG hand, which became the centerpiece of the project. Inspired by my earlier references, and with a nod to something like Tetsuo's arm in Akira, I wanted to blend reality and sci-fi into a single design. The hand is fully riggable, and I quickly animated it to test its integration within the CG environment.
Modeling the hand was a rewarding challenge. I aimed to strike a perfect balance between robotic structure and human anatomy. It required intense work to design shapes that felt both mechanical and organic, but I truly enjoyed the creative process.

Final result


Tetsuo from "Akira"

Final modeling result

For the texturing, I used a technique to integrate printed circuit patterns directly onto the model. I projected and baked height maps from real electronic components—such as resistors, microprocessors, and cables—onto the hand in CG. After unwrapping the asset, I applied the texture maps and refined the look with tweaks to the normal, roughness, metallic, and albedo maps. The result brought the asset to life with a believable mix of tech and skin-like detail.
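To show the kind of math behind it, here is a small sketch of turning a baked height map into a tangent-space normal map with simple image gradients; this is a generic technique, not my exact Blender bake setup:

```python
import numpy as np

def height_to_normal(height: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """Convert an (h, w) float height map into a tangent-space normal map.

    Returns an (h, w, 3) array in [0, 1], ready to save as an image.
    """
    # Slopes of the height field along y and x.
    dy, dx = np.gradient(height * strength)

    # Per-pixel normal = normalize(-dx, -dy, 1).
    normal = np.dstack((-dx, -dy, np.ones_like(height)))
    normal /= np.linalg.norm(normal, axis=2, keepdims=True)

    # Remap from [-1, 1] to the usual [0, 1] normal-map encoding.
    return normal * 0.5 + 0.5

# Random detail standing in for a baked circuit-board height map.
bump = np.random.rand(256, 256)
print(height_to_normal(bump).shape)  # (256, 256, 3)
```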


This is essentially a height map generated in Blender, which can then be refined during look development.

Finally, I added a custom rig that I built specifically for this project (a Rigify-based rig in Blender).
You can see the final result in the turntable below.
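For context, a Rigify rig starts from one of Blender's bundled metarigs, which you then trim and regenerate; a minimal sketch of that starting point (the actual edits I made for the hand are not shown):

```python
import bpy

# Enable the Rigify add-on that ships with Blender.
bpy.ops.preferences.addon_enable(module="rigify")

# Add the human metarig as a starting skeleton; for a hand rig,
# only the arm and finger bones would be kept and rescaled before
# generating the final control rig with Rigify.
bpy.ops.object.armature_human_metarig_add()
```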

To achieve accurate lighting between the live-action footage and the CG elements, I captured an HDRI using a Theta camera. This 360° image of the set allowed me to recreate the ambient lighting and reflections realistically in CG. However, I made the classic mistake of forgetting to include the Macbeth chart and chrome ball in the HDRI capture, definitely a lesson learned! Because of that, I had to spend extra time manually matching the lighting between the HDRI and the live-action plate.
Also, it’s important to clean up your HDRI before using it. Even with a multi-bracketing process (ranging from -5 to +5 exposure), highlights can still be inaccurate or clipped. A bit of post-cleanup goes a long way in ensuring realistic lighting and reflections.
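For the curious, here is a simplified sketch of what merging brackets into a radiance map involves, and why fully clipped highlights survive the merge; it assumes linear input images, and the bracket data here is randomly generated as a placeholder:

```python
import numpy as np

def merge_brackets(images, exposures):
    """Merge linear bracketed exposures into one HDR radiance map.

    `images` are float arrays in [0, 1]; `exposures` are relative
    exposure times. Pixels near black or white are down-weighted,
    which is why highlights blown in *every* bracket still need
    manual cleanup afterwards.
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    weights = np.zeros_like(acc)
    for img, t in zip(images, exposures):
        w = 1.0 - np.abs(img - 0.5) * 2.0  # hat weight: trust mid-tones
        acc += w * (img / t)               # divide out exposure -> radiance
        weights += w
    return acc / np.maximum(weights, 1e-6)

# Placeholder brackets two stops apart (exposure time quadruples).
brackets = [np.random.rand(4, 4, 3) for _ in range(3)]
times = [0.25, 1.0, 4.0]
print(merge_brackets(brackets, times).shape)  # (4, 4, 3)
```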

Bracketing of the HDRI

HDRI at neutral exposure

This whole process pushed me to balance creative vision with technical precision, and despite a few small mistakes along the way, I gained valuable experience in bridging the gap between reality and imagination. This project not only strengthened my understanding of the VFX pipeline but also fueled my passion for world-building through CG and compositing.
