‘Identity’ explores the fragile, often painful tension between who we truly are and who society wants us to be. It reflects how personal identity is shaped, challenged, and sometimes overwritten by external expectations—how idealistic notions of perfection push us to lose the raw, imperfect parts of ourselves in the quest to belong.
Glitch feminism's core idea is that systems of power shape our identities, and that disrupting those systems creates space for transformation and liberation. Drawing on this, the work uses the glitch as a metaphor for moments where identity fractures and is rewritten. At its heart, the piece reveals the battle between authenticity and conformity: between embracing our real traits, like sensitivity, stubbornness, and restlessness, and adopting the traits society praises, such as obedience, politeness, and ambition. That pressure slowly chips away at our sense of self, until the line between who we really are and who we pretend to be starts to blur.
I started by building a procedural animated floor texture in Blender for one of the scenes. The animation added dynamic movement to the floor, which looked really cool at first.
However, later on, we decided to switch from the animated floor to a static texture for simplicity and better performance. To do this, I simply modified the texture I originally created in Blender, adjusting it to work as a static surface instead.
This change helped streamline the scene without losing the visual style I was aiming for.
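For anyone curious, the gist of that setup can be sketched with Blender's Python API (bpy). This is a minimal, hypothetical version rather than my exact node graph: it wires a 4D Noise Texture into the base colour and keyframes its W input for the animated version; the static variant is just the same texture with the keys cleared and W frozen on one value.

```python
# Minimal bpy sketch of an animated procedural floor texture
# (illustrative names and values, not the actual project file).
import bpy

mat = bpy.data.materials.new("FloorProcedural")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

noise = nodes.new("ShaderNodeTexNoise")
noise.noise_dimensions = '4D'              # exposes the W input we can animate
noise.inputs["Scale"].default_value = 8.0
links.new(noise.outputs["Color"], nodes["Principled BSDF"].inputs["Base Color"])

# animated version: key W across the shot so the pattern evolves over time
w = noise.inputs["W"]
w.default_value = 0.0
w.keyframe_insert("default_value", frame=1)
w.default_value = 2.0
w.keyframe_insert("default_value", frame=120)

# static version: clear the keys and freeze the texture on a single W value
# mat.node_tree.animation_data_clear()
# w.default_value = 1.0
```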
After Effects
For one of the scenes where the character floats, I created a liquid-style background effect using After Effects. The goal was to add a surreal, flowing visual that complements the mood and motion of the floating character.
I designed and animated the liquid effect directly in After Effects, experimenting with different distortion techniques and blending modes to get the look I wanted. Once the effect was complete, I rendered it out as an image sequence to maintain high quality and flexibility in Unreal Engine.
After rendering, I passed the image sequence to Aayushi, who will apply it to a background plane in the scene. This method allows us to keep the stylized motion without relying on real-time VFX inside Unreal, which helps with both performance and consistency.
I worked on creating glitch effects in After Effects to add a digital, distorted aesthetic to part of the project. I experimented with different techniques like displacement maps, RGB split, and time remapping to create a layered, broken-screen feel.
The final result gives a nice chaotic energy that fits well with the visual direction we’re going for. I plan to use these glitch elements either in transitions or as background overlays to enhance the atmosphere of the scene.
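The RGB-split part of that look is simple enough to show outside After Effects. Below is a small Python/Pillow stand-in for the idea, not my actual AE layer stack: it shifts the red and blue channels of a frame in opposite directions so edges fringe into that broken-screen feel.

```python
# Illustrative RGB-split glitch on a single frame (file names are placeholders).
import numpy as np
from PIL import Image

def rgb_split(frame: Image.Image, shift: int = 6) -> Image.Image:
    px = np.array(frame.convert("RGB"))
    out = px.copy()
    # np.roll wraps pixels around the frame edge, which reads as a glitch
    out[..., 0] = np.roll(px[..., 0], shift, axis=1)   # red channel right
    out[..., 2] = np.roll(px[..., 2], -shift, axis=1)  # blue channel left
    return Image.fromarray(out)

rgb_split(Image.open("frame_0001.png"), shift=8).save("frame_0001_glitched.png")
```

Randomising the shift per frame (or per horizontal band) gets much closer to the jittery displacement-map version.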
I used After Effects to create an infinite tunnel loop for one of the scenes in our project. The idea was to build a sense of depth and motion, almost like the character is moving through a digital or abstract space.
I achieved the loop by layering and animating shapes with scaling and motion effects, then using seamless transitions to make the tunnel appear continuous. The loop gives a hypnotic, immersive feel, which fits perfectly with the tone of the scene.
This tunnel loop will be used as a background or visual overlay, adding a strong stylistic element to that part of the project.
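The trick that makes the loop seamless is easier to see in code than in prose. Here's a tiny Python/Pillow sketch of the same principle (purely illustrative, not the After Effects project): each ring's radius grows exponentially, and the exponent wraps every loop, so the frame at the end of the cycle lands exactly back on frame 0.

```python
# Seamless tunnel-loop sketch: rings cycle from the centre outwards and the
# last frame of the loop is identical to the first (all values illustrative).
from PIL import Image, ImageDraw

SIZE, RINGS = 512, 16
BASE, GROWTH = 3.0, 1.45     # innermost radius and ratio between rings
LOOP_FRAMES = 48             # frame 48 matches frame 0 exactly

def tunnel_frame(frame):
    img = Image.new("RGB", (SIZE, SIZE), "black")
    draw = ImageDraw.Draw(img)
    phase = (frame % LOOP_FRAMES) / LOOP_FRAMES   # 0..1 across one loop
    c = SIZE / 2
    for i in range(RINGS):
        # advancing two rings per loop keeps the alternating shades aligned;
        # the modulo sends off-screen outer rings back to the centre
        e = (i + 2 * phase) % RINGS
        r = BASE * GROWTH ** e
        shade = 230 if int(e) % 2 == 0 else 70
        draw.ellipse([c - r, c - r, c + r, c + r],
                     outline=(shade, shade, shade), width=8)
    return img

for f in range(LOOP_FRAMES):
    tunnel_frame(f).save(f"tunnel_{f:03d}.png")
```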
For this project, I incorporated motion capture animations sourced from Rokoko’s free mocap library. I downloaded several idle animations to use for the side characters to add variety and life to the scene.
To apply the mocap data to my character, I used Maya’s HumanIK (HIK) system for retargeting. The process involved mapping the mocap animation skeleton to my custom rig, which allowed the animation to transfer directly onto my model. After the initial retargeting, the character’s movements worked well—there were no major issues like unnatural joint bending or mesh deformation, which showed that my rig was solid and compatible with mocap data.
Despite this, the mocap animations weren’t perfect out of the box. Some poses and movements looked a bit awkward or didn’t match the style I needed. To address this, I created animation layers in Maya, which allowed me to make non-destructive adjustments. On these layers, I refined the character’s posture and smoothed out problematic movements, ensuring the animation felt more natural and cohesive with the overall scene.
After completing the adjustments, I baked the animation data onto my character’s skeleton to finalize the motion. Then, I exported the animations as Alembic files, which preserve the animation and mesh data efficiently for use in other software or pipelines.
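For reference, the cleanup, bake, and export steps can also be scripted. The sketch below shows roughly what they look like in maya.cmds, with placeholder node and file names standing in for my actual rig; it assumes the AbcExport plug-in is available.

```python
# Rough maya.cmds version of the workflow: cleanup layer -> bake -> Alembic.
# All object and file names here are placeholders.
import maya.cmds as cmds

cmds.loadPlugin("AbcExport", quiet=True)

# non-destructive cleanup: keys set on this layer sit on top of the mocap
cmds.animLayer("mocap_cleanup")
cmds.select("character_rig_ctrls", hierarchy=True)
cmds.animLayer("mocap_cleanup", edit=True, addSelectedObjects=True)

# ... refine poses on the layer, then bake the combined result to the joints
start = cmds.playbackOptions(query=True, min=True)
end = cmds.playbackOptions(query=True, max=True)
cmds.bakeResults("character_skeleton", hierarchy="below",
                 time=(start, end), simulation=True)

# export mesh + animation as Alembic for scene assembly
cmds.AbcExport(j="-frameRange {} {} -uvWrite -worldSpace "
                 "-root |character_geo -file idle_cleaned.abc".format(int(start), int(end)))
```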
Finally, I sent the Alembic files to Aayushi so she could integrate the animated characters into the environment and continue with scene assembly. This workflow not only helped me gain experience retargeting mocap to a custom rig but also improved my skills in cleaning up mocap animations to suit specific project needs.
This week, we explored how to produce 360° videos using Unreal Engine along with the Off World Media Production Toolkit. We began by installing the Off World plugin, which includes Sprout—a tool that connects Unreal Engine to TouchDesigner for real-time interaction, streamlining the creative process.
The workflow involved setting up a specialized 360° camera inside Unreal Engine, configuring the project settings, and using Blueprints and the Sequencer to animate the scene. Once the video was rendered, we finalized it with some light editing in Media Encoder. It was a valuable introduction to a dynamic storytelling technique that gives the audience control over their perspective.
Project Update:
This week, I finished rigging and weight painting my character, which was a big milestone. Now, I’m moving on to animation. For this project, I’m planning to experiment with retargeting motion capture animation onto my character for the first time.
Since I’ve never done mocap retargeting before, this will be a great opportunity to learn. I’ll be using my own custom model and rig to see how well the mocap data transfers. This way, I can also test how effective my rig setup is in handling real animation data.
I know mocap animations often need adjustments because they don’t always match the exact movements or style we want. So part of the process will be cleaning up and tweaking the mocap to fit the character and the scene better. This hands-on experience will be valuable for improving both my rigging and animation skills.
This week, we experimented with MadMapper, a powerful projection-mapping tool that lets animations be projected onto real-world surfaces using a projector. It was my first time trying projection mapping. I spent time getting familiar with the software and focused on applying glitch effects, testing how they could highlight parts of my work through projection. It was also interesting to see how textures can be projected onto a 3D model to add detail without sculpting everything by hand. I'm excited to explore this technique further and see how it can help speed up my workflow while keeping the quality high.
Project Update:
This week, I finished the UV unwrapping for the low poly version of my model. Getting the UVs laid out cleanly was important to ensure the textures would map properly without stretching or distortion. Once the retopology and UVs were complete, I sent both the low poly model with UVs and the high poly sculpt to Aayushi. She’ll be using these files to bake the textures and normal maps, which will bring the details from the high poly onto the low poly mesh.
While Aayushi is working on the texturing side, I’m shifting my focus to rigging. I’m about to start creating the skeleton and controls for the model, so it can be animated smoothly.
This week, I went to the motion capture lab again and observed a mocap session. The process started with placing markers on the performer’s suit, which are tracked by cameras around the room. Once everything was set up, I watched a live capture where the movement was recorded and applied to a digital rig in real time.
After the capture, the data was cleaned up and prepared for animation. It was a good refresher on the mocap workflow and how it fits into character animation.
Sculpting Progress:
This week, I started sculpting the high poly details of my model. With the base mesh complete, I’ve been focusing on refining the anatomy, sharpening forms, and adding surface details like folds, wrinkles, and texture to bring more depth and realism to the character.
As part of the collaborative project with Aayushi, I also made some changes to the character’s design to better fit the direction we’re going for. These adjustments helped improve the overall look and make the sculpt more aligned with the project’s tone and goals.
The plan is to finish the full high poly sculpt by the end of this week. It’s coming along well, and I’m excited to see the final result take shape.
I had all my references organized in PureRef and a very clear vision for the character, so I began sculpting with a strong focus on staying true to those references to achieve a realistic and high-quality result. Before moving on to rigging or retopology, I wanted to make sure the sculpt was fully refined and polished.
At this point, I hadn’t started rigging or even planning it yet. During week 4, Aayushi approached me with her project idea, which was centered around the concept of identity. She asked if I would collaborate with her since she needed a character model that fit her theme. Since I was already sculpting this character, it made sense to work together.
I also saw this as a good opportunity to try out rigging in the future, once the sculpting was complete. I really liked Aayushi’s concept, so we decided to collaborate. This benefits both of us: I get to test and develop the rig, and she gets a character and rig tailored to her project.
We are now planning the upcoming weeks together to map out the next steps. For this week, my main goal is to finish sculpting the character before moving it into Maya for retopology.
This week, we explored how nDisplay works in Unreal Engine. It’s a tool used to project content onto screens that aren’t the usual flat rectangle—like curved or anamorphic displays. I’ve always been fascinated by those kinds of setups, so it was interesting to see how they’re actually built and controlled.
The setup process in Unreal is straightforward. You start a project with nDisplay enabled, bring in the 3D model of the screen you’re working with, and then line up your video or animation so it fits the surface correctly. If the screen is made up of multiple segments, Unreal handles each section individually rather than stretching one image across everything. It’s a smart system that allows for more complex, immersive displays.
Project Progress:
This week, I started sculpting the base mesh of my character in ZBrush. I began with a Dynamesh sphere and focused on blocking out the primary forms using the Move and Clay Buildup brushes. The aim at this stage is to establish the overall proportions and silhouette before moving on to secondary shapes.
I’m keeping the sculpt loose and focusing on structure rather than details. The main goal is to get the anatomy and form working as a whole. Once I’m happy with the base, I’ll start refining the shapes, breaking the model into subtools, and preparing it for retopology later on.
In Week 2, we learned how to set up VCam using our phones to control cameras inside Unreal Engine. The setup was fairly easy—we enabled the Live Link VCam plugin, connected our phones to Unreal, and were able to move the virtual camera just like holding a real one. It felt really intuitive and opened up new possibilities for capturing dynamic shots.
I went home and tried it again on my own, and it worked really well on my network. It made me realise I could definitely use this for my FMP later—I love the idea of acting as the camera person and physically moving through a virtual scene. It’s such a fun and useful tool, especially for creating more natural or cinematic camera movement.
For my artefact project, I’ve now decided I want to sculpt and rig my own character. I’ll use ZBrush for the sculpting and a rigging tool in Maya to create the rig. I’m excited to explore this because I think it will help me understand characters better and give me more control in how I animate. It’s a lot to learn, but it’s a direction I really want to go in, and this project is the perfect opportunity to try.
In Week 1, we were introduced to the unit brief for our Advanced and Experimental 3D Computer Animation Techniques project. We have to create an artefact video that feels like an experience—something unique that plays with storytelling, animation, and experimentation. It’s a creative unit, meant to be more exploratory, where we’re encouraged to try new techniques and push our boundaries.
Personally, because I want to go into character animation, I'm planning to explore rigging more deeply. I'd like to create a rig from scratch for a character I've made myself and use this process to support both storytelling and performance in my work. I'm hoping that by learning more about character rigging, I can bring more emotion and realism to my animations, while also improving my technical skills.
In class, we also talked about the concept of “experience” and how important it is when designing a piece. We looked at the different kinds of roles involved—audience, user, character, and avatar—and how they interact with a piece differently. Thinking about this helped me reflect on how to make something that really engages people and feels meaningful or immersive, even if it’s abstract or experimental.