Week 12: Showreel Submission

This is my submission for Term 1.

Design Proposal Submission

Week 11: Sequencers and Rendering

This week I focused on the Sequencers, camera focus, and timing. For one of the scenes, I created a Niagara effect of dust particles bursting outward in a circle when the character hits the ground, with some camera shake added for dramatic effect.

After I was done with all the camera focusing and timing, I added Bloom and LUTs to the cameras, turned on Motion Blur, and assembled all the Sequencers into a Master Sequencer for rendering.
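
As a rough sketch of how those per-camera tweaks could be scripted with Unreal's editor Python API (the asset path and the values here are placeholders, not the ones from my project):

```python
import unreal

# Assumes a CineCameraActor is currently selected in the level.
camera = unreal.EditorLevelLibrary.get_selected_level_actors()[0]
cine = camera.get_cine_camera_component()

# Post-process settings are a struct, so edit a copy and write it back.
pp = cine.post_process_settings
pp.override_bloom_intensity = True
pp.bloom_intensity = 1.0              # placeholder value
pp.override_motion_blur_amount = True
pp.motion_blur_amount = 0.5           # placeholder value
pp.override_color_grading_lut = True
pp.color_grading_lut = unreal.load_asset("/Game/LUTs/LUT_Night")  # hypothetical path
cine.post_process_settings = pp
```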

During rendering I faced some issues with the cloth simulation: the cloth was behaving strangely, and I eventually found that Anti-Aliasing was causing the problem. So in Movie Render Queue I turned off Anti-Aliasing and, to compensate, added some extra Console Variables to the renderer to enhance the quality of the renders.
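
I did not keep a record of the exact Console Variables I used, but quality-boosting CVars like the following are typical; in Movie Render Queue they would normally go into a Console Variables setting, though they can also be pushed from editor Python as a quick sketch:

```python
import unreal

# Illustrative quality CVars and values, not my exact setup.
for cvar in (
    "r.MotionBlurQuality 4",
    "r.DepthOfFieldQuality 4",
    "r.ShadowQuality 5",
    "r.ScreenPercentage 125",
):
    unreal.SystemLibrary.execute_console_command(None, cvar)
```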

I rendered all the scenes in the Digital Space, as my device was not powerful enough to render everything. After the rendering was done, I did some colour correction and added sound effects to the video in Adobe Premiere Pro, and with that I completed my Unreal Engine assignment.

Week 10: Adding Animation and Cloth Simulation

So this week I started on the animation, adding cameras to my scene and creating Sequencers with the different cameras and character animation. The problems I faced this week were mostly related to cloth simulation and collision. At first I created a Character Blueprint with the cape added as a modular mesh, but there were two problems with that: it is either not possible or very difficult to add a Character Blueprint with a selected animation to the Sequencer, and even though I assigned the cape's skeletal mesh to the character's skeleton, the cape was not colliding with the character.

Even after watching every YouTube video I could find on this issue and trying everything, I could not make the cape collide with the character mesh, whose bones had been retargeted to the UE4 Mannequin's bones. To solve this, I went back to Maya, added the cape to the character's FBX file, rigged it again using AccuRig, and then followed the same steps as before to add the character back into Unreal Engine 5.

After all this I had a character with the cape built in and no longer needed a Character Blueprint, so the Sequencer issue was solved as well. I then added Cloth Simulation with painted weights to the cape and adjusted the character's hit box to make the cape behave properly.


For the sword, I created a socket on the character's right hand and attached the sword to it, so the sword now moves with the character and its animations. In short, this week I fixed the cloth simulation, added the sword to the character, and worked on the animations and camera angles to sync them all properly.
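
For reference, the same socket attachment could be done in editor Python roughly like this (the socket name and the assumption that the character and sword are the selected actors are mine, not from the project):

```python
import unreal

# Assumes the character and the sword are the two selected level actors.
character, sword = unreal.EditorLevelLibrary.get_selected_level_actors()[:2]

# Snap the sword to the hand socket so it follows the animation.
sword.attach_to_actor(
    character,
    socket_name="RightHandSocket",  # placeholder socket name
    location_rule=unreal.AttachmentRule.SNAP_TO_TARGET,
    rotation_rule=unreal.AttachmentRule.SNAP_TO_TARGET,
    scale_rule=unreal.AttachmentRule.KEEP_WORLD,
    weld_simulated_bodies=False,
)
```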

Week 9: Adding Character in the Project

I finally found a character that I want to add to my project, because I think it blends very well with its aesthetics. I imported the character into Maya and gave it a basic rig with HumanIK so that I could transfer animations to it and then import it into Unreal. But everything started getting more and more complex: the animations I wanted did not have a T-pose, which made it hard to use HumanIK on them. Being new to this, I tried everything to learn and do it properly, but it always felt like I had hit a wall; no matter what I tried, it never worked, and it was too time-consuming. So I made some basic animations myself, then imported the model into AccuRig, a free rigging tool, and brought that model into Unreal, since I was having problems importing the rig directly from Maya.


After struggling a lot with the character and its rig, I imported it into Unreal Engine with its animations and blended them. Since I had two copies of the same character (one with the Maya rig and animation, and one from AccuRig), I used the AccuRig version and retargeted it to the UE4 Mannequin skeleton. That let me use the free Unreal Engine animations from the packs I own and blend them with the animation I had created in Maya.

So this week I spent most of my time fixing the character I got from Sketchfab and figuring out how to apply animations to it properly.

Week 8: Term 1 Project Progress

This week I finished grey boxing, imported the meshes into Unreal Engine, and placed them properly. I also started looking for a character to animate and add to my project. After importing the basic shapes into Unreal, I began adjusting the atmosphere and sky to set the mood, as I am going for a dark, red, night-like atmosphere.

Next I started combining different meshes from Fab, Quixel Bridge, and Sketchfab, and making modular meshes to replace the basic shapes. I combined different meshes, tweaked their texture files a little to blend them into each other, and then grouped them into 'Packed Level Instances', which gave me the building blocks for the Cathedral.

Week 7: Review Session

This week we all had to show Serra our progress on the Term 1 project, so I got a review of my project and of what I have done so far.

This week I focused on creating a map layout for my project, as I am going to build a Gothic-style cathedral inspired by 'Elden Ring'. For that I needed a plan and a basic layout.

After I was done with the layout, I started placing basic shapes in place of the buildings to get an idea of their size and how they will look. The basic shapes serve as placeholders until I finish grey boxing, so I know exactly where each detailed mesh has to go.

Grey boxing will take some time to complete because the building I am going for is quite huge and will contain a lot of small modular meshes. I am doing the grey boxing in Maya and will then import it into Unreal Engine to replace the basic shapes with the detailed, textured meshes.

Week 6: Understanding Control Rigs in Unreal Engine 5

This week, I looked into Control Rigs in Unreal Engine 5, learning how to use them for animating characters. It was all about setting up rigs and making sure everything was in place for smooth animations.

Getting Started with Control Rigs

First, I added a Control Rig to my project via the Content Browser under the Animation section. I started by creating a Modular Rig, which normally involves adding the default mannequin's skeletal mesh into the blueprint, though instead of the default mannequin I used a Paragon character from Fab. I then dragged and dropped the rig modules onto the available sockets on the character's skeleton to set it up.

I could also customize the rig by adjusting the controller sizes and colors to fit my preferences, which was super useful for making the rig easier to work with.

Rigging the Octopus Model

Next, I worked with an octopus model to practice a different type of rigging. The first step was editing the skeleton and setting the root bone’s transform to 0. I deleted the existing arm bones but kept the shoulder bone. Then, I added bones starting from the shoulder, creating joints for the arms, and bound the skin to the skeleton.

I added a Control Rig Samples Pack from Fab to my project and created a regular control rig for the octopus. After importing the octopus into the rig hierarchy, I checked out the chain hierarchy to understand how the bones connected.

Creating and Adjusting Controls

I created controls for the shoulder and the last joint of each arm by selecting the bones and right-clicking to add a new control. To make sure the control wasn’t a child of another control, I pressed Shift + P. I then set up a parent constraint by assigning the root bone in the name section. By parenting the root bone control and attaching it to the shoulder bone, I ensured that the control could transform the bones it was linked to. I repeated this for all the shoulders.

Adding Spring Interpolation and Constraints

I experimented with SpringInterp, using its vector version. I dragged the last bone of the arm into the graph, got the bone's data, and connected its translation to the target. Then I linked the control for that bone to the Set Control node, which applied the changes.

I also set up an Aim Constraint to control the direction the arm would face. I defined the name and target for the arm’s aim, set the Aim Control’s parent to Control, and connected the nodes to make sure everything worked together.

Rigging a Human Character

Lastly, I applied what I had learned so far by rigging a human character. Using the nodes and techniques I had practiced, I was able to set up the character’s joints and controls, giving me a much better understanding of how to prepare characters for animation. This emphasized the importance of proper joint placement and control creation to ensure smooth, realistic animations.


This week was a great opportunity to learn how to use Control Rigs effectively in Unreal Engine. It’s exciting to see how these tools help set up characters for animation, and I can’t wait to continue building on this knowledge!

Week 4: Sequencer and Materials

In Week 4, I further explored the Sequencer in Unreal Engine and then progressed to working on creating Materials.

Initially, I learned how to create Shots in Sequencer, which is essential for maintaining organization when working with multiple cameras. To create a Shot, you first need to add a Subsequence track from the Add button in the Sequencer menu. This Subsequence Track serves to organize and manage the various cameras assigned to different tasks.

To create Shots, we first need to generate a Level Sequence from the Cinematics menu and ensure proper naming for organization. These sequences are then added to the Subsequence Track, where their lengths can be adjusted. Once the Shots are in the Subsequence Track, we can assign cameras to them by selecting the camera, pressing Ctrl + X, and then double-clicking on the desired Shot to paste the camera into it.
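
As a sketch, the same shot setup can be scripted with the Sequencer Python API. The asset names and the 120-frame length below are placeholders, and in newer engine builds add_master_track may appear as add_track:

```python
import unreal

tools = unreal.AssetToolsHelpers.get_asset_tools()

# Create a shot sequence asset (name and path are placeholders).
shot = tools.create_asset("Shot_0010", "/Game/Cinematics/Shots",
                          unreal.LevelSequence, unreal.LevelSequenceFactoryNew())

# Add it to a Subsequence track on the master sequence.
master = unreal.load_asset("/Game/Cinematics/MasterSequence")  # hypothetical path
sub_track = master.add_master_track(unreal.MovieSceneSubTrack)
section = sub_track.add_section()
section.set_sequence(shot)
section.set_range(0, 120)  # start/end frames for the shot
```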

This week, I also learned how to create Materials in Unreal Engine. I began by downloading a Material from Quixel Bridge and importing it into Unreal Engine. I then created a new Material and used the Material’s node editor to add the Normal, Base Colour, and Roughness maps of the imported Material. I learned that the R channel is for Ambient Occlusion, the G channel for Roughness, and the B channel for Displacement. I connected the RGB channel of the Colour Map to the Base Colour, the G channel of the Roughness Map to the Roughness input, and the RGB channel of the Normal Map to the Normal input of the new Material.
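
Here is a minimal scripted version of those connections using MaterialEditingLibrary, assuming hypothetical texture paths and the channel packing described above:

```python
import unreal

mel = unreal.MaterialEditingLibrary
tools = unreal.AssetToolsHelpers.get_asset_tools()

# New empty material (name and path are placeholders).
mat = tools.create_asset("M_Stone", "/Game/Materials",
                         unreal.Material, unreal.MaterialFactoryNew())

def add_texture(path, x, y):
    node = mel.create_material_expression(mat, unreal.MaterialExpressionTextureSample, x, y)
    node.texture = unreal.load_asset(path)
    return node

colour = add_texture("/Game/Textures/T_Stone_D", -600, -300)  # hypothetical paths
packed = add_texture("/Game/Textures/T_Stone_ORD", -600, 0)   # R=AO, G=Roughness, B=Displacement
normal = add_texture("/Game/Textures/T_Stone_N", -600, 300)
normal.sampler_type = unreal.MaterialSamplerType.SAMPLERTYPE_NORMAL

mel.connect_material_property(colour, "RGB", unreal.MaterialProperty.MP_BASE_COLOR)
mel.connect_material_property(packed, "G", unreal.MaterialProperty.MP_ROUGHNESS)
mel.connect_material_property(normal, "RGB", unreal.MaterialProperty.MP_NORMAL)
mel.recompile_material(mat)
```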

Next, I experimented with adjusting the tiling of the Material by adding Texture Coordinate, Multiply, and Add nodes. This allowed me to modify the tiling based on specific needs, and I tested different values to explore various outcomes.
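
Continuing the sketch above, the tiling setup (TextureCoordinate multiplied by a scalar, fed into each sampler's UVs; an Add node can offset the UVs the same way) would look roughly like this:

```python
# Tiling control: TexCoord * scale, wired into a sampler's UVs input.
texcoord = mel.create_material_expression(mat, unreal.MaterialExpressionTextureCoordinate, -1000, 0)
tiling = mel.create_material_expression(mat, unreal.MaterialExpressionScalarParameter, -1000, 150)
tiling.parameter_name = "Tiling"
tiling.default_value = 4.0  # placeholder tiling factor

multiply = mel.create_material_expression(mat, unreal.MaterialExpressionMultiply, -800, 0)
mel.connect_material_expressions(texcoord, "", multiply, "A")
mel.connect_material_expressions(tiling, "", multiply, "B")
mel.connect_material_expressions(multiply, "", colour, "UVs")  # repeat for the other samplers
```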

After creating a Material, we were instructed to generate a Material Instance by right-clicking on the created Material and selecting the Material Instance option. The key difference between a Master Material and a Material Instance is that the Material Instance inherits all the properties from the Master Material and allows for real-time updates and adjustments.
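
A scripted equivalent of creating and tweaking an instance, again with placeholder names:

```python
import unreal

mel = unreal.MaterialEditingLibrary
tools = unreal.AssetToolsHelpers.get_asset_tools()

# Create the instance and parent it to the master material (hypothetical paths).
instance = tools.create_asset("MI_Stone", "/Game/Materials",
                              unreal.MaterialInstanceConstant,
                              unreal.MaterialInstanceConstantFactoryNew())
mel.set_material_instance_parent(instance, unreal.load_asset("/Game/Materials/M_Stone"))

# Instance parameters update in real time, with no recompile of the master.
mel.set_material_instance_scalar_parameter_value(instance, "Tiling", 8.0)
```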

Week 3: Sequencer

This week, we began working with the Unreal Engine 5 Sequencer. I learned how to translate film production techniques into the Unreal Engine Sequencer, with an emphasis on understanding the key differences between traditional film production and the Unreal Engine workflow.

Film

1. Master Scene Technique (Master Shot)

  • This refers to filming an entire scene in a single, continuous shot from a wider angle, usually capturing the entire set and all the actors involved. The master shot provides coverage for the whole scene, establishing the geography of the action and relationships between the characters.

2. Coverage Cameras

  • Coverage refers to additional shots taken from various angles and distances after the master shot. These include close-ups, over-the-shoulder shots, and inserts that help emphasize emotions, reactions, and details. These are edited with the master shot to build the final scene, ensuring smooth continuity and flexibility during the editing process.

3. Linear vs Non-Linear Storytelling

  • Linear: A narrative that progresses in chronological order, moving from beginning to end without jumping in time.
  • Non-Linear: A narrative that doesn’t follow a strict chronological order. It may involve flashbacks, flash-forwards, or scenes out of sequence to create a more dynamic or layered story.

4. Triple Take Technique

  • This is when a director asks for three takes of a specific shot, often with slight variations in performance, framing, or blocking. It gives editors multiple options to choose from during post-production and helps ensure the scene is captured effectively.

5. Overlapping Action

  • A technique where parts of a movement or action are staggered or delayed, making it more realistic. For example, when a character turns, their body, head, and arms don’t move at the same time. This is common in animation, but in live-action, it’s often seen in the timing and natural flow of actions.

6. Hitting Marks

  • In film production, actors are given specific spots (marks) where they need to stand or move during a scene for the best camera angles, lighting, and focus. “Hitting the mark” means landing on that spot accurately while delivering a performance.

7. Traditional Live Action Production Roles

  • Gaffer: The head electrician on set, responsible for managing lighting setups based on the cinematographer’s vision.
  • Grips: Technicians who handle the rigging and setup of equipment (e.g., lights, dollies, scaffolds). Key grips lead this team.
  • Production Manager: Oversees the logistics of the production, ensuring that the schedule, budget, and resources are well managed.
  • Director of Photography (DP): Responsible for the visual look of the film, working closely with the director to choose camera angles, lighting, and shot composition.

These are all crucial elements of traditional live-action filmmaking, giving a solid foundation for understanding how a production comes together from both creative and technical standpoints.

Unreal Engine

1. Sequence-Based Linear Workflow

  • One Level Sequence: This refers to the organization of a scene or project as a single, continuous timeline or sequence. In linear workflows, this sequence progresses from start to finish, without any jumps or re-arrangements. It’s simpler but may lack the flexibility of non-linear editing.
  • Multi-Camera: In a multi-camera setup, several cameras record a scene from different angles simultaneously. This allows for capturing various perspectives (e.g., wide shots, close-ups) in one take, which can then be edited together in post-production.
  • Single Camera Cuts Track: In a single-camera setup, each shot is filmed separately, one at a time. The footage is then edited together in post-production. “Cuts track” refers to editing the individual shots together in sequence on a single track in the timeline.

2. Shot-Based Non-Linear Workflow

  • Nested Level Sequences: These are sub-sequences or smaller units of a larger project that can be worked on independently and then combined into the main timeline. In a non-linear workflow, scenes can be rearranged, edited out of sequence, or altered without needing to stick to a strict order. Nested sequences are particularly useful for complex projects where different parts of a scene are handled separately and then “nested” back into the main project.
  • Take System: A system used to manage multiple versions (or “takes”) of the same shot or scene. This allows the editor or director to easily switch between different takes to find the best one for a specific moment in the timeline. It’s common in both animation and live-action editing.
  • Sub-Scene Tracks (Optional): Optional tracks that allow specific elements of a scene, such as animation, sound, or visual effects, to be edited and adjusted independently. This provides flexibility for making changes to one part of the scene without affecting the whole.

3. Multi-Artist Collaborative Workflow

  • Sub-Levels: Sub-levels are subdivisions of the main project that different artists can work on independently. In complex projects, having sub-levels ensures that multiple people can collaborate on different parts without interfering with each other’s work.
  • Sub-Scene Tracks: Similar to sub-levels but more granular, sub-scene tracks allow various aspects of a scene (e.g., animation, lighting, or sound) to be separated and worked on individually. This modular approach is useful in a collaborative environment where specialists focus on their specific areas.
  • Visibility Tracks: These control the visibility of certain elements or layers in a scene. For example, an artist may choose to hide specific props, characters, or effects during editing or rendering to focus on a particular aspect of the project.

Workflow Comparison

  • Linear Workflow is straightforward, where everything is worked on in a fixed order from start to finish. It’s more commonly used in traditional filmmaking where the story or scenes are shot in sequence.
  • Non-Linear Workflow allows for flexibility, where shots, scenes, and edits can be arranged, changed, or modified out of order. This is especially useful in animation and VFX-heavy productions, where various artists can work on different parts of the project simultaneously, and the final sequence is pieced together later.

These workflows are critical for large-scale productions, especially when teams need to collaborate efficiently without stepping on each other’s toes or slowing down the process.

After understanding the key differences between traditional film production and Unreal Engine, we began working with the Sequencer. I learned that the Sequencer is Unreal Engine’s Non-Linear Editing Tool, offering features such as:

  • Ground-up Shot Creation: Allows creators to build individual shots from scratch within the Unreal Engine, providing full control over elements like camera angles, lighting, and scene composition.
  • Pre-visualization: A tool used to create rough versions of scenes or sequences before full production. It helps visualize how a final scene will look and aids in planning and decision-making.
  • Full Film Creation: Unreal Engine can be used for the end-to-end creation of entire films, from pre-production to final rendering, providing a virtual production environment.
  • Game Cinematic Creation: The tool is also used to create cinematic sequences for games, helping to craft narrative-driven cutscenes or trailers with high-quality visuals.

This makes Unreal Engine highly versatile for both film and game industries.

I learned that in film, the narrative is typically structured as a collection of sequences, often following a three-act structure. In contrast, Unreal Engine uses a Level Sequence, which organizes Nested Level Sequences to build an entire narrative film.

We were introduced to two new terms: Possessable and Spawnable actors. Possessable actors are like zombies: they already exist in the level and the Sequencer simply takes possession of them, so they are visible in the scene at all times no matter what. Spawnable actors are like angels: they are owned by the Level Sequence, and we have to call (spawn) them for them to appear in the scene.

Afterward, we created a sample project named DMX Previs Sample to learn how to create new cameras within the scene and animate their movements, all while gaining experience with the Sequencer and the process of adding cameras and other objects to it so that we can keyframe and animate them.

We learned to create Spawnable Actors after adding the models to the Sequencer. By right-clicking on the object I wish to convert into a Spawnable Actor and selecting Create Spawnable within the Sequencer, the object becomes a Spawnable Actor. This ensures that the object is always available in the Sequencer whenever we need to render the scene or access the Sequencer.
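
The scripted equivalent, assuming the target actor is selected in the level and using a placeholder sequence path:

```python
import unreal

seq = unreal.load_asset("/Game/Cinematics/MyLevelSequence")  # placeholder path
actor = unreal.EditorLevelLibrary.get_selected_level_actors()[0]

# Convert the level actor into a spawnable owned by the sequence.
spawnable_binding = seq.add_spawnable_from_instance(actor)
```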

We created a Level Sequence and then opened the Sequencer to add a camera to the DMX Previs Sample scene. After incorporating the camera into the Sequencer, we adjusted the focus property from frame 1 to frame 100 and keyframed it to create a simple animation.
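
A rough Python version of that focus animation (the property path and the distance values are my assumptions, not the exact ones from class):

```python
import unreal

seq = unreal.load_asset("/Game/Cinematics/MyLevelSequence")  # placeholder path
camera = unreal.EditorLevelLibrary.get_selected_level_actors()[0]  # a CineCameraActor

# Bind the camera component and add a float track for its manual focus distance.
binding = seq.add_possessable(camera.get_cine_camera_component())
track = binding.add_track(unreal.MovieSceneFloatTrack)
track.set_property_name_and_path("ManualFocusDistance",
                                 "FocusSettings.ManualFocusDistance")
section = track.add_section()
section.set_range(1, 100)

# Key the focus from near to far across the shot (placeholder distances, in cm).
channel = section.get_all_channels()[0]
channel.add_key(unreal.FrameNumber(1), 150.0)
channel.add_key(unreal.FrameNumber(100), 1000.0)
```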

This week, I experimented with camera settings and movements to create various camera animations while enhancing my understanding of cameras in Unreal Engine, all while learning to use the Sequencer effectively.

Week 2: Unreal Engine World Building

This week, we delved deeper into Unreal Engine, beginning with the fundamentals such as the tools, user interface, and various features. We also explored the organization of elements within Unreal Engine and learned how to effectively manage levels inside a project.

When doing world partition and open-world level design, Unreal Engine is faster, easier, and more efficient, and with the introduction of Nanite it can become incredibly detailed.

The assets we import are saved in the Content folder as .uasset files, levels are saved as .umap files, and projects as .uproject files.

If you don't know where something is placed and can't find it, just search for it inside Unreal Engine.

Upon launching Unreal Engine, the Unreal Project Browser appears, allowing us to choose a location for our project and assign it a name. Today, we selected “Film/Video & Live Events” as the project type and opted for Starter Content. The Raytracing option is optional and can be enabled later within the project.

Content Drawer Shortcut: Ctrl + Space

To enlarge the viewport, press F10; we still have access to the Content Drawer and other options. F11 makes the viewport full screen, and then the Content Drawer cannot be used.

In this lecture, I learned that organizing assets into levels can enhance efficiency by facilitating the separation of different elements, making them easier to locate. Typically, there is a main level, within which we can create sub-levels.

Lighting is essential for creating realistic environments. I explored Unreal's advanced lighting tools, including the Environment Light Mixer, Directional Light, Sky Atmosphere, Sky Light, Exponential Height Fog, and Volumetric Clouds. Rotating the Directional Light changes the time of day in the scene; Ctrl + L + Left Click Drag is the shortcut for rotating it.

Created a new Emissive Material by multiplying a colour with a parameter and connecting the Multiply node's output to the Emissive Colour channel; a scripted sketch of this setup follows the shortcuts below.

Shortcut for creating a Constant3Vector (three-channel colour constant) in Materials: 3 + Left Click

Shortcut to create a scalar Constant: 1 + Left Click

Just drag and drop the Emissive Material onto any object to apply it.
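
A minimal scripted sketch of this emissive setup (the names and the intensity value are placeholders):

```python
import unreal

mel = unreal.MaterialEditingLibrary
mat = unreal.AssetToolsHelpers.get_asset_tools().create_asset(
    "M_Emissive", "/Game/Materials", unreal.Material, unreal.MaterialFactoryNew())

colour = mel.create_material_expression(mat, unreal.MaterialExpressionVectorParameter, -600, 0)
colour.parameter_name = "GlowColour"

strength = mel.create_material_expression(mat, unreal.MaterialExpressionScalarParameter, -600, 200)
strength.parameter_name = "GlowStrength"
strength.default_value = 10.0  # placeholder intensity

# Colour * strength drives the Emissive Colour channel.
multiply = mel.create_material_expression(mat, unreal.MaterialExpressionMultiply, -400, 100)
mel.connect_material_expressions(colour, "", multiply, "A")
mel.connect_material_expressions(strength, "", multiply, "B")
mel.connect_material_property(multiply, "", unreal.MaterialProperty.MP_EMISSIVE_COLOR)
mel.recompile_material(mat)
```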

The Quixel Bridge add-on allows for quick world-building by importing assets into the scene. Assets can be downloaded in Quixel Bridge and then easily dragged and dropped into Unreal Engine.

Props can be merged using the “Merge Actors” option in the Tools menu. Grouping props allows for rearrangement or further editing, but if no changes are needed, they can be merged into a single entity.

Landscapes can be added to a scene, or a new landscape can be created using Landscape Mode, which also allows for sculpting and shaping the terrain.