Week 10: Collaborative Unit

We finalised rendering and composited everything together. I also created a title sequence for the project in After Effects. I used the Saber plugin to create the fire on the kanji and generated the smoke with animated fractal noise tinted red. For the sparks I used particles inside After Effects to create the effect of sparks rising through the orange smoke. I also decided that the kanji would need a translation, as most people cannot read it, so I created text with a screen-wipe effect and timed it with dissipating particle effects to make the text appear to turn into smoke.

However, I then received feedback that the translated text at the bottom didn’t really fit the overall look of the title. I therefore remade the bottom text to make the look more uniform. I think this makes the shot a lot more cohesive and fits the project well.

During this week we also scanned my collaborative partner wearing the outfit and the mask that our collaborator made and painted. The scanning process was fairly easy, as the technician did most of it for us and sent us the models with textures attached (though I rebuilt the material as an aiStandardSurface with the colour map plugged in, since the attached material was a Lambert).

I also lit and rendered the scanned model inside Maya, trying several different lighting setups and camera angles. The model had some issues – a few parts of the mesh had double-thickness geometry – but this was not very noticeable. Ideally I would have cleaned up the mesh and deleted the extra parts, but given the time constraints I left the model as is. I wanted to light the model in a fairly dark scene with red highlights, as I thought this would give the render a more sinister feel. I also wanted to keep some distance between the camera and the mask, as some parts of the scan, such as the mask, were not very high resolution, so close-ups tended to look bad. Using lighting to hide imperfections was effective: the extra geometry reads as extra shadow, which makes the model look cleaner. I tried a few different camera angles and shots as shown below, and eventually used the third one.

I also lit and rendered this scene with a seated pose. I wanted the figure’s face to be mostly in darkness to add mystery about who he is, while the baby and the general outline of the father were lit to show what is happening (a man cradling a baby). However, as the baby was not a real baby but just a blanket, keeping its “face” out of shot was a good idea. I tried several different lighting setups and camera angles for this shot too, as shown below, and chose the third.

I also lit the scene of the axe buried in a wooden stump. I found the stump on Quixel Bridge and used the axe modelled by my collaborator. As this shot belongs in the first half of the project, I didn’t want it to be very dark or red; instead I lit it as if in the evening or late afternoon. I also tested a few different lighting scenarios with different camera angles.

The downloaded assets that I used were mostly from Quixel Bridge, and are listed below with screenshots of their download pages. All other models were created and textured by me and Antonio.

I also used the following HDRIs:

Week 9: Collaborative Unit

During this week we scanned a model wearing Japanese attire, including a hat and robes. We could use these scans as the only character in our project, showing that the story is attached to a real person and isn’t completely abstract, whilst still using only objects for the majority of the project.

During this week I also finalised the rendering of the Torii gate scene, which establishes the location of the project. I ran a few different tests.

Although I don’t think there is anything wrong with the render, it was difficult to keep a cohesive style across our project, as we have a variety of different shots – some greyscale, some low-lit, some full environments. This can make the project feel a bit less connected; in future it would be better to decide on a specific colour palette, lighting scenarios, camera angles, etc. to keep the project better tied together. I tried a lot of different lighting setups for this scene, but settled on this one.

I also rendered the tree growth shots in this week.

I think I could have spent more time here on different lighting setups, and earlier in SpeedTree I could have given the tree bark more detail and used flower objects for leaves instead of the default flowers. However, with the time constraints I did not have time to experiment with this. The model could also have been subdivided for a smoother result, though I ran into issues as the model was an Alembic cache.

During this week I also thought about how I wanted to create the title text. I decided to create it in After Effects, as it is the industry-standard software for this kind of work. I had a few ideas, mostly involving blood or fire effects to reveal the title. As we were not completely sure of our project’s title yet, I explored several directions. I eventually settled on flames, as blood posed technical difficulties and was harder to make look good. The main story of the “show” would be one of vengeance, but I felt it would be fairly empty if the entire show were just the main character enacting revenge. Instead, I thought the word “grief” was apt, as the grieving process often changes and evolves over time, similar to how a character evolves over a TV series. I looked up the kanji for this to push the Japanese theme, and I also thought the flaming symbols would be impactful and leave an impression on the audience. The symbols would also need a translated element beneath them. After this I continued to build the scene with other elements associated with flames and destruction, such as red smoke and sparks. Below is a title I took some inspiration from.

Cinematic title with fire text effect (reference).

I also began to think about what effects would look good in our final compositing pass. I thought that having cherry blossom petals flying over the tree scene would make it feel a lot more dynamic and visually interesting than simply looking at a slightly swaying tree for 6 seconds.

I thought we should cut down some elements of the project, such as making the doll blood-splatter scene faster so that it is more impactful. We could potentially have used more short clips in the second half of the project, as a lot of media uses quickly sequenced imagery to push a feeling, especially of horror or negative events.

We met with our sound collaborators again this week to review their draft. We eventually had to use this unfinished draft: they had joined our project late, so it was difficult to get the animation to them on time, and they mostly had to create the audio without knowing exactly what the final result would look like – which must have been quite a difficult task. Nonetheless, I think they did well to create something that fit the theme of our project in the little time they had, though in future it could be refined further.

Week 8: Collaborative Unit

During this week I did a lot of test rendering and tweaked the lighting of scenes that had already been created. I tried different lighting setups and camera angles to consider what mood each shot was achieving.

I also animated the tree being set on fire this week. I tried several different methods, including Blender, Maya Bifrost and Maya fluids. In Blender I did not get the results I wanted: the fire emitted from all over the tree, producing too much fire. I had similar difficulties with Maya fluids, but I was able to overcome them by creating a density map for the fluid simulation, allowing me to control which areas of the tree emitted fire (this looked much more natural, as realistic fire does not emit uniformly from all around the burning object). Tweaking settings, especially in the fluid’s shading section, helped me achieve the look of fire I wanted.

After creating this, I built up the scene around the tree with more objects to add visual interest and help the scene feel more natural. I tested different lighting setups for both scenes and eventually found the lighting I wanted: the first half of the animation is more colourful and vibrant, while the second half is much darker, so I lit the scenes accordingly.

I also worked further on my gate scene, and decided to add some animated hanging banners to the gate. I found an online Blender tutorial for quickly making banners affected by wind, and although I wasn’t very familiar with Blender I was able to create this: a little wind animation on planes with a temple banner texture applied to them. I exported this to Maya as an Alembic and fixed the UV sets, which had been incorrectly assigned.

I decided that a burning incense stick, with smoke rising from the tip, would add visual interest to the gravestone scene; smoke also supports the overall fiery, burning theme of the second half of the project, representing destruction. First I needed to model an incense holder – I chose a copper bowl – as well as the incense stick. As these are basic models, I modelled and textured them fairly quickly.

To create the smoke I saw four options: Blender, Houdini, Maya Bifrost and Maya fluids. Blender has the upside of a lot of free online beginner tutorials, which was useful as I do not know Blender very well. However, I could not get the results I wanted in Blender, and exporting them to Maya would also have been an issue. Houdini had a similar problem – I was unfamiliar with the software and did not know how to export from it – though I did find a very good tutorial (linked below).

I started with Maya Bifrost, roughly following tutorials to get the kind of smoke that incense sticks normally emit, pictured below.

Burning incense stick with smoke (reference image).

However, Maya Bifrost seemed difficult to control and heavy on my computer, causing it to lag badly. I gave up on Bifrost and tried fluids instead. Initially I was not sure I could replicate the smoke above and opted for more basic, less interesting smoke. This was not very good, though, and I eventually found a cigarette smoke preset in the Bifrost browser that, with some tweaks, looked fairly good. However, on export to the gravestone scene there were issues setting it up again, and the smoke had to be scrapped altogether. Nonetheless, it was interesting to think about how I could use fire and smoke simulations, especially as I had already worked with them on the burning tree this project.

Week 7: Collaborative Unit

For this week I continued on the doll part of the project as well as the toys. I finished the spinning top and kendama toys that will be used for the simulation, texturing them in Substance Painter.

I also decided to animate the blood splatter on the doll. To begin with, I wanted to learn something like RealFlow or Houdini to fully simulate the effect of blood spraying onto the model. However, after speaking with a tutor, we agreed that as the blood splatter is so fast it can happen over the course of 5 frames, spending a long time learning Houdini for such a short part of the animation seemed like a poor allocation of time. I eventually decided to make it in Substance Painter as an animated shader instead, painting the blood on the model and saving 5 copies of the texture with progressively less blood each time. This gave me 5 texture sets that I could order as an image sequence in Maya, switching textures each frame as playback advanced. Maya had some issues with this, however, so I instead had to apply the first texture and render a frame, then the second and render a frame, then the third, and so on. This led to the effect of blood splattering onto the model, even though it is just a quick texture switch.
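The per-frame texture switch can be sketched as a simple frame-to-file mapping. The file names, frame numbers and texture count here are illustrative, not my actual project values:

```python
# Map an animation frame to one of five pre-painted blood textures.
# Texture 01 has the least blood and 05 the most, so the splatter
# appears to build up over five consecutive frames.

SPLATTER_START = 101  # hypothetical frame on which the splatter begins
NUM_TEXTURES = 5

def texture_for_frame(frame):
    """Return the texture file to apply at a given frame."""
    if frame < SPLATTER_START:
        return "doll_clean.png"          # before the splatter: clean texture
    step = min(frame - SPLATTER_START + 1, NUM_TEXTURES)
    return f"doll_blood_{step:02d}.png"  # after 5 frames, stays fully bloody

# Example: frames 100-106 walk through the whole switch sequence.
sequence = [texture_for_frame(f) for f in range(100, 107)]
```

In practice this is exactly the manual process described above – apply a texture, render one frame, apply the next – just expressed as a lookup.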

I first tried to set this up in Arnold using image sequencing.

This looked OK, but not right: I wanted the blood to look less like a dull overlaid texture and more as if a shiny liquid had stuck to the model. I re-rendered with a different material and lighting setup, with this result.

During this time I also started thinking about how I would do the scene where the tree is on fire. I looked at several different methods of creating fire in different software, such as Blender, Houdini, C4D and Maya (Bifrost and fluids). I initially wanted to use Blender and rendered a test on a sphere, shown below.

However, when I imported the tree model I wanted to add fire to and recreated the steps I had used for the spherical fire, it looked strange: too much fire was emitted from the shape, as if the tree were exploding. Shown below are the nodes I used in Blender to create the fire, as well as a rendered shot of it.

Later, I realised this was because you need to add a density map controlling where the fire is emitted from – a black-and-white map marking the areas you do and do not want fire to come from. This was useful when I used Maya’s fluid system to create the fire, creating a fluid box around the tree and changing parameters until the fire and smoke looked how I wanted.

Box

Incandescence was useful in making the fire and smoke look the way I wanted. As the emitter produces both the fire and the smoke, setting the end of the incandescence ramp to black made the edges of the fire look like they were giving off black smoke. Realistic fire contains many different colours, so this really added to the realism.
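The idea behind the incandescence ramp can be illustrated with a small colour-ramp sketch; the positions and colours below are illustrative, not my exact Maya settings. Sampling near the end of the ramp returns black, which is what makes the fire's edges read as dark smoke:

```python
# Linear colour ramp, in the spirit of a fluid incandescence ramp:
# hot yellow-white in the core, fading through orange and deep red,
# and ending in black so the fire's edges read as dark smoke.

RAMP = [
    (0.0, (255, 240, 180)),  # hottest: near-white yellow
    (0.4, (255, 120, 0)),    # orange mid-flame
    (0.7, (160, 20, 0)),     # deep red
    (1.0, (0, 0, 0)),        # black tip -> looks like smoke
]

def sample_ramp(t):
    """Linearly interpolate the ramp at position t in [0, 1]."""
    t = max(0.0, min(1.0, t))
    for (p0, c0), (p1, c1) in zip(RAMP, RAMP[1:]):
        if p0 <= t <= p1:
            f = (t - p0) / (p1 - p0)
            return tuple(round(a + (b - a) * f) for a, b in zip(c0, c1))
    return RAMP[-1][1]
```

Sampling at t = 0 gives the hot core colour, while t = 1 gives pure black, mirroring how the ramp maps fluid density or temperature to colour.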

Creating a density map allowed me to emit fire and smoke only from certain areas of the tree, making the fire look far more realistic. Having only certain chunks of the tree emit fire made it look much less uniform.

Density Map
Density map added.
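The black-and-white density map works like the sketch below: only texels above a noise threshold emit, so fire comes off the tree in irregular chunks rather than uniformly. This is a standalone illustration, not the actual map I painted:

```python
import random

# Build a small black-and-white density map: white (1) texels emit
# fire/smoke, black (0) texels do not. Thresholding smoothed random
# noise leaves irregular white chunks, so emission is non-uniform.

def make_density_map(width, height, threshold=0.6, seed=7):
    rng = random.Random(seed)
    noise = [[rng.random() for _ in range(width)] for _ in range(height)]

    def smooth(y, x):
        # Average a texel with its in-bounds neighbours to form blobs
        # rather than isolated single-texel speckles.
        vals = [noise[j][i]
                for j in range(max(0, y - 1), min(height, y + 2))
                for i in range(max(0, x - 1), min(width, x + 2))]
        return sum(vals) / len(vals)

    return [[1 if smooth(y, x) > threshold else 0 for x in range(width)]
            for y in range(height)]

density = make_density_map(16, 16)
emitting = sum(map(sum, density))  # number of emitting texels
```

Raising the threshold shrinks the emitting patches, which is the same effect as painting less white onto the map.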

I cached this simulation, but the render times were still very high due to the processing required.

I further developed the Torii gate scene this week too, starting with basic lighting and adding a few more Quixel assets to the scene.

We also updated our storyboard this week and assigned tasks. Our sound design students wanted a previsualisation after we met with them online, so I roughly created one using playblasts and filler images for renders that had not been created yet.

Week 6: Collaborative Unit

For this week I set up my tree scene in Maya and thought about which angles I wanted to show the tree growth from. Overall, I decided to start with several cameras showing the trunk, larger branches and smaller branches growing, followed by a wide shot of the whole tree surrounded by animated grass. I experimented with several different angles and focal lengths to see what might best show the growth, and tested basic lighting on the scene.

I decided to use Unreal Engine to simulate the grass movement, as I had found an online tutorial for creating animated grass. Unreal would render the grass better than Arnold, but I still wanted Arnold for the other elements of the tree, so I planned to render the grass and tree separately, composite them together and blend the two with lighting. In the end I didn’t use this Unreal scene, as I thought it would take too much time and, with little experience in Unreal, would not be a quick process. To make the grass, I created a plane and animated a texture with an opacity map on top of it, so that the grass sways as it would in real life without reading as visible planes. Duplicating these planes makes the grass look very dense as it sways in the wind.
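The sway on the duplicated grass planes can be sketched as a per-plane sine offset; giving each plane a phase based on its position stops the whole field from leaning in lockstep. The amplitude and frequency values here are made up for illustration:

```python
import math

# Horizontal sway offset for a grass plane at time t (seconds).
# Each plane gets a phase from its world-space x position, so the
# duplicated planes don't all lean the same way at the same time.

AMPLITUDE = 0.15  # metres of tip displacement (illustrative value)
FREQUENCY = 0.8   # sway cycles per second (illustrative value)

def sway_offset(plane_x, t):
    phase = plane_x * 2.0  # position-based phase offset
    return AMPLITUDE * math.sin(2 * math.pi * FREQUENCY * t + phase)

# A row of planes sampled at the same instant sways by different amounts.
offsets = [sway_offset(x * 0.5, t=1.0) for x in range(5)]
```

The same idea applies whether the motion is driven by a shader, a texture offset or a deformer: one shared waveform, desynchronised per instance.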

During this week I also put my gate into a scene and began to think about building the scene around it. I used several models from Quixel Megascans to fill out the scene, such as stairs, walls, and two pillars. These objects helped to make the scene feel more realistic.

This week we met with our sound design collaborators as well as our art collaborator for the mask. We gave the sound design students some idea of what kind of project we were creating, and made sure that they had a good grasp of what it was that we were going for. We suggested some ideas for music that we might potentially want or certain themes (such as Japanese styled instruments).

During the talks with our art collaborator (Kame) we discussed what kind of mask we would like in the scene and how we wanted it painted. The process of creating the mask led to some limitations, but we had a decent idea of what we could have by the time it came to scanning objects. We presented an image board for mask inspiration.

Week 5: Collaborative Unit

Summary of our work

Our project is a TV opening sequence in the style of the Daredevil, Westworld and Game of Thrones openings.

The purpose is to give a brief introduction to a theoretical TV series about a man from a historical Japanese village in medieval times. The overall story of the opening will be a precursor to the overarching story of the TV series, explaining how the main character became who he is.

The project will mostly use 3D models and lighting to tell a story, with minor animation using dynamic particle systems. Instead of telling the story in a traditional way, it will use objects to symbolise different parts of it – for example, showing a grave with flowers on it instead of showing a character dying.

The role assignments are that Antonio and I will both do some modelling and texture work; he will do more in the lighting area and I will do more in the environment and animation areas. Other students in our group will work with us on design ideas, and potentially a music/sound design student will be able to help us with that area.

What does the work aim to achieve? 

The concept is novel in that it requires the audience to do some work interpreting what is happening and what the overall story is. Although we will make some effort to ensure there is a cohesive, understandable story, the audience will need to make some judgements about what is happening.

The narrative is driven mostly by the order in which objects are shown, at times using rapidly changing scenes to drive home the connections between them – for example, showing a tree growing to represent a family growing, but adding a scene representing a baby (an object such as a crib or a baby toy) to help the audience understand that the tree is not just a tree. Once this connection is established, other things can be done with the tree, such as having it destroyed – the audience’s previous understanding lets them read this destruction as the destruction of the main character’s family.

The development process began with thinking of areas of 3D that we had not yet explored and were interested in attempting. As we both admired the TV openings of the shows mentioned above, a work inspired by them would be an interesting challenge, focusing mostly on areas we were not greatly confident in, such as particle systems and narrative through symbolism. Considering what different objects can mean, especially in combination, was interesting, and exercises such as saying a word and writing down what instantly comes to mind helped us think about what we subconsciously associate with objects.

The practical scope of the project requires fairly generalised knowledge of different areas of 3D: we will be using modelling, texturing, lighting, rendering, compositing, dynamic systems and animation skills to complete it.

Target users/audience

The target audience for this project will mostly be adults or more mature audiences as it deals with adult themes unsuitable for children such as violence.

This affected our decision to make the project look realistic, without much in the way of stylised models or animation. Bearing in mind that children will not be the audience also allows more freedom in what can be shown.

We chose this audience and demographic because the shows we drew inspiration from are mostly aimed at it. As we are attempting to make media similar to these shows, whilst still different enough to be its own thing, choosing the same audience made sense: people who enjoyed what inspired us will hopefully also enjoy a similar piece of media.

Technical

The techniques used to achieve our goal mostly involve 3D CGI software, including Houdini, Maya, SpeedTree, Substance Painter and Arnold.

Plans and timelines

We are aiming to finish our project by the deadline.

We test aspects of our progress as we go by making test renders and videos of effects – for example, previewing the tree animation inside SpeedTree before exporting it to Maya, or previewing fluid dynamics inside Houdini before exporting.

I also began working on the gate section of my animation this week, modelling and texturing a Torii gate. I wasn’t entirely sure how I would use it at this point, but I thought it would be a good way of symbolically showing the audience what is happening in the story.

I also worked on my SpeedTree tree, building it from scratch instead of using presets. The process uses a node system to develop the tree step by step. It starts with drawing out the trunk using points, which gives a lot of control. Further branches can then be added, parented to the level above, getting smaller and more numerous at each level. Eventually you can add roots as well as decorations such as leaves.

You can also create materials in SpeedTree for the tree bark and the flowers. There are a lot of customisation options to further change your tree, including parameters for how the bark looks, how much the tree twists, how tall it grows, etc. You can also animate the tree and add forces such as wind. I applied these and exported the tree as an Alembic cache.

Week 4: Collaborative Unit

This week in the collaborative unit I came up with ideas for several of the models to use in the project. I knew we needed toy models to represent the main character’s children. After some research into what toys were available in medieval Japan, I concluded that I would model a doll, a kendama (cup-and-ball toy) and a spinning top. As these were all fairly simple models, I could model, UV and texture them very quickly. During this process I also considered where and how I’d want to use these models, and in our weekly collaborative call we decided to have a scene of all the toys falling to the ground, followed by a close-up of the doll being splattered with blood.

During this week’s work I researched techniques for creating a tree-growth animation. Initially I tried Blender, following an online tutorial, but quickly found the result unsuitable for what I wanted to create. I then found a third-party application called SpeedTree, which lets the user build custom trees and automatically “grow” them over time as part of the exported file. I used the nodes inside SpeedTree to create a Japanese sakura tree, which made the process feel much more unique than grabbing one of the free preset trees.

I talked to my flatmate, Kame, who – as I had seen on Instagram and learned from talking in person – had experience creating and painting masks. I thought it would be interesting for him to create a mask that the main character in our project could wear; we could then scan it into 3D space and light and frame it however we wanted. Pictured below are the masks Kame has created before, which we noticed and were impressed by.

Week 3: Collaborative Unit

This week focused more on what each scene would look like: the overall story, camera angles, lighting and visual style. The story can be changed and tweaked later, but for now it was useful to get an idea of what needs to be created, which tasks need completing and how we anticipate solving them. We thought about how to represent the overall story without telling it directly, instead using objects to symbolise the events – keeping the story easy enough to understand whilst still asking the audience to do some work. For example, to represent the killing of the main character’s family, instead of showing people being murdered directly, we can use well-known tricks such as showing blood splattering out of shot. I thought that blood splattering over toys would be an interesting way to show this, and it was interesting to consider how I could do it – filming it in real life, using dynamic systems in 3D, compositing blood over the top of the scene, etc.

Week 2: Collaborative Unit

For week 2 of the collaborative unit, the important work was choosing a theme and story and deciding what kinds of scenes would go in the final project. We used Miro.com to build a mood board with different inspirations: TV openings, lighting references, etc. Considering with an open mind which skills collaborators could bring helped us weigh options that were not initially obvious. We wanted to mix real life into our digital work using 3D scanning, and creating the objects for this scan was something someone with skills we lacked could do – for example, a sculptor creating a mask, or someone with drawing experience producing concept art.

In most TV openings, music heavily affects the final result, so finding a collaborator with skills in music and/or sound design felt like a good idea. During this process it was interesting to talk to students I knew from social settings and consider how they could become collaborative partners and what skills they could bring to the project.

Moodboard
TV openings

Week 1: Collaborative Unit

To begin the collaborative unit I went to the meetup with people from several other courses, though I didn’t have much of an idea of what I wanted the project to be. Not many others had a clear idea of what they wanted to make either, and I was not very interested in making a game or a VR experience, which ruled out a lot of people at the meetup. Instead, I wanted to collaborate with students from courses such as Film, Music and Art to help with the creative direction, with me and other students from MA 3D Animation responsible for most of the technical work. Personally, I have always been interested in TV show openings, as I think they are an important part of hooking an audience into watching a show – specifically openings that make heavy use of lighting and detailed models with often minimal animation, giving the audience a more subjective sense of what is happening and what is being represented without outright telling them.