When we were approached with this collaboration, we already had a rough idea of who would want which role, as there are only seven of us in the class. After some discussion we agreed on the following:
- Creative Directors – Billy, Colin
- Modellers – Anita, Jamie, Kamilla
- Animators – Billy, Margaret
- Programmers & VFX – Will
- Sound Technicians – Colin
The creative directors decide the overall direction and scope of the project and assign tasks to everyone else. The modellers use Autodesk Maya or Blender to create 3D models, with the level of detail depending on how close each model is to the user. The animators create animations in either Autodesk Maya or Unity, depending on the complexity of the animation and its distance from the user. The programmers do all the coding, put everything together in Unity, and make sure it all works before the deadline. The sound technicians collect sound samples that could be useful for the project and import them into Unity, making sure they work as intended. Everyone also pitches in on any work that isn't covered by these roles, or when there is a particularly heavy workload for too few people.
We made a board on Trello to collect ideas and plan the project. It includes resources, references, notes, meeting notes, and sections for specific areas of development, e.g. Animation and VFX.
We were given a low-resolution image of the painting we would be basing the project on, and decided it would be a good idea to find a higher-resolution image and to go to Sheffield to see and understand the area for ourselves. The high-resolution image file is too big to upload here, but below are the low-resolution image and some pictures from Sheffield.




Whilst in Sheffield we used the Polycam app to create 3D models of the environment in real time. Although we aren't actually using these models, they help to give a sense of scale for the painting. Also, they're pretty cool.



On the train ride back to London we started brainstorming seriously. The notes were jotted down in a notebook.



Using this modified painting file, the modellers went to work creating some of the objects highlighted in blue or green. The rest of us were given tasks outlined on the Trello board that didn't fit neatly into any of the assigned roles. I created a particle system for falling leaves, as we thought it would be nice to have them fall into the room from the painting. This is the video I followed to create it: https://www.youtube.com/watch?v=wQJ0_TqoLr4. I then copied this particle system and modified it to create steam for a steam train that will run past the user.
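For anyone curious, turning the leaves into steam mostly came down to changing a few modules on the copied system. The sketch below is a rough illustration in Unity C# of the kind of changes involved; the actual tweaks were made in the Inspector, and the numbers here are placeholders rather than our final settings.

```csharp
using UnityEngine;

// Illustrative sketch: turning a copy of the falling-leaves particle
// system into train steam by adjusting a few modules at runtime.
// Values are placeholders, not our final Inspector settings.
public class SteamFromLeaves : MonoBehaviour
{
    [SerializeField] private ParticleSystem steam;

    private void Awake()
    {
        var main = steam.main;
        main.startColor = new Color(0.9f, 0.9f, 0.9f, 0.6f); // soft grey-white puffs
        main.startLifetime = 2.5f;                           // shorter-lived than drifting leaves
        main.startSpeed = 1.5f;
        main.gravityModifier = -0.1f;                        // slight upward drift instead of falling

        var emission = steam.emission;
        emission.rateOverTime = 40f; // denser than the occasional leaf
    }
}
```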
The original plan was to use texture projection to texture the models, and to create a terrain based on the painting using Google Maps data with the modern buildings removed. We ran into many issues with this approach, mainly that the terrain model wasn't working properly and would create distortions when texture projecting. After seeking help for a few days, Billy and Margaret were finally advised to create a parallax effect using multiple planes, each showing a different part of the painting. So they set about creating a set of billboards for us to arrange and use.
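As a rough illustration of the billboard idea (not Billy and Margaret's exact setup), each painting plane can be kept turned towards the user while staying at its own depth in the scene, which is what produces the parallax between layers:

```csharp
using UnityEngine;

// Rough illustration of a billboard plane (assumed sketch, not our
// exact script): turn the plane towards the user around the vertical
// axis only, so layers don't tilt when the user looks up or down.
public class PaintingBillboard : MonoBehaviour
{
    private Transform cam;

    private void Start()
    {
        cam = Camera.main.transform;
    }

    private void LateUpdate()
    {
        Vector3 away = transform.position - cam.position;
        away.y = 0f; // rotate around the vertical axis only

        if (away.sqrMagnitude > 0.0001f)
        {
            // A Unity Quad's visible face points along -Z, so aiming +Z
            // away from the camera faces it; flip the direction if your
            // plane's visible face points the other way.
            transform.rotation = Quaternion.LookRotation(away);
        }
    }
}
```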



Below are the models of the chapel, the bridge, some houses, and two monuments.

They don't currently have baked textures, but when they do, the textures will be created as shown below.

There is train steam in the painting, and we saw the railway in person when we went to Sheffield, but there isn't a train reference. We asked the people at the cemetery if they knew what kind of train ran through, but they didn't know either, so they kindly went and asked some train enthusiasts for us. Below is the model of the train we made, based on the two possibilities the enthusiasts gave us, as even they weren't sure. The model will be textured properly once the UVs are sorted out; they are a bit of a mess right now. The train will move around the painting to pass by the user.
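A minimal sketch of how the train could be driven past the user, assuming a simple loop of waypoints placed around the painting (the waypoints and speed here are placeholders, not our final setup):

```csharp
using UnityEngine;

// Hedged sketch: move the train along a loop of waypoints placed
// around the painting so it passes by the user. Waypoint positions
// and speed are placeholders.
public class TrainMover : MonoBehaviour
{
    [SerializeField] private Transform[] waypoints;
    [SerializeField] private float speed = 4f;

    private int current;

    private void Update()
    {
        if (waypoints.Length == 0) return;

        Transform target = waypoints[current];
        transform.position = Vector3.MoveTowards(
            transform.position, target.position, speed * Time.deltaTime);

        // Face the direction of travel so the train follows the track.
        Vector3 dir = target.position - transform.position;
        if (dir.sqrMagnitude > 0.001f)
            transform.rotation = Quaternion.LookRotation(dir);

        // Advance to the next waypoint, looping back to the start.
        if (Vector3.Distance(transform.position, target.position) < 0.1f)
            current = (current + 1) % waypoints.Length;
    }
}
```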

We decided to model some of the people closest to the viewer in the painting as well. Kamilla used MakeHuman to create some generic clothed character models, then imported them into Autodesk Maya and tweaked them to add more variety. In particular, she ended up modelling some of the clothes herself, as MakeHuman didn't have the right types for what we needed. When we imported the characters into Unity, however, the materials didn't fare too well: many of them reverted to a default material, so we have also had to make new materials for the characters in Unity.
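As an illustration of the kind of fix this needs (an assumed sketch; in practice we mostly rebuilt the materials by hand in the editor), a character's material can be reconstructed from its original texture like this:

```csharp
using UnityEngine;

// Assumed sketch, not our exact fix: rebuild a character's material
// from its exported texture after Unity's import replaced it with a
// default material. "Standard" shader assumes the built-in pipeline.
public class CharacterMaterialFix : MonoBehaviour
{
    [SerializeField] private Texture2D albedo; // texture exported from Maya

    private void Awake()
    {
        var mat = new Material(Shader.Find("Standard"));
        mat.mainTexture = albedo;

        // Apply the rebuilt material to every renderer on the character.
        foreach (var r in GetComponentsInChildren<Renderer>())
            r.material = mat;
    }
}
```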

Jamie modelled a museum room for the user to stand in while viewing the painting in VR. Below is the current state of the experience.

Colin has gathered many sound files for us to use; they will be triggered by different events in different areas of the experience.
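A minimal sketch of how area-based triggering could work in Unity, assuming trigger colliders mark out the different areas (the "Player" tag is an assumption; in VR this would typically be a collider on the camera rig):

```csharp
using UnityEngine;

// Hedged sketch of area-triggered audio (assumed setup, not Colin's
// final implementation): a trigger collider marks out an area, and
// entering it plays the matching sound.
[RequireComponent(typeof(AudioSource))]
public class AreaSound : MonoBehaviour
{
    private AudioSource source;

    private void Awake()
    {
        source = GetComponent<AudioSource>();
        // The collider on this object should have "Is Trigger" enabled.
    }

    private void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player") && !source.isPlaying)
            source.Play();
    }
}
```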

There are still animations to make for the people and carriage billboards, animations for the people models, and a good amount of scripting to arrange in Unity. We have had a couple of hiccups with GitHub in the past few days, which meant some tasks had to be done multiple times, but overall it has been a good way of sharing the project files with each other.
We have also made a promo video for the experience, which you can find here: https://artslondon.padlet.org/whobbs0120191/93ad8taj3cfq8fzn.
I am responsible for making sure GitHub works for everyone, arranging the scene, and making sure everything works in Unity. A recent merge problem on GitHub set back a little of our work, but we have backups of the files outside Unity, so it wasn't too bad. I still need to resize and rearrange everything to fit the VR headset's camera perspective, as there is currently a lot of empty world space in the scene. I am also responsible for creating the particle effects and for the coding, so most of my work will happen in this coming week, up to the deadline.