
Submission Links

Research Blog: https://secondyearvr.myblog.arts.ac.uk/2022/06/10/vr-design-research-labs-unit/

Critical Appraisal: https://secondyearvr.myblog.arts.ac.uk/2022/06/10/vr-design-research-labs-critical-appraisal/

Project Images/Videos/Presentation: https://artslondon.padlet.org/whobbs0120191/i1gut10jra6z46ao

Project APK: https://drive.google.com/drive/folders/1vhXMnCEJHmSdsGHH6oZpe2_CqMbYqPTK?usp=sharing


VR Design Research Labs Critical Appraisal

After planning the idea out roughly, I spent too long messing around and testing mechanics that I wasn’t even sure I would be using in the project. This certainly came back to punish me. I also spent too long trying to figure out the combat system: I would look at the scripts and the animators, then run away to do something else rather than slog through them until I understood them and could make them work.

In the future I need to plan more thoroughly. At the very minimum, I need to broadly list everything I will need at the start, such as sound. That way I have a crude to-do list for the entire project that can be filled in with more detail later. I should also at least write tidbits in the blogs as I go, instead of just keeping versions of everything I’ve done and then combing through them later, trying to explain them once I’ve forgotten most of the details. And most importantly, I should do more work earlier in the project. Even if it’s 20 minutes a day, that’s still better than nothing.


Other Partially Developed Mechanics

Save Data

I wanted the game to have a save system when I started, and I got a very basic one working using a binary formatter, which can save simple data such as int, string, and float variables. I wanted more complex save data though, and that obviously required a more complex setup, which I didn’t end up doing, leaving save data partially developed.
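For illustration, a minimal version of that kind of binary-formatter save system might look like this; the fields in SaveData are placeholders rather than the data I actually saved:

```csharp
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using UnityEngine;

// Hypothetical save container; the fields are examples only.
[System.Serializable]
public class SaveData
{
    public int floorReached;
    public float health;
    public string playerName;
}

public static class SaveSystem
{
    static string SavePath => Application.persistentDataPath + "/save.dat";

    public static void Save(SaveData data)
    {
        var formatter = new BinaryFormatter();
        using (var stream = new FileStream(SavePath, FileMode.Create))
        {
            formatter.Serialize(stream, data); // writes the object as binary
        }
    }

    public static SaveData Load()
    {
        if (!File.Exists(SavePath)) return null;

        var formatter = new BinaryFormatter();
        using (var stream = new FileStream(SavePath, FileMode.Open))
        {
            return (SaveData)formatter.Deserialize(stream); // reads it back
        }
    }
}
```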

Magic

Another combat mechanic I wanted to implement was magic. I didn’t figure out how I wanted it to work exactly, but I did create some custom particle effects that would be triggered when casting.

Procedural Room Generation

The random generation of the game was meant to include the spawning of random rooms, but I didn’t spend the time to figure out how to check where the doorways were in a room prefab so that rooms could connect up properly. The base random generation is already in place thanks to the random enemy generation, but random rooms require a lot more complexity than I could take on at the time.

Potions

Another combat mechanic I originally wanted was potions and grenades. They would have been simple enough to implement, and they would have used the same generation as the weapon spawning, but I didn’t have the time to complete this mechanic.


Enemy AI

I wanted to have an AI for the enemies to make combat that much more interesting on top of the sword mechanics. I followed the playlist below to create an AI, but quickly found out that the videos leave out a lot of scripts, as well as lines in the scripts that are shown, without which the rest of the code is useless.

I got very frustrated with this and spent countless hours trying to find any clues in every video, even the ones I wasn’t planning on using in the first place. I spent way too long on this and gave up many times before coming back a few days later to try again. I eventually started making progress and filled in some blanks, but they weren’t enough to get everything working properly. So, time for the not-so-good-looking option.

The videos show capsule colliders on most of the bones, but I simplified it so there are only two box colliders, left and right, to detect when you hit the enemy. The detection triggers an animation depending on where the enemy got hit. The video version would blend the animations based on how close to the centre of the character the collision occurred, whereas my version doesn’t blend at all.

Through a lot of trial and error, I figured out a way to make sure an instantiated prefab could detect a collision on itself and not on other copies, and activate animations in its own animator without triggering those of another copy. I ended up using four scripts for this: one on each collider, and two on the parent gameobject. One script on the parent looks for, detects, and chases the player. The other script on the parent stores variables to be accessed by the scripts on the collider gameobjects. Each collider script detects when its collider has been hit, plays its hit animation, and disables both colliders until they are ready to be hit again. The videos below show player detection, hit reaction, and enemy behaviours.
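To give an idea of the approach, here is a minimal sketch of one of the two collider scripts. The tag, trigger names, and timings are placeholders, and in the real setup the shared variables live in the separate script on the parent; here the collider grabs the animator directly for brevity:

```csharp
using UnityEngine;

// One of the two hit-detection scripts, e.g. on the left box collider.
// Assumes the box colliders are triggers and the weapon is tagged "Weapon".
public class HitCollider : MonoBehaviour
{
    public string hitTrigger = "HitLeft"; // animator trigger for this side
    public float recoverTime = 1f;        // how long both colliders stay disabled

    Animator animator;
    Collider[] bothColliders;

    void Start()
    {
        // Looking upwards from this collider means each instantiated enemy
        // only ever references its own animator and its own colliders.
        animator = GetComponentInParent<Animator>();
        bothColliders = transform.parent.GetComponentsInChildren<Collider>();
    }

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Weapon")) return;

        animator.SetTrigger(hitTrigger);     // play this side's hit reaction

        foreach (Collider col in bothColliders)
            col.enabled = false;             // can't be hit again until recovered
        Invoke(nameof(Recover), recoverTime);
    }

    void Recover()
    {
        foreach (Collider col in bothColliders)
            col.enabled = true;
    }
}
```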

The white circle and yellow lines are drawn by an editor script in the Editor folder, so they won’t be included in a built version. This is useful when you need to see something in the scene view but not in the built game, such as a detection radius.
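A simplified sketch of that kind of editor script, assuming an enemy script with public detectionRadius and player fields (as in the chase sketch further down), might be:

```csharp
using UnityEditor;
using UnityEngine;

// Lives in an Editor folder, so it is stripped from built versions.
// EnemyBehaviour and its fields are assumed names, not the real project code.
[CustomEditor(typeof(EnemyBehaviour))]
public class EnemyBehaviourEditor : Editor
{
    void OnSceneGUI()
    {
        var enemy = (EnemyBehaviour)target;

        // White circle: the detection radius around the enemy.
        Handles.color = Color.white;
        Handles.DrawWireDisc(enemy.transform.position, Vector3.up, enemy.detectionRadius);

        // Yellow line: from the enemy towards the player, when one is assigned.
        if (enemy.player != null)
        {
            Handles.color = Color.yellow;
            Handles.DrawLine(enemy.transform.position, enemy.player.position);
        }
    }
}
```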

Before I actually got this to work, the player detection and enemy behaviour were quite different. In one version the player would always be detected and looked at, but would only be chased when within a certain range. In the other, the player would only be detected within a certain range, but there was no way to hide whilst in that range and you would always be chased. Below are videos showing these.

As you can see in the first video, for some reason the enemy would lean over as the player got closer, leading to some interesting results.
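If I had to guess, the lean came from aiming the look-at directly at the player’s camera, which sits above the enemy’s pivot and tilts the enemy more the closer you get. A stripped-down sketch of the range-based detect-and-chase script, with the look target kept level to avoid that (all names and values here are assumptions, not my actual script):

```csharp
using UnityEngine;

// A stripped-down version of the detect-and-chase script on the parent object.
public class EnemyBehaviour : MonoBehaviour
{
    public Transform player;            // the XR rig camera/player
    public float detectionRadius = 10f;
    public float moveSpeed = 2f;

    void Update()
    {
        float distance = Vector3.Distance(transform.position, player.position);
        if (distance > detectionRadius) return; // player is outside detection range

        // Look at the player, but keep the target at the enemy's own height
        // so the enemy doesn't tilt over as the player gets close.
        Vector3 level = new Vector3(player.position.x, transform.position.y, player.position.z);
        transform.LookAt(level);

        // Chase the player.
        transform.position += transform.forward * moveSpeed * Time.deltaTime;
    }
}
```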


Random Generation

For me to want to repeatedly play a game, I like to have a different experience each time. You can have the same basic story but with different encounters/mechanics, or the same encounters/mechanics with different stories. Customisation in games really helps with this, as you can change your play style and enjoy a game that way too. One way to change the experience every time is to have procedural/random generation – terrain, enemies, items, or just about anything you can think of. Using a computer doesn’t make the generation truly random – everything is determined by calculations – but it’s pretty close and gives the feel of randomness, which is more important. I used the video below to create a script that generates random coordinates within an area I designate and spawns prefabs at those coordinates.

You can see it working in the video below. The white cubes show the corners of the spawn area. The script is set up so that each element in the Enemy Types list corresponds to the same element number in the Enemy Spawn Counts list, and the prefab in the types list is spawned however many times the spawn counts list says. The spawning coordinates are all random within the parameters set, and even the rotation can be varied.
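A minimal sketch of how such a spawner can be set up, with field names chosen to mirror the inspector lists described above (they are approximations, not my exact code):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Spawns each enemy type a set number of times at random points in an area.
public class EnemySpawner : MonoBehaviour
{
    public List<GameObject> enemyTypes;  // element i pairs with enemySpawnCounts[i]
    public List<int> enemySpawnCounts;
    public Vector3 areaMin;              // corners of the spawn area
    public Vector3 areaMax;

    void Start()
    {
        for (int i = 0; i < enemyTypes.Count; i++)
        {
            for (int j = 0; j < enemySpawnCounts[i]; j++)
            {
                // Random position within the area, on the floor height.
                Vector3 pos = new Vector3(
                    Random.Range(areaMin.x, areaMax.x),
                    areaMin.y,
                    Random.Range(areaMin.z, areaMax.z));

                // Random facing direction as well.
                Quaternion rot = Quaternion.Euler(0f, Random.Range(0f, 360f), 0f);

                Instantiate(enemyTypes[i], pos, rot);
            }
        }
    }
}
```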

I made a modified version of this so you can choose the points you want an object to spawn at, and it will choose an object randomly from the list given to it. I used this to spawn in weapons for the player to choose from, to vary their experience more.
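The modified version flips what is random: the points are fixed and the object is picked at random. It might look something like this (again, names are illustrative):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Fixed, hand-placed spawn points; a random weapon from the list at each one.
public class WeaponSpawner : MonoBehaviour
{
    public List<GameObject> weapons;     // possible weapons to offer the player
    public List<Transform> spawnPoints;  // chosen points to fill

    void Start()
    {
        foreach (Transform point in spawnPoints)
        {
            GameObject pick = weapons[Random.Range(0, weapons.Count)];
            Instantiate(pick, point.position, point.rotation);
        }
    }
}
```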


Sword Mechanics

I really dislike the currently popular method of melee combat/interactions; most objects don’t feel like they have any weight, and they can move as easily and as quickly as the controller can. They also pass through most objects in an environment instead of receiving force based on object size and weight. This could be solved with force feedback if it were widely available and safe, as the amount of force needed to stop a sword could easily break an arm if done wrong.

I wanted to bring a more realistic sense of combat and logic to my game, and not just have someone flailing their arms around to sporadically move their weapon. I ended up finding this: https://evanfletcher42.com/2018/12/29/sword-mechanics-for-vr/.

Evan Fletcher devises a way to create a more realistic feeling of feedback from an object within the current limitations of VR. He doesn’t show any of the setup he did except for the settings on a configurable joint. The rest he vaguely mentions as bullet points for what needs to be done, leaving me to figure out what he means and recreate it to the best of my ability. Weeks later, after ignoring this for some time while trying to find an alternative, I finally sat down and had a think. First I tried to interpret the references to his Unity scene setup, before creating a C# script to crudely run through the bullet points presented by Mr. Fletcher. These are what I came up with:

The wrist prefab has the configurable joint on it and is instantiated by the script when you pick up a weapon. The wrist object becomes a parent to the weapon object and a child of your XR rig hand/controller. When you drop the weapon, the wrist prefab is destroyed but the weapon stays.
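In outline, the pickup logic works something like this (a simplified sketch; the real script hooks into the XR interaction events, and all names here are placeholders):

```csharp
using UnityEngine;

// Sketch of the weapon pickup/drop flow described above.
public class WeaponPickup : MonoBehaviour
{
    public GameObject wristPrefab;   // has the ConfigurableJoint on it

    GameObject wristInstance;

    public void PickUp(Transform hand)
    {
        // Spawn the wrist as a child of the XR rig hand/controller...
        wristInstance = Instantiate(wristPrefab, hand.position, hand.rotation, hand);

        // ...and make the weapon a child of the wrist so the joint drives it.
        transform.SetParent(wristInstance.transform);
        transform.localPosition = Vector3.zero;
        transform.localRotation = Quaternion.identity;
    }

    public void Drop()
    {
        // The weapon stays in the world; only the wrist prefab is destroyed.
        transform.SetParent(null);
        Destroy(wristInstance);
    }
}
```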

This worked, somewhat surprisingly, but I know there are things I haven’t thought of when deciphering Mr. Fletcher’s notes.


Tower Dungeon

For this project I wanted to create a VR game, as I am more interested in video games than virtual experiences. I am highly fond of the fantasy genre, in particular medieval fantasy, so I made that the theme of my project: a historical fantasy version of medieval Europe. I then needed to think about what type of game I wanted to make, and roguelike just kept coming back to me. I have played many roguelike games and I enjoy them very much, so I went with it. I haven’t played any VR roguelikes, but I have played Hades, The Binding of Isaac, even Pixel Dungeon on mobile, and many others, so I already had a pretty good idea of some things I wanted to include.

The justification for making this game, and why it benefits from being in VR rather than on a computer, is spatial awareness. Roguelikes are usually played from a third-person or top-down view, meaning you can see things the character you’re playing shouldn’t be able to, and react differently because of that. A roguelike in VR restricts your spatial awareness enormously in comparison, as you can only see from the first-person perspective. You would have no idea if anything were behind you or around the corner unless you turned around or heard a noise. It adds that extra fear and suspense.

I was planning to make my own assets and characters at the start of the project; however, I quickly got busy with other work and put this project on the back burner, leaving no time to make my own assets and still have enough left to make the game. I did come across the UMA character Unity asset pack and tested it out a little, but I quickly forgot about it amongst all the other things I needed to do, and I ended up just using a bunch of pre-made assets from the Unity Asset Store and Mixamo.

For weapons I used the following:

  • LOWPOLY – Weapon Pack Demo by IV team (Unity Asset Store)
  • Low Poly Weapons by SICS Games (Unity Asset Store)

For characters I used the following:

  • Dragon for Boss Monster : PBR by Dungeon Mason (Unity Asset Store)
  • erika_archer_bow_arrow (Mixamo Character)
  • Nightshade J Friedrich (Mixamo Character)
  • skeletonzombie_t_avelange (Mixamo Character)
  • Vampire A Lusth (Mixamo Character)
  • Paladin WProp J Nordstrom (Mixamo Character)

For the environment I used Ultimate Low Poly Dungeon by Broken Vector (Unity Asset Store).

With these assets I came up with a list of rooms I could construct and fill with items and then drew a floor plan. It’s a rough floor plan but it let me start the project.

That rough floor plan was good enough to get by, but my indecisiveness meant I actually needed to flesh out the plan before building any more of the project. This next floor plan is the most accurate version, and is what I used to make the layout.

And below is what it looks like top down in Unity. Looking through the ceilings of course.

The original concept for this game was a tower for the player to conquer, but I took so long making the mechanics that I ended up with only one level. The name was meant to be “Tower Dungeon”; it is now called “Castle Dungeon”. The setting changed from “Tower Dungeon is a roguelike dungeon crawler set in a historical fantasy version of medieval Europe. The player takes on the role of the Undesirable One trying to make their way to the top floor of Fortitude Tower – a tower safeguarding the power of the gods, said to be conquered only by those chosen by the gods.” to “Castle Dungeon is a roguelike dungeon crawler set in a historical fantasy version of medieval Europe. The player’s task is to defeat the occupants of the castle along with the evil Dragon Lord who reigns over them.” As the setting describes, the objective of the game is to clear out the occupants of the castle along with their evil overlord. You can clear it room by room, or you can go straight for the big guy.

The main mechanics I wanted for this game were random/procedural generation, enemy AI, and sword mechanics. Each of them is described in its own post. There were also a few other partially developed mechanics that didn’t get finished in time, or were being developed before I decided not to use them.

I came across a lot of issues and challenges when creating this game. The main challenges were to do with the enemy AI and sword mechanics, as described in their respective posts. Those took up so much of my time to figure out and get working that I ended up not having the time for, or simply forgot about, getting doors to work properly, triggers that spawn things in rather than everything spawning at once, and even sounds and menus. I also ran out of time to set up all of the objects with interactable events, mainly just to pick them up and move them around.

Annoyingly, there are some pretty serious issues that remained in the build upon submission. The player moves around just fine when not holding anything, but when they pick up a weapon they can suddenly only move in a direction roughly 120 degrees counter-clockwise from where they are facing, and at a stupidly fast speed. My guess is that something in the custom weapon pickup script or the configurable joint is affecting the player incorrectly. The sword wrist either works fine and I simply didn’t test it against an immovable object, or it is working the opposite way to how it should, moving the object it touches around a pivot rather than itself. The player detection on the enemies has also stopped working in the build, even though it works in the Unity project when you play the scene, so I have no clue what is up with that. You can find videos showing these problems here: https://artslondon.padlet.org/whobbs0120191/i1gut10jra6z46ao.

I would like to develop this game more and bring it closer to the original plan, but I’m not sure if I will, as I might start a new project and use what I have learnt from this one to make that one better. If I do develop this game more, here’s a list of things I would want to improve or add, besides fixing the issues mentioned above:

  • Add more levels to make it a tower, not a castle
  • Add more progress attributes (upgrades, lore, difficulties, etc.)
  • Add friendly/neutral NPCs (e.g. shopkeepers)
  • Add/finish more combat mechanics (magic, ranged weaponry, weapon stats, character stats, etc.)
  • Add inventory system for collection of item drops
  • Add item drops
  • Add more room types (e.g. puzzle, trap, etc.)
  • Add random room generation


Submission Links

Research Blog: https://secondyearvr.myblog.arts.ac.uk/2022/06/07/sheffield-museum-wardsend-cemetery-collaboration/

Sheffield Critical Appraisal: https://secondyearvr.myblog.arts.ac.uk/2022/06/07/critical-appraisal/

Guest Lecture Review: https://secondyearvr.myblog.arts.ac.uk/2022/06/07/review-of-guest-lectures-talks/

Online Portfolio: https://will24422.wixsite.com/portfolio

Project Videos: https://artslondon.padlet.org/whobbs0120191/93ad8taj3cfq8fzn


Sheffield Museum/Wardsend Cemetery Collaboration

When we were approached with this collaboration, since there are only seven of us in the class, we had a rough idea of who would want which role before even starting. After some discussion we agreed on the following:

  • Creative Directors – Billy, Colin
  • Modellers – Anita, Jamie, Kamilla
  • Animators – Billy, Margaret
  • Programmers & VFX – Will
  • Sound Technicians – Colin

The creative directors decide the overall direction and scope of the project and assign tasks to everyone else. The modellers use Autodesk Maya or Blender to create 3D models with varying degrees of detail depending on how close they are to the user. The animators create animations in either Autodesk Maya or Unity, based on the complexity of the animation and its distance from the user. The programmers do all the coding, put everything together in Unity, and make sure it all works before the deadline. The sound technicians collect sound samples for anything potentially useful to the project and import them into Unity, making sure they work as intended. Everyone also pitches in on any work that isn’t covered by those roles, or when there is a particularly heavy load of work to be done by too few people.

We made a board on Trello to collect ideas and plan. We included resources, references, notes, ideas, meeting notes, and sections for specific areas of development, e.g. Animation and VFX.

We were given a low-resolution image of the painting we would be basing the project on, and decided it would be a good idea to get a better-resolution image and to go to Sheffield to see and understand the area. The high-resolution image file is too big to fit on here, but below is the low-resolution image and some pictures from Sheffield.

Painting.jpg

Whilst in Sheffield we used the Polycam app to create 3D models of the environment in real time. Although we aren’t actually using these models, they help to give a sense of scale for the painting. Also, they’re pretty cool.

On the train ride back to London we started brainstorming seriously. The notes were jotted down in a notebook.

Using this modified painting file, the modellers went to work creating some of the objects highlighted blue or green. The rest of us were given tasks outlined on the Trello board that didn’t really fit into any of the already assigned job positions. I created a particle system for falling leaves, as we thought it would be nice to have them fall into the room from the painting. This is the video I followed to create them: https://www.youtube.com/watch?v=wQJ0_TqoLr4. I then copied this particle system and modified it to create steam for a steam train that would be running past the user.

The original plan was to use texture projection to texture the models and to create a terrain based on the painting using Google Maps data, with the modern buildings removed. We came across many issues with this approach, mainly that the terrain model wasn’t working properly and would create distortions when texture projecting. After seeking help for a few days, Billy and Margaret were finally given the advice to create a parallax effect using multiple planes, each holding a different part of the painting. So they set out creating a bunch of billboards for us to arrange and use.

Below are the models of the chapel, the bridge, some houses, and two monuments.

They don’t currently have baked textures, but when they do, their textures will be created as below.

There is train steam in the painting, and we saw the railway in person when we went to Sheffield, but there isn’t a train reference. We asked the people at the cemetery if they knew what kind of train ran through, but they didn’t know either, so they kindly went and asked some train enthusiasts for us. Below is the model of the train we made, based on the two possibilities the enthusiasts gave us, as even they didn’t know for certain. This model will be textured properly once the UVs are sorted out, but they are a bit of a mess right now. This train will move around the painting to pass by the user.

We decided to model some of the people closest to the viewer in the painting as well. Kamilla used MakeHuman to create some generic clothed character models, then imported them into Autodesk Maya and tweaked them to add more variety. In particular, she ended up needing to model some of the clothes herself, as MakeHuman didn’t have the right types for what we needed. When importing the characters into Unity, however, the materials didn’t fare too well. A lot of them would change to a default material, so we have also needed to make materials for the characters in Unity.

Jamie modelled a museum room for the user to stand in to see the painting in VR. Below is the current state of the experience.

Colin has gathered many sound files for us to use and they will be triggered by different things in different areas of the experience.

There are also animations for the people and carriage billboards, animations for the people models, and a bunch of scripting still to be arranged in Unity. We have had a couple of hiccups with GitHub in the past few days, which has resulted in some tasks needing to be done multiple times, but it has mostly been a good resource for sharing the project files with each other easily.

We have also made a promo video for the experience, which you can find here: https://artslondon.padlet.org/whobbs0120191/93ad8taj3cfq8fzn.

I am responsible for making sure GitHub works for everyone, arranging the scene, and making sure everything works in Unity. We had a merge problem with GitHub recently that set back a little of our work, but we have backups of the files outside Unity, so it wasn’t too bad. I still need to resize and rearrange everything to fit the VR headset camera perspective, as there is currently a lot of empty world space in the scene. I was also responsible for creating any particle effects and coding, so most of my work will be done in this coming week up to the deadline.


Review of Guest Lectures & Talks

“Applying for a job / internship” by Dr. Ed Tlegenov

Dr. Tlegenov starts off talking about Autodesk and showing example showreels of Autodesk products. Then we get into the useful resume info. When applying to a big company, a resume will usually be checked by a scanner or some similar tool, so a person doesn’t have to. These tools don’t care about graphics or anything else except the parameters and keywords given to them. So, at this stage, you’ll want your resume to be simple text containing the keywords you saw in the job advertisement. You’ll want to make sure the file type is doc/docx rather than pdf, and that it is easy to read, i.e. left to right, top to bottom for English. In school you are usually told about the keywords and to keep a resume fairly simple, but this is the first time I have ever been told about the extent of the simplicity, which was actually really nice to know.

On to LinkedIn profiles. I already knew they were for business connections and employment, so I have kept mine professional as I have been told to, but it was nice to be given more details. Your headline should be catchy and short, and ideally include a personal quality or the role sought, a qualification, and an achievement. Keywords should also be used in your LinkedIn profile, just like in a resume. Don’t keep your location too narrow; broaden it to a wider area. Customise your URL to make it easier to find, and complete your profile to 100%. Lastly: network, grow, learn, and post. I need to work on posting, as I don’t really do that with any social media I have, and it would help demonstrate my interests and skills.

When looking and applying for positions, if you want a higher chance of success you should go further than just applying for open positions on websites. You can also look for positions that are upcoming or not yet open by emailing companies or looking on social media, and you can find opportunities through friends, mentors, events, or social media.

All in all, this was a very informative lecture by Dr. Ed Tlegenov. It was nice to get more detailed pointers on applying for positions, what we will face, and how to boost our chances of success. I feel this kind of detail should be taught more in school, instead of the usual spiel given by teachers that is minimal, highly generic, and doesn’t really help much.

“Collaborative Motion” by Antoine Marc

Antoine Marc’s lecture was mainly about case studies which, although they involved collaboration, didn’t really feel relevant, as they were all dance related, which is a completely different kind of collaboration to what you would need when developing anything in VR. He did go over the importance of collaboration, and the points he made were good and made sense, but we only spent two minutes on the importance before moving on to the case studies. Either way, the points made about the importance of collaboration stand: collaboration is inherently present in our lives; through it you can learn from others in the field, develop your own practice, meet experts in the field, exchange with like-minded individuals, learn from people in different fields, think outside the box, and learn from audience engagement.

“DALL-E”

We had a lecture on OpenAI’s DALL-E, although I forgot the name of the person who gave it. We got to see the exclusive Discord server for the AI, where people would enter their ideas and an image would be generated for them. Our guest let us come up with some ideas to test out and see what would happen. We got some very interesting results.