As part of our Visual Story course at Carnegie Mellon's Entertainment Technology Center, we were required to briefly analyze the visual imagery in a piece of media. I chose the game Shadow of the Colossus as my subject.
The game begins with an eagle in the distance. The eagle descends into a mountain range, flying rapidly in a tilted manner past our hero, creating a momentary clear diagonal and a frame within a frame. This combination of techniques draws our eyes to that area of the screen, where we are introduced to our hero, clearly contrasted in brighter colors against the dark mountains.
As our hero continues to travel under the cover of darkness, we see the moon peer out from the canopy. The contrast between the moon and the dark leaves draws our eyes in, which the director then uses for a smooth visual transition to the next scene, where a horse's feet move at the moon's position.
As our hero approaches his destination, he encounters a pass. Our eyes are drawn to our hero in his bright cape and to the glowing pass, which contrasts with the stone-grey scene. These techniques fix our eyes on our hero and on where he is heading.
Introduction: Developed on the Oculus Rift with PS Move, DinoRancher had guests play atop a Triceratops, armed with an electric lasso. The guest's goal was to shepherd a herd of Stegosauruses to safety, protecting them from danger.
Story: You are a DinoRancher armed with your electric lasso and trusty trike. Travel across the desolate wasteland and protect your herd from those nasty predators!
Integration of the PS move into Virtual Reality
Trike movement system
Design Goal: To create an experience that made the guest feel like a cowboy travelling through the desert, protecting a herd of dinosaurs from predators.
My Contributions: As producer I arranged meetings, delegated pending tasks, and contributed creatively. In addition, as a programmer I was responsible for setting up the game's environment, which included asset preparation, level design, and developing agent behavior.
Introduction: Developed for the CAVE, NoseDive had guests play in the CAVE environment using airplane controls we constructed with Makey Makey.
Platform: CAVE and Makey Makey in Unity 3D | Time: 2 weeks | Roles: Programmer – Game Designer – Producer | Team Size: 5
Story: Our guests took the role of makeshift pilots, thrust into flying a plane to safety through a terrible storm after the captain becomes incapacitated.
Adapting to the CAVE environment.
Creating an authentic flight simulator experience with an easily understood story.
Design Goal: To tell an authentic story of saving the day through the game we created.
My Contributions: For NoseDive I was producer, designer, and programmer. Being producer involved scheduling and coordinating our team's artist, programmer, and sound designer. In addition, I assisted my fellow programmer with environment and Unity prop setup.
Introduction: A Playroom was developed on the HTC Vive, a virtual reality device that allows a guest to walk around a calibrated virtual reality space with handheld controllers.
Platform: HTC Vive in Unity 3D | Time: 2 weeks | Roles: Designer – Producer | Team Size: 5
Story: The game is set in a playroom, where the guest encounters a ghost boy who needs help in order to 'move on'.
Design Challenge: To design a game for naive guests, conduct play tests, and make three predictions of what the guest will do, all whilst having the guest 'feel free'.
Design Goal: Round 2 of Building Virtual Worlds was the indirect control round. This required that we build an experience that felt free and was intuitive enough for a guest to play from start to finish without any instructions or guidelines.
My Contributions: I analyzed and designed the guests' interactions, and wrote our main non-playable character's dialogue. In addition, I conducted play tests that gave us invaluable feedback, which we used to further develop the experience.
I focused on interaction development by first analyzing what we currently had. From that I wrote a draft story design, a rough version of what we would aim for. Our gameplay at the time was clearly a linear story experience, and I believed we could achieve a greater sense of freedom by allowing the player a choice of which game to play.
From this notion I created two different interaction models.
I then met with the team and presented my two plans. We chose plan 2, which I further developed into a more detailed version.
Audio would play a vital role in driving this interaction model, so I worked with our sound designer on a script for the game, which we iterated on based on feedback (script documents).
Once the various audio cues and the interaction model were implemented, we went about play testing the game. I conducted play tests with over fifteen naive guests, including fellow students, professors, and non-students. This feedback was then used to polish elements of our experience.
In conclusion, we correctly predicted each of the three interactions, and the guests understood our story, all with no guidelines or instruction from us.
We began our project with brainstorming and research into the platform we were developing on. We came up with several ideas, including:
Darkness – Use light to guide the guest through a street.
Space Exploration – Explore the universe, and pick a planet to colonize.
Dreaming – Flying a plane, flying elephants, flowers turning into buildings (freedom from constraints).
Empty Room – Furniture placement (guide the guest to the correct place).
Having difficulty grappling with the concept of 'freedom', we spoke to Entertainment Technology Center faculty member Jesse Schell. After meeting with him, we honed in on the idea of a ghost boy whom we would help in some manner through the objects around him.
Next we thought about location. Our first choice was a storage room, since it made sense for one to contain many objects; we then changed to a playroom, as it offered the potential for a 'warmer' environment in which guests would feel comfortable.
After creating a basic room with a small number of interactions, which included:
Place a train on the train track.
Hide & Seek.
Give a hug.
We had a prototype ready for interim.
After interim, our two main points of feedback were:
Make the boy, and the game generally, less 'creepy'.
Develop our interactions further.
Point 1 was a significant design challenge, which we tackled by:
Making our main game character look more human-like.
Creating a warm game atmosphere.
Using a friendly, light, and clear character voice.
I decided to tackle point 2 by first analyzing what we currently had, then writing a draft story design, a rough version of what we would aim for. Our gameplay at the time was clearly a linear story experience, and I believed we could heighten the sense of freedom by allowing the player a choice of which game to play.
From this notion I created two different interaction models.
After meeting with the team, presenting the two plans, and convincing them of the need to carefully design the experience, we chose plan 2, which I then further developed into a more detailed version.
Audio played a vital role in our experience, so I worked with our sound designer on a script for the game, which we iterated on three times based on feedback (script documents). In addition to audio, we used a number of other techniques, including:
Lighting – To direct the player's focus.
Color – Brightly contrasting objects, such as a yellow train on a blue chair and a red book on a beige floor, caught the player's attention.
Uniformity – A suggestive picture fragment was placed in the frame, and other similar-looking puzzle pieces were placed around the level.
After implementing these features along with the new interaction model, we went about play testing the game. We conducted play tests with over fifteen naive guests, including fellow students, professors, and non-students.
Based on the feedback we received, we continued to polish elements of the game. The end result of our work was that not only did we accurately predict each of the three interactions, but the guests completely understood the story behind our world, all with no guidelines or instruction from us.
Story: Jam-O-Draw was inspired by the classic Etch A Sketch toy.
Design Goal: We wanted to create a multiplayer artistic experience with a fascinating reveal.
Adapting to an unfamiliar platform.
Creating an aesthetically pleasing experience using visuals and audio
Making the user interface responsive and informative during the experience.
My Contributions: My primary role on this project was producer, which involved making creative contributions, arranging meetings, and coordinating our artists, programmers, and sound designer to create the game in a timely manner. My programming responsibilities included assisting my fellow programmer with development, and preparing the game environment and assets.
Introduction: Seize the Sky was built during Building Virtual Worlds at Carnegie Mellon's Entertainment Technology Center. The world was constructed using the Oculus Rift and Leap Motion. Using these technologies, we put our guest into a virtual reality space with the ability to use a natural interface in our world.
Story: A mighty giant heads towards a town with murderous intent. A countryside boy notices and cries to Zeus for help to defeat the giant and save the town. You are Zeus, save them all!
Design Goal: Our design goal with Seize the Sky was to help character A (the boy), who is afraid of character B (the giant).
Incorporating a satisfying use of Leap Motion.
Conveying the sense that character A is afraid of character B.
My Contributions: As the lead programmer on Seize the Sky, I made large contributions to the project's code base. I also took an active part in the design process, working with the team to develop various aspects, including gameplay and level design.
The development process started with teams being assigned. In our first team meeting we laid out our skills, started brainstorming ideas, and kept good development processes in mind.
During brainstorming we tried several appropriate methods, such as gesture-centered brainstorming (due to our use of Leap Motion). We ended up with five initial ideas:
Help mend a relationship between characters.
Play a piano to make a baby sleep.
Use light to guide a character home.
Keep an animal safe as it grows to adulthood.
Hold a character's hand to guide them.
We then boiled these down to three concepts, with the following reasoning:
Concept one was hard to conceptualize compared to our other ideas, which seemed simpler and clearer.
Concept five could be incorporated into concept three.
After creating sketches of each concept, we sought out the advice of our professor Jesse Schell.
With Jesse Schell's feedback we went with concept C, because we wanted to explore squeezing in Leap Motion.
We then began further conceptualizing the idea with sketches, and research into the capabilities of Leap Motion and the Oculus Rift.
With this in mind, we began assigning tasks, considering gameplay, and used a scrum board to help us track tasks.
On the technical side, we used a NavMesh and simple AI to run the behavior of the Hunter and the Deer. The behaviors of the two agents were essentially:
The Deer always moved to the nearest tree that had an apple.
The Hunter patrolled around fixed points, and if it came close enough to the Deer, it began chasing it.
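The two agent rules above can be sketched as a tiny state machine. The following is an illustrative Python sketch, not our Unity code; the class names, chase radius, and step size are assumptions made purely for the example (in the actual world, movement was handled by Unity's NavMesh):

```python
import math
from dataclasses import dataclass

CHASE_RADIUS = 3.0   # assumed: how close the Hunter must be to start chasing
STEP = 1.0           # assumed: distance moved per update

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def move_towards(pos, target, step=STEP):
    """Move at most `step` units from pos towards target."""
    d = dist(pos, target)
    if d <= step:
        return target
    return (pos[0] + (target[0] - pos[0]) * step / d,
            pos[1] + (target[1] - pos[1]) * step / d)

@dataclass
class Tree:
    pos: tuple
    apples: int

class Deer:
    def __init__(self, pos):
        self.pos = pos

    def step(self, trees):
        # Rule 1: always head for the nearest tree that still has an apple.
        targets = [t for t in trees if t.apples > 0]
        if targets:
            nearest = min(targets, key=lambda t: dist(self.pos, t.pos))
            self.pos = move_towards(self.pos, nearest.pos)

class Hunter:
    def __init__(self, waypoints):
        self.waypoints = waypoints
        self.i = 0
        self.pos = waypoints[0]
        self.state = "patrol"

    def step(self, deer_pos):
        # Rule 2: patrol fixed points; once the Deer comes close enough, chase it.
        if dist(self.pos, deer_pos) < CHASE_RADIUS:
            self.state = "chase"
        if self.state == "chase":
            self.pos = move_towards(self.pos, deer_pos)
        else:
            if self.pos == self.waypoints[self.i]:
                self.i = (self.i + 1) % len(self.waypoints)
            self.pos = move_towards(self.pos, self.waypoints[self.i])
```

In Unity terms, `move_towards` stands in for handing a destination to the NavMesh, and each `step` call corresponds to one `Update` tick.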
The result of our hard work was the following.
We then received feedback at interim, which sadly wasn’t good…
The rules for creating these portraits were simple. Each person was given a different colored pen and was allowed to make single strokes, one after another. Any hesitation meant you had to stop drawing and choose a name. The name was created by taking turns writing a single letter.
The following is what Charlie and I created.
We very much enjoyed making these memorable characters! Try this exercise yourself sometime, it's loads of fun!
Building Virtual Worlds at Carnegie Mellon University starts with each student being assigned a role in Round 0. Since I have a Computer Science background, my role was that of programmer; this entailed building a world that employed a number of basic Unity features, such as:
Loading models and textures.
Using intervals, lighting, collisions, and multiple scenes.
When considering the world, what I noticed was the amazing talent of the artists and musicians around me. It occurred to me what a shame it would be for their work not to be seen, so I decided that my virtual world would be a gallery of other people's work. My first task was to coordinate assets with artists and sound designers.
Artists were initially required to create animated lunchboxes, then dragons, and sound designers were required to create music for a clip of gameplay from a previously made world. I decided to meld the two by attaching audio sources to several of the artists' assets, which would constantly play music made by our sound designers.
The 14th/15th of August is Pakistan and India's Independence Day. As part of the celebration, a group of Carnegie Mellon (CMU) students planned to paint CMU's famous Fence in both nations' colors. Some friends and I tagged along to help out and be part of the old CMU tradition.