Awkward Giant Robot Battles is a full-body fighting game in which players control giant robots fighting in a city. Partially inspired by Kaiju films, Awkward Giant Robot Battles started as a parody of the Kaiju genre: the battling robots would be so big that they would take forever to move or land a punch. The 2014 version was a two-player shooting game in which players stood and used their hands to aim and shoot each other. The 2015 version was a single-player experience for the Oculus Rift focused on smashing buildings and swatting missiles out of the sky.
The 2014 version was the most ambitious coding-wise for Oliver and me. The crux of our problem was figuring out what happens when two players punch each other while their characters are being motion controlled. With a console-controller setup, where players press buttons and move thumbsticks, we could easily separate the controls from the actual physical models; many Kinect games do this, relying on the Kinect to register “gestures” and turning those into events the game can handle. Yet we chose the hard route and had the players interact as rigid bodies in-game. The solution we tried rendered the Kinect body data onscreen as a “ghost” that the real robot would constantly animate towards, while being limited to a set speed and encumbered by bumping into objects. Our lack of coding experience was the primary reason it failed: I had never used Unity3D or C# before, and Oliver had only used Unity for a few months.
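The ghost-following idea can be sketched in a few lines. This is a minimal illustration, not the project's actual code: each joint of the rendered robot moves toward the Kinect ghost's pose at a capped speed, so an instant sensor jump becomes slow, weighty robot motion. All names here are hypothetical.

```python
# Sketch of the "ghost" follower: the robot chases the Kinect-tracked
# ghost pose, but may only cover a limited distance per update.

def move_towards(current, target, max_step):
    """Move a scalar coordinate toward a target by at most max_step."""
    delta = target - current
    if abs(delta) <= max_step:
        return target
    return current + max_step if delta > 0 else current - max_step

def follow_ghost(robot_pose, ghost_pose, max_speed, dt):
    """Advance every joint coordinate of the robot toward the ghost pose."""
    step = max_speed * dt
    return {joint: tuple(move_towards(c, g, step)
                         for c, g in zip(robot_pose[joint], ghost_pose[joint]))
            for joint in robot_pose}

robot = {"hand_r": (0.0, 1.0, 0.0)}
ghost = {"hand_r": (1.0, 1.0, 0.0)}  # the player snapped a hand forward
robot = follow_ghost(robot, ghost, max_speed=2.0, dt=0.1)  # one 100 ms tick
# the robot's hand has only closed 0.2 units of the 1.0-unit gap
```

In Unity terms this is essentially `Vector3.MoveTowards` per joint, with the physics engine additionally stopping the robot when it collides with the environment.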
Movement was perhaps the most interesting part of the game. Our earliest movement scheme had players march their legs wildly to move forward and lean to turn. Though exciting and really fun in short bursts, it became incredibly tiring very quickly. The final scheme had players lean their torso in the direction they wanted to move.
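One way to sketch the lean-to-move scheme, under assumptions of my own (the joint names, deadzone size, and coordinate convention are illustrative, not from the project): compare the shoulder-center joint against the hip-center joint and treat horizontal offset past a deadzone as a walk direction.

```python
import math

DEADZONE = 0.05  # metres of lean before the robot starts walking

def lean_direction(shoulder_center, hip_center):
    """Return a normalized (x, z) walk direction, or None inside the deadzone.

    Joints are (x, y, z) positions; y is up, so only the horizontal
    offset between shoulders and hips counts as a lean.
    """
    dx = shoulder_center[0] - hip_center[0]
    dz = shoulder_center[2] - hip_center[2]
    mag = math.hypot(dx, dz)
    if mag < DEADZONE:
        return None  # standing upright: don't move
    return (dx / mag, dz / mag)

# Leaning 10 cm forward (positive z) walks the robot straight ahead.
print(lean_direction((0.0, 1.4, 0.10), (0.0, 1.0, 0.0)))  # → (0.0, 1.0)
```

The deadzone is what makes this less tiring than marching: standing still reads as standing still, and only a deliberate lean registers.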
The end result is a split-screen shooting game where players try to shoot the limbs off each other. Players angle their arms to aim, then open and close their hands like lobster claws to shoot.
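The claw trigger boils down to edge detection on the hand's open/closed state: fire once on each open-to-closed snap, not continuously while the hand stays closed. A hedged sketch, with the Kinect's per-hand state reduced to a boolean for illustration:

```python
class ClawTrigger:
    """Fires exactly once per open→closed snap of a hand."""

    def __init__(self):
        self.was_closed = False

    def update(self, hand_closed):
        """Call once per frame with the hand's current state."""
        fired = hand_closed and not self.was_closed
        self.was_closed = hand_closed
        return fired

trigger = ClawTrigger()
frames = [False, False, True, True, False, True]  # hand state per frame
shots = [trigger.update(f) for f in frames]
# fires on the two open→closed transitions only:
# [False, False, True, False, False, True]
```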
2014 Version Above
We returned to the project during the 2015 WearHacks-NYC hackathon and threw a bunch of hackathon tropes at it to see what stuck. Trying to use a Wii Remote introduced far too much latency for lack of any driver-level support, but using an Oculus Rift was incredibly engaging. Rather than retrying the body-on-body interactions of the last version, we focused on a single-player experience.
This time, we used the Kinect to track just the hands and torso, while the Oculus's built-in tracking handled the head. To move the player, we settled on an Xbox 360 controller: awkward and unimmersive, but hassle-free. I originally tried to make semi-detailed 3D models for the buildings, but we ended up settling on simple boxes. Oliver used Blender to create pre-made building chunks, which the game swaps in for the original model when the player destroys a building.
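The chunk swap is a common destruction trick: each intact building maps to a pre-fractured set of pieces, and "destroying" it just removes the one model and spawns its chunks. A minimal sketch with made-up names (in Unity this would be a `Destroy` of the intact object plus an `Instantiate` of the chunk prefab):

```python
class Scene:
    """Toy stand-in for a game scene: a set of named objects."""

    def __init__(self):
        self.objects = set()

    def add(self, name):
        self.objects.add(name)

    def remove(self, name):
        self.objects.discard(name)

class Building:
    def __init__(self, name, chunk_count):
        self.name = name
        self.chunk_count = chunk_count
        self.intact = True

    def smash(self, scene):
        """Remove the intact model and spawn its pre-fractured chunks."""
        if not self.intact:
            return  # already rubble
        self.intact = False
        scene.remove(self.name)
        for i in range(self.chunk_count):
            scene.add(f"{self.name}_chunk_{i}")

scene = Scene()
scene.add("tower")
tower = Building("tower", chunk_count=3)
tower.smash(scene)
# the scene now holds tower_chunk_0..2 instead of "tower"
```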
Afterwards, I started making the game work without the Oculus, since NYU Integrated Digital Media only had one DK2 available to check out and my laptop couldn't handle it. To replace the Rift's turning and crouching mechanics, I made the torso control turning and bound the camera to the actual Kinect-tracked head.
2015 Version Above
Virtual planet destruction at Integrated Digital Media showcase @NYUpoly! pic.twitter.com/lV03GwReB4
— Karl Greenberg (@KarlPGreenberg) May 13, 2016