  • It’s been a long run, but it’s finally been released! I had to make a few last-minute changes to the URLs, the exact wording, and the branding. I’ve learned a bunch about the iOS build and publishing process, as well as how ridiculously easy it is to publish to Android. Version 1.1 also comes with a new app icon that better communicates that the app is from NYU and is a VR app.

    Download for Android · Download for iPhone

  • Tandon Vision was created for Google Cardboard because of how accessible Cardboard makes VR experiences. Since Cardboard experiences don’t demand, and can’t deliver, the high fidelity of more expensive headsets, the result is a kind of low-fidelity yet immersive experience. With no cables tethering the phone, the user can turn in any direction. And because Cardboard has only one button, interfaces need to be primarily gaze controlled, and more complex experiences need smarter interfaces capable of interpreting what users want.
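    In Unity, that constraint boils down to casting a ray out of the center of the player’s view every frame and acting on whatever is being gazed at when the single trigger fires. This is a minimal sketch of the pattern rather than the app’s actual code, and it assumes the Cardboard trigger registers as a screen tap (`Input.GetMouseButtonDown(0)`), which is how the classic Cardboard SDK exposed it.

    ```csharp
    using UnityEngine;

    // Minimal gaze-and-click sketch: raycast from the center of the view and
    // notify the gazed-at object when the single Cardboard trigger is pressed.
    public class GazeSelector : MonoBehaviour
    {
        public Camera vrCamera;             // the head-tracked camera
        public float maxGazeDistance = 50f;

        void Update()
        {
            Ray gazeRay = new Ray(vrCamera.transform.position, vrCamera.transform.forward);
            RaycastHit hit;

            if (Physics.Raycast(gazeRay, out hit, maxGazeDistance) && Input.GetMouseButtonDown(0))
            {
                // Let the target decide what "being clicked" means.
                hit.collider.SendMessage("OnGazeClick", SendMessageOptions.DontRequireReceiver);
            }
        }
    }
    ```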

    For the robot driving sequence, I ended up testing several solutions. The first idea was a first-person control scheme where you would see through the robot’s onboard camera: any time you turned your head, the robot would turn too, and to move forward you would hold down the button. In testing it fell apart quickly; users couldn’t see where they were going and were very prone to motion sickness.

    The important change was switching from a 1:1 control scheme to a waypoint-and-route scheme built on Unity’s NavMesh. Instead of the user piloting the robot’s every action, they gaze and click to set their desired destination. The camera follows from a third-person perspective but never turns on its own, so the player’s physical sense of 3D space while wearing the headset is at least partially preserved.
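    Assuming a NavMesh baked over the terrain and a NavMeshAgent component on the robot, the whole scheme reduces to handing the gazed-at point to the agent. A hypothetical sketch (the field names are mine, not the project’s):

    ```csharp
    using UnityEngine;
    using UnityEngine.AI;

    // Waypoint scheme sketch: the player gazes at a point on the terrain and clicks,
    // and the NavMeshAgent plans and follows the route on the baked NavMesh.
    public class WaypointDriver : MonoBehaviour
    {
        public Camera vrCamera;
        public NavMeshAgent robotAgent;   // NavMeshAgent on the robot

        void Update()
        {
            if (!Input.GetMouseButtonDown(0))
                return;

            Ray gazeRay = new Ray(vrCamera.transform.position, vrCamera.transform.forward);
            RaycastHit hit;

            if (Physics.Raycast(gazeRay, out hit, 200f))
            {
                // Unity handles pathfinding and steering; the player only picks destinations.
                robotAgent.SetDestination(hit.point);
            }
        }
    }
    ```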

    To let players know where they were going, I added a simple compass to the UI that points to the next destination. I also took advantage of the fact that Unity UI canvases aren’t affected by lighting or fog, so they stay visible from great distances. Though it removes some of the exploration, it means no one has to aimlessly wander the sands looking for magical soil samples.
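    The compass itself only needs the horizontal bearing from the player to the current waypoint. A rough sketch, with illustrative field names:

    ```csharp
    using UnityEngine;

    // Compass sketch: rotate a UI needle so it points toward the current waypoint
    // relative to the direction the player is facing.
    public class DestinationCompass : MonoBehaviour
    {
        public Transform player;        // the head-tracked camera rig
        public Transform destination;   // the current waypoint
        public RectTransform needle;    // the needle image on the UI canvas

        void Update()
        {
            Vector3 toTarget = destination.position - player.position;
            toTarget.y = 0f;   // only the horizontal bearing matters

            Vector3 facing = Vector3.ProjectOnPlane(player.forward, Vector3.up);
            float bearing = Vector3.SignedAngle(facing, toTarget, Vector3.up);

            // Positive bearing means the target is to the right, so spin the needle clockwise.
            needle.localRotation = Quaternion.Euler(0f, 0f, -bearing);
        }
    }
    ```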

    https://www.youtube.com/watch?v=JtK9ULAqo5g

    Slow Awkward Giant Robot Battles (2014) by Matthew Conto & Oliver Garcia-Borg

    For future virtual reality projects, I really want to move away from d-pad-style controls with WASD keys or thumbsticks. Room-scale setups, like the Vive’s or one built around a Kinect, are really fascinating, and tricking the body’s sense of movement is an open frontier for experimentation.

  • Creating the terrain was surprisingly easy using Unity Terrain. NASA has published several heightmaps, including one for the Gale Crater, which I ended up using. Though Unity Terrain isn’t the prettiest on its own, Unity has really powerful LOD optimization for it, and it works exceedingly well with Unity’s NavMesh system, which I use for controlling the robot.
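    For reference, Unity’s terrain tools expect a square 16-bit RAW heightmap, so a NASA DEM generally has to be converted and cropped first. A runtime version of that import might look roughly like the sketch below; the file name and resolution are placeholders, not the project’s actual assets.

    ```csharp
    using System.IO;
    using UnityEngine;

    // Heightmap sketch: read a 16-bit little-endian RAW file and push it into a Terrain.
    public class HeightmapLoader : MonoBehaviour
    {
        public Terrain terrain;
        public string rawFile = "gale_crater_heightmap.raw";  // placeholder path
        public int resolution = 513;                          // heightmap resolution (2^n + 1)

        void Start()
        {
            byte[] bytes = File.ReadAllBytes(rawFile);
            float[,] heights = new float[resolution, resolution];

            for (int y = 0; y < resolution; y++)
            {
                for (int x = 0; x < resolution; x++)
                {
                    // Two bytes per sample, normalized to 0..1 for SetHeights.
                    int i = 2 * (y * resolution + x);
                    ushort sample = (ushort)(bytes[i] | (bytes[i + 1] << 8));
                    heights[y, x] = sample / 65535f;
                }
            }

            terrain.terrainData.heightmapResolution = resolution;
            terrain.terrainData.SetHeights(0, 0, heights);
        }
    }
    ```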

    After getting height data for the Gale Crater, I played around with lighting and decided to turn off both the directional sunlight and the ambient lighting.
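    In Unity terms that just means disabling the directional light and flattening the ambient term. A tiny sketch, assuming the sun is an ordinary directional Light:

    ```csharp
    using UnityEngine;
    using UnityEngine.Rendering;

    // Lighting experiment sketch: disable the directional "sun" and zero out ambient light.
    public class LightsOut : MonoBehaviour
    {
        public Light sun;   // the scene's directional light

        void Start()
        {
            sun.enabled = false;
            RenderSettings.ambientMode = AmbientMode.Flat;
            RenderSettings.ambientLight = Color.black;
        }
    }
    ```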

    Next, to match the color of Mars, I relied on photographs from the NASA Curiosity rover. Though Unity’s Procedural Skybox includes atmospheric effects meant to mimic physically based atmospheric scattering, I couldn’t match the nearly sepia-toned skies of Mars. I ended up using Photoshop to create a cubemap from a simple gradient based on the colors of Curiosity photographs.
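    In the editor, swapping the sky is just assigning a material in the Lighting settings, but the equivalent runtime call is a one-liner. The material here is assumed to use Unity’s built-in Skybox/Cubemap shader with the Photoshop gradient cubemap assigned:

    ```csharp
    using UnityEngine;

    // Skybox sketch: swap in the hand-made gradient cubemap as the scene skybox.
    public class MarsSkybox : MonoBehaviour
    {
        public Material marsSkybox;   // Skybox/Cubemap material using the gradient cubemap

        void Start()
        {
            RenderSettings.skybox = marsSkybox;
            DynamicGI.UpdateEnvironment();   // refresh environment lighting from the new sky
        }
    }
    ```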

    Adding a faux dust storm helped tremendously with creating a sense of place, since the limits of designing for a mobile device meant that faraway details needed to be removed or minimized. I added a fog effect and reduced the draw distance to improve the framerates I was getting on some lower-powered hardware.
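    Both tweaks come down to a handful of render and camera settings. A sketch with illustrative values (the shipped color and distances may differ):

    ```csharp
    using UnityEngine;

    // Dust-storm sketch: exponential fog tinted toward the Curiosity-photo color,
    // plus a short camera draw distance so distant geometry is never rendered at all.
    public class MartianDust : MonoBehaviour
    {
        public Camera vrCamera;

        void Start()
        {
            RenderSettings.fog = true;
            RenderSettings.fogMode = FogMode.Exponential;
            RenderSettings.fogColor = new Color(0.76f, 0.55f, 0.38f);  // dusty butterscotch tint
            RenderSettings.fogDensity = 0.02f;

            // Anything past the fog wall is clipped entirely, which helps mobile framerates.
            vrCamera.farClipPlane = 120f;
        }
    }
    ```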

    Lastly, to create more detail, I added boulders to block off pathways and make the cliffs sharper and more imposing.