Friday, October 24, 2014

Oculus Rift Unity Integration Documentation

  Matthew Joyal                            Preethi Chimerla                       Nirav Sharda

The aim of this project is to demonstrate a virtual pit using the Oculus Rift and the Unity game engine. This document describes the steps to build the project.

1) Oculus Rift Integration in Unity4 pro:

      The Oculus Rift integrates easily with Unity on Windows and Mac OS. You need to create a free account on the Oculus developer site and download the Oculus runtime for Windows or Mac. You also need to download the Oculus Unity Integration package. Import it by dropping it into the Assets folder in Unity and then clicking Import. Now add the OVRPlayerController from the Prefabs folder to the scene. When you are done with the scene, build it using File -> Build Settings. Then go to the folder where you saved the build and run the "Direct to Rift" exe file. This demonstrates how to make a basic scene and run it with the Oculus Rift.

2) Making the Scene in Unity:

    First we build a room with a plane and walls on three sides using basic cubes. The fourth wall is left out because we want to add a spiral staircase, which serves as the pit. The room contains objects such as a bed, tables, chairs, a television, and a piano, which are free assets downloaded from the Unity Asset Store. Some snapshots of the room are shown below:

Figure 1: Snapshot of the room in Unity.

Figure 2: Another snapshot of the room.

3) Importing a model file from Blender:

    Unity 3D has native support for Blender files, meaning you can create a custom model in Blender (in our case the pit room shown below) and then drag and drop the .blend file into your Assets in Unity. By doing this you also get access to any materials that were applied in Blender. Another approach is to export a .fbx file from Blender.

Figure 3: Pit room imported from Blender

4) Changing IPD, FOV and other values at runtime:

   The following settings can be adjusted at runtime, each with a key to decrease and a key to increase its value:

Inter-Pupillary Distance
Field of View
Update Neck Position
Scale Multipliers
Rotational Multipliers

Introduction to Using Kinect in Unity

1. How to integrate Kinect with Unity on the Windows platform
  1. Download the following files: the Kinect Wrapper package for Unity (a sample scene
         showing how to use Kinect in Unity; it contains a scene, some models, and some C#
         scripts), plus the Kinect SDK and Developer Toolkit, which are installed in the next step.
  2. Install the SDK and Developer Toolkit.
  3. Create a new project in Unity.
  4. Double-click the Kinect Wrapper package for Unity to import it into the newly created project.
  5. Under the Scenes folder, open MainScene.
  6. Plug the Kinect into the computer and run the project.

2. Brief explanation of the sample package
   The sample package includes several C# source files. We are only interested in the files
   related to gesture recognition. Kinect can recognize 20 joints of the human body and provide
   position data for those 20 joints to the developer. All of the functions are in the Kinect10.dll
   file. After you install the Kinect SDK, Kinect10.dll will appear in the Windows/System32
   folder. To use the functions in this DLL, we first need to load it in Unity; you can find the
   code for this in KinectInterop.cs. There is a native method called NuiSkeletonGetNextFrame,
   which provides the skeleton data. This function is called in KinectSensor.cs, which saves
   the skeleton data to a NuiSkeletonFrame object named skeletonFrame. The skeleton data
   in this object is used in SkeletonWrapper.cs, which performs some matrix multiplications.
   To be honest, we do not yet know what these matrix multiplications are doing and need
   time to figure it out. Finally, SkeletonWrapper.cs saves the processed data to the bonePos
   array, the boneVel array, and other similar arrays. Developers who need skeleton
   information can simply read these arrays, since they are public.

3. Gesture recognition
In our work, we can recognize two basic states of the user's two hands: moving and stationary. Based on these two states, we can recognize more gestures, including Pick, Drop, Discard, Zoom In, and Zoom Out.

3.1 Moving and Stationary
In general, the key quantity is the velocity of the two hands. We get the position data of the two hands from the Kinect every frame, then compute the velocity each frame as ((currentFramePos - previousFramePos) / deltaTime). It seems simple to recognize the state of the hands from this velocity: if the current velocity is greater than 1.0 (the threshold we set), we consider the hands to be moving; if it is less than 1.0, we consider them stationary. During development, however, we found that confusing data kept appearing. For example, even when users think they are holding their hands stationary, their hands are actually moving slightly because of shaking they are not aware of.

To solve this issue, we evaluate the state every 10 frames. If the velocity exceeds the threshold in 8 or more of those frames, we consider the user's hands to be moving; otherwise we consider them stationary.
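The windowed check above can be sketched as a small C# class. This is our own illustration of the described logic, not the project's code; the class and member names (HandStateDetector, AddFrame) are assumptions, while the window size, vote count, and 1.0 threshold come from the text.

```csharp
using System;

public class HandStateDetector
{
    const int WindowSize = 10;      // evaluate the state every 10 frames
    const int MovingVotes = 8;      // 8 or more fast frames => "moving"
    const float Threshold = 1.0f;   // velocity threshold from the text

    int frameCount = 0;
    int fastFrames = 0;
    public bool IsMoving { get; private set; }

    // Feed one frame of hand position data (one axis, for simplicity).
    public void AddFrame(float prevPos, float currPos, float deltaTime)
    {
        float velocity = Math.Abs((currPos - prevPos) / deltaTime);
        if (velocity > Threshold) fastFrames++;
        frameCount++;

        if (frameCount == WindowSize)
        {
            // Decide once per 10-frame window, then start a new window.
            IsMoving = fastFrames >= MovingVotes;
            frameCount = 0;
            fastFrames = 0;
        }
    }
}
```

In the real project this decision would be made per hand from the Kinect joint positions.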

3.2 Pick
Strictly speaking, Pick is not a gesture. When users move their hands in the real world, two virtual hands move in the virtual space as well. If both hands collide with the same object, the object enters a controlled state. In this state, users can control the object with gestures, including Drop, Discard, Zoom In, and Zoom Out.
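A minimal Unity sketch of this behaviour, assuming the virtual hands carry trigger colliders tagged "Hand" (the tag, script name, and counting approach are our assumptions, not the project's actual code):

```csharp
using UnityEngine;

// Attach to a pickable object. It enters the controlled state once
// both hand colliders are touching it at the same time.
public class Pickable : MonoBehaviour
{
    int handsTouching = 0;
    public bool Controlled { get; private set; }

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Hand") && ++handsTouching >= 2)
            Controlled = true; // both hands on the object: controlled state
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Hand")) handsTouching--;
    }
}
```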

3.3 Drop
In section 3.1 we described how to recognize the moving and stationary states of the two hands. If both hands stay stationary, the controlled object is dropped and the user loses control of it.

3.4 Discard
If users quickly swipe both hands back, for example swiping back above the shoulders, the object disappears from the scene. The logic is simple:
  1. Set the velocity threshold to a large number, such as 800.
  2. If the velocity of both hands is greater than 800, we know the user is moving both hands
         very quickly. A positive velocity value means the hands are moving back, and a
         negative value means they are moving forward. So if the user is moving their hands
         back very quickly, the velocity will be greater than 800.
  3. Destroy the controlled object.
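The check in steps 1 and 2 can be sketched as a single predicate. The threshold (800) and the sign convention (positive velocity means the hands are moving back) come from the text above; the class and method names are our own.

```csharp
public static class DiscardGesture
{
    const float SwipeThreshold = 800f; // "big number" threshold from the text

    // True when both hands are swiping back fast enough to discard the object.
    public static bool IsDiscard(float leftHandVelocity, float rightHandVelocity)
    {
        // Positive = moving back; both hands must exceed the threshold.
        return leftHandVelocity > SwipeThreshold && rightHandVelocity > SwipeThreshold;
    }
}
```

When this returns true, the project destroys the controlled object (step 3).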

3.5 Zoom In and Zoom Out
If users move their left hand to the left (negative velocity) and their right hand to the right (positive velocity), we consider them to be zooming in on the controlled object.

Similarly, if users move their left hand to the right (positive velocity) and their right hand to the left (negative velocity), we consider them to be zooming out of the controlled object.
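The two cases can be sketched as a classifier over the horizontal hand velocities. The sign convention follows the text; the minimum-speed guard (reusing the 1.0 movement threshold from 3.1) and all names are our assumptions.

```csharp
public enum ZoomState { None, ZoomIn, ZoomOut }

public static class ZoomGesture
{
    const float MinSpeed = 1.0f; // same movement threshold as in 3.1

    public static ZoomState Classify(float leftHandVelX, float rightHandVelX)
    {
        // Left hand left (negative) + right hand right (positive): hands apart.
        if (leftHandVelX < -MinSpeed && rightHandVelX > MinSpeed)
            return ZoomState.ZoomIn;
        // Left hand right (positive) + right hand left (negative): hands together.
        if (leftHandVelX > MinSpeed && rightHandVelX < -MinSpeed)
            return ZoomState.ZoomOut;
        return ZoomState.None;
    }
}
```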


4 Walking in Place:

Walking in place means moving the user forward in the virtual scene while they simply walk on the spot. To do this, the player's transform coordinates are incremented every frame.
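A minimal Unity sketch of the per-frame increment described above; the script name, the speed value, and the `userIsStepping` flag (which would be driven by the Kinect joint data) are our assumptions.

```csharp
using UnityEngine;

public class WalkInPlace : MonoBehaviour
{
    public float speed = 1.5f;          // metres per second, an assumed value
    public bool userIsStepping = false; // would be set from the Kinect skeleton data

    void Update()
    {
        // While the user walks on the spot, push the player forward each frame.
        if (userIsStepping)
            transform.position += transform.forward * speed * Time.deltaTime;
    }
}
```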

5 Hitting the Player with Many Cubes:

The player is hit with multiple cubes, and on each hit a message is displayed saying where the cube hit the player, that is, at which joint the cube hit. Along with the message, an image is also displayed.
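A hedged Unity sketch of the hit feedback: here we assume each joint's collider is named after its Kinect joint, so the collider name identifies where the cube hit. The original project may map colliders to joints differently.

```csharp
using UnityEngine;

// Attach to each cube that is thrown at the player.
public class CubeHitReporter : MonoBehaviour
{
    void OnCollisionEnter(Collision collision)
    {
        // Assumption: joint colliders are named after joints, e.g. "LeftShoulder".
        string joint = collision.collider.name;
        Debug.Log("Cube hit the player at: " + joint);
        // The project also shows an image alongside this message.
    }
}
```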

5.1 Tracking the Hand:

The player's hand in the scene is tracked by a sphere; that is, the sphere moves along with the player's hand.

5.2 Displaying the Message "HI":

When the player first appears in the scene, a "HI" message is displayed. It can then be dismissed by having the player move their hand over their head.

Thursday, October 23, 2014

Car Navigation In Unity


                                                By Anicia Dcosta, Bharath Bommana, Puja Davande

Logitech G25 Racing Wheel
The Logitech G25 racing wheel is a device used to drive cars in racing video games; it is a tool for navigation. It includes a steering wheel, a set of pedals, and a gear shifter. With the wheel and pedals, the user can drive the simulated car.

Racing Wheel

The racing wheel has the following components:

1) Steering wheel

2) A set of pedals

3) A shifter:
            8 buttons
            1 D-pad
            A gear stick
Car Navigation Tutorial
This tutorial will help you move a car (rover) in virtual space. The user will be able to move around the provided terrain. We use a Logitech G25 racing wheel to move the car. We created the project in Unity and connected it to the G25 racing wheel for the movement.

Creation of the Terrain:
Import the standard Terrain assets in Unity; using the tools available, we created the terrain shown below. This space is used as a track for the car. Create a directional light to light up the terrain.

Figure 1: Terrain

 We downloaded the car model from the Unity Asset Store: search for "Vehicle SUV" in the Asset Store, download the car, and place it in the terrain.

Figure 2: Car Model

Wheel Colliders:
 This is a special collider used for grounded vehicles. With wheel colliders we can simulate friction and wheel physics. Create an empty game object and name it Wheel Collider, create a wheel collider under this empty object, and assign one to each of the car's wheels. Make this object (wheel collider) a child of the car object.

Figure 3: Wheel Collider

Car Movement Physics:

We use several methods to make the car move.

SetupWheelFrictionCurve():

 This method is called from the Start method. In SetupWheelFrictionCurve() we simply create a new WheelFrictionCurve and assign it the values we think are appropriate for our car. A WheelFrictionCurve is used by wheel colliders to describe the friction properties of the wheel tire.

Figure 4: SetupWheelFrictionCurve
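A minimal sketch of what SetupWheelFrictionCurve() might look like. The project's actual values are in Figure 4, so the numbers here are placeholders only; the wheel array and field choices are our assumptions.

```csharp
using UnityEngine;

public class CarController : MonoBehaviour
{
    public WheelCollider[] wheels;

    void SetupWheelFrictionCurve()
    {
        WheelFrictionCurve curve = new WheelFrictionCurve();
        curve.extremumSlip = 1f;      // slip at peak friction (placeholder)
        curve.extremumValue = 20000f; // force at peak friction (placeholder)
        curve.asymptoteSlip = 2f;     // slip where friction levels off
        curve.asymptoteValue = 10000f;
        curve.stiffness = 1f;

        foreach (WheelCollider wheel in wheels)
            wheel.sidewaysFriction = curve; // forwardFriction can be set the same way
    }

    void Start()
    {
        SetupWheelFrictionCurve();
    }
}
```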

wheelMovement(): This method takes input from the user and applies it to the wheels. We use the vertical input as torque on the rear wheels and the horizontal input as the steering angle on the front wheels.

Figure 5: wheelMovement
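A sketch of the wheelMovement() logic just described, assuming Unity's "Vertical" and "Horizontal" input axes are mapped to the G25 pedals and wheel; the torque and steering limits are placeholder values (the project's real code is in Figure 5).

```csharp
using UnityEngine;

public class WheelMovement : MonoBehaviour
{
    public WheelCollider rearLeft, rearRight, frontLeft, frontRight;
    public float maxTorque = 50f;      // placeholder
    public float maxSteerAngle = 30f;  // placeholder, in degrees

    void FixedUpdate()
    {
        // Vertical axis (accelerator pedal) drives the rear wheels.
        float torque = maxTorque * Input.GetAxis("Vertical");
        rearLeft.motorTorque = torque;
        rearRight.motorTorque = torque;

        // Horizontal axis (steering wheel) steers the front wheels.
        float steer = maxSteerAngle * Input.GetAxis("Horizontal");
        frontLeft.steerAngle = steer;
        frontRight.steerAngle = steer;
    }
}
```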
This method is called from the Update function. Inside it we check the rotation of the car: if the rotation is at an angle where the car is no longer drivable, we add the time since the last frame to the resetTimer variable.
If this value eventually exceeds the value we have set for resetTime (5 seconds by default), we call the FlipCarBack method.
If the rotation of the car is not at a bad angle, we set the timer back to zero instead.
Figure 6: Code for CarFlipping


The FlipCarBack method puts the car back on its wheels and sets its speed to zero, so we can start driving again from a standstill.
Figure 7: Code for FlipCarBack
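The timer and recovery logic described above can be sketched as one Unity script. The "not drivable" test (up vector pointing downwards) and the way the car is righted are our assumptions; the project's versions are in Figures 6 and 7.

```csharp
using UnityEngine;

[RequireComponent(typeof(Rigidbody))]
public class CarFlip : MonoBehaviour
{
    public float resetTime = 5f; // default from the text
    float resetTimer = 0f;

    void Update()
    {
        // Assumed test: the car is undrivable when its up vector points down.
        if (Vector3.Dot(transform.up, Vector3.up) < 0f)
        {
            resetTimer += Time.deltaTime; // accumulate time spent flipped
            if (resetTimer > resetTime)
                FlipCarBack();
        }
        else
        {
            resetTimer = 0f; // car is fine, reset the timer
        }
    }

    void FlipCarBack()
    {
        // Stand the car upright and stop it, so driving resumes from a standstill.
        transform.rotation = Quaternion.LookRotation(transform.forward);
        Rigidbody body = GetComponent<Rigidbody>();
        body.velocity = Vector3.zero;
        body.angularVelocity = Vector3.zero;
        resetTimer = 0f;
    }
}
```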

Wheel Rotation:
This method calculates the rotation of each wheel from its wheel collider's RPM, which is then applied to each wheel's transform. Below is the code for it.
Figure 8: Code for wheel rotation
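The per-frame rotation can be derived from the collider's RPM: rpm / 60 gives revolutions per second, and multiplying by 360 gives degrees per second. This helper is our own sketch of that arithmetic; the project's version is in Figure 8.

```csharp
public static class WheelSpin
{
    // Degrees the wheel mesh should rotate during one frame of deltaTime seconds.
    public static float DegreesThisFrame(float rpm, float deltaTime)
    {
        return rpm / 60f * 360f * deltaTime;
    }
}
```

In Unity this value would be applied each frame, e.g. `wheelTransform.Rotate(WheelSpin.DegreesThisFrame(collider.rpm, Time.deltaTime), 0f, 0f)`.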

 Braking System:
    The car can be stopped using the brake pedal on the G25 racing wheel. We use the wheel collider's braking torque, provided by Unity. Below is the code.

Figure 9: Code for braking
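A hedged sketch of the braking: the pedal input is applied as brakeTorque on the wheel colliders. The "Brake" axis name (assumed to be mapped to the G25 brake pedal in the Input settings) and the torque value are our assumptions; the project's code is in Figure 9.

```csharp
using UnityEngine;

public class Braking : MonoBehaviour
{
    public WheelCollider[] wheels;
    public float maxBrakeTorque = 2000f; // placeholder value

    void FixedUpdate()
    {
        // Assumed axis: "Brake" is mapped to the G25 brake pedal.
        float brake = maxBrakeTorque * Input.GetAxis("Brake");
        foreach (WheelCollider wheel in wheels)
            wheel.brakeTorque = brake;
    }
}
```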

Gears:
Our idea of gears is a little basic. We took the top speed of the vehicle as 160 and multiplied it by a factor based on the joystick button pressed. The factor for first gear is 0.2, second gear 0.4, third 0.6, fourth 0.8, and fifth 1.0. That means the vehicle can reach its maximum speed only in fifth gear. We used simple if-else if blocks in our code to implement this. We had a hard time detecting the joystick buttons corresponding to each gear, and we were only able to find the joystick buttons for the first two gears.
Figure 10: Input setting for gears

Figure 11: Gears
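The gear logic described above can be sketched as a simple if-else if chain: top speed 160 times the per-gear factor. The factors come from the text; the class and method names are ours.

```csharp
public static class Gearbox
{
    const float TopSpeed = 160f; // top speed of the vehicle, from the text

    // Returns the speed cap for the selected gear (1-5).
    public static float MaxSpeedForGear(int gear)
    {
        if (gear == 1) return TopSpeed * 0.2f;
        else if (gear == 2) return TopSpeed * 0.4f;
        else if (gear == 3) return TopSpeed * 0.6f;
        else if (gear == 4) return TopSpeed * 0.8f;
        else if (gear == 5) return TopSpeed * 1.0f;
        else return 0f; // neutral / unknown gear
    }
}
```

Only fifth gear reaches the full 160; first gear caps out at 32.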

 First Person and Third Person Views:
For the first person view, the camera needs to be set inside the vehicle, facing the path in front of the car. We position the camera carefully inside the vehicle, adjust its rotation accordingly, and make the camera a child object of the vehicle so it moves along with it.
            We set up the third person view mainly to show the wheel rotation. We position the camera relative to the vehicle and adjust its rotation to provide a good view from the rear of the vehicle. This camera is also made a child object of the vehicle, so it follows the vehicle and provides the third person view.
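Either camera can be set up the same way in code: parent it to the vehicle and give it a local offset and rotation. This helper is purely illustrative; the offsets would be tuned by hand in the editor, as described above.

```csharp
using UnityEngine;

public static class CameraRig
{
    public static void AttachToVehicle(Camera cam, Transform vehicle,
                                       Vector3 localOffset, Vector3 localEuler)
    {
        cam.transform.SetParent(vehicle);            // camera now moves with the vehicle
        cam.transform.localPosition = localOffset;   // e.g. driver's seat, or behind the car
        cam.transform.localEulerAngles = localEuler; // face down the road
    }
}
```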

Figure 12: First person and third person view