Short Descriptions of the Demo Scenes


Here are short descriptions of the available demo scenes, along with the major components they utilize and demonstrate.



AvatarDemo / KinectAvatarsDemo1

The scene shows two avatars controlled by the user, viewed from a third-person perspective. It utilizes the KinectManager-component to manage the sensor and data, AvatarController-components to control each of the two avatars, and the SimpleGestureListener-component to demonstrate the gesture-detection process.
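
The scene-specific components are drop-in MonoBehaviours, but the same tracking data is also reachable from script. Below is a minimal sketch of polling the manager for a joint position each frame; the KinectManager calls (Instance, IsInitialized, IsUserDetected, GetPrimaryUserID, IsJointTracked, GetJointPosition) are assumed from this asset family, and their exact signatures may differ between versions.

```csharp
using UnityEngine;

// Minimal sketch: make this object follow the primary user's right hand.
// The KinectManager method names are assumptions and may vary by asset version.
public class HandFollower : MonoBehaviour
{
    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsInitialized() || !manager.IsUserDetected())
            return;

        ulong userId = manager.GetPrimaryUserID();
        int jointIndex = (int)KinectInterop.JointType.HandRight;

        if (manager.IsJointTracked(userId, jointIndex))
        {
            // Joint position in sensor-relative world coordinates (meters)
            transform.position = manager.GetJointPosition(userId, jointIndex);
        }
    }
}
```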

AvatarDemo / KinectAvatarsDemo2

It demonstrates avatar control from a first-person perspective. The scene utilizes the KinectManager-component to manage the sensor and data, and the AvatarController-component to control the first-person avatar.

AvatarDemo / KinectAvatarsDemo3

This scene demonstrates the AvatarControllerClassic-component, which controls only the avatar’s upper body, along with an offset node that makes the avatar’s motion relative to a given game object. The KinectManager-component again manages the sensor and data.

AvatarDemo / KinectAvatarsDemo4

This demo shows how to instantiate and remove avatars in the scene, to match the users in front of the camera. It utilizes the KinectManager-component to manage the sensor and data, the UserAvatarMatcher-component to instantiate and remove avatars as needed, and AvatarController-components to control each of the instantiated avatars.
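
The gist of what UserAvatarMatcher does can be sketched as follows: poll the currently tracked user IDs each frame, instantiate an avatar prefab for each new user, and destroy the avatars of lost users. The GetUsersCount/GetUserIdByIndex accessors below are assumptions based on this asset family; the shipped component is more complete (player indices, calibration poses, etc.).

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of UserAvatarMatcher's core idea: keep one avatar instance per tracked user.
// The KinectManager accessors are assumptions; see the shipped component for details.
public class SimpleAvatarMatcher : MonoBehaviour
{
    public GameObject avatarPrefab;  // prefab with an AvatarController on it

    private readonly Dictionary<ulong, GameObject> avatars = new Dictionary<ulong, GameObject>();

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsInitialized())
            return;

        // Collect the users currently tracked by the sensor
        var current = new HashSet<ulong>();
        for (int i = 0; i < manager.GetUsersCount(); i++)
            current.Add(manager.GetUserIdByIndex(i));

        // Spawn avatars for newly detected users
        foreach (ulong userId in current)
        {
            if (!avatars.ContainsKey(userId))
                avatars[userId] = Instantiate(avatarPrefab);
        }

        // Remove avatars of users that are no longer tracked
        var lost = new List<ulong>();
        foreach (ulong userId in avatars.Keys)
            if (!current.Contains(userId)) lost.Add(userId);

        foreach (ulong userId in lost)
        {
            Destroy(avatars[userId]);
            avatars.Remove(userId);
        }
    }
}
```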


BlobDetectionDemo / BlobDetectionDemo

The blob-detection demo shows how to detect blobs (compact areas) of pixels in the raw depth image, within the min/max distance configured for the sensor. It utilizes the KinectManager-component to manage the sensor and data, the BlobDetector-component to detect blobs in the raw depth image coming from the sensor, and the BackgroundDepthImage-component to display the depth camera image on the scene background.
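
To illustrate the underlying idea, here is a self-contained sketch of depth-blob detection via flood fill, independent of the asset’s BlobDetector: group 4-connected pixels whose depth falls within the configured min/max range, and keep groups above a minimum size. Raw depth values are assumed to be in millimeters, as raw sensor depth frames usually are.

```csharp
using System.Collections.Generic;

// Sketch of blob detection over a raw depth frame: flood-fill connected pixels
// whose depth lies within [minMm, maxMm]. Depth is assumed to be in millimeters.
public static class SimpleBlobDetector
{
    // Returns a list of blobs, each blob being a list of pixel indices.
    public static List<List<int>> FindBlobs(ushort[] depth, int width, int height,
                                            ushort minMm, ushort maxMm, int minPixels = 50)
    {
        int total = width * height;
        var blobs = new List<List<int>>();
        var visited = new bool[total];

        for (int start = 0; start < total; start++)
        {
            if (visited[start] || depth[start] < minMm || depth[start] > maxMm)
                continue;

            // Breadth-first flood fill of the 4-connected region around 'start'
            var blob = new List<int>();
            var queue = new Queue<int>();
            queue.Enqueue(start);
            visited[start] = true;

            while (queue.Count > 0)
            {
                int i = queue.Dequeue();
                blob.Add(i);
                int x = i % width;

                // Candidate neighbors: left, right, up, down
                foreach (int n in new[] { i - 1, i + 1, i - width, i + width })
                {
                    if (n < 0 || n >= total || visited[n]) continue;
                    if (n == i - 1 && x == 0) continue;          // don't wrap around the left edge
                    if (n == i + 1 && x == width - 1) continue;  // don't wrap around the right edge
                    if (depth[n] < minMm || depth[n] > maxMm) continue;

                    visited[n] = true;
                    queue.Enqueue(n);
                }
            }

            if (blob.Count >= minPixels)
                blobs.Add(blob);
        }

        return blobs;
    }
}
```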


ColliderDemo / ColorColliderDemo

It demonstrates how to trigger ‘virtual touch’ between the user’s hands and virtual scene objects. The scene utilizes the KinectManager-component to manage the sensor and data, the HandColorOverlayer-component to move the overlaying hand objects (which carry the colliders), the JumpTrigger-component to detect collisions with the virtual objects, and the BackgroundColorImage-component to display the color camera image on the scene background.
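
The collision side of this demo is plain Unity physics. A minimal JumpTrigger-style sketch, assuming the hand objects carry trigger colliders and a tag such as "HandCollider" (the tag is illustrative, not taken from the asset):

```csharp
using UnityEngine;

// Sketch of a JumpTrigger-style component: when a hand collider (moved by
// HandColorOverlayer) touches this object, make it jump via a physics impulse.
// The "HandCollider" tag is an illustrative assumption.
public class SimpleJumpTrigger : MonoBehaviour
{
    public float jumpImpulse = 5f;

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("HandCollider"))
            return;

        var body = GetComponent<Rigidbody>();
        if (body != null)
            body.AddForce(Vector3.up * jumpImpulse, ForceMode.Impulse);
    }
}
```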

ColliderDemo / DepthColliderDemo2D

It shows how the user’s silhouette can interact with virtual objects in a 2D scene. This scene utilizes the KinectManager-component to manage the sensor and data, the DepthSpriteViewer-component to display the user’s silhouette and create the overlaying skeleton colliders, and the EggSpawner-component to spawn the virtual objects (spheres) in the scene.

ColliderDemo / DepthColliderDemo3D

It shows how the user’s silhouette can interact with virtual objects in a 3D scene. The scene utilizes the KinectManager-component to manage the sensor and data, the DepthImageViewer-component to display the user’s silhouette and create the overlaying skeleton colliders, and the EggSpawner-component to spawn the virtual objects (eggs) in the scene.

ColliderDemo / SkeletonColliderDemo

This scene shows how the user’s skeleton can interact with virtual objects in the scene. It utilizes the KinectManager-component to manage the sensor and data, SkeletonCollider-component to display the user’s skeleton and create bone colliders, and BallSpawner-component to spawn virtual objects in the scene.
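
The bone colliders can be sketched as capsules stretched between pairs of tracked joints every frame. The KinectManager calls below are assumptions from this asset family; see the shipped SkeletonCollider-component for the actual implementation.

```csharp
using UnityEngine;

// Sketch of a bone collider: a capsule stretched between two tracked joints.
// KinectManager calls are assumptions; the shipped SkeletonCollider is more complete.
public class BoneColliderSketch : MonoBehaviour
{
    public KinectInterop.JointType startJoint = KinectInterop.JointType.ElbowRight;
    public KinectInterop.JointType endJoint = KinectInterop.JointType.WristRight;

    private CapsuleCollider capsule;

    void Start()
    {
        capsule = gameObject.AddComponent<CapsuleCollider>();
        capsule.radius = 0.05f;   // bone thickness in meters
        capsule.direction = 2;    // align the capsule with the local Z axis
    }

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsUserDetected())
            return;

        ulong userId = manager.GetPrimaryUserID();
        Vector3 p1 = manager.GetJointPosition(userId, (int)startJoint);
        Vector3 p2 = manager.GetJointPosition(userId, (int)endJoint);

        if ((p2 - p1).sqrMagnitude < 0.0001f)
            return;  // joints coincide; nothing to align

        // Center the capsule between the joints and point it from one to the other
        transform.position = (p1 + p2) * 0.5f;
        transform.rotation = Quaternion.LookRotation(p2 - p1);
        capsule.height = Vector3.Distance(p1, p2);
    }
}
```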


GestureDemo / KinectGesturesDemo1

This scene demonstrates the detection of discrete gestures (swipe-left, swipe-right & swipe-up in this demo), used to control a presentation cube. The scene utilizes the KinectManager-component to manage the sensor and data, the CubeGestureListener-component to listen for swipe gestures, and the CubePresentationScript-component to control the presentation cube in the scene.
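
A gesture listener registers the gestures it wants to track per user and then receives callbacks as they progress or complete. The sketch below is modeled on the asset’s listener components; the GestureListenerInterface signatures and the DetectGesture call are assumptions that may differ between asset versions.

```csharp
using UnityEngine;

// Sketch of a discrete-gesture listener, modeled on CubeGestureListener.
// Interface and KinectManager signatures are assumptions based on the asset's
// SimpleGestureListener and may differ between asset versions.
public class SwipeListener : MonoBehaviour, KinectGestures.GestureListenerInterface
{
    public void UserDetected(ulong userId, int userIndex)
    {
        // Tell the manager which gestures to track for this user
        KinectManager manager = KinectManager.Instance;
        manager.DetectGesture(userId, KinectGestures.Gestures.SwipeLeft);
        manager.DetectGesture(userId, KinectGestures.Gestures.SwipeRight);
        manager.DetectGesture(userId, KinectGestures.Gestures.SwipeUp);
    }

    public void UserLost(ulong userId, int userIndex) { }

    public void GestureInProgress(ulong userId, int userIndex, KinectGestures.Gestures gesture,
                                  float progress, KinectInterop.JointType joint, Vector3 screenPos) { }

    public bool GestureCompleted(ulong userId, int userIndex, KinectGestures.Gestures gesture,
                                 KinectInterop.JointType joint, Vector3 screenPos)
    {
        // Discrete gestures fire once, when the full motion has been recognized
        Debug.Log("Gesture completed: " + gesture);
        return true;  // reset the gesture, so it can be detected again
    }

    public bool GestureCancelled(ulong userId, int userIndex, KinectGestures.Gestures gesture,
                                 KinectInterop.JointType joint)
    {
        return true;
    }
}
```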

GestureDemo / KinectGesturesDemo2

It demonstrates the detection of continuous gestures (wheel, zoom-out & zoom-in in this demo), used to rotate and zoom a 3D model. The scene utilizes the KinectManager-component to manage the sensor and data, ModelGestureListener-component to set up and listen for wheel and zoom gestures, and ModelPresentationScript-component to control the 3D model in the scene.
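
Continuous gestures report through GestureInProgress instead of completing once. Here is a hedged sketch in the spirit of ModelGestureListener; reading the wheel angle from screenPos.z is an assumption taken from the asset’s demo listener and may differ between versions.

```csharp
using UnityEngine;

// Sketch of handling a continuous gesture, modeled on ModelGestureListener.
// The use of screenPos.z as the wheel angle is an assumption from the asset's
// demo listener; verify against your asset version.
public class WheelListener : MonoBehaviour, KinectGestures.GestureListenerInterface
{
    public Transform model;  // the 3D model to rotate

    public void UserDetected(ulong userId, int userIndex)
    {
        KinectManager.Instance.DetectGesture(userId, KinectGestures.Gestures.Wheel);
    }

    public void UserLost(ulong userId, int userIndex) { }

    public void GestureInProgress(ulong userId, int userIndex, KinectGestures.Gestures gesture,
                                  float progress, KinectInterop.JointType joint, Vector3 screenPos)
    {
        if (gesture == KinectGestures.Gestures.Wheel && progress > 0.5f && model != null)
        {
            // For the wheel gesture, the demo listener reads the turn angle from screenPos.z
            float wheelAngle = screenPos.z;
            model.rotation = Quaternion.Euler(0f, wheelAngle, 0f);
        }
    }

    public bool GestureCompleted(ulong userId, int userIndex, KinectGestures.Gestures gesture,
                                 KinectInterop.JointType joint, Vector3 screenPos) => true;

    public bool GestureCancelled(ulong userId, int userIndex, KinectGestures.Gestures gesture,
                                 KinectInterop.JointType joint) => true;
}
```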


MultiSceneDemo / Scene0-StartupScene, Scene1-AvatarsDemo & Scene2-GesturesDemo

This set of scenes shows how to use the KinectManager and other Kinect-related components across multiple scenes. It utilizes the KinectManager-component to manage the sensor and data, the LoadFirstLevel-component in the startup scene to load the first real scene, the LoadLevelWithDelay-component in the real scenes to cycle between them, and the RefreshGestureListeners-component to refresh the list of gesture listeners for the current scene. The other Kinect-related components, such as AvatarController and the gesture listeners, are utilized in the real scenes (1 & 2), but they relate to the specifics of each scene rather than to running a single KinectManager across multiple scenes.
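
The scene-cycling part is ordinary Unity scene management: the persistent KinectManager object survives the scene loads, while a small component in each real scene waits and then loads the next one. A minimal sketch (the delay and scene count are illustrative values):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch of the multi-scene pattern: the KinectManager lives in the startup
// scene and persists across loads (e.g. via DontDestroyOnLoad), while a
// component like this one, placed in each real scene, cycles to the next scene.
public class CycleScenesWithDelay : MonoBehaviour
{
    public float delaySeconds = 20f;
    public int sceneCount = 2;  // number of real scenes after the startup scene

    IEnumerator Start()
    {
        yield return new WaitForSeconds(delaySeconds);

        // Move on to the next scene, wrapping back to scene 1 after the last one
        int next = SceneManager.GetActiveScene().buildIndex % sceneCount + 1;
        SceneManager.LoadScene(next);
    }
}
```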


OverlayDemo / KinectOverlayDemo1

This is the most basic joint-overlay demo, showing how a virtual object can overlay the user’s joint (right shoulder) on screen. The scene utilizes the KinectManager-component to manage the sensor and data, JointOverlayer-component to overlay the user’s body joint with the given virtual object, and BackgroundColorImage-component to display the color camera image on the scene background.
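
Conceptually, the overlay maps a tracked joint to a world position that lines up with the color-camera image behind it. The sketch below assumes a KinectManager helper named GetJointPosColorOverlay, as used by the asset’s JointOverlayer; check that component for the exact signature in your version.

```csharp
using UnityEngine;

// Sketch of the joint-overlay idea behind JointOverlayer: convert a tracked
// joint to a position over the color-camera image and move an object there.
// GetJointPosColorOverlay and its signature are assumptions based on this
// asset family; see the shipped JointOverlayer for the exact call.
public class SimpleJointOverlayer : MonoBehaviour
{
    public KinectInterop.JointType joint = KinectInterop.JointType.ShoulderRight;
    public GameObject overlayObject;

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsUserDetected() ||
            overlayObject == null || Camera.main == null)
            return;

        ulong userId = manager.GetPrimaryUserID();
        if (!manager.IsJointTracked(userId, (int)joint))
            return;

        // Map the joint to world coordinates that line up with the color image
        Rect imageRect = new Rect(0, 0, Screen.width, Screen.height);
        Vector3 pos = manager.GetJointPosColorOverlay(userId, (int)joint, Camera.main, imageRect);

        if (pos != Vector3.zero)
            overlayObject.transform.position = pos;
    }
}
```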

OverlayDemo / KinectOverlayDemo2

This is a skeleton overlay demo, with green balls overlaying the body joints, and lines between them to represent the bones. The scene utilizes the KinectManager-component to manage the sensor and data, SkeletonOverlayer-component to overlay the user’s body joints with the virtual objects and lines, and BackgroundColorImage-component to display the color camera image on the scene background.
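
The bones can be drawn with ordinary LineRenderers between the overlaid joint objects. A minimal, pure-Unity sketch, assuming the joint objects are positioned elsewhere (e.g. by overlay logic like the sketch above):

```csharp
using UnityEngine;

// Sketch of the bone-drawing part of SkeletonOverlayer: a line segment between
// two overlaid joint objects, updated every frame.
public class BoneLine : MonoBehaviour
{
    public Transform jointA;
    public Transform jointB;

    private LineRenderer line;

    void Start()
    {
        line = gameObject.AddComponent<LineRenderer>();
        line.positionCount = 2;
        line.startWidth = line.endWidth = 0.02f;
        line.material = new Material(Shader.Find("Sprites/Default"));
    }

    void Update()
    {
        if (jointA == null || jointB == null)
            return;

        line.SetPosition(0, jointA.position);
        line.SetPosition(1, jointB.position);
    }
}
```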


PointCloudDemo / VfxPointCloudDemo

This scene demonstrates how to combine the spatial and color data provided by the sensor with visual effect graphs, to create a point cloud of the environment, with or without visual effects. It utilizes the KinectManager-component to manage the sensor and data, as well as the sensor interface settings to create the attribute textures needed by the VFX graph. See also the ‘How to run VfxPointCloudDemo-scene’ section below.
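
On the VFX side, the idea is that the sensor data is rendered into attribute textures (per-pixel positions and colors), which the graph samples per particle. Here is a hedged sketch of handing such textures to a VisualEffect; the exposed property names ("PointCloudMap", "ColorMap") are illustrative assumptions and must match the names actually defined in the demo’s VFX graph asset.

```csharp
using UnityEngine;
using UnityEngine.VFX;

// Sketch of feeding attribute textures to a VFX graph. The exposed property
// names are illustrative; they must match the properties in the graph asset.
public class PointCloudFeeder : MonoBehaviour
{
    public VisualEffect vfxGraph;
    public Texture vertexTexture;  // per-pixel 3D positions from the sensor
    public Texture colorTexture;   // matching color data

    void Update()
    {
        if (vfxGraph == null || vertexTexture == null || colorTexture == null)
            return;

        // Hand the current frame's attribute textures to the VFX graph
        vfxGraph.SetTexture("PointCloudMap", vertexTexture);
        vfxGraph.SetTexture("ColorMap", colorTexture);
    }
}
```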